Don’t Be Moral: Why Does Google Consistently Deflect Questions of Machine Ethics?

“As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world.”

“Morals and the Machine,” The Economist

I was invited to a small dinner recently with representatives of Google and Google-backed advocacy and lobbying groups. This was probably a well-intentioned idea, but the dinner conversation soon reverted to a quick review of the canned excuses for piracy that we are all familiar with (see the MTP “Canard du Jour” series).

One canard goes to the heart of the YouTube case–the YouTube interface is “automatic” and Google has no responsibility for what is done with their technology. This precept has always seemed faulty to me–some human created a machine that does things automatically, or, at some low level in the case of YouTube, autonomously.

Therefore, the designer, and in this case the operator, of the machine is trying to shirk responsibility for the actions of the machine she created.

My reaction is that the operator of a machine is responsible for the acts of the machine when it is operated the way it was intended. (Note: don’t start with “guns don’t kill people,” etc. YouTube is acting the way it is designed and doing what it is supposed to do–if you doubt that, try getting an infringing file removed. Not a link disabled, not “unmonetized,” but actually removed permanently from YouTube. That’s not a design flaw, it’s a feature.) I wish this were something that was not taught in the first week or two of law school so I could claim some credit for having a brilliant thought. Alas, the principle is so well-trodden and clearly stated that you would have to be extraordinarily uninformed or naive to think you would win this argument before a judge.

Joel Goodsen: Your honor, it’s not my fault–the machine I created did it all by itself!

Judge: Take off your sunglasses and sit down, Mr. Goodsen.

Now this seems obvious, doesn’t it? And yet these erudite digerati trotted out this red herring and paraded it around the table. As a famous man once said before doing God’s work, “Well, allow me to retort!”

How can anyone avoid responsibility for their actions by creating a machine that does what they tell it to do? Is this really a principle we want to elevate in society, given what will surely be an onslaught of drone surveillance and drone warfare, cyberattacks and hacking by machine? Military planners are coming to grips with these issues right now, and the same companies that plan on profiting from government contracts may be trying to shape public opinion in preparation for profitable drone surveillance contracts to build and operate eyes in the sky–possibly networked to all the information that Google, Facebook et al. have been collecting about us for years and that they definitely plan on collecting in the future. And profiting from. And sharing with God knows whom.

And remember–Google and Facebook aren’t subject to Title 10 and Title 50. What do they do with the information they collect? Well, it’s just machines doing it, you see. Don’t get all moral about it.

So naturally they would not want moral judgments to cloud machine efficiency laid on the altar of the Great God Scale.

The Nevada Autonomous Vehicle Regulations

Let’s take a look at the regulations that apply to Google’s driverless cars, or as the State of Nevada refers to them, “autonomous vehicles”:

[A] person shall be deemed the operator of an autonomous vehicle which is operated in autonomous mode when the person causes the autonomous vehicle to engage, regardless of whether the person is physically present in the vehicle while it is engaged…. For the purpose of enforcing the traffic laws and other laws applicable to drivers and motor vehicles operated in this State, the operator of an autonomous vehicle that is operated in autonomous mode shall be deemed the driver of the autonomous vehicle regardless of whether the person is physically present in the autonomous vehicle while it is engaged. (emphasis mine)

Well, no kidding. Wouldn’t it be wonderful if this were some received wisdom that struck like a great epiphany, as with Paul on the road to Damascus? But no–it stands for the principle that you can’t avoid liability by saying that the machine did it: the machine you created was operating the way you designed it once the user engaged the technology. Stunning, I know, but there it is.

Does it bear repeating that these Nevada regulations are the rules Google agreed to in order to test its driverless cars? Or does it bear repeating that Google has no liability reserve for the product tort exposure they have just taken on? Probably not, since the insiders control the company on a 10-to-1 voting basis, so it doesn’t really matter what any stockholder says.

The Spring Gun Cases

The use of machines to do one’s dirty work has never worked out too well. Legend has it that the final causal link that incited the 1775 Gunpowder Incident in colonial Williamsburg was a spring gun set by the British to protect the Williamsburg magazine. Two were wounded by the spring gun, and the ensuing riot caused Governor Dunmore to flee the city and declare Virginia to be in rebellion.

But the case that every first-year law student encounters within days of starting their Torts class (unless taught by a pamphleteer) is Bird v. Holbrook, 4 Bing. 628, 130 Eng. Rep. 911 (1828), also known as the Spring Gun Case. Among other things, Bird stands for an important, some might say crucial, principle of the Common Law, expressed in Judge Burrough’s concurrence, that “No man can do indirectly that which he is forbidden to do directly.”

Don’t Be Moral

The Googlers and near-Googlers wanted to ignore Judge Burrough’s admonishment–he is, after all, so old and never tweeted–and they seemed genuinely stumped by the proposition that you can’t do indirectly that which you cannot do directly. That this should be a surprising or difficult concept is itself really surprising.

For example, if Mom tells you that you can’t have any kale chips, the fact that you train your yellow lab Chuck to get them for you doesn’t mean that Mom will let you have them.

If Dad tells you that you have to go to Stanford instead of Cal, the fact that you train a machine to fake Stanford letterhead on your acceptance letter to Cal doesn’t mean you get to go.

You get the idea. These examples are a bit humorous to call attention to what should be a simple proposition, but it’s not that I think no one at Google knows this. Clearly they must. It’s that they also must think the rest of the world is either so stupid that it won’t understand or so wowed by technology that it will forget–or they have some other reason for thinking they can get away with this “the machine did it, not me” kind of reasoning.

So whether it is a root supernode, YouTube, or a driverless car (which is not “autonomous,” an unfortunate term used by the Nevada statutes), the fact that the machine is designed to do indirectly that which can only be done directly with substantial liability does not get the machine’s operator out of the liability exposure if the machine goes wild.

This is, of course, a simple extension of the concept of taking responsibility for acts of free will, a societal norm that starts somewhere around the burning bush on Mount Horeb and continues on a more or less straight line to the Nevada driverless car statutes.

Some have tried to pass off concern about these free-will choices as “moral panics” (even some pen pals of Andrew McLaughlin). You can understand why a company like Google would want its employees, consultants and fellow travelers out beating the blogs about how moral judgments should not be taken into account when considering activities online. Or, to paraphrase Google’s corporate motto, Don’t Be Moral. Google should not be surprised that their moral compass is taken into account in judging their actions, because they put good and evil into the debate nearly from the first day of the company’s existence.

By perpetuating a motto of “Don’t Be Evil” while devaluing that moral judgment to “Don’t Be Moral” when it comes to intellectual property theft, Google essentially devalues human interaction with the Internet to a machine-like process. As Jaron Lanier wrote in the New York Times:

Clay Shirky, a professor at New York University’s Interactive Telecommunications Program, has suggested that when people engage in seemingly trivial activities like “re-Tweeting,” relaying on Twitter a short message from someone else, something non-trivial — real thought and creativity — takes place on a grand scale, within a global brain.  That is, people perform machine-like activity, copying and relaying information; the Internet, as a whole, is claimed to perform the creative thinking, the problem solving, the connection making. This is a devaluation of human thought.

Consider too the act of scanning a book into digital form. The historian George Dyson has written that a Google engineer once said to him: “We are not scanning all those books to be read by people. We are scanning them to be read by an A.I.”  While we have yet to see how Google’s book scanning will play out, a machine-centric vision of the project might encourage software that treats books as grist for the mill, decontextualized snippets in one big database, rather than separate expressions from individual writers.

I think it’s pretty obvious that Google would have a difficult time explaining their massive infringement of the world’s books if there were any moral component to it. That’s probably why they keep a digitizing workforce separate and apart from other Googlers, with instructions to call security if anyone speaks to them. (See “Epsilons at the Brave New Googleplex.”) That’s not the act of someone who is proud of what they are doing, or who feels that what they are doing is just business.

It’s the act of someone who knows that the Spring Gun Case is still good law.  Particularly because they just agreed it is when they launched their remotely operated cars in Nevada.

See also: Google refuses to rule out face recognition technology despite privacy rows; Google Acquires Facial Recognition Technology Firm PittPatt; Why Facebook’s Facial Recognition is Creepy; Google to track ships at sea including US Navy; “If we could wave a magic wand and not be subject to US law that would be great”–Sergey Brin.