Even to the Edge of Doom:  Google’s non-indemnity AI indemnity for users is a cautionary tale

And I do mean tale.

There’s loose talk going around about big generative AI companies promising to “indemnify” their users against claims arising from the use of their AI products. When you drill down, these promises from the biggest companies in commercial history are (unsurprisingly) more nuanced, somewhere between a pinky swear and a king’s X. Let’s take Google’s “indemnity,” for example. In a headline Google may come to regret, their public messaging is pretty broad: “Shared fate: Protecting customers with generative AI indemnification.” Or “you’re as guilty as we are.”

Shared fate, you know. As the Bard wrote in Sonnet 116, “Love alters not with his brief hours and weeks, / But bears it out even to the edge of doom.” Now that’s an indemnity.

But Google’s “indemnity” is quite a different thing. Google may be promising to cover losses to some people from certain third-party copyright claims over AI training materials that infringe copyrights, under certain limited circumstances. Outside of the mumble tank, they definitely plan to control any litigation that might establish legal principles favoring artists. And even then, they are covering some copyright claims but not all. So nothing to see here, right?

Wrong. What do these Googley promises really mean? I think you will find that far from being good guys, Google and other AI companies are doing what they always do: using their unprecedented wealth and market dominance to do what they want to do (infringe on a massive scale), use copyrights as a honeypot to draw attention to their AI products, and most of all get away with it. They are not indemnifying certain paying customers because they have warm and fuzzy feelings about those customers; they do it for the money. And listen up, because this is important: users like you and me are human shields.

What is an “Indemnity”? 

Let’s explore what an indemnity actually is and see if you think it really is “shared fate.”  And I can tell you that there is not an insurer on earth who wants to share your fate.  The whole point of insurance is to not share your fate.

The concept of indemnity is one of the older concepts in the law and is at the heart of liability insurance. You’re probably familiar with the insurance marketplace that started in Lloyd’s Coffee House in 17th-century London. Lloyd’s was one of the first European markets for marine insurance, but the concept of indemnification dates back to the Phoenician sailors and the Lex Rhodia, also known as “Rhodian law,” a legal code that emerged on the island of Rhodes, part of the Doric Hexapolis, around 1000 BC.

Roman merchants and shipowners faced risks similar to those of their Phoenician predecessors. In the Digesta seu Pandectae, a compilation of Roman laws ordered by Emperor Justinian I in 533 AD, a legal opinion by the Roman jurist Paulus discussed the Lex Rhodia. This opinion highlighted the principle of “general average,” emphasizing the need to distribute risks and losses fairly among stakeholders. Ancient Roman law also recognized “bottomry contracts,” agreements drawn up with funds deposited with money changers. These contracts allowed shipowners (in desperate straits) to borrow money for voyages, using the ship itself as collateral. If the voyage was successful, the lender received repayment with interest. If the ship encountered perils at sea, the lender bore the risk. Plutarch referred to bottomry as “the most disreputable form of money lending,” so it was not well accepted. Bottomry contracts laid the groundwork for modern marine insurance practices.

Sound familiar yet? So Google didn’t think of the idea, yes? The word “indemnity” derives from the Latin “indemnis,” meaning “unhurt,” which combines the prefix “in-” (meaning “not”) with the legal term “damnum” (meaning “damage,” hence “not worth a damn”). Nothing shared about that fate.

So an indemnity really means that someone like a money changer or a Lloyd’s coffee drinker agrees—for a price, let’s not forget that—to make a ship owner or merchant whole up to a point—let’s not forget that either—if the ship is sunk or the goods are damaged.

The price for Google’s “indemnity” is the same as the price for your data: you get to use their products, and in the case of AI, you pay them.

Of course, the Lloyd’s coffee drinker, call them the “indemnitor” because they are the ones who do the indemnifying, cannot actually prevent harm from befalling the ship owner “indemnitee” (the one who is indemnified). That would require either magic or an inside track on providence, and they had neither.

And here’s another important point: all the indemnitee has to protect them from the harm is a promise from the indemnitor to make them whole up to a point, which we would call the “policy limits.” That promise is memorialized in an indemnity contract, also known as an insurance policy. It is not open-ended in terms of either the causes of the harm or the cost of the harm.

For example, suppose the policy covered a ship (or a car) worth $X, and cargo worth $10X, against the peril of hurricane. If the ship was sunk by an artillery barrage or a missile because it was in the wrong place at the wrong time, or if in addition to the insured cargo it was secretly carrying a million rounds of 5.56, Lloyd’s won’t cover the loss of the ship in the war zone or the cost of the lost bullets. If the captain has to sell his house to cover the losses, Lloyd’s won’t cover that either.

So you can see that as soon as you introduce the concept of indemnity you also introduce the concept of limitations of several important types, not to mention the cost of the indemnity contract or insurance policy.
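Since we’re talking about limits, here is a toy sketch of that coverage logic in code, purely for illustration (the peril list, the limit, and every number are invented; no real policy reduces to a three-line function):

```python
# Toy model of an indemnity contract: a covered peril is paid only up to
# the policy limit; everything else is the indemnitee's problem.
# The perils, limit, and numbers are all invented for illustration.

COVERED_PERILS = {"hurricane"}   # perils the indemnitor agreed to cover
POLICY_LIMIT = 100_000           # the most the indemnitor will ever pay

def payout(peril: str, loss: int) -> int:
    """Return what the indemnitor pays for a given peril and loss."""
    if peril not in COVERED_PERILS:
        return 0                        # artillery, contraband: not covered
    return min(loss, POLICY_LIMIT)      # covered, but capped at the limit

print(payout("hurricane", 80_000))    # 80000 -- covered, under the limit
print(payout("hurricane", 250_000))   # 100000 -- covered, capped at the limit
print(payout("artillery", 80_000))    # 0 -- wrong place at the wrong time
```

Two decisions, and the indemnitee loses on either one: the peril has to be on the list, and even then the payout stops at the limit.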

But now let’s say that the ship didn’t carry munitions and was destroyed by a hurricane, so it was an indemnified peril and a covered loss. Then let’s say that Lloyd’s said, oops, tough break for a swell guy, we had too many other ships lost in the same hurricane and we don’t have the money. The captain, or any other indemnitee, might very likely be able to sue Lloyd’s for breach of contract. Trust me, it is very unlikely that you will be able to successfully sue Google for not indemnifying you, although it is very likely that Google will fight your claim for a decade if you try. (Has Kim Dotcom been extradited yet?)

There are three important concepts there for understanding this loose talk about “indemnity” by generative AI companies: price, limits, and remedy. Let’s look at the promises being made to users, if they really are promises at all.

Indemnification Approaches by Generative AI Companies

Not all generative AI companies offer “indemnity” to users; to my knowledge, OpenAI has no indemnity for users at all. If the company offers no indemnification (and maybe even if they do), users proceed at their own risk. It’s also important to note that these “indemnifications” only apply to some copyright infringement. It’s entirely possible that the indemnitors plan to deploy users as human shields to broadly assert fair use defenses even if the user might not otherwise assert fair use. (Google did something similar with YouTube.) This AI “indemnity” would give them control over litigation that would feather their nest by giving them another crack at twisting fair use to their corporate benefit.

Consider Google’s supposed “indemnity” and I think you’ll see that Mr. Lloyd would have led them out of the coffee shop by the ear.  Why? Because these are not indemnities at all.

These promises appear designed to induce the general public to engage in potentially direct and willful copyright infringement on a grand scale. Note that the “indemnity” promises have common elements. First, the price of the indemnity, i.e., the premium paid, is the risky behavior the promise successfully encourages users to undertake.

Google tries to accomplish this inducement in a very Googley manner with lots of “goo goo”:

At Google Cloud, we put your interests first. [Please…] This means that when you choose to work with us, we become partners [but Silicon Valley “partners” not real “partners”] on a journey of shared innovation, shared support, and shared fate. [This sentence probably qualifies as a straight up lie.] We are committed to helping you evolve as technology advances, drawing on our depth of experience to ensure you can use the latest and best technology, while keeping you safe and protected. [So we can keep exploiting your gullibility.] When it comes to the rapidly developing world of generative AI, this is imperative. [Ya don’t say?]

In the case of Google, the AI platform’s performance of its “indemnity” is only required if (1) a particular user is (2) “challenged” (whatever that means) on (3) copyright grounds (whatever that means) while using and paying for (4) a particular Google product, namely Duet AI, which is embedded across various Google products, currently Workspace, Google Cloud, and Vertex AI.
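Notice the structure: that is a chain of ANDs, and if any one condition fails, the “indemnity” never triggers. Here’s a toy sketch of that logic, where the condition names are my paraphrase of Google’s marketing language, not their actual contract terms:

```python
# Toy model of a conditional "indemnity": every condition must hold or
# nothing is owed. Condition names are my paraphrase, not contract language.

def indemnity_applies(is_paying_customer: bool,
                      was_challenged: bool,
                      claim_is_copyright: bool,
                      used_covered_product: bool) -> bool:
    """All four conditions must be true; a single miss means no coverage."""
    return (is_paying_customer
            and was_challenged
            and claim_is_copyright
            and used_covered_product)

# A free-tier user sued for copyright infringement over AI output:
print(indemnity_applies(False, True, True, True))   # False -- no coverage
```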

Google goes on to apply more limitations on your and their “shared fate”:

An important note here: you as a customer [that is, paying Google] also have a part to play [taking a page from the MLC’s “play your part” obfuscation technique]. For example, this indemnity only applies if you didn’t try to intentionally create or use generated output to infringe the rights of others, and similarly, are using existing and emerging tools, for example to cite sources to help use generated output responsibly.

Get it? Got it? Good. Those are certainly the kinds of limitations I think of when the biggest corporation in commercial history tells me our souls are intertwined.

As Kyle Wiggers wrote in TechCrunch:

In the midst of the uncertainty, you might think that generative AI vendors would stand behind their customers in the strongest terms — if for no other reason than to allay their fears of IP-related legal challenges.

But you’d be wrong.

From the language in some terms of service agreements — specifically the indemnity clauses, or the clauses that specify in which cases customers can expect to be reimbursed for damages from third-party claims — it’s clear that not every vendor’s willing to chance a court decision forcing them to rethink their approach to generative model training, or in the worst case their business model.

Or in the words of HAL 9000: “I’m sorry, Dave. I’m afraid I can’t do that.”

What could possibly go wrong.