Autonomous vehicle IP protection — when HAL is driving

Each day, a fleet of lidar-guided, all-electric Chevy Bolts exits a downtown garage to roam San Francisco, attempting to blend in with human-driven vehicles and other AVs. The autonomous vehicles are actually only “semi-autonomous”: each carries a human crew whose mission is to correct the car’s erroneous driving decisions on the fly, until enough data has been gathered for the software to at least approximate the reflexive intuition of an experienced driver.

Like a child who develops a sense of right and wrong through praise and scolding, present-day machine learning requires similar binary experiential training: each decision the software makes is either accepted or corrected. The accumulated training data then becomes a valuable basis for the guidance systems that will render vehicles truly autonomous.
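As a rough sketch of what that training loop reduces to (the record format and field names below are illustrative assumptions, not any developer’s actual pipeline), each moment the human crew overrides the software can be logged as a labeled example:

```python
# Illustrative sketch only: turning human-override events into labeled training data.
# The CSV schema and field names are invented for this example.
import csv
from dataclasses import dataclass

@dataclass
class DrivingExample:
    sensor_snapshot: str   # path to the lidar/camera frame captured at decision time
    planned_action: str    # what the software intended to do (e.g., "proceed")
    corrected_action: str  # what the human attendant actually did (e.g., "brake")
    label: int             # 1 = software's choice accepted, 0 = overridden ("scolded")

def load_examples(log_path: str) -> list[DrivingExample]:
    """Read a drive log and keep the binary praise/scold signal for each decision."""
    examples = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            overridden = row["corrected_action"] != row["planned_action"]
            examples.append(DrivingExample(
                sensor_snapshot=row["sensor_snapshot"],
                planned_action=row["planned_action"],
                corrected_action=row["corrected_action"],
                label=0 if overridden else 1,
            ))
    return examples
```

It is this accumulated, labeled record, multiplied across millions of driven miles, that becomes the asset worth protecting.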

But how can that very valuable intellectual property be protected? Historically, startups could obtain venture capital by trotting out a portfolio of issued patents. Since the U.S. Supreme Court’s 2014 Alice decision on subject-matter patentability, however, it has become problematic, to say the least, to obtain patents that are essentially based on algorithms.

The two-step “Alice” test requires examination of (1) whether the claims are directed to a patent-eligible concept or to a patent-ineligible abstract idea; and (2) if directed to an abstract idea, whether the claims contain an “inventive concept” sufficient to transform the abstract idea into a patent-eligible application. Under the second step, “well-understood, routine, and conventional” activities or claim elements cannot supply an inventive concept. Even if the subject-matter barrier can be overcome, the evolved AI may no longer bear any resemblance to the code as originally expressed, raising inventorship questions. If a macaque monkey cannot hold a copyright, can a machine hold a patent?

As a result, most companies presently guard their machine learning data as trade secrets. But trade secret protection has its drawbacks, one of which falls on society at large. Unlike technology taught in a patent’s disclosure, which can allow a new entrant to catch up (provided it licenses or designs around the patent), the AV data set of an unwilling licensor cannot be obtained without committing a trade secret violation or duplicating the considerable miles driven.

Keeping AV data secret also creates a “black box” in which consumers and authorities are unable to fairly and completely evaluate the proficiency and safety of the AI systems guiding the vehicles. At most, consumers will likely have to rely on publicly compiled data regarding car crashes and other reported incidents, which does little to assess the underlying AI or even to isolate the AI as the cause (as opposed to other factors). As it is, AV developers’ “disengagement reports” — those tallying incidents where the human attendant must take over for the AI — vary widely, depending on how each developer chooses to interpret the reporting requirement. Without comparable data, consumers are often left with nothing more than anecdotal evidence as to which AV system is the safest or most advanced.
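A back-of-the-envelope normalization shows why raw disengagement counts are hard to compare (the figures below are hypothetical, and the sketch assumes both developers even count a “disengagement” the same way, which current reports do not guarantee):

```python
# Hypothetical figures: a raw disengagement count says little without the miles
# driven and a shared definition of what counts as a "disengagement."
reports = {
    "Developer A": {"disengagements": 120, "miles": 500_000},  # counts only safety-critical takeovers
    "Developer B": {"disengagements": 45,  "miles": 30_000},   # counts every manual takeover
}

for name, r in reports.items():
    rate = r["disengagements"] / r["miles"] * 1_000
    print(f"{name}: {rate:.2f} disengagements per 1,000 miles")

# Developer B looks worse per mile despite the smaller absolute count, and even
# that comparison assumes both interpreted the reporting rule identically.
```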

Relying on trade secret protection is also problematic for the owner of the data, largely because of the requirement that, to be protectable, the trade secrets must be kept confidential. This can lead to a “need-to-know” access environment, hampering development and breeding paranoia. Physical security could mean barring employees from carrying data on portable devices or working from home, and instead requiring that work and storage take place on servers isolated from external connectivity. It could also mean metal detectors, screening devices, and exit procedures to, quite literally, keep data from walking out the door. Encryption could add yet another layer of protection, though possibly with a productivity trade-off. And none of this fully guards against a malicious employee who abuses legitimate access privileges.
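As a minimal sketch of one such layer, assuming a Python environment with the third-party cryptography package, a data set can at least be encrypted at rest; key management, access control, and auditing, which the sketch omits, are where the need-to-know burden and the productivity trade-off actually live.

```python
# Minimal sketch: symmetric encryption of a data file at rest using the
# `cryptography` package (pip install cryptography). Key storage, rotation,
# access control, and auditing are intentionally omitted.
from cryptography.fernet import Fernet

def encrypt_file(plain_path: str, enc_path: str, key: bytes) -> None:
    with open(plain_path, "rb") as f:
        token = Fernet(key).encrypt(f.read())
    with open(enc_path, "wb") as f:
        f.write(token)

def decrypt_file(enc_path: str, key: bytes) -> bytes:
    with open(enc_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice, held in a hardware module or key-management service
    encrypt_file("drive_log.csv", "drive_log.enc", key)
    print(decrypt_file("drive_log.enc", key)[:80])
```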

And what of the disgruntled employee who, instead of taking an unauthorized copy to another employer, virally transmits it over social media? Once out in public, the secrets lose their value, as present law generally does not permit actions against a company that comes across trade secrets through no fault of its own. Imagine losing your company’s valuation because your once-proprietary AV data set is now essentially in the public domain.

On the other hand, one might question whether the “best” AI should be kept from the public. A promise of AVs is that AI guidance and inter-vehicle communications can enhance traffic safety and optimize traffic flow. Confining the safest, highest-functioning AI to select manufacturers would mean less-than-optimal overall safety or efficiency, as “smarter” cars would need to deal with “less smart” vehicles (and human-driven ones!). At the very least, without any technical standards regulating the interaction between various AVs, each unique system will need to communicate with, and predict the behaviors of, potentially hundreds of different AIs.
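Purely by way of illustration, and without pointing to any existing or proposed standard, the kind of vendor-neutral message such a standard might define could look something like this:

```python
# Purely hypothetical sketch of a shared V2V "intent" message that an
# interoperability standard might define; all field names are invented.
import json
from dataclasses import dataclass, asdict

@dataclass
class IntentMessage:
    vehicle_id: str                 # anonymized identifier
    position: tuple[float, float]   # latitude, longitude
    speed_mps: float                # current speed in meters per second
    intended_maneuver: str          # e.g., "lane_change_left", "hard_brake", "yield"
    horizon_s: float                # how far ahead the stated intent applies, in seconds

def encode(msg: IntentMessage) -> str:
    """Serialize to a vendor-neutral wire format any AV stack could parse."""
    return json.dumps(asdict(msg))

def decode(payload: str) -> IntentMessage:
    d = json.loads(payload)
    return IntentMessage(
        vehicle_id=d["vehicle_id"],
        position=tuple(d["position"]),
        speed_mps=d["speed_mps"],
        intended_maneuver=d["intended_maneuver"],
        horizon_s=d["horizon_s"],
    )
```

The particular fields matter less than the point: absent some agreed format, every developer must model and predict the behavior of every other developer’s system on its own.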

All of this is to suggest that, as present-day human-driven vehicles evolve into the Nikola 9000, our IP laws and protections must likewise evolve. Just as hybrid vehicles were an early solution to “range anxiety,” perhaps some hybrid IP concept could be developed to satisfy the needs for autonomous vehicle IP protection while continuing to “promote the progress of science and useful arts” under the Constitution.