Thursday, January 10, 2019

Thoughts on Autonomous Vehicles

Autonomous vehicles are a hot topic these days, as auto makers rush to add more self-driving and vehicle communication technology to their offerings. The state of the market is somewhat reminiscent of the internet boom, when tech companies rushed to produce every type of service/product they could imagine, and any consideration of societal impact (and/or hope of thoughtful regulation) was left hopelessly behind. It also reminds me of the rush to create AI for everything, where business and government interests are running far ahead of any consideration of the societal effects, even though the people closest to the technology have serious concerns about the direction the development is going.

Let's get this out of the way first and foremost: Autonomous vehicles are going to kill people, and in ways which would have been avoided if humans were driving.

Why do I feel entirely confident making that claim, when most proponents of the technology advocate the opposite (ie: that autonomous vehicles will save lives, by avoiding accidents which human drivers would have been involved in)? To answer that, you have to think less conceptually about the eventual promise of the technology if everything goes to plan, and more pragmatically about how technology (and software in particular) actually gets developed.

I write software for a living. All software has bugs; the more complex the software, the more bugs it will have, statistically speaking. There are well-established methods of minimizing the number and impact of bugs, but they are expensive; NASA, for example, does a lot of things to make sure their software systems don't fail. Automobile manufacturers do none of these things; one need only look at the abysmal state of infotainment systems in modern vehicles, not to mention the security models of connected cars, to understand how fundamentally terrible automobile manufacturers are at producing quality software.

Why is this? Well, primarily, there's no significant incentive to do so. With a NASA program, for example, it's very costly and publicly embarrassing (and potentially deadly) to have a software system failure in a deployed module. As a result, NASA spends the time and money to get quality engineers, produce quality products, and engineer for reliability. In contrast, automobile manufacturers at worst pay comparatively small penalties via recalls for faulty systems; for most systems, they just let the consumer suffer with the issues. Consumers, by and large, do not demand quality software from automobile companies, and in the absence of any sort of oversight or mandate to produce it, manufacturers will always take the lowest-cost option. This is why most software in automobiles is garbage: only what's minimally sufficient to not appear egregiously broken.

This will end badly in the rush to produce autonomous driving systems. In the absence of any objective standard or test for quality, manufacturers will deploy systems with flaws, and sell them to consumers. When the inevitable errors and accidents happen, they will deflect blame, as they do in every other case where flaws in design and/or production cause injuries or deaths. Moreover, this will keep happening, and given how slowly governments have historically reacted to new technology, many people will probably die (inside and outside of the autonomous vehicles) while the technology is being beta tested on live roads. It's inevitable, given the trajectory of how this technology is being pushed forward and deployed.

So how would I fix it, in theory?

Well, primarily, we would need government oversight to accomplish anything; just asking companies to "do good" is utterly pointless. Given that, ideally the government would develop a test suite of simulated scenarios for autonomous cars to encounter, with criteria for acceptable outcomes (which would meet or exceed the outcomes an attentive and skilled human driver would produce). This should be a large suite, expanded every year with new scenarios, and not directly available to manufacturers (so as to minimize cheating by coding to the test). The overseeing agency would mandate a specification allowing an autonomous driving system to be evaluated against the test suite (ie: by standardizing the inputs and outputs, and making a pluggable test harness to evaluate systems, similar to how other compliance testing is done).
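
To make the "pluggable test harness" idea concrete, here's a minimal sketch in Python of what such an evaluation contract might look like. Every name here (SensorFrame, ControlCommand, AutonomousDrivingSystem, Scenario, and their methods) is hypothetical, invented purely for illustration; it is not any real standard or API.

# Hypothetical sketch of a regulator-defined evaluation contract.
# None of these names correspond to any real standard or API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class SensorFrame:
    """Standardized inputs: one timestep of simulated sensor data."""
    timestamp: float
    camera_images: List[bytes]   # raw image buffers
    lidar_points: List[tuple]    # (x, y, z, intensity)
    vehicle_speed: float         # m/s


@dataclass
class ControlCommand:
    """Standardized outputs: what the system asks the vehicle to do."""
    steering_angle: float        # radians, positive = left
    throttle: float              # 0.0 to 1.0
    brake: float                 # 0.0 to 1.0


class AutonomousDrivingSystem(ABC):
    """Contract a manufacturer's system implements to be evaluated as a black box."""
    @abstractmethod
    def step(self, frame: SensorFrame) -> ControlCommand: ...


class Scenario(ABC):
    """One simulated situation, with its own pass/fail outcome criteria."""
    name: str

    @abstractmethod
    def reset(self) -> None: ...
    @abstractmethod
    def done(self) -> bool: ...
    @abstractmethod
    def current_frame(self) -> SensorFrame: ...
    @abstractmethod
    def apply(self, command: ControlCommand) -> None: ...
    @abstractmethod
    def outcome_acceptable(self) -> bool: ...


def evaluate(system: AutonomousDrivingSystem, suite: List[Scenario]) -> Dict[str, bool]:
    """Run the black-box system against every scenario in the suite."""
    results = {}
    for scenario in suite:
        scenario.reset()
        while not scenario.done():
            command = system.step(scenario.current_frame())
            scenario.apply(command)
        # Pass/fail is judged against the scenario's outcome criteria
        # (meet or exceed what an attentive, skilled human driver would do).
        results[scenario.name] = scenario.outcome_acceptable()
    return results

The point of the standardized contract is that the regulator never needs to see inside a manufacturer's system; it only needs the system to accept the standard inputs and emit the standard outputs so it can be plugged into the harness.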

Then, the government should mandate that, for approval for inclusion in a vehicle certified for sale in the US, any autonomous driving program must pass the then-current driving test suite with 100% success. That is, if you want to sell a system which controls the vehicle in any manner, it must meet or exceed the performance of a skilled human driver in every single scenario included in the test suite. That would be the uniform standard for every vehicle, every system, every year, no exceptions; if you don't pass, your system cannot go into a vehicle for sale in the US.
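
Using the hypothetical harness sketched above, the certification rule itself is trivial to express: every scenario in the current suite must pass, with no partial credit.

def certify(system: AutonomousDrivingSystem, suite: List[Scenario]) -> bool:
    """Hypothetical certification gate: approval only on a perfect score."""
    results = evaluate(system, suite)
    failures = [name for name, passed in results.items() if not passed]
    if failures:
        # A single failed scenario is enough to block the system
        # from going into any vehicle sold in the US.
        print(f"Certification denied: {len(failures)} scenario(s) failed.")
        return False
    return True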

The government could then release the test scenario suite each year after the certification period (ie: release the old suite once a new, expanded suite is in place), so manufacturers could see which tests their systems failed, and incorporate changes. I would also encourage manufacturers to submit tests from their own independent testing for incorporation into the general test suite; this would allow manufacturers to attempt to gain advantages over competitors (ie: by incorporating tests which they knew their systems would pass), to the benefit of everyone.
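
Purely as a sketch of that workflow (again with hypothetical names, not any real process): publish the outgoing suite once its replacement is in place, and fold vetted manufacturer submissions into the new, expanded suite.

def roll_suite_forward(current_suite: List[Scenario],
                       submitted_scenarios: List[Scenario],
                       public_archive: List[List[Scenario]]) -> List[Scenario]:
    """Hypothetical yearly rotation: the outgoing suite becomes public,
    and the next suite keeps every existing scenario plus new submissions."""
    public_archive.append(current_suite)     # manufacturers can now study it
    next_suite = list(current_suite)         # never drop existing scenarios
    next_suite.extend(submitted_scenarios)   # add vetted manufacturer tests
    return next_suite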

Only when we get to that point will we have any hope of autonomous driving systems actually being "better" than human drivers. Until then, the problems will continue, and the government, through its inaction, will ultimately be culpable for the results, imho.
