Substack

Monday, November 13, 2023

The AV AI choice - an illustration of why AI should be regulated

A few weeks back I blogged here and here, drawing on Acemoglu and Johnson's new book Power and Progress, that there is nothing inevitable about technology and progress. The trajectory of any technology or innovation is a political choice, and that choice dictates whether it leads to human civilisational progress, and the nature of that progress. 

Such choices cannot be left to the whims and fancies of a few wealthy individuals, or to the profit-maximising incentives of large technology companies, as is the default now. Instead, this default has to be replaced with a conscious public choice arrived at through vibrant political debate. 

Gillian Tett points to a teachable moment in the application of AI to autonomous vehicles (AVs). AI systems can be programmed either to follow preset rules or to mimic human behaviour. For AVs, this means navigating either by following preset traffic rules and transportation infrastructure, or by copying what other drivers do. 
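The distinction can be sketched as two toy driving policies, one hard-coding a traffic rule and one imitating logged human behaviour. Everything here (the function names, the speeds) is an invented illustration, not any vendor's actual system:

```python
# Hypothetical sketch: two ways an AV might choose its cruising speed.

def rule_based_speed(speed_limit: float) -> float:
    """Preset-rule policy: never exceed the posted limit."""
    return speed_limit

def imitation_speed(observed_human_speeds: list[float]) -> float:
    """Imitation policy: drive at the average speed of logged human
    drivers, thereby inheriting their habits, including speeding."""
    return sum(observed_human_speeds) / len(observed_human_speeds)

limit = 30.0                            # posted limit, mph
human_logs = [34.0, 36.0, 33.0, 31.0]   # most logged humans speed a little

print(rule_based_speed(limit))      # 30.0: follows the rule
print(imitation_speed(human_logs))  # 33.5: mimics the rule-breaking humans
```

The point of the toy example is that the imitation policy has no concept of the rule at all; whatever bias exists in human behaviour flows straight into the bot.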

Tett points to the first approach, taken by the likes of Alphabet-owned Waymo:

Waymo's vehicles have been roaming around Phoenix, Arizona for almost two years with an AI system that (roughly speaking) was developed using preset principles, such as the National Highway Traffic Safety Administration rules. “Unlike humans, the Waymo Driver is designed to follow applicable speed limits,” the company says, citing its recent research showing that human drivers break speeding rules half the time in San Francisco and Phoenix. The vehicles are also trained to stop at red lights — a point that delights the NHTSA, which recently revealed that nearly 4.4m human Americans jumped red lights in 2022, and more than 11,000 people were killed between 2008 and 2021 because someone ran the lights. Unsurprisingly, this seems to make Waymo’s cars much safer than humans... But what is really interesting is that Waymo officials suspect that the presence of rule-following AVs in Phoenix is encouraging human drivers to follow rules too — either because they are stuck behind an AV or being shamed by having a bot inadvertently remind them about the traffic rules. Peer pressure works — even with robots... a (limited) study by MIT shows that the presence of AVs on a road can potentially improve the behaviour of all drivers.

Tett then points to the alternative approach, taken by Elon Musk's Tesla:

However Elon Musk’s Tesla has taken a different tack. As Walter Isaacson’s biography of Musk notes, initially Musk tried to develop AI with preset rules. But then he embraced newer forms of generative or statistical AI (the approach used in ChatGPT). This “trains” AI systems how to drive not with preset code but by observing real human drivers; apparently 10m video clips from existing Tesla cars were used. Dhaval Shroff, a Tesla official, told Isaacson that the only videos used in this training were “from humans when they handled a situation well”. This means that Tesla employees were told to grade those 10m clips and only submit “good” driving examples for bot training — to train bots in good, not bad, behaviour. Maybe so. But there are reports that Tesla AVs are increasingly mimicking humans by, say, creeping across stop signs or traffic lights. Indeed, when Elon Musk live-streamed a journey he took in August in an AV, he had to intervene manually to stop it jumping a red light. The NHTSA is investigating.
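The curation step Isaacson describes, keeping only well-handled clips, amounts to filtering the training set before imitation learning. A minimal sketch, with an invented clip structure and grading threshold (a real pipeline involves human graders and video, not dictionaries):

```python
# Hypothetical sketch of curating driving clips before imitation training.
# The fields and the 0.7 threshold are illustrative assumptions.

clips = [
    {"id": 1, "grade": 0.9, "ran_red_light": False},
    {"id": 2, "grade": 0.3, "ran_red_light": True},   # bad example: excluded
    {"id": 3, "grade": 0.8, "ran_red_light": False},
]

def curate(clips, min_grade=0.7):
    """Keep only clips graded as 'good' driving for bot training."""
    return [c for c in clips
            if c["grade"] >= min_grade and not c["ran_red_light"]]

training_set = curate(clips)
print([c["id"] for c in training_set])  # [1, 3]
```

The reported stop-sign creeping suggests the difficulty: filtering removes only the bad behaviour the graders notice and label, while subtler human habits survive into the training set.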

As similar choices play out across sectors, most immediately in areas like social media, e-commerce, and financial markets, Tett writes about the choice facing humanity:

Should AI-enabled players in financial markets be programmed with preset, top-down rules? Or learn by mimicking the behaviour of humans who might “arbitrage” (ie bend) rules for profit? Who decides — and who has liability if it goes wrong?... Should this be our idealised vision of behaviour — say, a world where we all actually observe traffic rules? Or the land of “real” humans, where drivers creep through stop signs? And, crucially, who should decide?

Techno-optimists will claim that the rapid advances and transformative impact of digital technologies over the last three decades happened because innovators and markets were allowed to innovate and interact unconstrained by regulation, and that any regulation will stifle or even kill innovation and leave the world worse off (or, as Marc Andreessen writes, "Deaths that were preventable by the AI that was prevented from existing is a form of murder"!). 

There are several responses that refute these simplistic arguments, which form the core of the techno-optimist myth. The simplest is that it is the fanciful wish list of a group of rich and ideologically charged individuals who are disconnected, both physically and mentally, from the lives of the overwhelming majority of their fellow earthlings, and it has very little to do with historical or current real-world realities. 

It also conflates a purely technical and narrow conception of progress with an infinitely broader and more subjective conception of human development. As the philosopher John Gray has written, unlike in the material realms of science and technology, there is nothing linear about progress in the realms of ethics and politics.
