
Wednesday, October 13, 2021

The hype and reality of AI

Techno-optimists wax eloquent about the wonders of Artificial Intelligence (AI). AI is certainly the newest addition to the list of general purpose technologies, and its impact will be transformational in its own way. But for now, sifting the evidence from the hype, that transformation is nowhere on the horizon. I had done a reality check on AI here, drawing from a survey in The Economist.

Arguably one of the most salient and universal areas of its application is digital advertising, which itself now forms 48% of all advertising spending in the US. Within digital advertising, programmatic advertising, which uses AI-powered algorithms to buy and place ads, made up 89% of the spending at close to $200 bn in 2020. But it appears that programmatic advertising may be a giant fraud masquerading as nerdy innovation.

Programmatic ads, apart from driving traffic towards lowest-common-denominator content (as Frances Haugen has indicated in the case of Facebook's exploitation of rage and confirmation bias), also deceive ad-buyers with fake, algorithm-generated clicks and likes, which ad publishers have no incentive to limit. Scott Galloway points to the problems,

In a programmatic ad buy, the client — Nike or Nissan or Novartis, acting through an agency, the first of many middlemen — provides the ad itself and sets up criteria for who it wants to see it (e.g. 36- to 42-year-old Hispanic males with Crohn’s disease in the final year of their auto lease). Then a series of automated processes place many thousands of copies of the ad on many different websites, anywhere the algorithms believe the ad will be seen by people meeting the target profile. That’s lots of palms to be greased. Lots of opportunities for people to cheat, and enough complexity that this cheating is difficult to detect. Especially if the cheating only makes the system more money.

The basic cheat is the fake view. An ad is reported as being served to humans, when it was actually only “seen” by a bot, or by a person in a “click farm” tapping at dozens of screens, or by nothing at all. Networks of fake websites fool the algorithms into believing they are real publications. Measurements of the impact are all over the map, but we know fraud is pervasive. By one estimate, 88% of digital ad clicks are fake. Publishers and the middlemen who place ads with them tout all sorts of supposed fraud-detection technology, but industry experts say it’s largely worthless. Of course it is. These players benefit from inflated ad views — why would they suppress them? In 2018, Newsweek Media Group infected its own fraud-detection system with malware so it could charge advertisers for bot-generated traffic on some of its websites.
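
To make the mechanics concrete, here is a minimal sketch (in Python, purely illustrative; every field name and threshold below is invented) of the kind of naive fraud filter the quote dismisses as worthless. It checks only self-reported signals, which is exactly why a bot with a spoofed browser header, or a click-farm worker on a real phone, sails through, and why middlemen who profit from inflated views have little reason to tighten it.

```python
# Illustrative sketch only: a naive bot-traffic filter. All field names
# and thresholds are invented for the example, not taken from any real system.
from dataclasses import dataclass

@dataclass
class AdEvent:
    user_agent: str          # browser identification string sent with the request
    clicks_last_minute: int  # click rate observed for this client
    referrer: str            # page the click supposedly came from

KNOWN_BOT_MARKERS = ("bot", "spider", "headless")

def looks_fraudulent(event: AdEvent) -> bool:
    """Flag only the crudest signals: bot-like user agents, inhuman click
    rates, and clicks with no referring page. A spoofed header or a human
    click-farm worker passes all three checks, which is the point."""
    if any(marker in event.user_agent.lower() for marker in KNOWN_BOT_MARKERS):
        return True
    if event.clicks_last_minute > 30:  # arbitrary cutoff
        return True
    if not event.referrer:
        return True
    return False

# A click-farm worker on a real phone with a real browser sails through:
print(looks_fraudulent(AdEvent("Mozilla/5.0 (iPhone)", 5, "https://example-news.site")))  # False
```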

In fact, even targeted advertising, the very USP of digital advertising, may be a myth,

A study by MIT professor Catherine Tucker found that even targeting something as basic as gender was unsuccessful more than half the time (i.e., it was worse than random). A Nielsen analysis of a household-income-adjusted ad campaign found that only 25% of its ads were reaching the right households. As much as 65% of location-targeted ad spend is wasted. Plaintiffs in a class-action suit against Facebook have alleged its targeting algorithm’s “accuracy” was between 9% and 41%, and quoted internal Facebook emails describing the company’s targeting as “crap” and “abysmal.”
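
A quick back-of-the-envelope calculation shows the scale these percentages imply. The sketch below (illustrative only, since the studies measured different campaign types and their figures cannot strictly be combined) applies the Nielsen number to the roughly $200 bn of programmatic spend cited earlier.

```python
# Back-of-the-envelope arithmetic from the figures quoted above. Illustrative
# only: the studies measured different campaign types, so their percentages
# cannot strictly be applied to total programmatic spend like this.
programmatic_spend_2020 = 200e9  # ~$200 bn US programmatic spend, cited earlier

on_target_rate = 0.25   # Nielsen: only 25% of ads reached the right households
implied_waste = programmatic_spend_2020 * (1 - on_target_rate)
print(f"Implied mistargeted spend: ${implied_waste / 1e9:.0f} bn a year")  # $150 bn

location_waste_rate = 0.65  # "as much as 65% of location-targeted ad spend is wasted"
print(f"Cents wasted per location-targeted ad dollar: {location_waste_rate * 100:.0f}")
```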

This is a powerful indictment of the business of selling programmatic advertisements,

Digital ad fraud could be a $150 billion business by 2025, which would make it the largest criminal enterprise after the drug trade — and it fuels the same digital criminal underground responsible for industrial espionage, ransomware, and identity theft. We need externally imposed and enforced industry standards on transparency in advertising. Expecting these conflicted middlemen to self-regulate is (generously) naïve. And we should consider taxing algorithms that serve ads and content. We tax cigarettes and alcohol to suppress their use and fund policies to address some of their externalities. Programmatic ad buying, similar to other media buys, can be good/bad, and that’s a component of business.

The damaging effects of programmatic advertisements (and other technology solutions) may well be only a manifestation of the deep malaise that afflicts the corporate sector, where profit maximisation at all costs is the only objective. In her testimony to the US Congress, Frances Haugen highlighted how Facebook prioritised social interaction on its platforms by choosing to maximise online engagement, even at the cost of harming users through addiction, bullying, eating disorders and the like. She testified,

Facebook knows that content that elicits an extreme reaction from you is more likely to get a click, a comment or reshare. They prioritise content in your feed so you will give little hits of dopamine to your friends, and they will create more content... A pattern of behaviour that I saw on Facebook was that often problems were so understaffed [that] there was an implicit discouragement from having better detection systems.
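
The mechanism Haugen describes is easy to caricature in code. Facebook's actual ranking system (built around its "meaningful social interactions" metric) is proprietary; the sketch below is a generic engagement-weighted scorer with invented weights and field names, showing why content that provokes comments, reshares and anger outranks content that is merely liked.

```python
# Generic engagement-weighted feed scoring, illustrating the incentive Haugen
# describes. All weights and field names are invented; Facebook's real ranking
# is proprietary.
WEIGHTS = {"like": 1, "comment": 5, "reshare": 10, "angry": 5}

def engagement_score(post: dict) -> float:
    """Score a post purely by the interactions it provokes. Nothing here
    asks whether the content is true or harmful - only whether it engages."""
    return sum(weight * post.get(signal, 0) for signal, weight in WEIGHTS.items())

feed = [
    {"id": "measured-take", "like": 120, "comment": 4, "reshare": 2, "angry": 0},
    {"id": "outrage-bait", "like": 30, "comment": 60, "reshare": 45, "angry": 80},
]
# The outrage post tops the feed despite getting a quarter of the likes:
print(max(feed, key=engagement_score)["id"])  # outrage-bait
```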

Mark Zuckerberg once held that AI tools would be the “scalable way” to identify harmful content. But in a nod to the technology's deficiencies, Haugen has demanded that Facebook ramp up its human content moderation team. A recent Bloomberg article highlighted this,

Those tools do a good job at spotting nudity and terrorist-related content, but they still struggle to stop misinformation from propagating. The problem is that human language is constantly changing. Anti-vaccine campaigners use tricks like typing “va((ine” to avoid detection, while private gun-sellers post pictures of empty cases on Facebook Marketplace with a description to “PM me.” These fool the systems designed to stop rule-breaking content, and to make matters worse, the AI often recommends that content too. Little wonder that the roughly 15,000 content moderators hired to support Facebook’s algorithms are overworked. Last year a New York University Stern School of Business study recommended that Facebook double those workers to 30,000 to monitor posts properly if AI isn’t up to the task. Cathy O’Neil, author of Weapons of Math Destruction, has said point blank that Facebook’s AI “doesn’t work.” Zuckerberg, for his part, has told lawmakers that it’s difficult for AI to moderate posts because of the nuances of speech.
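
The “va((ine” example illustrates a classic evasion arms race. Real moderation systems use machine-learning classifiers rather than the toy string matching below, but the dynamic is the same. Here is a minimal sketch (the blocklist and substitution mappings are invented for illustration) of how an obfuscated spelling slips past an exact-match filter, and why every patch simply invites the next spelling.

```python
# Toy illustration of moderation evasion: not any platform's real system.
BLOCKLIST = {"vaccine"}

def naive_filter(text: str) -> bool:
    """Exact substring matching - the kind of check 'va((ine' slips past."""
    return any(word in text.lower() for word in BLOCKLIST)

def normalised_filter(text: str) -> bool:
    """Undo common character substitutions before matching. Every mapping
    added here simply pushes evaders to invent the next spelling."""
    substitutions = {"((": "cc", "(": "c", "0": "o", "@": "a", "1": "i"}
    cleaned = text.lower()
    for src, dst in substitutions.items():
        cleaned = cleaned.replace(src, dst)
    return any(word in cleaned for word in BLOCKLIST)

post = "why I will never take the va((ine"
print(naive_filter(post))       # False - the obfuscation evades the filter
print(normalised_filter(post))  # True - caught, until the spelling mutates again
```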

The same article has this take on the other high-profile area of AI application, self-driving cars, through the struggles of the technology's biggest proponent, Elon Musk,

In 2019 he told Tesla investors that he “felt very confident” there would be one million Model 3s on the streets as driverless robotaxis. His timeframe: 2020. Instead, Tesla customers currently have the privilege of paying $10,000 for special software that will, one day (or who knows?) deliver fully-autonomous driving capabilities. Till then, the cars can park, change lanes and drive onto the highway by themselves with the occasional serious mistake. Musk recently conceded in a tweet that generalized self-driving technology was “a hard problem.”

And this,

AI has also been falling short in healthcare, an area which has held some of the most promise for the technology. Earlier this year a study in Nature analyzed dozens of machine-learning models designed to detect signs of COVID-19 in X-rays and CT scans. It found that none could be used in a clinical setting due to various flaws. Another study published last month in the British Medical Journal found that 94% of AI systems that scanned for signs of breast cancer were less accurate than the analysis of a single radiologist. “There’s been a lot of hype that [AI scanning in radiology] is imminent, but the hype got ahead of the results somewhat,” says Sian Taylor-Phillips, a professor of population health at Warwick University who also ran the study.
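
For readers unfamiliar with how such comparisons are scored: screening studies typically compare readers on sensitivity (the share of actual cancers caught) and specificity (the share of healthy scans correctly cleared). The sketch below uses entirely invented numbers to show the form of the comparison; the BMJ study has the real figures.

```python
# What "less accurate than a single radiologist" means in practice: compare
# sensitivity and specificity on the same screening cases. All numbers below
# are invented for illustration only.
def sensitivity(tp: int, fn: int) -> float:
    """Share of actual cancers the reader catches."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Share of healthy scans correctly cleared (false alarms hurt too)."""
    return tn / (tn + fp)

# Hypothetical results on 1,000 screens containing 50 cancers:
radiologist = {"tp": 43, "fn": 7, "tn": 912, "fp": 38}
ai_model    = {"tp": 40, "fn": 10, "tn": 880, "fp": 70}

for name, r in [("radiologist", radiologist), ("AI model", ai_model)]:
    print(f"{name}: sensitivity={sensitivity(r['tp'], r['fn']):.0%}, "
          f"specificity={specificity(r['tn'], r['fp']):.0%}")
# radiologist: sensitivity=86%, specificity=96%
# AI model: sensitivity=80%, specificity=93%
```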

For sure, AI will get better with time. But for now, it remains more hype than substance.

On the Facebook issue, see this Wall Street Journal investigation series. Are Facebook and the Zuckerberg-Sandberg duo the most hated company and executive team in corporate America?
