Risk of ‘industrial capture’ looms over AI revolution

There’s a colossal shift going on in artificial intelligence — but it’s not the one some may think. While advanced language-generating systems and chatbots have dominated news headlines, private AI companies have quietly entrenched their power. Recent developments mean that a handful of individuals and corporations now control much of the resources and knowledge in the sector — and will ultimately shape its impact on our collective future.

The phenomenon, which AI experts refer to as “industrial capture”, was quantified in a paper published earlier this month in the journal Science by researchers from the Massachusetts Institute of Technology, who called on policymakers to pay closer attention. The paper’s data is increasingly pertinent: generative AI, the technology underlying the likes of ChatGPT, is being embedded into software used by billions of people, such as Microsoft Office, Google Docs and Gmail, and businesses from law firms to the media and educational institutions are being upended by its introduction.

The MIT research found that almost 70 per cent of AI PhDs went to work for companies in 2020, compared with 21 per cent in 2004. Similarly, the number of faculty hired into AI companies has increased eightfold since 2006, far outpacing the overall growth in computer science research faculty. “Many of the researchers we spoke to had abandoned certain research trajectories because they feel they cannot compete with industry — they simply don’t have the compute or the engineering talent,” said Nur Ahmed, author of the Science paper.

In particular, he said that academics were unable to build large language models such as GPT-4, a type of AI software that generates plausible and detailed text by predicting the next word in a sentence with high accuracy. The technique requires enormous amounts of data and computing power, which for the most part only large technology companies such as Google, Microsoft and Amazon can muster. Ahmed found that companies’ share of the biggest AI models had risen from 11 per cent in 2010 to 96 per cent in 2021.

A lack of access means researchers cannot replicate the models built in corporate labs, and therefore cannot easily probe or audit them for potential harms and biases.

The paper’s data also showed a significant disparity between public and private investment in AI technology. In 2021, non-defence US government agencies allocated $1.5bn to AI, and the European Commission planned to spend €1bn. The private sector, meanwhile, invested more than $340bn in AI that same year.

“There is such a concentration of wealth and investment in a very narrow set of techniques,” said Alex Hanna, director of research at the Distributed AI Research Institute and a former member of Google’s Ethical AI team.

She pointed to investment data from PitchBook showing that the majority of the money for generative AI over the past six years has gone to start-ups such as Anthropic, Inflection, Character.ai and Adept AI, and to larger efforts such as OpenAI, which are building their own large models. In 2019, OpenAI pivoted from a non-profit into a profit-making enterprise with a $1bn investment from Microsoft, citing a need “to rapidly increase our investments in compute and talent.”

The consequences of this shift are manifold. Public alternatives to corporate AI technology, such as models and data sets, are becoming increasingly scarce, and new applications are likely to be commercially driven rather than in the broader public interest, several researchers pointed out. Hanna, whose work is funded by non-profits, agreed. “If you want to work on specialised AI tasks like ensuring biodiversity, or climate science or agriculture, there is not a lot of appetite for that,” she said.

In a seminal 2021 paper, Meredith Whittaker, president of encrypted messaging app Signal, compared the situation to the US military’s dominance over scientific research during the cold war. “It is here, in these darker histories, that we confront the steep cost of capture — whether military or industrial,” she wrote. “And its perilous implications for academic freedom . . . capable of holding power to account.”

Researchers and policy experts concur on the diagnosis, but not on the solutions. Some, like Ahmed, believe governments should set up academia-only data centres to allow researchers to run experiments; others, like Whittaker, believe that would further concentrate power among those who own infrastructure such as cloud services. But all agree on the one thing policymakers simply cannot do: turn a blind eye.

madhumita.murgia@ft.com
