The writer is founder of Sifted, an FT-backed site about European start-ups
Technology, they say, is about turning the magical into the mundane. A decade ago, digital assistants such as Siri, Alexa and Cortana seemed like astonishing inventions. Nowadays, Microsoft’s chief executive, Satya Nadella, dismisses them as “dumb as a rock”. How quickly will today’s much-hyped generative AI models become similarly humdrum?
On Tuesday, the San Francisco-based research company OpenAI released GPT-4, its latest content-generation model, demonstrating nifty new features, such as helping to calculate a tax return. OpenAI’s launch of its uncannily plausible — if unnervingly flawed — ChatGPT chatbot in November caused a sensation. But in several significant ways, GPT-4 is even more impressive.
The new model is more accurate and powerful and has greater reasoning capabilities. ChatGPT struggles to answer the question: what’s the name of the daughter of Laura’s mother? But, as the philosopher Luciano Floridi found when experimenting, the new GPT-4 model gives the correct answer (Laura, in case you’re wondering) when told the question is a logic puzzle.
Moreover, GPT-4 is a multimodal model, able to take both text and images as input. At the launch event, Greg Brockman, OpenAI’s co-founder, quickly turned a photograph of a handwritten note into a functioning website containing some awful dad jokes. “Why don’t scientists trust atoms?” GPT-4 asked. “Because they make up everything.”
The applications of these generative AI models are seemingly limitless, which explains why venture capital investors are pouring money into the sector. These models are also seeping into all kinds of existing digital services. Microsoft, a big investor in OpenAI, has embedded GPT-4 in its Bing search engine. The payments company Stripe is using it to help detect online fraud. Iceland is even employing GPT-4 to improve local language chatbots. That is surely worth it just to preserve the lovely Icelandic word for computer: tölva, meaning number prophetess.
Big companies, such as Microsoft and Google, will be the first to deploy these systems at scale. But some start-ups see opportunities in arming the smaller battalions. Josh Browder, who runs the robolawyer company DoNotPay, which contests parking tickets, says GPT-4 will be a powerful new tool to help users counter automated systems. His company is already working on embedding it into an app to issue one-click lawsuits against nuisance robocallers. The technology could also be used to challenge medical bills or cancel subscriptions. “My goal is to give power back to the people,” Browder tells me.
Alongside the positive uses of generative AI, however, there are many less visible abuses. Humans are susceptible to the so-called Eliza effect: the tendency to falsely attribute human thoughts and emotions to a computer system. This can be an effective way to manipulate people, warns Margaret Mitchell, a researcher at the AI company Hugging Face.
Machine learning systems, which can synthesise voices and generate false personalised emails, have already contributed to a surge in imposter scams in the US. Last year, the Federal Trade Commission recorded 36,000 reports of people being swindled by criminals pretending to be friends or family. Such systems can also be used to generate disinformation. It is perhaps telling that China’s regulators have instructed their tech companies not to offer ChatGPT services, seemingly for fear of losing control over information flows.
Much remains mysterious about OpenAI’s models. The company accepts that GPT-4 exhibits societal biases and hallucinates facts. But the company says it spent six months stress-testing GPT-4 for safety and has introduced guardrails through a process known as reinforcement learning from human feedback. “It’s not perfect,” Brockman said at the launch. “But neither are you.”
Furious rows over the training of these models seem inevitable. The AI researcher David Rozado has been periodically testing ChatGPT’s “bias” by prompting it to answer political-orientation questions. Initially, ChatGPT fell in the left-libertarian quadrant, but it has since drifted towards the neutral centre as the model has been tweaked. Yet, in an online post, Rozado argues that it will be hard to eliminate the pervasive societal biases and blind spots reflected on the internet. “Political biases in state of the art AI systems are not going away,” he concludes.
Elon Musk, a co-founder of OpenAI who later quit the company, has repeatedly criticised “woke AI” and is exploring whether to launch a less restrictive model, according to The Information. “What we need is TruthGPT,” he tweeted. Such rows over bias are only a foretaste of far bigger fights to come.