
Beware the ‘bad-ish’ actor when it comes to AI

During Google’s big I/O showcase event for developers — a protracted, glistening flex of the company’s new AI muscles — one of the keynote speakers dwelt on the risks posed by “bad actors”.

The phrase, in the context of an otherwise self-consciously optimistic event, struck a balance between real and abstract threat. There was enough menace in the term “bad actor” to reassure the audience that Google’s human brains had duly considered the dangers of AI expanding very rapidly beyond the point of realistic control, but not enough specificity about the threats to dull the party spirit.

The mainstreaming of generative AI may indeed place ever more powerful weapons of mischief in the hands of scam artists, disinformation merchants and other unambiguously bad actors. We are right to fear this, and Google was right to break off, as it did, and acknowledge the tension that now exists at a company of this importance between what it can release on to the market and what it should.

But Google’s tone made it look likely, at least for now, that the company will proceed on the basis that ordinary people can be trusted with quite a lot of generative AI. It may be underestimating, though, the mundane villainy of bad-ish actors: those who do not actively seek out the dark potential of technology, but will definitely use it if it is just sitting there, ready to be exploited.

The problem was that as each of Google’s new AI offerings flashed up on the screens, the risks felt less abstract and more real. The fact that Google, Microsoft and other tech titans are making a consumer and enterprise battleground of AI means that commercial competition has now in effect been instructed and freed to do what it does best: put as much as it legally can in our hands as quickly as possible. This means that the tools required to be a casual (but also very efficient) bad-ish actor will be ever more available.

There were two moments that stood out. In one, Google’s executives demonstrated AI-empowered translation software the company is currently testing and which — by its own admission — looks a lot like a user-friendly, highly powerful generator of deepfake footage. The Google division head admitted as much, describing the need for guardrails, watermarking and other safety measures that may, in reality, prove hard to enforce.

Video of a speaker talking in one language is played; their words are transcribed, translated and rendered back by the AI as audio in another language. The tone and lilt of the translated voice are adjusted to mimic the speaker’s more closely, and the software then re-dubs that over the original video. Spookily — though not yet perfectly — the AI then manipulates the footage so that the new words are synchronised with the speaker’s lips. Remarkable stuff, but it is also not terribly hard to imagine how the power to make people appear, very quickly, to be saying something they never said could be useful to both our bad and bad-ish actors.

In another demo, Google executives showed off the company’s AI-powered Magic Editor — essentially a very quick and easy-to-use Photoshop-type tool that looks like it will allow even the not especially techie to alter photos and, by implication, change the history of an event or encounter with a couple of jabs of the finger.

The company’s scenario was inevitably benign, and began with a photo of a tourist in front of a waterfall. Happy memories but — oops! — a prominent handbag strap she would rather erase. Jab! It instantly vanished. She wished the weather had been better on that trip. Jab! The sky was no longer granite-clouded but gloriously blue. If only she had been closer to the waterfall and with her arm at a different angle. Jab! She had moved.

Nobody could begrudge this notional tourist the right to rewrite reality a little. But the uses to which a bad-ish actor could put this cast it all in a more dubious light. Not everyone will immediately see how they can benefit from these instant powers of retrospective manipulation of the visual record, but just having that ability in your pocket will make a very large number of people airbrush-curious.

Since the launch of ChatGPT, Google and others have had little choice but to get involved in this early, experimental, three-way clash between humanity, AI and trillion-dollar companies. Google’s guiding principle in this, its chief executive Sundar Pichai said last week, would be a “bold and responsible” stance. That is fine as far as it goes, but it feels like a placeholder until the world gets a proper sense of how many bad-ish actors are out there.

leo.lewis@ft.com
