ESG concerns are growing as artificial intelligence becomes more popular. What investors need to know

Wall Street has eagerly rallied around companies making notable strides in artificial intelligence. However, several investors warn that the increasingly widespread deployment of AI has opened a “Pandora’s box” of environmental, social and corporate governance, or ESG, concerns.

Generative AI models — ChatGPT being the most prominent example — have already been implemented in technical roles, such as financial analytics and drug development, as well as in more human-facing sectors such as customer service and marketing.

Amid the quick rise and implementation of AI across industries, some investors worry that the potential ESG downsides haven’t been adequately considered and safeguarded against. 

Investors have called for more transparency and data from companies on how they’re using and investing in AI technology. The lack of sufficient data from U.S. companies means the space is currently “the Wild West,” as described by Thomas Martin, a senior portfolio manager who runs ESG strategy at Globalt. 

“If you’re an ESG-focused investor, you’re dependent on the information that you get. The companies aren’t providing that yet, except the things that will make you imagine things. You can’t base an evaluation on something you’re imagining, or don’t know if it’s true or accurate, or when it’s coming,” Martin said. “There has to be information that’s out there that comes from the companies themselves and how they’re using [AI].”

Lack of transparency and safeguards

Investors and analysts have noted that ESG regulatory guidelines for AI are notably laxer in the U.S. than in the E.U. and in Asia. In South Korea, for example, the government’s post-Covid Digital New Deal initiative includes national AI ethics guidelines meant to promote responsibility in developing artificial intelligence. 

Researchers have also sought to quantify fairness and bias in AI models through various socio-ethnic parameters. For example, Stanford’s artificial intelligence index report scored AI models on fairness and bias. It found a “counterintuitive” correlation between the two: models that scored better on fairness metrics demonstrated stronger gender bias, and less gender-biased models were more toxic.

Ted Mortonson, managing director at Baird, warned that he sees AI in a position similar to where bitcoin was a few years ago, noting that the U.S. regulatory framework is “not set up for very extreme technology advances.” He added that Microsoft CEO Satya Nadella’s comment during the company’s earnings Q&A that Microsoft has “taken the approach that we are not waiting for regulation to show up” did not bode well.

“For my clients, that rubbed a lot of people the wrong way. Because this is a social issue,” he said. “I mean, if the [Federal Reserve] wants unemployment to go up and a weakening economy, generative AI is going to do it for them.”

Assessing ESG impacts

While there is no standardized methodology to quantify the exact ESG impacts of a given AI-related investment, there are certain steps investors can take. 

Morgan Stanley has created a three-pronged approach for assessing the impact of AI-driven ESG investments: 

  1. Assessing how an AI investment can reduce harm to our environment — such as by driving energy efficiencies, preserving biodiversity and reducing waste. 
  2. Examining how AI enhances people’s lives, such as by improving interactions between people and businesses. 
  3. Driving AI technology advancements — being a “key player or enabler across the AI ecosystem to make businesses and society better.” 

The firm characterizes the first two approaches as likely requiring a low to high level of effort from investors, while the final one likely requires a high level of engagement. 

Some investors believe AI itself can help investors monitor and track ESG efforts by companies. Sarah Hargreaves, head of sustainability for Commonwealth Financial Network, said AI could be particularly useful for investors to compare the environmental impacts of their investments alongside current and forthcoming regulatory standards.

“I’d also think that AI’s ability to manage and optimize relative ESG data would be particularly relevant for investors looking to delineate between dedicated ESG investments versus those subject to greenwashing,” she wrote in an email to CNBC.  

Baird’s Mortonson also noted that tech companies themselves could make AI-ESG analysis easier. Database and cloud companies such as ServiceNow and Snowflake are “incredibly well-positioned with next-generation AI” to release accurate and detailed ESG data, he said, given the significant amounts of data they store.

Employment obsolescence

As AI gains more capabilities and becomes more widely implemented, concerns over job displacement — and potential obsolescence — have emerged as some of the biggest social concerns. 

The Stanford report, which was published earlier this year, found that only 18% of Americans are more excited than concerned about AI technology — with the foremost concern being “loss of human jobs.” 

Additionally, a recent study by professors at Princeton, the University of Pennsylvania and New York University suggested that high-income, white-collar jobs may be the most exposed to changes from generative AI. 

It added that policy to help minimize any disruptions stemming from AI-related job losses “is particularly important,” as the effects of generative AI will disproportionately affect certain occupations and demographics. 

“From a social standpoint, it will impact employment, both blue-collar and white-collar employment, I would say materially in the next five to 10 years,” Mortonson said.

Globalt’s Martin sees such losses as part of the natural cycle of technological advancements.

“You can’t stop innovation anyway; it’s just human nature. But it frees us up to do more, with less, and to foster growth. And AI will do that,” said Martin.  

“Are some jobs gonna go away? Yeah, most likely. Will aspects of jobs get better? Absolutely. Will that mean that there will be new things to do? That even the people who are doing the old things can do and move into and migrate into? Absolutely.”

Mortonson was less sanguine. 

“The genie’s out of the bottle,” he said, noting that companies are likely to embrace AI because it can boost earnings. “You just don’t need as many people doing what they’re doing on a day-to-day basis. This next generation of AI [is] basically bypassing the human brain of what a human brain can do.”

“Technology’s moving so quickly, and I think this is the most disruptive from a social fabric standpoint. It’s actually pretty damn scary. And I’m an engineer by trade, and I’ve been doing this for 30 years,” he said. “You know, what I do for a living can probably be replaced in two to three years.”
