ESG concerns grow with AI popularity. What investors need to know

Wall Street has eagerly catered to companies that have made remarkable advances in artificial intelligence. However, several investors warn that the widespread use of AI has opened a Pandora’s box of environmental, social and governance (ESG) concerns.

Generative AI models – ChatGPT being the most prominent example – have already been implemented in technical fields like financial analysis and drug development, as well as more human-centric fields like customer service and marketing.

With the rapid rise and implementation of AI across these industries, some investors worry that the potential ESG downsides have not been adequately addressed or safeguarded against.

Investors are demanding more transparency and data from companies about how they are using and investing in the new technology. The lack of sufficient disclosure from US companies makes this space “the Wild West,” as Thomas Martin, a senior portfolio manager who leads ESG strategy at Globalt, described it.

“If you are an ESG-focused investor, you depend on the disclosures you receive. The companies aren’t making those available yet, other than things that get your imagination going. You can’t base an evaluation on that – on something you imagine, or don’t know whether it’s true or accurate, or when it’s coming,” Martin said. “There needs to be information available that comes from the companies themselves on how they are using it [AI].”

Lack of transparency and safety precautions

Investors and analysts have noted that ESG regulatory guidelines for AI are significantly more lax in the US than in the European Union and Asia. In South Korea, for example, the government’s post-Covid Digital New Deal initiative includes national guidelines meant to promote ethics and responsibility in artificial intelligence development.

Researchers have also attempted to quantify fairness and bias in AI models using various socio-ethnic parameters. The Stanford University Artificial Intelligence Index report, for example, assesses bias across AI models and found a “counterintuitive” relationship between fairness and bias: models that scored better on fairness metrics exhibited worse gender bias, while less gender-biased models were more toxic.

Technology is moving so fast and I think that’s what’s most disruptive from a societal perspective. It’s actually pretty damn scary. And I’m an engineer by trade and have been doing it for 30 years. … You know, what I do for a living can probably be replaced in two to three years.

Ted Mortonson

Managing Director, Baird

Ted Mortonson, a managing director at Baird, warned that he sees AI in a situation similar to Bitcoin a few years ago, noting that the US regulatory framework “is not designed for very extreme advances in technology.” He added that comments from Microsoft CEO Satya Nadella during the company’s earnings announcement – that the company “has taken the approach that we’re not waiting for regulation to come up” – did not bode well.

“That has not gone over well with a lot of my clients. It’s a societal problem,” he said. “I mean, if the [Federal Reserve] wants unemployment to rise and the economy to falter, generative AI will do it for them.”

ESG impact assessment

Although there is no standardized way to quantify the precise ESG impact of any given AI-related investment, investors can make certain considerations.

Morgan Stanley has developed a three-pronged approach to ESG-driven AI investing:

  1. Assess how an AI investment can reduce damage to our environment – for example by increasing energy efficiency, preserving biodiversity and reducing waste.
  2. Examine how AI improves people’s lives, for example by improving interactions between people and companies.
  3. Consider how the investment drives advances in AI technology – “being a key player or enabler across the AI ecosystem to make business and society better.”

The firm characterizes the first two approaches as requiring anywhere from low to high investor effort, noting that the final one will likely demand a high level of engagement.
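
For illustration only, such a screen could be as simple as tallying which of the three prongs a holding satisfies. The short Python sketch below shows that tallying logic; the class, field names, tickers and two-prong cutoff are invented for this article and do not reflect Morgan Stanley’s actual methodology.

    # Hypothetical sketch of a three-prong AI-ESG tally; not Morgan Stanley's methodology.
    from dataclasses import dataclass

    @dataclass
    class AIESGAssessment:
        ticker: str
        reduces_environmental_harm: bool   # prong 1: efficiency, biodiversity, less waste
        improves_human_outcomes: bool      # prong 2: better interactions between people and companies
        enables_ai_ecosystem: bool         # prong 3: key player or enabler across the AI ecosystem

    def prong_count(holding: AIESGAssessment) -> int:
        """Count how many of the three prongs a holding satisfies."""
        return sum([holding.reduces_environmental_harm,
                    holding.improves_human_outcomes,
                    holding.enables_ai_ecosystem])

    def screen(holdings: list[AIESGAssessment], minimum: int = 2) -> list[str]:
        """Return tickers that satisfy at least `minimum` prongs (cutoff is arbitrary)."""
        return [h.ticker for h in holdings if prong_count(h) >= minimum]

    # Example with made-up holdings, not recommendations:
    sample = [AIESGAssessment("AAA", True, True, False),
              AIESGAssessment("BBB", False, False, True)]
    print(screen(sample))  # -> ['AAA']

In practice, judging the third prong would require far richer inputs than a yes/no flag – which is consistent with Morgan Stanley’s note that it demands the highest level of engagement.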

Some investors believe AI itself can help them monitor and track companies’ ESG efforts. Sarah Hargreaves, head of sustainability at Commonwealth Financial Network, said AI could be particularly useful for investors comparing the environmental impact of their investments against current and upcoming regulatory standards.

“I also think that AI’s ability to manage and optimize relevant ESG data would be particularly useful for investors looking to differentiate between dedicated ESG investments and those subject to greenwashing,” she wrote in an email to CNBC.
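
As a purely illustrative sketch of the kind of automated comparison Hargreaves describes, the Python snippet below checks reported emissions intensity against a single benchmark figure; the threshold, field names and fund labels are hypothetical and do not correspond to any actual regulatory standard.

    # Hypothetical greenwashing flag: compare reported figures with a benchmark.
    # The 150.0 threshold and the sample disclosures are invented placeholders.
    EMISSIONS_INTENSITY_LIMIT = 150.0  # e.g. tCO2e per $1M revenue (hypothetical)

    def flag_possible_greenwashing(reported_intensity: dict[str, float]) -> list[str]:
        """Return labels of ESG-marketed funds whose reported emissions intensity
        exceeds the benchmark and may warrant a closer look."""
        return [name for name, intensity in reported_intensity.items()
                if intensity > EMISSIONS_INTENSITY_LIMIT]

    disclosures = {"FUND_A": 90.0, "FUND_B": 210.0}  # hypothetical disclosures
    print(flag_possible_greenwashing(disclosures))   # -> ['FUND_B']

Any real-world version would, of course, need to draw on standardized company disclosures – which, as Martin notes above, are still largely missing.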

Baird’s Mortonson also noted that technology companies themselves could make AI-ESG analysis easier. Database and cloud-based companies such as ServiceNow and Snowflake, he said, are “incredibly well positioned with next-gen AI” to publish accurate and detailed ESG data, given the significant amounts of data they store.

Employment obsolescence

As AI becomes more powerful and more widely used, concerns about job displacement – and potentially obsolescence – have become one of society’s top issues.

The Stanford report, released earlier this year, found that just 18% of Americans are more excited than worried about AI technology – with the top concern being “losing human jobs.”

Additionally, a recent study by professors from Princeton University, the University of Pennsylvania and New York University suggests that high-income office jobs may be most affected by changes brought about by generative AI.

The study added that developing policies to minimize disruption from AI-related job losses is “particularly important” because the impact of generative AI would disproportionately affect certain professions and populations.

“From a social perspective, it’s going to have a significant impact on employment over the next five to 10 years, both blue-collar and white-collar,” Mortonson said.

Globalt’s Martin sees such losses as part of the natural cycle of technological progress.

“You can’t stop innovation anyway; that is human nature. But it gives us the freedom to do more with less and fuel growth. And AI will do that,” said Martin.

“Will some jobs be lost? Yes, most likely. Will aspects of the workplace get better? Certainly. Does that mean there will be new things to do – that even the people doing the old things can move over and migrate there? Absolutely.”

Mortonson was less confident.

“The genie is out of the bottle,” he said, noting that companies are likely to embrace AI because it can boost their profits. “You just don’t need as many people doing what they do every day. This next generation of AI [is] basically bypassing the human brain and what a human brain can do.”

“Technology is moving so fast and I think that’s what’s most disruptive from a societal perspective. It’s actually pretty damn scary. And I’m an engineer by trade and I’ve been doing it for 30 years,” he said. “You know, what I do for a living can probably be replaced in two to three years.”