LOS ANGELES, Feb. 16 (Xinhua) -- Significant losses in the U.S. technology sector this month have sparked an international debate among regulators and experts over the potential risks of artificial intelligence (AI) fostering groupthink in financial markets.
As of Friday, Microsoft shares had fallen about 17 percent year-to-date, wiping roughly 613 billion U.S. dollars off the company's market value. Amazon had shed around 13.85 percent so far this year, erasing about 343 billion dollars in market value and leaving the company valued at roughly 2.13 trillion dollars.
According to data reported by Investing.com, the S&P 500 software and services index lost approximately 1 trillion dollars in market value between Jan. 28 and early February.
While some investors questioned whether heavy spending on AI would generate sufficient returns to justify lofty valuations, other market analyses linked the latest sell-off to the unveiling of a specialized legal AI tool by AI firm Anthropic on Feb. 3.
Some market commentators described the release as a catalyst for a sudden exit from software and services stocks, amid broader concerns that traders' reliance on a narrow set of AI models could foster groupthink in the financial sector.
Groupthink refers to a phenomenon in which members of a group prioritize consensus over independent judgment. In financial markets, such behavior can lead to herding, where investors adopt similar strategies simultaneously, amplifying volatility.
Some observers warned that if most analysts rely on the same few AI models to interpret data, the diversity of opinion necessary for healthy markets could diminish. Bloomberg Opinion columnist Parmy Olson described this risk as a "market monoculture" in a recent article.
"If market participants are all drawing from the same models trained on largely the same historic data, it's probable they'll not only miss the black swan events that have never happened before, but reach similar conclusions and investment strategies," Olson wrote.
The Financial Stability Board (FSB), an international body that monitors the global financial system, warned in a November 2024 report that homogenization in training data and model architecture poses a growing vulnerability.
The FSB stated that widespread use of common AI models and shared data sources could increase correlations in trading and pricing. Such uniformity, it noted, could amplify market stress and exacerbate liquidity crunches during crises.
In December 2025, the European Systemic Risk Board released a report identifying model uniformity as a key factor that could heighten financial instability, explaining that when many firms deploy similar AI models, correlated exposures may arise, leaving institutions vulnerable to the same shocks simultaneously.
Academic research has also lent support to concerns about declining diversity of thought. A 2024 study published in the journal Science Advances found that while AI can help individuals produce more polished work, it may reduce collective novelty.
In a story published by the Yale School of Management last July, Jeffrey Sonnenfeld, president of the Yale Chief Executive Leadership Institute, warned that chatbots' vulnerability to manipulation, susceptibility to groupthink and inability to recognize basic facts should alarm all users, given the growing reliance on them as core research tools.
However, some research suggests AI could also contribute to market stability. A September research paper from the Federal Reserve Board examined how AI influences herd behavior, comparing AI agents with human professionals.
The study found that AI models made rational decisions between 61 and 97 percent of the time, depending on the specific model and experimental setup. Human rationality rates in similar tests ranged between 46 and 51 percent.
The paper suggested that AI may help curb the emotional and irrational behavior that often fuels asset bubbles. However, researchers also noted that AI agents could be induced to herd optimally if explicitly instructed to prioritize profit maximization.
Industry experts remain cautious as the financial world enters what some describe as the "Agentic Era," characterized by autonomous AI assistants.
Richard Kramer of Arete Research Services was quoted by Olson as saying that while AI makes analysts more productive, it is unlikely to resolve entrenched incentives to follow consensus views. Experts have argued that healthy financial markets require a diversity of opinion to ensure accurate pricing and prevent systemic panic.
"It should make good analysts more productive, but it's not going to replace the 50 analysts all vying to 'congratulate management,' 'interpret' the call, or end the conflict of interest that skew their ratings to nearly all 'buys,'" Kramer said. ■