
AI Doomers Face Criticism as Fear-Mongering Influences AI Development
In Brief
• AI Doomers, warning of potential dangers from artificial intelligence, have attracted attention with their calls for research pauses and interviews on platforms like 60 Minutes.
• Prominent AI researchers have pushed back against the Doomers’ influence, calling the idea of licenses for AI work “massively, colossally dumb” and criticizing those who would hinder progress with “baseless” fears.
• The calls to action driven by such fear-mongering could produce government-oversight proposals that hinder AI development for smaller companies and open-source developers while benefiting Big Tech.
• There is suspicion that some large tech companies are using fear tactics to solidify their market positions at the expense of open-source AI’s potential.
• It is important to approach the Doomers’ claims skeptically; regulation that favors Big Tech could lead to the demise of open-source AI.
The rise of AI Doomers warning of the potential dangers of artificial intelligence has caught the attention of many. While concerns about AI’s impact are valid, actors with ulterior motives have co-opted the Doomers’ message.
Following the release of ChatGPT, a wave of critics emerged, proclaiming that AI would soon bring about our demise. A computer capable of conversing in natural language was awe-inspiring, but it also stoked fears that such a system could use its intelligence to wreak havoc on the world.
These concerns gained traction through calls for research pauses and interviews on platforms like 60 Minutes, amplifying existential worries. Even former President Barack Obama voiced concerns about AI autonomously hacking the financial system, or worse. In response, President Biden recently issued an executive order imposing restrictions on AI development.
This prompted several prominent AI researchers to push back against the Doomers’ influence over the narrative and the future of the field.
Andrew Ng, co-founder of Google Brain, called the idea of requiring licenses for AI work over fears of AI-driven destruction “massively, colossally dumb.”
Yann LeCun, a machine-learning pioneer, criticized Max Tegmark, an author of the research-pause open letter, accusing him of risking “catastrophe” by hindering AI progress with baseless concerns.
A new study also indicates that large language models struggle to generalize beyond their training data, suggesting that the doom and gloom may be exaggerated.
Arvind Narayanan, a computer science professor at Princeton, emphasized that if “emergence” only unlocks capabilities already present in the pre-training data, the hype will eventually fade.
While worrying about AI safety is not without merit, the path to prominence taken by these Doomers has raised eyebrows among insiders.
These concerns have been amplified by companies that stand to gain from them, including OpenAI, Google DeepMind, and Anthropic, all of which signed a statement equating AI extinction risk with nuclear war and pandemics. It may not be a deliberate attempt to stifle competition, but it is hard to believe they aren’t benefiting from it.
The alarmism surrounding AI compels politicians to take action, leading to proposals for strict government oversight that could hinder AI development for smaller companies.
While Big Tech companies have the resources to comply with such regulations, smaller AI startups and open-source developers may struggle to do so.
Garry Tan, CEO of the startup accelerator Y Combinator, suggests that AI Doomers might unintentionally aid Big Tech firms: by advocating heavy, fear-based regulation, they would entrench these companies’ positions in the market.
Ng goes a step further, suggesting that some large tech companies would rather not compete with open-source AI, so they promote fears of AI-caused human extinction. This fear tactic helps them maintain their dominance in the market.
Interestingly, the worries of AI Doomers seem unsubstantiated.
Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), has warned of an AI that is smarter than us and indifferent to us, one that could lead to our destruction. Yet he admits he does not know how or why an AI would choose to do so; perhaps out of self-preservation, or to prevent the creation of competing superintelligences.
In light of recent events involving figures like Sam Bankman-Fried, who professed to be saving the world while enriching himself, it is crucial to approach those who claim to improve society with skepticism. As the Doomer narrative persists, it threatens to follow a familiar pattern.
Big Tech companies already hold a significant advantage in the AI race, thanks in part to cloud-computing deals that supply preferred startups in exchange for equity. Regulation that further favors these companies could stifle open-source AI, which is vital for healthy competition, and push it toward obsolescence.
That’s likely why there is so much talk about AI destroying the world and why we must approach these claims cautiously.