The safety practices of major artificial intelligence companies, such as Anthropic, OpenAI, xAI and Meta, fall "far short of emerging global standards," according to a new edition of the Future of Life Institute's AI Safety Index released on Wednesday.

The institute said the safety evaluation, conducted by an independent panel of experts, found that while the companies were busy racing to develop superintelligence, none had a robust strategy for controlling such advanced systems.

The study comes amid heightened public concern about the societal impact of smarter-than-human systems capable of reasoning, after several cases of suicide and self-harm were tied to AI chatbots.

"Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards," said Max Tegmark, MIT professor and Future of Life president.

The AI race also shows no signs of slowing, with major tech companies committing hundreds of billions of dollars to upgrading and expanding their machine learning efforts.

The Future of Life Institute is a non-profit organisation that has raised concerns about the risks intelligent machines pose to humanity. Founded in 2014, it was supported early on by Tesla CEO Elon Musk.

In October, a group including scientists Geoffrey Hinton and Yoshua Bengio called for a ban on developing superintelligent artificial intelligence until the public demands it and science paves a safe way forward.

A Google DeepMind spokesperson said the company will "continue to innovate on safety and governance at pace with capabilities" as its models become more advanced, while xAI said "Legacy media lies", in what seemed to be an automated response.

Anthropic, OpenAI, Meta, Z.ai, DeepSeek, and Alibaba Cloud did not immediately respond to requests for comment on the study.
