The Guardian: AI firms warned to calculate threat of superintelligence or risk it escaping human control

The Urgent Call for AI Firms to Assess the Risks of Superintelligence

In a recent discussion spotlighted by The Guardian, prominent researchers have issued a stark warning to artificial intelligence (AI) companies: it is imperative to thoroughly evaluate the risks posed by artificial superintelligence (ASI) to ensure it does not escape human control.

Max Tegmark, a leading voice in the field, emphasized that companies developing superintelligent systems must proactively estimate the likelihood of losing control over such powerful technologies. “It is essential for these firms to compute what I refer to as the Compton constant, essentially the probability that we lose control,” said Tegmark. He argued that it is not enough for firms to say they feel comfortable with their creations; they must produce a measurable percentage.
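One way to make the idea concrete (an illustrative formalization, not taken from the article): the Compton constant can be read as a conditional probability,

C = P(loss of human control | a superintelligent system is built and deployed),

so that a firm's safety case would have to report an explicit numerical estimate of C, along with the assumptions behind it, rather than a qualitative reassurance.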

Moreover, Tegmark argued that if various companies come together to establish a consensus on the Compton constant, it could pave the way for a unified global strategy focused on ensuring the safety of AI technologies. This collaborative effort would foster the political momentum required to create and implement comprehensive safety regulations for AI.

As advancements in AI continue to accelerate, addressing these concerns is not merely an academic exercise; it is a critical step in safeguarding humanity’s future. The responsibility now rests with AI creators to engage in this essential dialogue, paving the way for ethical and controlled development.
