Beyond Mainstream
‘Global Priority:’ AI Industry Leaders Warn of ‘Risk of Extinction’

by Beyond Mainstream News
May 30, 2023
in NEWS OUTLETS

The New York Times reports that more than 350 executives, researchers, and engineers from leading artificial intelligence companies have signed an open letter warning that the AI technology they are developing could pose an existential threat to humanity.

OpenAI logo seen on screen with ChatGPT website displayed on mobile in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)

Sam Altman, CEO of ChatGPT creator OpenAI (TechCrunch/Flickr)

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads the statement released by the Center for AI Safety, a nonprofit organization. The signatories include top executives from OpenAI, Google DeepMind, and Anthropic, among others.

Some of the well-known signatories include Sam Altman, CEO of OpenAI, Demis Hassabis, CEO of Google DeepMind, and Dario Amodei, CEO of Anthropic. Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their groundbreaking work on neural networks, have also signed the letter.

The open letter comes at a time when worries about the possible negative effects of artificial intelligence are on the rise. Recent developments in “large language models”—the kind of AI system used by ChatGPT and other chatbots—have stoked concerns that AI may soon be used at scale to disseminate false information and propaganda or that it may eliminate millions of white-collar jobs.

Altman, Hassabis, and Amodei recently met with Vice President Kamala Harris and President Joe Biden to discuss AI regulation. Following the meeting, Altman testified before a Senate subcommittee and called for government oversight, warning that the risks of advanced AI systems were serious enough to warrant intervention.

The open letter served as a “coming-out” for some industry leaders who had previously voiced concerns about the dangers of the technology they were building, but only in private, according to Dan Hendrycks, executive director of the Center for AI Safety. “There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Mr. Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

The letter also echoes a recent proposal by OpenAI executives for the responsible governance of powerful AI systems. They called for cooperation among the leading AI developers, more technical research into large language models, and the creation of an international AI safety organization akin to the International Atomic Energy Agency, which seeks to control the use of nuclear weapons.

“I think if this technology goes wrong, it can go quite wrong,” Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”

Read more at the New York Times here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan


Source: https://www.breitbart.com/tech/2023/05/30/global-priority-ai-industry-leaders-warn-of-risk-of-extinction/

© 2023 Beyond Mainstream - All rights reserved.
