Emergency meeting at the White House with the heads of Google and Microsoft on artificial intelligence

The US administration invited the heads of Google, Microsoft, OpenAI and Anthropic to a meeting at the White House on Thursday to discuss the risks of artificial intelligence, at a time when regulating this critical technology still falls largely on the companies themselves. “Our goal is to have a candid discussion about the current and short-term risks we foresee in AI developments,” the text of the invitation reads.

The administration also wants the meeting to discuss “steps that will reduce those risks, and other ways in which we can work together to make sure that the American people benefit from advances in artificial intelligence while protecting them from risks.”

Big concerns

According to the US presidency, Satya Nadella (Microsoft), Sundar Pichai (Google), Sam Altman (OpenAI) and Dario Amodei (Anthropic) have confirmed their participation in the meeting, which will be led by Vice President Kamala Harris and joined by a number of administration officials. Artificial intelligence raises significant concerns about how it uses and exploits personal data, and many countries have already expressed their desire to establish rules for tools similar to ChatGPT.

ChatGPT, launched by OpenAI last November, impressed users with its ability to answer difficult questions clearly and accurately, to write songs or code, and even to pass exams. Shortly after its launch, ChatGPT was banned in several schools and universities around the world over concerns that it could be used as a tool for cheating on exams, and a number of companies advised their employees not to use it.

What are the risks of artificial intelligence?

Smarter than us

Our human brains can solve equations and drive cars thanks to a knack for organizing and storing information and devising solutions to thorny problems. The roughly 86 billion neurons in our skulls, and the connections between them, make this possible. By contrast, the technology behind ChatGPT relies on between 500 billion and 1 trillion connections, which puts it far ahead of humans in raw scale. Artificial intelligence models can therefore hold far more information than a human and can learn at great speed.

Spreading misinformation

One disturbing possibility is that some groups may exploit this software for their own ends. Election-related disinformation spread through AI chatbots, for example, could become the next generation of disinformation campaigns.

Impact on the human workforce

OpenAI, the developer of ChatGPT, estimates that around 80% of workers in the US could see at least some of their tasks affected by the new software. Artificial intelligence may therefore pose a serious threat to workers by driving up unemployment rates in many countries.

How do we stop it?

What is not clear is how anyone could prevent a power from using AI technology to dominate its neighbors or its own citizens. Some observers therefore believe it is necessary to draw up a global agreement along the lines of the 1997 Chemical Weapons Convention, which could be a good first step towards establishing international rules against weaponized artificial intelligence.
