
Another expert sounds the alarm over the advance of artificial intelligence: "It will kill us all"

Eliezer Yudkowsky believes that the letter signed by Elon Musk and hundreds of other leading figures "fell short" in its demands.

Last week, Elon Musk, Steve Wozniak and Yuval Harari, among hundreds of other experts, signed a petition to pause the development and training of new artificial intelligences for six months. Now a renowned researcher warns that life on Earth could be at risk from advances in this technology.

This extreme position belongs to Eliezer Yudkowsky, an AI expert and head of the Machine Intelligence Research Institute. He is also convinced that the signatories of the Future of Life Institute (FLI) letter fell short.

This expert has been researching the development of general AI and the dangers it entails since 2001, and he is considered one of the founders of this field of research.

To set out his position, he has just published an op-ed in Time in which he argues that the FLI signatories "ask for too little" to address the looming threat and that, therefore, more extreme measures will have to be taken.




The fears that ChatGPT arouses. Photo REUTERS

"This 6-month moratorium would be better than nothing. I respect everyone who has stepped up, [but] I refrained from signing because I think the letter underestimates the seriousness of the situation and asks for too little to resolve it," said the expert.

The letter's request consists of a pause until there are safe procedures, new regulatory authorities, oversight of developments, techniques that help distinguish the real from the artificial, and institutions capable of coping with the "dramatic economic and political disruption that AI will cause."

Yudkowsky indicates that humanity is in uncharted territory whose limits are not yet known.

"We cannot calculate in advance what will happen and when, and at present it seems entirely possible that a research laboratory could cross critical lines without knowing it."

With an apocalyptic tinge, he anticipates that "we are not on track to be significantly more prepared in the short term either. If we continue like this, we will all die, including children who did not choose this and did nothing wrong."

Yudkowsky also says no one has a clue how to determine whether AI systems are aware of themselves, because "we do not know how they think and develop their responses."

AI: Possible solutions

Eliezer Yudkowsky with an urgent request.

Yudkowsky's proposal is brief in its formulation but forceful in its scope: the only way out is to completely stop the training of future AIs.

His position is more than clear: "We are not prepared to survive a super AI." Facing this threat will require joint planning.

Yudkowsky recalls that it took more than 60 years from the beginning of this discipline to reach this point, and it could take another 30 years to achieve the required preparation.

In the face of such an AI, and in the current situation, the fight would be useless. It would be "as if the 11th century were trying to fight the 21st century." That is why he proposes that the moratorium on new complex AIs be indefinite and worldwide, with no exceptions for governments or armies.

We must shut down all large GPU clusters, and track down and destroy all GPUs already sold. Again, no exceptions for governments and armies.
