AI Researcher Calls for Bombing AI Centers (No, You Can't Make This Up)

Eliezer Yudkowsky, an AI researcher, has called on world governments to halt all AI development and even to bomb rogue data centers, because he views AI proliferation like a WMD.


Before it is too late. Like a re-run of a Sarah Connor Terminator movie.

'Guys like you thought it up.'

In the second Terminator movie, Sarah Connor interrogates her protector Terminator to discover where Cyberdyne Systems obtained the futuristic chips, recovered from the first Terminator in the original movie.

The plot continues with a desperate attempt to stop Cyberdyne Systems, which has reverse-engineered these chips from the future. Unfortunately, they are unable to re-plot their futures and only manage to shelter in a bunker.

A former AI researcher (he quit), Eliezer Yudkowsky, is recommending the same plan. He argues that once AGI (Artificial General Intelligence) comes on stream, it will literally replace everyone. He is highly concerned that the subtle shift from 'helpful AI' to 'superAI' will not even be noticed, and by then it will already be too late. He and his partner do not expect their daughter to make it to adulthood if AGI takes a foothold.

"Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined)," he wrote. "Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries."

Of course, this can never realistically happen. It would mean that the nations of the world would have to set aside their regional and national conflicts and sign a global peace initiative. There have been 286 wars since WWII according to the Uppsala Institute, and we sit on the edge of WWIII. So it's coming, and we can only best prepare for it.

Ironically, whoever controls these future AIs will seek supremacy in any future conflict, and will have it. Imagine an AI able to make decisions hundreds of times faster and more efficiently than your most brilliant programmers. In other words, not only is future conflict an inevitability, but future conflicts (with AI 'benefits') are also certain. No nation-state will be able to resist the temptation of having its own subcontracted AGI manage and run things faster, better, and smarter than even its most brilliant human minds, doing in seconds what we take days and weeks to do.
