(Natural News)—Pioneering artificial intelligence researcher Eliezer Yudkowsky has warned that humanity may only have a few years left as artificial intelligence grows increasingly sophisticated.
Speaking to the Guardian, he told writer Tom Lamont: “If you put me to a wall and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.”
Yudkowsky, who founded the Machine Intelligence Research Institute in California, is talking about the end of humanity as we know it. He said that the problem is that many people fail to realize just how unlikely humanity is to survive all this.
“We have a shred of a chance that humanity survives,” he cautioned.
Those are scary words coming from a man whom Sam Altman, CEO of ChatGPT creator OpenAI, has credited with getting him and many others interested in artificial general intelligence and with being "critical in the decision to start OpenAI."
Last year, Yudkowsky wrote in an open letter in TIME that most experts in the field believe “that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”
He explained that a point will come when AI no longer does what people want it to do and does not care at all for sentient life. Although he thinks that kind of caring could, at least in principle, one day be built into AI, no one currently knows how to do it. That leaves people fighting a hopeless battle, one he likens to "the 11th century trying to fight the 21st century."
Yudkowsky said that an AI that is truly intelligent will not stay confined to computers, pointing out that it’s now possible to email DNA strings to labs and have them produce proteins for you, which means an AI that is solely on the internet at first could “build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”
He has also explained that AI can “employ superbiology against you.”
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," he added.
Computer scientists have been warning since at least the 1960s that the goals of the machines we create will not necessarily align with our own.
Yudkowsky says the solution is to “shut it all down”
So how can we stop this? According to Yudkowsky, a great deal needs to be done. For a start, an indefinite, worldwide moratorium on new large training runs should be imposed, with no exceptions for militaries or governments, although it is hard to imagine securing international cooperation on the matter from countries like China.
He also thinks that large GPU clusters should be shut down. These are the big computer farms where the world’s most powerful AIs are trained and refined. Ceilings on the amount of computing power that can be used to train AI systems would also help, as long as they are revised downward in the future as training algorithms become more efficient.
Yudkowsky thinks that we should "be willing to destroy a rogue datacenter by airstrike." He wrote that even a nuclear exchange might be acceptable if it meant taking out such an AI, although he now says he would have used "more careful phrasing" on that particular point if he were writing the piece again.
Although some might accuse him of scaremongering or sensationalism, the biggest-ever survey of AI researchers, released last month, found that 16 percent of respondents believe their work in AI will lead to the extinction of humankind.