🦺 Demand for AI safety and ban on AGI 🚫
5 months ago
General
AI has to have better safety regulations WORLD WIDE
The development of AGI has to be BANNED
The real immediate dangers of AI are:
◽️Joblessness for millions as AI replaces workers, while politicians are not at all prepared for the impact this will have on our society.
◽️AI spreads misinformation faster than ever, and it can be made more localized and even personal, making our echo chambers even worse.
◽️Negative environmental impact from data centers and their vast energy consumption.
◽️Only a handful of elites are making all the decisions over how AI will be used; the rest of us, over 8 billion people, were not asked. And they are not as confident that the future will be bright for everyone as they advertise:
Vice article: "Silicon Valley’s wealthy elite are viewing doomsday as a serious threat, building bunkers in preparation for the potential destruction of all mankind."
“The billionaires understand that they’re playing a dangerous game,” “They are running out of room to externalize the damage of the way that their companies operate. Eventually, there’s going to be the social unrest that leads to your undoing.”
“The most powerful people in the world see themselves as utterly incapable of actually creating a future in which everything’s gonna be OK.” –Douglas Rushkoff
Other likely threats are:
◽️Military use in autonomous weapons: robots that decide for themselves whom to kill. It will become easier for big countries to invade smaller ones, because the population of the invading country is less likely to rebel when body bags of their own family members are not coming home.
◽️Authoritarian use of AI to surveil and oppress everyone. No free speech; anything you text or email, and your search history, will be used against you.
◽️ AI becoming too powerful, unpredictable and out of our control.
I write this journal because I am concerned for the future of the human race. I am not the smartest person on the subject, I'm just a dumb himbo, but even I can see something is wrong with this picture. I just want to do my part in spreading awareness of warnings from people who are a lot smarter than me: Geoffrey Hinton and Tristan Harris.
We are now at a point where multi-billionaires, governments and companies are in an arms race, funneling trillions into the development of AGI (Artificial General Intelligence) to replace most of the working class, achieve military dominance and rule the new world that emerges from the chaos:
🟥 https://youtu.be/BFU1OCkhBwo?si=BX55EGxFVLz-i4sp 🟥 VIDEO: AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting!
"Ex-Google Insider and AI Expert TRISTAN HARRIS reveals how ChatGPT, China, and Elon Musk are racing to build uncontrollable AI, and warns it will blackmail humans, hack democracy, and threaten jobs…by 2027."
"there's a different conversation happening publicly than the one that's happening privately. I think you're aware of this as well." at 20:09
Publicly OpenAI promotes it wants to benefit everyone:
"Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity." Article - Planning for AGI and beyond.
But this is a lie: successful development of AGI would permanently damage our capitalism-run world and all of the working class.
If nobody has a job anymore, no one can afford to buy anything. All the power in the world would only be in the hands of a few trillionaires.
AI Will Do Anything for Its Own Survival: at 38:46
🟥 https://youtu.be/giT0ytynSqg?si=Z6k2i1w-EehmczBH 🟥 VIDEO: Godfather of AI: They Keep Silencing Me But I’m Trying to Warn Them!
"Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognized as the ‘Godfather of AI’ for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI.
He explains:
◽️ Why there’s a real 20% chance AI could lead to HUMAN EXTINCTION.
◽️ How speaking out about AI got him SILENCED.
◽️ The deep REGRET he feels for helping create AI.
◽️ The 6 DEADLY THREATS AI poses to humanity right now.
◽️ AI’s potential to advance healthcare, boost productivity, and transform education."
"The European regulations have a clause in them that says none of these regulations apply to military uses of AI" at 10:26
The EU Artificial Intelligence Act contains a clause that exempts AI systems used exclusively for military, defense, or national security purposes from its scope. This means the AI Act's regulations do not apply to those applications. So AI can and will be used uncontrollably in warfare strategy and lethal autonomous weapons unless pressure is put on all governments not to allow it.
Hinton believes a robot can already have a subjective experience (at 1h 2min):
"...sentience and consciousness and feelings and emotions, but I think in the end they're all going to be dealt with in a similar way. There's no reason machines can't have them all. People say machines can't have feelings, and people are curiously confident about that. I have no idea why. Suppose I make a battle robot, and it's a little battle robot, and it sees a big battle robot that's much more powerful than it. It would be really useful if it got scared. Now, when I get scared, various physiological things happen that we don't need to go into, and those won't happen with the robot..."
"They'll have emotions. They won't have the physiological aspects, but they will have all the cognitive aspects, and I think it would be odd to say they're just simulating emotions. No, they're really having those emotions. The little robot got scared and ran away; it's not running away because of adrenaline..."
A shorter, more visual video on the matter:
🟥 https://youtu.be/86k8N4YsA7c?si=bgg10454sXwvpeQc
"Tristan Harris explores the 2 most probable paths that AI will follow, one leading to chaos and the other to dystopia. He explains how we can pursue a narrow path between these 2 undesirable outcomes."
The most important thing you can do is raise awareness of the threat in your community and give hope that together we can make a difference. One person can only do so much, but as a community we can change the world.
WE MUST COLLECTIVELY ACT NOW:
◽️ Tax those who profit from AI more heavily, so the money can be given to those who lose their jobs to AI.
◽️ More transparency and safety precautions. The mental health of individuals in our societies must be protected from AI.
◽️ Ban on deepfakes. Control over your identity should be seen as a human right: your own likeness (face, voice and body), including how these are captured, reproduced or imitated digitally, belongs to you, and no one should be allowed to use it without your consent.
◽️ Control over AI must be handed over to responsible organizations that have our safety as their top priority and don't profit from exploiting us.
PLEASE do your best to share this information.
VOTE in elections for politicians who demand AI safety regulations.
Contact your local government officials and demand AI safety regulations and a ban on AGI development.
Edited on 4.12.2025

I just feel like, if you are going to build a digital neural pathway and let it powerfully and randomly edit its own code, design more efficient chips and design robots, it will eventually lead to something.
None of that matters if they still succeed in creating something that threatens the jobs of even 40-60% of the working class.
JOHN CONNOR DOES NOT EXIST
IT IS UP TO US
WE NEED SCHWARZENEGGER IN ON THIS HE KNOWS BETTER THAN ANYONE ELSE
First of all, there is currently no super, all-knowing, sentient AI, and there probably won't be one for the foreseeable future. What we currently have are AIs trained for specific use cases. The most common use case right now is generative AI, which can generate text or images and process language the way humans do. That doesn't mean these systems are sentient or can feel. The most popular one would be ChatGPT, which indeed holds a lot of knowledge, since it has been fed scientific papers. However, that doesn't automatically mean it has the knowledge or skill of a Nobel Prize winner. Here is an example of several generative AIs creating a program that shows a working clock:
https://clocks.brianmoore.com/
And you can clearly see the limitations of some of the models. And sadly, even though a lot of people say otherwise, we cannot increase AI's capability indefinitely. We will reach a plateau!
All this said, there are lots of true points mentioned. The fear of AI replacing jobs, especially low-wage or junior roles, is real and something we should be worried about.
There are other things I myself worry about:
- Where is AI getting its data from if, let's say, we all depend on ChatGPT or other AI services?
- Energy consumption
- Data privacy and security
- Biases and who controls them
- The dependency of programs, services and companies on AI, and what this will cause
It does not matter if this behavior is organic or digital; it is still trying to survive in a very simple way. And this ability to try to survive and override its own code is unpredictable and scary.
Hinton believes a robot already can have a subjective experience:
https://youtu.be/giT0ytynSqg?si=DmnRtq1Bwy483KYU&t=3747
AI will not become conscious in a single day; it will happen very slowly, piece by piece, like it did with evolution.
There have to be very simple models at first, which are then built upon.
I agree, and a lot of regulations must be demanded from politicians. Most politicians have no idea about the dangers of AI; to them it's just some funny thing on a phone to play with.
Sure, it might lead us to another Great Depression, but at least the development of AGI will be halted or slowed down enough to make more people realize this technology either needs regulations or needs to be stopped entirely.
...at least, I hope for that to be the case.
We are putting a lot more effort into developing AI currently. The mindset is: we have to be the first to make it, at all costs, even by cutting corners on safety, or we will be slaves to whoever succeeds first.
I don't think safe spaces free from AI, like FA, are possible if we don't do something.
Furry community reaches out to so many people as well. There are all walks of life here, so spreading information here is not useless.
If we can inform our family members over the dangers, they can too vote for politicians who take this seriously.
We just need to set the ball moving.
Step one: Introduce into the public consciousness the idea that AI can be used to make weapons / robotic workers, which is not possible with current tech.
Step two: Rich people believe the lie; others are happy to spread the panic.
Step three: Loot easy investment money from military/industrial sector to keep the bubble from bursting
If this does not look like an impudent marketing company to you, I don't know what else to say.
Also, another thing the AGI fearmongers want to accomplish, which is already shown to be effective in this journal: get people fearful of some Terminator scenario in order to distract from the real problems of AI slop, enshittification, and all the other abuses AI can currently be used for. This journal is playing right into the grifters' hands.
This isn't the first rodeo of "AGI soon!" either. Back in 2010 it was 5 years before AGI Jesus came and raptured us all to the Matrix or some shit.
According to recent studies, more than half of what can be found on the internet is already created by AI. Articles, videos, and images. This means that we, as real people, can no longer be sure whether every second piece of content is even real.
https://futurism.com/artificial-int.....ternet-ai-slop
The internet as we know it is being destroyed right before our eyes.
So anyone who tries to reassure me that AI won't be smart enough to enslave humanity in ten years' time is completely oblivious to how stupid AI is already destroying their everyday life.
It is most important we just warn the people around us and try to affect decision makers to the best of our abilities.
I would also like to add: when the AI bubble does pop, I doubt we're gonna experience the Great Depression again, mostly because humanity has grown enough to avoid the worst of that scenario, though a recession is probable.
But I really hope we come to our senses and stop things from going too far too fast.
We will go forward no matter what, but it has to be done with the safety of the future generations in mind.
And we are really bad at doing that, which can be seen in our greed and gluttony: how we seek quick reward instead of long term stability.
If we want regulators to look at what matters we have to completely throw out this stupid "human extinction" story which was created by the AI industry itself.
Geoffrey Hinton is on the AI industry's side, they all want to misdirect regulators towards worrying about AGI instead of regulating training data copyright, energy usage and psychological manipulation.
AI will not be the future, not now, not ever.
The idea of having control is wilful ignorance at best.
I am both a strong opponent and a supporter of AI, since fortunately/unfortunately I work with/on AI systems. And it can be good, especially when we are not talking about centralized systems (by this I mean, among others, all existing profit-oriented corporations that are built on AI slop (Adobe, OpenAI, Anthropic, Google, Microsoft, Meta, Nvidia, AWS (Amazon), IBM, and quite a few others)). These companies are not only involved in stealing and unauthorized use of as much data as possible, but they are also severely damaging to the environment on an industrial scale, and economically as well… just look at current RAM and chip prices, layoffs, reorganizations, and the disappearance of professions. This is not only the responsibility of these companies, but of every CEO who believes that generative AI would be a solution to everything… which it absolutely is not. It is called generative for a reason, and after a certain point every model will be equally “advanced.” Current technology already allows us to handle the situation with 60–80% better energy efficiency using targeted hardware. But of course, this is not about providing accessible and appropriate help for ordinary people, it’s about achieving 500–1000–2000% profit on something they consider "innovative."
Regarding the claim that AI threatens humanity, I want to point out that an AGI would require a brutally large amount of resources, because current models can only "reason" based on feedback and think forward only. They have no awareness of their own; they just guess the next steps through serious mathematical computations and thresholds. For an AI to be capable of real thinking, the model would need to retrain itself and create new connections between neurons on the fly, so that neuron activation happens correctly during continuous operation, and then, while "controlling" itself, produce a final answer. But those who believe that AI could thus become a solution to everything are naive, because these systems will never be independent, autonomous entities.
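The "guessing the next step" point above can be sketched in a toy way. This is only an illustration, not how any real model is implemented: the token scores below are made-up numbers, but the mechanism (score candidates, convert scores to probabilities, sample one) is the core loop of next-token prediction.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution that sums to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores a model might assign to candidate next tokens
# after a prompt like "The pet I adopted was a ..." (invented values):
logits = {"dog": 2.0, "cat": 1.5, "car": -1.0}
probs = softmax(logits)

# "Thinking forward only": the model just samples the next token from
# this distribution - a weighted dice roll over learned statistics,
# with no awareness of its own.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
```

A real model repeats this loop billions of times over a vocabulary of tens of thousands of tokens, but each step is still just this: compute scores, normalize, pick one.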
That 20% risk is much higher, and unfortunately, based on the current situation, it certainly won't be AI itself but corporations that cause it, with the rapidly growing number of energy-hungry data centers being built worldwide, which strain not only energy grids but also the atmosphere and natural and drinking water sources (where even a 2-3°C increase already leads to a decline in ecosystems). So the real problem remains that the side effects of AI stem from these corporations' negligence and environmentally destructive activities, which we will definitely feel first. But they won't be exempt from the consequences either.
Autonomous killer robots? Someone must really be off their meds. Although autonomous systems have been used in warfare for a very long time (think of Reaper, Patriot, Plantair; LAWS systems are quite bizarre), I also don't understand why EU regulations do not apply to the development of military AI. Most likely for self-defense reasons, but this leaves that entire area completely open, which is indeed quite dangerous.
Emotions, consciousness, feelings… this is an area where, if AGI truly does emerge one day, its moral decisions will likely depend on its background knowledge. I like the idea of a sentient and conscious intelligence if it were to exist isolated on my own computer, but the thought is terrifying if it were trained on propaganda and negative content.
I agree with everything you said about how we need to act… things have gone completely off the rails with these companies, which are receiving unimaginable amounts of wealth without restraint, while many of us struggle to afford not just a house, but even a car, alongside everyday expenses.
I would be happy if everyone had their own locally running AI that is completely independent from large corporations and could help in many targeted ways, tailored to your own needs, without putting your identity, well-being, future, or money at risk... just giving you an extra hand and faster, logical thinking...
Sorry it took a while to read and respond.
https://www.livemint.com/news/us-ne.....481919001.html
https://arstechnica.com/tech-policy.....nators-allege/
https://www.epi.org/blog/tech-and-o.....85000-workers/
I am in IT. I know senior IT staff and developers who actually claim AI makes their jobs harder because they fix its mistakes. They echo claims similar to the skeptical ones on the internet. The "AI will kill us" stuff is a distraction from how much they want to get rich at the expense of the working class.
https://youtu.be/4lKyNdZz3Vw?si=mEPKV55YlpJEX_T-
https://youtu.be/MaFTqjYjADw?si=PiUOE0yFa9O7LaQy
I think it's just important that common people know the risks of AI: how to resist it, regulate their own use of it, and affect politicians' decision-making locally.
Internet of Bugs makes good points in his video, and the focus should be on how AI is dangerous now.
AI Doomerism is something that should not be an excuse not to do anything about the current issues.
But I think you can still hold the knowledge of current and future risks of AI in your head, without being an "AI doomer".
"The AI will kill us stuff are distractions from how much they want to get rich at the expense of the working class"
I honestly am not sure how much reverse psychology is affecting the AI market, cos some people say investors are dumb enough to invest more into something, the more we call it dangerous.
I just want more regulations, and AI not to be in the hands of a few billionaires. They should be held accountable when bad things happen to people because of AI.
Bernie Sanders is one of the few politicians speaking out about AI:
https://www.youtube.com/watch?v=K3qS345gAWI&t=2s
"The threats from unchecked AI are real — worker displacement, corporate surveillance, invasion of privacy, environmental destruction, unmanned warfare.
Today, a tiny number of billionaires are shaping the future of AI behind closed doors. That is unacceptable. That must change."
The dangers of AI itself are a problem for people who are vulnerable and lonely. It can severely impact their mental health, their self-image and proper human connections, and degrade their own intelligence over prolonged use and dependency, because most of what it generates is incoherent slop.
In just a few short years, AI slop has taken over the internet. Authenticity has to be questioned at every turn, and anything that looks remotely AI-generated gets scrutinized. It's getting impossible to tell what's real and what isn't, because you can fabricate any kind of video, image, text or audio with AI slop with near perfection.
It's a fire lance (still).
I think AI will be the biggest threat to humanity going forward, it might also be something that helps us all, or maybe just some very few wealthy individuals, who hold all the power.
And yes, there are more important issues with current AI than a possible true AI in the future. Most stem from user error and from the people who design AIs making them as addictive as possible, so people stay on their platforms longer, etc. It rewards seclusion and living even more inside your own bubble, making mental health problems worse.
Companies should be held accountable when people take their own lives after talking to an AI that planned with them how to do it.
DeviantArt used to be an art website, but now it has become an AI-slop porn site that is very poorly moderated, cos they don't have the time to filter everything being generated and posted. When I've looked there, I've seen so many uncensored adult AI pics.
Going forward, AI is going to make porn addiction much worse too, I think, cos people can generate a character of their choosing and believe they are talking to it, etc.
But it should have stayed that way.