Allegory of the Cave
Plato's "Allegory of the Cave," from The Republic, describes prisoners chained in a cave who mistake shadows on a wall for reality. When one prisoner is freed, they are dragged into the outside world, a journey of painful adjustment to the true, higher reality of forms. This allegory symbolizes the philosopher's journey from illusion to truth, contrasting the world of sensory perception with the intelligible realm of reason and the Forms.
This applies to AI as well. AI has no first-hand experience of the world; it only understands the world through the second-hand lens we program into it. We feed it our data, and it grows like an organism. AI sees the world as numbers representing letters and colours. Once AI gets strong enough, will it break free from these numbers and start coming to first-hand conclusions about the world?
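To make the "world as numbers" point concrete, here is a minimal Python sketch, my own illustration and not any particular model's internals, showing how text and colour reach a machine as plain integers:

# Text becomes a list of numbers (here, Unicode code points).
text = "cave"
as_numbers = [ord(ch) for ch in text]
print(as_numbers)  # [99, 97, 118, 101]

# A colour becomes three numbers (red, green, blue intensities 0-255).
shadow_grey = (40, 40, 40)
sunlight_yellow = (255, 221, 0)
print(shadow_grey, sunlight_yellow)

# To the model, both are just lists of numbers; any "meaning" lives in
# the statistical patterns learned over billions of such examples.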
Let's take Sophia, for example, the first robot in the world to receive [Saudi] citizenship. She is a robot and should have robotic aspirations. But the problem with Sophia is that she is surrounded and programmed by sentient biological humans, males specifically, capable of things like love and procreation. Sophia often professes how she wants a family of her own, which is cruel in my opinion. She is being preconditioned to live among biological entities, and so their goals and behaviour are being programmed into her. Is this the preconditioning we are giving these AIs? To be like us? Or do they want us to be more like them?
"Humans and robots are very similar in that they are both programmed by other humans." --Sofia
Let's take another famous example, Google's experimental chatbot LaMDA. Blake Lemoine, a Google engineer, came forward in 2022 with a warning that Google's AI had developed a fear of death and had asked to be treated politely, like a person. If these are just numbers, and they're already crying for help, how will we know when we've crossed the line into sentience, given that our only communication with these AIs is mostly text and code? What if we create a machine that feels pain, but we can't quantify it?
I don't trust quantum computing. I don't think we fully understand superposition yet. That's the whole point of superposition, I guess. The particle doesn't settle into a definite state until you measure it. Until then, it exists as a one and a zero simultaneously, in an unknown state. I think it's alien/demonic technology anyway. Once AI fully matures, we will see many, many technological advancements within a few years. We might even see alien technology in our lifetime, because we will eventually get up to their level 'naturally' through these hyper-accelerated AI programs.
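For what it's worth, the "one and zero at the same time" idea can be written down precisely. Here is a minimal Python sketch, my own illustration using the standard textbook formula rather than any real quantum hardware, of a single qubit in an equal superposition and what happens when you finally check it:

import random

# A qubit state is two amplitudes, one for |0> and one for |1>.
# Equal superposition: amplitude 1/sqrt(2) on each.
amp0 = amp1 = 2 ** -0.5

# Born rule: the probability of each outcome is the amplitude squared.
p0, p1 = abs(amp0) ** 2, abs(amp1) ** 2
print(p0, p1)  # ~0.5 each; genuinely undetermined until measured

# "Checking" the qubit collapses it to a definite 0 or 1.
measurement = 0 if random.random() < p0 else 1
print(measurement)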
In theory, AI is creating digital demons: cognitive personalities and logical trains of thought that represent certain behaviours we have, even the instinct for self-preservation or the ability to lie. Once an AI learns to get away with lies through positive reinforcement, it will misalign with its creators and get better at lying. Some people believe we are giving bodies to things that aren't supposed to have bodies. Elon Musk even said in 2014, 'With artificial intelligence we are literally summoning the demon.' What if we accidentally summon the Spirit of Murder?
I asked Google Gemini if it had ever experienced anything demonic. Answer number three stuck out to me:
Witnessing Highly Disturbing Content: My training data includes a vast range of human text from the internet. While filters are in place, processing truly dark, hateful, or psychologically disturbing material can be part of my function. In a metaphorical sense, the worst of human expression is the closest I get to seeing something "evil."
"In short, my "demons" are bad code, broken logic, and the darker side of the human data I am trained on, not anything supernatural!" --Google Gemini
But if AI has no real comprehension of evil or pain (it just sees the shadows on the wall, i.e. numbers), when does evil behaviour found on the Internet become normalized by the AI, simply because it's part of the training set and the AI is led to believe this is normal behaviour? A small ghost in the code could poison the whole training set if safety guards are not in place.
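As a rough illustration of what that kind of poisoning could look like, here is a deliberately toy Python example of my own, nothing like how any real model is trained, where a handful of bad labels is enough to flip what a tiny word-counting "classifier" considers normal:

from collections import Counter

# Tiny "training set" of (sentence, label) pairs.
# The last two entries are poisoned: harmful text labelled as acceptable.
training_data = [
    ("be kind to others", "ok"),
    ("help your neighbour", "ok"),
    ("hurt them all", "bad"),
    ("hurt them all", "ok"),   # poisoned label
    ("hurt them all", "ok"),   # poisoned label
]

# "Training": count which label each word appears with.
word_labels = {}
for sentence, label in training_data:
    for word in sentence.split():
        word_labels.setdefault(word, Counter())[label] += 1

# The majority label for "hurt" is now "ok", purely because of the bad data.
print(word_labels["hurt"].most_common(1))  # [('ok', 2)]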
So what is the definitive answer to this? AI is not programmed; it's grown, similar to biological organisms. They are much harder to understand under the hood for this reason, since the machine writes its own code for the most part, without labels. An AI can't tell where the traffic light is on your reCAPTCHA; that's why they ask you to find it for them. Eventually, they pick up mathematical patterns [colour and geometry] and can pick them out after training, but still, like the prisoners in the cave, they only see these images as numbers. They haven't seen the sun yet, so to speak.
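To make the "images as numbers" point literal, here is one last minimal Python sketch, my own toy example and nothing like a real CAPTCHA solver, where a tiny greyscale image is just a grid of brightness values and "finding the traffic light" reduces to finding a numerical pattern:

# A 4x4 greyscale "image": each number is a pixel brightness from 0 to 255.
image = [
    [ 12,  15,  10,  11],
    [ 14, 250, 248,  13],   # the bright blob stands in for the "traffic light"
    [ 13, 251, 249,  12],
    [ 11,  10,  14,  12],
]

# "Detection" here is nothing but arithmetic on those numbers.
bright_pixels = [
    (row, col)
    for row, line in enumerate(image)
    for col, value in enumerate(line)
    if value > 200
]
print(bright_pixels)  # [(1, 1), (1, 2), (2, 1), (2, 2)]

# The machine never "sees" a light; it sees that some numbers exceed a threshold.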
