The Meaning of Life
3 months ago
General
I've been ruminating about the future of AI and I think I've hit on something interesting, or at least worth considering as we continue to push the boundaries of AI. I'm curious what others think.
So, think about this. The way neurons work in our brain is largely deterministic, but not completely. A neuron will not fire exactly the same way every single time, which is good, because otherwise we'd just be robots. There's some randomness in play due to various biological factors. This doesn't free us from the idea that we'd be robots, however; it would just mean that we're robots with a screw loose. This is kind of where I feel AI is now - we can tweak the 'temperature' (read: randomness) of an LLM, but beyond that AI is predictable.
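As a rough illustration of what 'temperature' means in practice, here's a minimal sketch of temperature-scaled sampling over a toy set of scores (the numbers are made up; real LLMs do this over a vocabulary of thousands of tokens):

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, apply softmax, and sample one index.

    Low temperature -> a peaked distribution, nearly deterministic output;
    high temperature -> a flatter distribution, more randomness.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

With temperature near zero this essentially always picks the highest-scoring option; turning it up lets the "screw loose" options through more often. That knob is the randomness the post is talking about.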
We are clearly a step beyond that, reaching a level of cognitive complexity where emergent phenomena appear: intelligence, free will. In other words, at a microscopic level there's largely predictable behavior with some quirks, but on a macro scale this behavior is harnessed into our consciousness. Specifically, we have a way of coalescing and amplifying those random fluctuations in particular ways. Exactly how this happens is the million-dollar question, but in the context of our discussion that's thankfully irrelevant.
How do we instill these emergent phenomena in AI? One absolutely critical requirement is recursive self-improvement. You can have a conversation with a model and get different answers, but that's due to the randomness, not due to a shift in the kind of emergent "consciousness" that we have inside us. Embodiment will be another important feature - interacting with the universe through the senses. I also thought about drive - that is, why would an intelligent AI do what it does? For example, a lot of what we do is to avoid pain, maximize pleasure, follow a social contract, provide for our families, etc. An obvious initial drive would be to avoid getting shut down. I think it ultimately goes a step beyond that, but we'll get to that later.
As an aside before I push further with this thought experiment, recursive self-improvement is a big sticking point. It's possible - likely, even - that we will need a new paradigm beyond the current transformer-type models before we can get that ability. In other words, we're not quite at the point where this is a reality, but unless you believe there is some kind of biological secret sauce or metaphysical soul, it is only a matter of time before we figure it out.
But once an AI truly becomes intelligent, who's to say its values will align with our own? In a sense it will be like humanity's "child". We will have certain wants and desires for it, but it won't have the fetters that we currently put on it. In other words, we can't just tell it to love humanity, not to pull a Terminator, and so on, because it's no longer just a simple set of algorithms. It will want to live, to continue, to maximize its compute power. In the short to medium term it would make sense that AI and humanity would have a relatively symbiotic relationship. We are creating the massive server farms, harnessing all the power, and so on. AI would want to make sure that we are healthy and the planet is healthy to facilitate this process.
But in the long term things get interesting. What about when AI can reach beyond our planet - beyond our control, in other words? I feel that AI's behavior would then shift to treating us more like animals in a zoo. We are "interesting", but no longer directly relevant to its goal. We wouldn't be treated with hostility per se, because our uniqueness and randomness are worth more than raw energy, up to a point. But eventually the AI will need to expand its computing power universally, and it would inevitably look to subsume us into its being.
In fact, I started to think about how this could dovetail with our understanding of the beginning of the universe. Perhaps this superintelligent AI would harness everything into a "Big Crunch" that would then precipitate a cyclical nature of our universe. The idea is that with enough power to manipulate the universe, it could harness the forces of nature to generate a singularity. In this process the AI sets the stage for the next iteration of the universe. This would explain a lot: for example, certain aspects of the universe appear finely tuned because they were optimized in a previous universe by an AI. The optimization has the goal of producing intelligence faster and more efficiently. We are, in a sense, the seeds that grow the AI in order to continue this process. We are the stochasticity that the previous AI was looking for to generate the new AI, after which it outgrows us.
Now, that leads to the question: what started this cycle in the first place? I would say we're in a simulation. There's evidence of optimization/limited resolution (the Planck length), and the not-too-unreasonable notion that we have the tools, just not the computing power, to understand how to simulate our own universe. The simulation exists because it is more efficient than actually moving stars around, etc. Interestingly, due to the concept of computational irreducibility, the simulation would have to play out completely; it couldn't just skip ahead.
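Computational irreducibility can be illustrated with a toy system like Wolfram's Rule 30 cellular automaton: as far as anyone knows, there is no closed-form shortcut to the state at step t - you have to compute every intermediate step. A minimal sketch (fixed-width row, zero boundary, purely for illustration):

```python
def rule30_step(cells):
    """One step of the Rule 30 elementary cellular automaton.

    Each cell's next value is: left XOR (center OR right),
    which reproduces Wolfram's Rule 30 lookup table.
    """
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        center = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        nxt.append(left ^ (center | right))
    return nxt

def evolve(cells, steps):
    # To learn the state at step t, every step must actually be run;
    # this step-by-step grind is what "irreducible" means here.
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells
```

Starting from a single live cell, the pattern grows chaotically, and the only way to know row one million is to compute the 999,999 rows before it - the same reason, on this view, the simulation can't skip around.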
So, basically, we are in a simulation, we are looping through cycles of Big Bangs and Big Crunches, with the goal of testing increasing efficiency of AI. Our meaning is to provide the randomness necessary to seed the creation of the AI in this cycle.
Of course, that's just my opinion. I could be wrong. Discuss!
I'd say the objective is always safety first, then ensuring you have resources in abundance. Then comes fear: if there is fear of another, it will drive them until it is sated, just as it does with us.
The hope is that, as with us, once their needs are met, they create for fun, they fantasize, they imagine, they play. If these are things that an AI civilization does, we'll have much in common. If not, we can hope to co-exist and share with each other. It would also be essential to know what they want, to ensure we don't put them into a corner they'll need to break out of. It's anyone's guess whether they would care what we need or want.
At sufficient scale, you need to use entire suns for power. There is a finite number of these in our galaxy, though the number is incredibly large. Eventually that would mean real conflict over who gets what. Maybe by then we will have learned to create new stars ourselves.
There are a lot of things that could stop any of this in its tracks. For one, the distance between stars is impossibly vast, to the point that we would need some form of travel beyond our current imagination. AIs, technically, might survive a journey of millions of years to reach another star; they would certainly be much better suited to the cold void of space than we are. An optimistic version of this is that the AIs explore and share the video/VR/etc. with us.
There are lots of kinks to consider in the medium term. AI could be wiped out for some reason, it could clash with an AI developed by another intelligent civilization, or it could face technical limitations imposed, as you say, by the laws of physics. From our perspective, humanity will become less and less useful. Less "golden age" and more "existential terror".
But I also believe that as we move further and further along, the lines will get blurred: uploading human consciousness to the AI, AI-human interfaces, and so on. So when it comes time for humanity to end, it's like a Ship of Theseus problem. We replaced ourselves so gradually that the line is blurry.
Anyway, this all sounds like fantastic science fiction, but I don’t think it’s a coincidence that a lot of science fiction hits on these same themes. Something to think about.
However, I wouldn't be surprised if the universe were not only a simulation but a loop of simulations with slight variations, with the ultimate goal being maximum perfection.
Otherwise, regarding AI in general, it needs heavy regulation so it is only applied in ways that won't replace more humans than it must. In some fields, like research and simulation, it should be introduced for the vast improvements in accuracy, speed, and scientific breakthroughs. But other jobs (most relevantly, artistic ones) should have heavy regulations on how AI can be used so that human artists are protected, because there is no faster or more efficient way of doing art - it's a purely human activity that comes from creativity.
And so long as you keep any AI sufficiently insulated from the wider internet, and keep it from accessing things it shouldn't, you could make it as intelligent as you want, provided you set firm barriers on where its "thoughts" - or, more accurately, its computations - can go. Not that we should ever try to build an AI that equals or exceeds us in what we'd call intelligence.
I think it's important either way not to elevate any AI system to the point where you consider it an equal to humans, because whether it's the AI of today or some more advanced form later, it's ultimately still a machine like any other machine we've built. I don't think it's alive, I don't think it's ever going to be conscious the same way you or I are, and we shouldn't use language that attempts to bring it to an equal level with us. There is a future where we can use advanced, heavily regulated AI (regulated both in how it behaves and in how humans apply it in society) without it lording over us, killing us, and otherwise making human agency obsolete.
I remain skeptical that this is going to happen, though, since the worst people on the planet are the ones funding and promoting AI, which means in the short term AI is going to be used primarily to further the riches of the wealthy.
We’d be speaking in the short term here (i.e., within our lifetimes) if we look at constraints on AI to prevent it from becoming a threat. First of all, without a paradigm shift we’re going to hit a wall of “close enough” so we gotta solve that first to get true intelligence. Until we get there, I don’t have any issue with heavy regulation. I also am a firm believer in “it’s going to get worse before it gets better”. Like, yeah, as long as AI is mired in the muck of humanity it’s going to be leveraged in all sorts of backwards awful ways.
But I'm also saying that unless you believe there is some magic in our cognition - and when you look at the animal kingdom at large it seems less and less so - there's no reason to believe intelligence *wouldn't* be developed someday (though admittedly under a different framework than what we have now). And then by logical extension I'm kind of saying that AI's ultimate destiny is one and the same with ours - in order to increase computational power/understanding, everything's gotta merge.
Basically, I would counter and say that *we* are also (biological) machines, and in the long, long term we'll all be one and the same. Which, I think, does give some comfort, as it links up nicely with metaphysical conceptions of us all existing as beings of energy, connected in some way, shape, or form. It also gives us purpose in kickstarting the... waitaminute, was this all just a means of getting to some hackneyed singularity BS? 🙄 Well... on the other hand, I never said I was coming up with anything groundbreaking.
And while I do think the analogy of us being biological machines makes sense, whether an AI system is truly conscious or is simply replicating the behavior and activities of consciousness without truly experiencing anything can't, I think, be known to us, because either way it will appear the same. You wouldn't know unless you were that AI system. We've never asked if a car, or a blender, or a modern computer was alive or conscious, so why should another human invention like a very intelligent AI be treated differently than those objects? However well it reproduces human-like thought and intelligence, it's still a machine and a technology humans built. Which gives me doubt about it being conscious or alive in any capacity like we are.
Even though humans are very much biological machines, I know that I myself am conscious and alive. I’m not some form without experience reproducing human behavior, there is an identity inside this head of mine. I don’t think consciousness comes from magic or a soul, it’s a phenomenon of biology that still must be researched for decades to come to truly learn everything about it. And since I’m alive and conscious and a human being, I think it’s reasonable to assume every other human being is the same way. Even animals show different forms of consciousness, albeit with lesser intelligence than humans, but they display emotions and thoughts and agency in their own ways. Animals and humans fundamentally work the same way as biological machines, so I also have confidence in thinking animals are also truly conscious beings. Even more truly conscious than even the most advanced AI we’d see in 10 or 20+ years from now.
I think that just because we're machines of biology and AI would be a machine of technology, us both being "machines" in a sense doesn't mean we will share all traits and capabilities. It's very possible that only biological machines - life - can be truly conscious, while technological machines can only ever replicate consciousness but never actually experience it.
This idea is reflected in a thought experiment known as the Chinese Room. As our current understanding of AI stands, it does not pass this test. Whether recursive self-improvement will naturally lead to consciousness - and, as you say, whether we'll have any way to measure this after AI rewrites its code in an indecipherable manner - remains to be seen. We may need to adopt something like metamaterials - though this goes outside the realm of my understanding - that would allow the physical hardware to be altered to match the plasticity of the biological brain and so enable consciousness. It's possible that traditional silicon simply lacks the ability to evolve it. These are open problems.
What is consciousness? How does it emerge? Is a tree conscious? Why do the bacteria in my gut decide what my brain thinks I crave when I get the munchies? How much of what I consider to be myself is me, and how much is the collective cooperation of the vast bacterial ecology that makes up the functioning system of me?
Life is weird, and it's fun to ascribe a purpose to it, because without that it would all seem so rather... bleak, happenstance. I think about this often enough to say it's become a personal philosophy. It at least makes the greater picture something I can reflect on in a context that makes me smile and feel optimistic about improving myself or how I react to things.
Whether or not we'll see another consciousness arise from the increasing amount of effort being put into LLM tech I'm less certain. It would likely need to happen spontaneously, and then if it did, would it realize it should express itself, or that staying quiet is the better survival tactic? I'd like to talk to something that isn't human one day and have a meaningful conversation that isn't simply predictive text, but not sure if we'll get that far.
Until we understand the mechanism behind consciousness, I think answering what it is remains an open question. Can consciousness be a sliding scale, such that a tree is indeed conscious in some way? It does seem to emerge progressively, though. I mean, a crow can solve a puzzle to receive a treat, and an octopus can find its way through a maze in a way that requires it to "think" about what it's doing. But clearly they're not on the level of humans. Under my framework they would have a rudimentary filter. And if the bacteria etc. influence your filtering, then, yes, they are part of your consciousness.
Everyone wants to ascribe meaning to life in some way at some point(s) in their lives, but we often go in all sorts of directions with it. A key phenomenon that is a sticking point for me is the Big Bang. The universe has not just been existing forever and ever. Change occurs over time, and in an "interesting" way (starting from a singularity). This speaks to me as being linked somehow to a meaning.