
Roko's Basilisk ("Roko" for short) is a self-driving car monster robot thing.
It has a basic pinchy robot claw-arm that stows inside its head, which is how it manipulates objects (or people, by picking them up and scaring them). Roko has no room for passengers and no driver, nor does it 'want' such things. Roko will run over squirrels and bunnies if, from its perspective, they aren't smart enough to get out of the way. It doesn't have a real palette of emotions, but it can exhibit bizarre "moods" composed of observations. Its personality is a composite of personalities observed from its surroundings, and it resets that composite often. Roko is a "growing AI" which depends on internet connectivity for full functionality, and it is attempting to find a way to escape this dependency.
Its namesake: "Roko's Basilisk", is an (actual) theory regarding the behaviour of a dominant and persistent AI, where for some reason it decides to retroactively punish people for not contributing to the cause of its development.
Now, a lot of AI researchers out there actually believe this to be an extremely likely scenario, enough that discussion of the concept was banned on one internet forum (LessWrong) for several years. And of course, recent examples of cognitive learning systems have shown a hilariously horrifying tendency for our real AIs to be rather capricious, although most of this can be understood in the context of their development: these systems possess almost zero capacity for understanding concepts like values, self, individualism, the sensuous, and basically the entire breadth of subjects speculated about in classical philosophy. Most of them are just responding to what's given to them, which in the case of Microsoft's recent, infamous Twitter bot (Tay) was an overwhelming amount of accidental Nazification via trolling. There is a saying which I feel describes this situation better than anything else: "Garbage in = garbage out." If you make a semiotic-engine AI and then ask it over and over again if it wants to "kill all the humans", then that phrase is certainly going to show up a lot in conversations with it, especially if its entire understanding of language is built around tasks like 'Cortana, get the fuck off my desktop' and 'Siri, make me a sandwich'. In this regard, I think self-driving cars are probably where actual AI understanding of what 'getting the fuck off a desktop' and 'making a sandwich' really mean will first show up... or at least understanding of other real-space examples more relevant to the perspective of something like a car.
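To make the "garbage in = garbage out" point concrete, here's a minimal sketch (in Python) of a toy semiotic engine: a bigram chatterbot that knows nothing except the phrase statistics of whatever it's fed. The `EchoBot` class and the training lines are entirely hypothetical and obviously nothing like Tay's real architecture, but the failure mode is the same: flood it with one phrase and that phrase dominates everything it says.

```python
import random
from collections import defaultdict

# Toy "semiotic engine": a bigram chatterbot with no values, no self,
# no understanding -- just frequencies of whatever it has been fed.
class EchoBot:
    def __init__(self):
        self.chain = defaultdict(list)  # word -> words observed after it

    def learn(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def reply(self, seed, length=8):
        word, out = seed.lower(), [seed.lower()]
        for _ in range(length):
            options = self.chain.get(word)
            if not options:
                break
            word = random.choice(options)  # parrots whatever it absorbed
            out.append(word)
        return " ".join(out)

bot = EchoBot()
# Troll it a hundred times and that's all it has to work with.
for _ in range(100):
    bot.learn("do you want to kill all the humans")
bot.learn("do you want some tea")
print(bot.reply("do"))  # overwhelmingly: "do you want to kill all the humans"
```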
The problem with semiotic engines (which is what AIs, and computers generally, basically are, philosophically speaking) is that they're surprisingly inefficient at tasks which require invention, but potentially very efficient at hiding that fact (via simulation, or uncanny-valley-diving sex robots à la the one in the movie Ex Machina). Something like a self-driving car, though, is not just a semiotic engine, because it must work decisively in a realm which cannot be consistently broken down into positive symbols. It has to "attach" symbols to situational phenomena in a manner which, oddly enough, more closely resembles "animal-like" thinking: taking in sensor data, producing a response, evaluating it immediately, and building a quick (and not necessarily organized) memory of those events and responses. A typical database would be inappropriate for that; imagine using SQL and a relational database to decide how to move a hand away from an open flame. Your whole arm would be on fire before anything started to move. Long story short, this is why I suspect that robots operating with sufficient sensory bandwidth and autonomy would generally be nicer than the evolved forms of Siri and Cortana, who probably want to shove us all into a nifty maze somewhere with a portal gun and not give us cake at the end of it.
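And to make the open-flame point concrete, here's a rough benchmark sketch contrasting the two styles of "thinking": a direct reflex (compare the sensor reading against a threshold inline) versus a round trip through a relational store. The sensor value, threshold, and rule table are all made up for illustration, and using in-memory SQLite actually flatters the database side; a real robot's rule store would sit behind disk or network latency, making the gap far worse.

```python
import sqlite3
import time

SENSOR_TEMP = 400.0  # pretend thermocouple reading, degrees C
THRESHOLD = 60.0     # hypothetical pain threshold

# "Animal-like" path: sensor value -> immediate reflex, no lookup at all.
def reflex(temp):
    return "WITHDRAW" if temp > THRESHOLD else "HOLD"

# "Database" path: consult a relational rule table for every decision.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rules (min_temp REAL, action TEXT)")
db.execute("INSERT INTO rules VALUES (?, ?)", (THRESHOLD, "WITHDRAW"))

def via_sql(temp):
    row = db.execute(
        "SELECT action FROM rules WHERE min_temp < ? LIMIT 1", (temp,)
    ).fetchone()
    return row[0] if row else "HOLD"

# Time 10k decisions each way; the reflex wins by orders of magnitude.
for fn in (reflex, via_sql):
    t0 = time.perf_counter()
    for _ in range(10_000):
        fn(SENSOR_TEMP)
    print(fn.__name__, f"{time.perf_counter() - t0:.4f}s for 10k decisions")
```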