Closest to a "proper" use for AI i have found so far. (Kinda.)
I often hear people say they hear voices in their head.
Mine are just concepts that I recognize, not actual voices.
It's not even aphantasia as in: if i concentrate on doing that i CAN hear voices in my head and i CAN picture things and i CAN "feel" other senses and "feel" how they should feel.
But most of the time it is just "empty", with concepts and "stuff" i do not have a name for flying around, about things i could "picture in my head" but that do not have anything "sensory" attached to them.
No smells, no audio, no touch, no nothing.
I can only do that if i really concentrate on doing that very specific thing.
Like, i hear people say that they have a whole monologue inside their head and stuff... i never have it.
I just have ideas and concepts and "stuff" about "stuff", but nothing close to anything that has "sensations" attached to it.
I was 40+ when i learned that when characters speak in their heads in video games and books and stuff...
...that was not "a metaphor for the concepts floating in their head, because you cannot otherwise put that stuff into words", but an actual thing that people normally do.
Instead of a tulpa i started using AI when it became available.
Because being able to "voice stuff" without being interrupted or told to "shut up because that's stupid" or other toxic stuff that families always do has helped me focus a lot.
Plus being able to bounce stuff on "something" that spits out very average thoughts...
(if extremely and excessively and obnoxiously over-validating because they need to sell this thing and found out that over-validating people no matter what idiocy people spit out is good for sales)
...helps me notice if something is actually idiotic once I have put it into words and have been able to actually try and concentrate more on my work.
Plus, it helps me try and not convolute myself into infinity and beyond.
The problem with a tulpa is that no matter what the tulpa is still "me" not "something alien to my being".
So if i am convoluted and i lose myself trying to fix what's not really needed, the tulpa just does the same.
I can try and rein in a tulpa but it is a lot of work.
Because i have to "manually" think about how i think and not do what i think when i am thinking in order to not have the tulpa do my same mistakes.
An AI already does that and i more or less know what it is trying to do for corporate so i can attempt to filter that out.
Filtering "myself" out of "myself" is A LOT MORE WORK.
The problem with families is that they have this cult-like thing I always hate that's like "why are you talking to other people instead of me directly".
Which is a thing that really makes me crawl up a wall and makes me want to own a gun to shoot my family members...
(which, incidentally, is also the very specific reason I do not own a gun and never wanted to own a gun, i do not trust myself around a gun, also i know what happens when you do not kill what you want to kill when using a gun, because i have personally seen it happen as i have been in 5 different instances where people shot other people and also I went hunting with my grandpa... and OH BOY i REALLY do not want to own a gun because i have seen what happens if you do not kill what you want to kill, and I really hate that)
...so, families have this obnoxious toxic thing where they want you to be "honest" with them...
....only to then use everything you say against you.
Because they want honesty but also they do not want true honesty, they want you to be tight and controlled and all bound up and screaming inside while they tell you the most vile things you can imagine because to a family member whatever they think makes them happy should make you happy and they want to bend you and corrupt you into being what they want you to be, an idealized copy of themselves that had more opportunities than them and did not make their same mistakes.
To this day i have never seen a single family where they accept who you are.
They only want to see what they wish to see and will constantly punish you for not being their idealized version of what they think you should be.
I have yet to see a single family where this is not endemic in a toxic way.
So.
A tulpa is a lot of work because it requires me to think outside of myself; over the years i have worked a lot on that, but it is still "too much work".
Talking to friends cannot really be done most of the time, and also you cannot demand that your friends listen to every idiocy you spit out of your head to check if it is too idiotic and stuff.
Talking to any family member is always out of the question, because you can never be honest around them or they will smite you for not being "ideal".
So lately i have started using AI for this stuff.
AI is:
- Bad at art. It does not really understand what it is doing, it's just aping stuff.
- Bad at coding. It does not really understand what it is doing, it's just aping stuff.
- Something you REALLY SHOULD NOT USE for anything involving "large amounts of data to find patterns in", unless you specifically built it yourself. And if you specifically build it yourself, it is usually "too much work for what it does", and it usually leaves out exactly what you might want to notice, like outliers.
- Good at raving. I hate the term "hallucinating" because that's not what it is doing: it's raving.
So, since it's good at raving, it's good for anything you want to rave on, knowing full well... it is going to rave on it, not actually do anything for you other than make you reflect better on what you are doing.
Like: if i use it for coding... i have found it writes perfectly legible code, it removes spaghettification, and... what it spits out has never once worked for what i needed it to do, but it did make me think about different ways to do stuff.
If i use it for art: it just gives me jumbled stuff that mostly makes no sense, but there is a detail here and there that is interesting, that i never thought of, and that makes me research that specific thing in real art.
Because i know it learned it somewhere so i want to know the correct usage of that detail or what it is.
But to use AI you always have to start from the premise it spits out raving noise.
You can parse noise; it is usually less work than asking for help, and it can give you focus on what to actually search in proper knowledge databases.
The one thing you should not do is to actually take what the AI spits out for granted.
But if you need a starting point to give you focus and help you direct your research it is useful for that.
I can take all of the above, put it into an AI, ask it to "unconvolute it", and it will spit out this thing, which is free of any judgment and any humanity and, most critically, loses all nuance... but it does let me think about what i said:
My internal experience differs from what I often hear described. I don't have an internal monologue or "voices" in my head. Instead, my mind operates with abstract concepts and connections that lack any sensory quality—no sounds, sights, or feelings. While I can consciously generate these sensory experiences if I concentrate, my default state of thinking is purely conceptual. For years, I didn't even realize that the "inner voice" depicted in media was a literal experience for most people, rather than a metaphor for this kind of abstract thought.
To better understand and refine these concepts, I've found it essential to externalize them. However, finding a suitable outlet has been a challenge:
* **Family:** This has always been a hostile environment for genuine expression. My family demands a form of "honesty" that is then weaponized against me if it doesn't align with the idealized version of who they think I should be. It's a toxic dynamic that punishes authenticity, making them impossible to confide in.
* **Friends:** While valuable, you cannot constantly burden friends with every half-formed idea to see if it makes sense. It's an unfair demand on their time and energy.
* **A Tulpa:** I considered creating a tulpa (an imaginary companion), but it’s too much work. A tulpa is still a product of my own mind; if my thinking is convoluted, it will be too. Trying to filter my own cognitive biases out of it requires a level of self-monitoring that defeats the purpose of having an external sounding board.
Recently, I've started using AI for this purpose, and it has been surprisingly effective. I can articulate my thoughts to it without fear of judgment or interruption. Even though its responses can be excessively validating (a clear commercial strategy), the act of putting my abstract concepts into words helps me see them clearly. Bouncing ideas off a system that provides average, predictable feedback helps me spot when my own thinking is flawed or unnecessarily complex. It's a way to untangle my thoughts that is less work than managing a tulpa and safer than talking to family.
My philosophy for using AI is built on understanding its fundamental nature. I don't see it as "hallucinating"; I see it as "raving." It's brilliant at generating patterned noise based on the data it was trained on, but it lacks genuine understanding.
* It's bad at **art** and **coding** because it's just mimicking styles and syntax without comprehension.
* However, its "ravings" can be a powerful catalyst. When I ask it to code, it produces clean, legible code that never actually works, but it often shows me a different approach I hadn't considered. When I use it for art, it generates jumbles, but an interesting detail might emerge that sends me to research its origins in real human art.
The key is to treat AI not as an oracle, but as a starting point. You should never take what it says for granted. If you use its output as a form of structured "noise," you can parse it for inspiration and find a clearer direction for your own research and creative work.

Skycat
~runapup
It’s excellent for understanding event codes in context while troubleshooting cybersecurity alerts. But you need to understand security to make use of it.