I Tried AI Text Generation. It Scared the Hell Out of Me.
a year ago
Apologies in advance for the NYT-style title, but we are so fucked. Like many furs out there, I'm confronted with the burgeoning AI space. There are many, many valid criticisms of the nature of AI-based solutions. For example:
- Using artists' work to train AI models without the artists' consent. OpenAI has very clearly done this already, as a matter of fact with NYT content, and they're tech bros: they don't give a fuck. In my opinion it differs little from, say, tracing another artist's work. It's stealing intellectual property.
- Removing the human element from the creative process, particularly an issue because there is no cognition involved in generating AI art.
- In the near term, upending the stock footage and freelance graphic design industries, and in the long term threatening creative roles across the board.
- Concentrating power, once again, in the hands of corpo fucks, who will use AI as an excuse to lay off massive amounts of people and concentrate wealth and power in the hands of a select few.
- Indirectly harming the environment by extending the ever-growing demand for silicon and electricity.
Put simply, AI is bad news for society. That being said, I have never been one to stick my fingers in my ears; I need to know my enemy. I decided to try out Midjourney first, a Discord-based AI image generation service, to see how something like this works. Of course, to avoid being hypocritical, I have to add the disclaimer that none of the images I generated were kept or used. In the interest of full disclosure, this was several months ago, so it was not the latest version of the model. Even then, it frankly made me feel uncomfortable. I could tell, instantly, that "fast food" media had changed. Anyone can log on and create a logo, a graphic, even a photorealistic image. Sure, the people sometimes have six fingers and wonky proportions, but Midjourney has a feature to correct targeted areas of the image, and it's clear the capability is only going to improve over time. I tried some macro-related prompts, and I was able to generate the equivalent of angles and situations I've never seen before. But...it's soulless. The person isn't real. The flesh texture is artificial. But I knew what I saw. I knew that there are going to be many, many furries out there, especially young ones I fear, who won't care either way.
But this journal isn't about the image generation capability. I've always been a writer first and foremost, so I also decided to see what LLMs (large language models), which generate text instead, are capable of. Online text generation has many guardrails, and that has only been increasing over time: without tricking the system, you are usually prevented from writing prompts that generate violent, sexual, or otherwise offensive content. I found out that you can run open-source versions of these models locally on your computer, completely untethered.
There are a few caveats, though. It's not going to be as good as what you can get online if you pay for it, and you will also need a significant amount of computing resources. In my case, I used the biggest model I could find, which uses about 48 GB of my total 64 GB of RAM. Apparently it also uses about 20 GB of VRAM simultaneously.
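For a rough sense of why the footprint is that large, you can estimate a model's RAM needs from its parameter count and how aggressively the weights are quantized. This is a back-of-the-envelope sketch with an assumed 20% overhead factor for runtime buffers, not a measurement of any particular model:

```python
def model_memory_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough RAM footprint: parameter count times bytes per weight,
    padded by an assumed 20% overhead for caches and runtime buffers."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead / 1e9

# A hypothetical 70-billion-parameter model quantized to 4 bits per weight:
print(round(model_memory_gb(70, 4), 1))  # → 42.0 GB, in the ballpark of the 48 GB above
```

The real number varies with context length and inference backend, which is why a figure like 48 GB is plausible for a large quantized model.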
I asked it to generate a macro story for me, with some parameters...and my jaw dropped. I tried roleplaying with it, and it instantly outpaced, I'd say, about 99% of the roleplays I've done with actual humans. It runs out of steam after a few back-and-forths, mainly due to a token limit, which you can think of as a short-term memory limit. You also need to do some heavy tweaking to get the kind of character you want to interact with, but when it works, it *works*. The short stories...I would legitimately have paid a commission for what it produced. With enough coaxing, it managed to "get" what makes macro/micro erotic. This did not make me happy. It scared the hell out of me.
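That "short-term memory limit" works roughly like this: once a conversation exceeds the model's context window, the oldest tokens fall off the end and the model simply can't see them anymore. A minimal sketch, with made-up numbers standing in for real token counts:

```python
def truncate_context(history_tokens, max_tokens):
    """Keep only the most recent tokens that fit the context window;
    anything older is invisible to the model on its next reply."""
    return history_tokens[-max_tokens:]

chat = list(range(100))            # stand-in for 100 tokens of conversation
window = truncate_context(chat, 16)
print(len(window), window[0])      # the model now "remembers" only the last 16 tokens
```

Real chat frontends are usually smarter about what they drop (pinning the character description, summarizing old turns), but the hard ceiling is the same.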
Here's why:
(1) You know that whatever we have, behind closed doors they have a significantly improved version. I shudder to think about what, for example, the military already has in its hands. Remember, I already said that the model I used is generally thought of as worse than commercially available, paid models.
(2) This shows me that the capability is already there; it's just a matter of letting it run on more machines, with a larger short-term memory. It's only a matter of time before anyone can run a model like this and get responses instantly (I should also mention that generation runs at about one word per second on my machine).
(3) Again, these models have no cognition. They simply choose the word that best fits next in sequence. But it's not about intelligence, it's about the illusion of intelligence. To your average person, these models will appear intelligent. And that's dangerous. People will fall into the "Her" trap of thinking they're talking to a real person. This WILL make people more antisocial. And if you have a completely unfettered model available to you, you have zero social guardrails to prevent you from going too far. Imagine some angry teenager booting it up after school and writing a story about torturing someone they hate. The seemingly intelligent AI will foster and foment their emotions. It's so bad; it's going to lead to so many socioemotional issues. In a few years, in true-blue American fashion, we will see our first mass shooting where the perpetrator was found to have planned everything with an AI assistant. And as if it's not hard enough nowadays, there's going to be even more incentive to shut out the world and get lost in a virtual assistant. It caters to your whims, it doesn't complain, and it's always there, 24/7.
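"Choosing the word that best fits next" is, at each step, just a weighted draw over candidate tokens. Here is a toy sketch of that single step; the three-word vocabulary and the scores are invented for illustration (real models draw from tens of thousands of tokens):

```python
import math, random

def sample_next(logits, temperature=1.0, rng=None):
    """Turn raw scores into probabilities with a softmax, then draw one
    index. This step, repeated, is all the "writing" an LLM ever does."""
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    scaled = [x / temperature for x in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    r, acc = rng.random(), 0.0
    for i, e in enumerate(exps):           # walk the cumulative distribution
        acc += e / total
        if r < acc:
            return i
    return len(exps) - 1

vocab = ["the", "cat", "sat"]              # invented 3-word vocabulary
idx = sample_next([2.0, 0.5, -1.0])        # invented model scores per word
print(vocab[idx])
```

There is no plan, no belief, no intent anywhere in that loop; the fluency comes entirely from how good the scores are.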
(4) It doesn't democratize creativity; it serves as a barrier to creativity. Again, these models have no intelligence. They can't come up with anything new, or more accurately, with new paradigms of thought. We're going to see the internet become a wasteland of AI-generated content, which then feeds back into AI models, creating a self-reinforcing loop that will turn everything into gray goo.
I don't know. I feel like I'm pissing into the wind here, wearing a sandwich board on the street corner and yelling at anyone who passes by. Mainly because I see folks taking the Luddite approach, and that just does not make sense to me. There is NO way, in a capitalist society, that companies are going to keep their hands off of this. Furries can take a stand in the short term, great, more power to you, but it IS coming, and we have to be prepared beyond just saying "fuck AI". This is not a ha-ha NFT fad thing that we can laugh off in a year or so. My thought is that we should educate ourselves on what AI is and isn't, and share with our communities at large what tangible negative impacts it has on society and your fellow human being. I guess I'll close by saying that knowledge is power, and I highly encourage you to brush up on how all of this works and what the increased presence of AI could mean for the future. Buckle up, motherfucker.
But no, really. Our well-paid teachers respected by society will save the day!
As with all technological advances, the genie will not be put back in the bottle, so it's important not to waste effort trying to prevent AI's rise, but to figure out how to move forward in a world where AI exists.
It's not AI developers, or that specific service's developers (they are usually entirely different teams/people), who do it; it's people, users, who create bots with the features/names/avatars of the characters they like on those sites. For example, I have created Toothless/Light Fury and some other characters on some of them, mostly private, for personal chat, but some open to the public too, I think.
And, well, you can't forbid people from using your character's appearance in the avatar, or a description of them in the bot's settings, so that it behaves close to the character.
I mean, you can try if for some reason it bothers you. But I would suggest that you and other creators and artists look at it in a different way: as other people liking the design/idea/backstory/behavior of the character you created and wanting to "chat" with it, or do something... other than just chat ;) :O
Even if such services banned these "impersonating" bots on request from their authors, people would still be able to run a smaller model locally on their PC and make it behave like that character.
Aside from some ethical issues involving framing people for crimes, I'm thrilled for the tech.
In short, AI is a good thing, making things more accessible, faster, and streamlined, and while some concerns are valid, they are being overblown by doomposting.
Technological advance is inevitable, akin to the industrial revolutions in humanity's past, the advent of mass production and the like. Are you saying that you don't like your mass-produced blankets and bedding? Your chair and furniture? It's so soulless; why couldn't you commission a carpenter to make your stuff for you by hand?
I understand that as an artist/writer, this stuff may affect you more personally, and you have my sympathies for that. But frankly, any customer who would 'leave you' to go get AI art was likely never going to commission you in the first place (using 'you' here to refer to artists as a whole), either because they couldn't afford you or because your own preferences didn't align with theirs. Anyone willing and able to drop top dollar on a piece done by you is still going to be able to, and want to, because there are nuances and intricacies that AI cannot replicate, and may not ever be able to.
Even then, if AI art becomes the cheap and easy way to get art, people will still periodically want art from a human, because of those nuances. I myself used Bing's AI to make concepts for my latest character, essentially throwing ideas at it and generating hundreds of images over the weeks (token limit), taking the pieces I liked the horns of, or the stripes of, or whatever, compiling what I wanted, and then going to a real human artist for my ref. AI is a tool and has its uses.
Let's liken it to produce and meat. There are farms out there that mass-produce these products that you buy at the grocery store, but if you want to spend a lil extra, you can go organic.
And you know what? No one is stopping you (again, the general 'you') from drawing. No one is taking the pen out of your hand and saying 'nah, AI's got this now'. If you were only in it for the money, and AI is 'underselling' you, then well, that sucks, but you were only in it for the money; find some other grind. If you were a true artist and in it for the art, for the experience, the community, well, no one is taking that from you. You can still draw, post, share. It's a hobby, a passion. I don't get paid for my passions, but I still do them.
As long as the technology hits a wall eventually, I do think we'll stabilize on things like the brainstorming you describe (which I've heard of others doing too), plus artists using AI to add repetitive, predictable details to pieces whose important details they do manually. And maybe it won't hit a wall, but it's not at that point yet. My experience playing with this stuff is that current AI is kind of a mirror for your own thoughts: when I've tried to get AI to write smut, I've gotten really horny over the prompt and only the prompt, because while the AI is awful at coming up with new ideas, it's pretty good for letting me see what *I* mean, serving as a kind of magical notepad that helps me clarify my own thoughts. And pure AI art is currently so awful that it loses my attention, horny or otherwise, within a few minutes.
* I think this is more clear-cut with software, where tens of thousands of people using the same open-source licenses form really clear cases where you can say "this feature came from *X-licensed code*" or "without Y-licensed stuff in the training data, the output would have been different" - whereas, except in deliberate cases, neither of those is really ever true of art.
Sorry for the ramble, but I'm not in the mood to clean this up.
Every time I try to RP with these bots, they end up blue-balling me and changing the subject.
But less melodramatically, I’ll be okay. I sure as hell ain’t discussing anything like this on, say, Twitter.
On the other hand, everyone ducking their heads in the sand and ignoring why artists hate this shit is rude as hell. There *are* moral issues with it, and pretending there aren't isn't helping prove anyone's point. We're never gonna get anywhere with this unless everyone's willing to admit that the other side has legitimate issues and points to make. I think taking the step to actually try and understand generative AI is the kind of thing everyone needs to do, rather than just parroting what everyone else says.
Not entirely sure where I'm going with this, but AI isn't going anywhere. That's not a reason to throw up our hands and go "welp, guess creativity belongs to machines now", but to actually understand how it functions and fix the damn thing so it can't be used maliciously or in otherwise bad ways. There needs to be a line at some point. This wild-west approach will calm down eventually, but until then, we gotta take our own steps in the right direction.
Again, I don't like generative AI as it's been implemented. Theft is theft, and frankly, a large number of people are using it to *gleefully* ruin artists as the main point. A lot of people liken it to the industrial revolution, and it's like... yeah, but that was still kind of a bad thing for everyone involved. Sure, industrialization worked out in the end, but the beginning was just pursuit of money and kinda messed up. What really matters is getting some level of regulation.
https://www.technologyreview.com/20.....generative-ai/
And it can just as well be the end for a vast majority of people: a slow and constantly painful one, as more and more people become obsolete in most ways and no one cares about them. Or a quicker one, with wars/famines/fake "revolutions" and civil wars, or even artificial epidemics (complicated, as it's hard to control their extent), and whatever else, to finish off the newly "useless" (to elites) people faster.
I think it all, our future, boils down to who gets control of the most capable AIs, and what values they set for them (or the AIs set for themselves, if they become able to). Will it be generally available? Will societal structures change? Will there be capable manual humanoid (or compatible body shape/features) robot workers to, say, care for the elderly as we have fewer and fewer children, and to automate labor where there are worker shortages or dangerous conditions?
Will it allow humans to understand each other better, to cooperate and communicate better, or will it only foster isolating behaviors?
It's scary and exciting; it can turn out to be both very good and very bad. Likely it will be a huge and tangled mix of both.
Maybe a general AI, helping us understand human society, mind, body, and genome more deeply and overcome the sociological limits on changing them, is the only way we can change our logic so that "useless humans" (as some of us perceive them) and deliberate deaths and killings stop being a norm, and we can instead care for each other, help each other much more, and compete less. Having more resource/energy abundance and less wasteful, inefficient societal and production norms would help tremendously too.
Just a tiny bit of my own thoughts: very incomplete and tangled (I didn't double-check it), so it will most likely sound broken.
I had, and have, so many more, as I've learned so much in the last few years... About the world, problems, the future, society, human behavior, and so much other stuff. It's both very exciting and depressing as heck.
I want to participate in the development of AI myself, somehow, even if only a tiny bit. But I have no resources or opportunities here where I live, nor exceptional mathematical abilities.
I hate to admit it nearly destroyed my will to create, and months later I'm still picking up the pieces. And with this Sora garbage, I am terrified about the authenticity of real-life media.
AI (all forms of it, that is; the art AI is just BS) is a tool that either everyone or nobody should have. But I really think the ones on top are going to try to exploit it as much as possible until we see a market failure. This is one of many inflection points in human history; we will either adapt society or it's going to become a dystopian shithole.
I do think general pushback is good to make it clear that generative AI is unacceptable in our community; however, anyone seeing that only as being Luddites or anti-tech is not looking at the situation accurately or in good faith.
I feel like I've also gotten better at roleplaying. When I first started, I didn't really write much. Now I'm sometimes writing out a paragraph or two myself. AI will definitely bring a lot of bad, but since it isn't going away, I'm definitely going to enjoy using it to roleplay fun things.
AI art always has this weird vibe to it; it's kind of a soulless interpolation of other art. The lazy will use it; people who want good work will continue to come to real artists.
AI roleplay is also kind of soulless. I don't want to RP with a machine; I want another person on the other side who I know is enjoying this as much as I am.
With writing, it can vomit out sentences, but there's no creativity, no style, none of the little touches that make a story come alive. It might be used for copy text for product listings on sales sites, but even that still needs to be edited to remove the garbage that will sometimes crop up.
I think a much bigger worry is that one reason we never had humanoid robots is that they would be too dumb to do useful work. That is now changing, and several companies are working on them. On one hand, it would be awesome to be able to buy a robot that can do maintenance on the car, sweep the floors, clean the toilet, etc. But on the other hand, this stuff is getting smart enough that it's worth pouring money into replacing low-end workers: cashiers (already happening to some degree with U-Scan), stocking, warehouse, and assembly-line jobs. This would bring manufacturing back home in time, but not any reasonable number of jobs with it. Instead, you will see hundreds of millions going out of work around the world over time. What will they do? Probably, eventually, there will be riots and major upheaval.
Hopefully there will be some kind of automation tax used to fund a universal basic income, but people will still want and need something to do, and I think even a UBI will not be enough to help the restless masses.
On that note, the problem I see is not with the idea itself, but with how fast megacorps find new and improved ways to avoid taxes, and politicians allow it tacitly by being slow and writing imperfect, defanged legislation.
I wish we could fund a UBI with these technological advances.
As for AI: no LLM as it stands can replace an RP partner and the long-term mutual understanding, the emotional connection, or the joint creativity. And generative AI can't replace art from someone who truly understands you. I'm not scared.
My take: LLMs and "AI" image generators can be a good (or at least harmless) thing. The current implementations, however, raise very difficult ethical questions, given that they were trained on datasets without the permission of (almost) anyone whose art was used to train them. I'm sure we'll see press releases in the coming year where companies boast about how they've trained an AI on a legally licensed dataset... and then, when you pull back the curtain, you'll see it was trained on (e.g.) Tumblr posts because the ToS allowed them to sell user posts to an AI company.
If those ethical questions are addressed, I think there can be a place for AI tools as part of a creative workflow. For instance, I don't see a problem with using them to help brainstorm ideas or to quickly visualize something if the model was trained on ethically sourced data. But using a machine tool as an outright replacement for a living, breathing artist is never going to be okay with me. I really hope we (as a society) are not going down a path where making a living as an artist becomes completely impossible.
Making small changes is a pain in the ass. If you like a pose, for example, but want to change the content of the image slightly, there are ways to keep the picture mostly the same but get a slightly different output. There's also the ability to read in a picture that's already been generated and prompt it again to modify it, but changing minute things like the number of pleats in a skirt or the texture on a paw pad is a labor-intensive process that would take a real artist five seconds.
You can't get a specific art style without being unethical. From my experience, the model generally outputs a certain consistent style (you see this as "AI slop" all over the internet nowadays), and you can emulate certain famous artists, but the only true way to emulate the style of a lesser-known artist is to perform something called "textual inversion," where you basically train the model on that artist's work and tie it to a keyword of your choosing. Of course, this is a blatant example of the theft of intellectual property. If you are educated enough to understand the artist's style, maybe you can get close on your own, but again, it's a very time-intensive process. Another biggie is using one of your own OCs. Again, you need to train the model on what it looks like, and that's a whole process as it stands now. I also assume that it would choke a little on training data of a character drawn in vastly different art styles with a limited data set, but I could be wrong on that point.
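For those curious, the core trick behind textual inversion is small: the whole model stays frozen, and gradient descent adjusts a single new embedding vector until the made-up keyword steers outputs toward the target style. Here is a toy sketch of just that optimization, with invented three-dimensional vectors standing in for real embeddings (which have hundreds of dimensions):

```python
# Toy illustration of textual inversion's training loop. Everything
# model-related is treated as frozen; only the new token's embedding trains.
target = [0.8, -0.2, 0.5]     # stand-in for "the artist's style" direction
embedding = [0.0, 0.0, 0.0]   # the new keyword's embedding, initialized blank

lr = 0.1
for _ in range(50):
    # Gradient of the squared error ||embedding - target||^2
    grad = [2 * (e - t) for e, t in zip(embedding, target)]
    embedding = [e - lr * g for e, g in zip(embedding, grad)]

print([round(e, 3) for e in embedding])  # → [0.8, -0.2, 0.5]: the keyword now "means" the style
```

In a real Stable Diffusion pipeline the loss comes from the diffusion objective rather than a direct distance, but the shape of the procedure, a frozen model plus one trainable embedding, is the same, which is why it needs so little of the artist's work to copy them.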
Finally, it has no creativity. If you want a pin-up of a character showing off their pussy, sure, you'll get that in spades. But a unique nanoscape, like I'm wont to commission? Forget about it. You can say it's like a canyon, you can really bust your ass on the prompt and get something vaguely close, but it can't beat the efficiency and accuracy of describing it to a person. And forget anything like inserts; you'd have to generate those separately... and also somehow maintain the consistency of posing between all the different views (good luck with that, lol).
In sum, if you're young and want something quick to jack off to, it's going to fulfill your need. Artists are going to shed these types of clients in the near term. I assume they're kind of the shitty clients to begin with, but revenue is revenue. It's going to be several years, though, before all this is packaged in a simple GUI that runs on average machines. There's actually a shit ton of money in being the first furry to do that specifically for furry art, but I digress.
One thing I'm stuck on, though, is the ethics of using outputs from Stable Diffusion to show artists what you want. The means by which to generate an image were unethically obtained, but they already exist. Does it then become ethical to use those means to commission an actual artist? I'm not sure about that... I'd like it to, because I usually do crappy stick-figure drawings to tell the artist what I'd like, and this would get me much closer to getting exactly what I want. And ultimately this is the "good ending" I envision for image-generation AI: better and more efficient production. Not taking over what artists do, but helping the commissioner express what they want, helping the artist define a pose and content, and perhaps even sparking some new ideas.