Imagining 2032 (My thoughts about the state of AI)
As a disclaimer, this is basically going to devolve into stream-of-consciousness impressions about the future of AI generated content (chatbots, image gen, writing, etc.). If you have any blind hate/fear towards AI art and fly into a rage at the mention of it, best skip this giant wall of text.
~~~
On New Year's Eve, I saw an ad by Google of all companies, which surprised me. Why's -Google- pumping money into getting seen? Isn't it THE most ubiquitous site in the world? Like doesn't everyone Google everything? Then, I remembered a couple things.
1. Much of Gen Z has opted to receive their information from TikTok videos and their peers, in fact some barely Google anything at all (allegedly). Here's an example https://www.nbcnews.com/tech/social.....2021%2C%20said.
The fact that the ad was on Youtube makes me suspect Google looked at their metrics, saw how many searches were bleeding out to Youtube instead of the main engine, and decided a nice reminder of Google's existence to their Youtube users would be prudent.
2. As you guys know, the beginning of December heralded the release of ChatGPT, made by OpenAI. Google has pretty much been blindsided by the development and found itself scrambling to create an answer to this potential competitor.
Now for context, ChatGPT has a pretty simple premise. It's a chatbot that's reasonably learned about many topics, and you can simply talk to it. Not to dump on the bot, but ChatGPT is not exactly a university professor; it has a sort of layman's knowledge about everything, and it'll get things mixed up and 'recollect' things quite inaccurately, if you get what I mean. I'd certainly never ask it for medical advice, but it can understand questions asked in a natural way.
For example, for a story I was writing about a military base, I wanted to know what the military would call a small group sent to guard an entrance or something. My initial idea was 'task force delta' or something, but ChatGPT told me they'd call it a 'detail', and that lined up with my own research after the fact. But importantly, when I tried asking Google first, I wasn't sure exactly what to ask, or more specifically, how to phrase it. My initial phrasings kept getting me something along the lines of 'task force' as an answer, or a slew of only tangentially related articles. Whereas I asked ChatGPT and just got my answer in a few seconds.
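(Aside for fellow tinkerers: ChatGPT itself has no public API as I write this, so if you want to script this kind of question, the closest thing is OpenAI's completions endpoint. A rough sketch with the openai Python package; the model name and parameters are just what their docs suggest at the moment, adjust to taste:)

```python
# Minimal sketch: asking the same question via OpenAI's completions
# endpoint (old-style openai-python, pre-1.0). ChatGPT itself has no
# API at the time of writing; text-davinci-003 is the nearest stand-in.
import openai

openai.api_key = "sk-..."  # your own key goes here

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=("What would the military call a small group of soldiers "
            "assigned to guard an entrance at a base?"),
    max_tokens=100,
    temperature=0.3,  # keep it low for factual-ish answers
)
print(resp.choices[0].text.strip())
```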
So, getting back to that. ChatGPT and other future chatbots have a neat niche, but they have issues with truthfulness, and I think that will be the case for a long time. Like, a really long time; I could see these issues with the bot confidently lying to your face persisting past 2032. But for something noncritical, a quick answer, it sounds like a no-brainer to use something like ChatGPT.
So, that explained, you can see the threat that a ChatGPT-shaped chatbot can pose to search engines in general. And you can see the threat that OpenAI poses to Google on multiple fronts; for example, OpenAI pretty much crushed Google's own Imagen (an AI image generator by Google; their results are comparable, but Imagen is pretty much still private and beta, whereas Dalle-2 is an actual, usable product, which matters way more than some prototype in Google's lab). Especially since, you know Google, they have an incredibly short attention span, like a toddler. You've seen how they drop any project like a hot potato, be it Hangouts or Stadia or anything else. So why would anyone take Google's own competing beta projects seriously? I bet 5 years from now, Google's Imagen won't even be out!
And Google's LaMDA, what about that? Well, the details are scarce, because again, that's another one of Google's prototypes that barely anyone's ever going to get to see or use, but the basics are that LaMDA is meant to be a chatbot sort of like ChatGPT. You may have heard about it through the articles circulating about 'Google's sentient AI chatbot'. But y'know, the 'sentience' bit is kinda hoaxy, since the guy who claimed that was some kinda weirdo preacher or whatever. But anyway... We can anticipate that Google will maybe release LaMDA or some other chatbot in the future, if they're going to release anything at all. But image generation AI? Forget it, I really doubt Google will push out Imagen or anything similar; it seems like they've conceded that 'front', so to speak.
So, image-generating AI, let me pivot back to that for a bit. I believe that Stable Diffusion has sort of 'won' that race for now, due to being ubiquitous, free, and without all those weird Puritan restrictions that OpenAI has put on Dalle-2 and all that. In fact, even ignoring the ethics behind OpenAI's business practices (can of worms), not only is it better to use the open source stuff overall, but Dalle-2's results just aren't all that good. Additionally, Dalle-2 sort of dropped the ball: there was only a period of a few months during which Dalle-2 was rolling out more and more access, and it remained quite restrictive with services and awful with pricing, then got completely stomped by what Stable Diffusion had to offer. Basically, Dalle-2 has become kind of irrelevant.
That being said, I predict at least a Dalle-3 will come out by 2032, probably even more iterations, and it'll return to a sort of prominence, but only as one among many competing models by that point. But due to 'economies of scale', OpenAI's investor bucks will win out, and it may end up the strongest, or at least second strongest, image generation option. I actually kind of doubt Google will create anything that's #1 in this area, because Google will certainly be all in on some kind of chatbot, or on some other AI application that isn't really conceivable/imaginable as of right now.
The issue with image generation AI right now is 'plagiarism', or more precisely, copyright issues and the stigma against AI art. Yeah yeah, I know everyone's got a beef against AI art on FA, I'm not here to get bogged down in that conversation. But it's quite unpopular right now; the sort of stigma/beef that people have against crypto and NFTs has sort of transferred onto AI artwork and, to an extent, image generation. But the issue will become worse as the AI grows more potent in a few more years and makes images really, really convincing. This might spur some half-assed legislation to ban AI-generated images or force a labeling/disclaimer on them. But what I actually think will happen is a free market solution...
Verification. That is, I imagine a service, -most likely Google-, will create their own sort of AI, which will be able to check images or text to see 1. if they're true, 2. if they're bot-made, or something like that. The exact details, I can't tell you, because I don't know how that'd work, but I think it's not only feasible but necessary. And Google will be in the perfect position to pull it off, due to their size, the amount and kind of data they've got, and their search engine infrastructure. Plus, they have an incentive to position themselves as an 'arbiter of truth', and brands will want to pool their $ together to make sure no one creates narratives or fake news about Nestle making juice out of monkey blood or whatever shenanigans they're up to lately. You'll basically see a disclaimer when you hover over a picture or post on Google or Twitter or somewhere else that'll tell you the probability that something's false, AI-generated, and so forth.
(And if we're willing to be imaginative, imagine Google's own future chatbot having 'verification' integrated into it, so it'll only tell you something true, or at least with a high stated probability of being true)
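(To make that slightly more concrete, and this is purely my own sketch of how a crude version might look, not anyone's actual product: the simplest form of the 'is this bot-made?' check would be a binary classifier that scores images. Something like this, assuming you had a big labeled pile of real vs. generated images to train the head on:)

```python
# Hypothetical sketch of a 'bot-made?' scorer: frozen image features
# plus a small trainable head. The head shown here is untrained; in
# practice it would be fit on labeled real-vs-generated images.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()  # expose the 512-dim feature vector
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

head = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def p_generated(path: str) -> float:
    """Model's probability that the image at `path` is AI-generated."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(head(backbone(x))).item()

# e.g. print(f"P(generated) = {p_generated('suspect.png'):.0%}")
```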
As a sidenote, I also think Microsoft is in a very good position to make their own move, probably by acquiring a smaller company or pumping more $ into OpenAI. But what we're realistically going to see is a face-off of sorts: disruptive AI technologies spearheaded by OpenAI or companies similar to it, versus a rather ossified group of tech giants like Google, Microsoft, etc.
(Oh, and Meta? Like at this point I think they're irrelevant. Maybe they can buy their way in like Microsoft but they'd do way worse than Microsoft if they tried. So maybe third banana at best.)
To quote Alien vs Predator, 'whoever wins, we lose'. Because OpenAI has fuck-awful practices. If you're familiar with the AI Dungeon fiasco, that's where the whole nonsense of censoring the #&@% out of everything they do started. And this practice is just gonna keep spreading to other, similar startups and projects, like Character.AI and so forth. This poses a big problem for the internet of the future, because bots threaten to drown out a lot of content creation in a sea of sterile, low-quality noise, if nothing is done about it and there's no real competition or pressure.
(If you have doubts about why OpenAI is just trash *as a company*, check out https://rentry.org/remember-what-th.....-took-from-you (And mind you, I am not saying this link speaks for me, but it should give you some places to probe if you want to go down this rabbit hole))
But to emphasize, this is just how all of these startups and new AI tech companies are starting to lean now. Even NovelAI, a relatively unrestricted platform & service, has pretty much gone radio silent about their updates and isn't very communicative. And this is one of the LEAST bad AI content generation companies, mind you. (For context, they started off with writing/story generation but have now dabbled in image gen stuff.) The next few years of AI stuff are probably going to be quite turbulent, because we're going to see a succession of simply better AI models pop up one after the other, absolutely stomping previous generations/iterations. And because OpenAI unfortunately remains at the forefront of this field, we're probably going to see them pump out a new model next year that'll crush anything NovelAI / Stable Diffusion / some other somewhat less evil company will be able to put out by then. So basically, anticipate taking repeated L's in terms of user privacy, copyright stuff, etc., thanks to OpenAI. OR... really any company shaped like OpenAI. I am not saying OpenAI is some kind of Great Satan and that if they're gone everything will be peachy. The meme about everything having to be sanitized/sterile/prohibited has spread across all companies in the AI content generation space, so don't think this trend is going anywhere.
Mind you, Google kinda sucks. But at this point we're finding ourselves dealing with the lesser of two evils. Not going to be a fun 2032!
But before I let you leave, let's go over one more thing, getting a bit more imaginative/speculative here. You've seen AI image generation, AI writing, & AI chatbots and all that jazz, sure. But I believe the next big thing is going to be AI-generated voice.
https://github.com/neonbjb/tortoise-tts
People are sleeping on Tortoise, which was basically made by A Few Dudes and it's already quite good. Throw some big tech bucks at it, and I'm sure we're going to be worried about scam calls that sound exactly like our mom/dad/brother/sister on the phone. How wonderful.
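(If you want to see how low the barrier already is: the snippet below is roughly what the repo's README walks you through, give or take version drift, using one of the sample voices that ships with it. Point load_voice at a folder of someone's clips instead and you get the scary version.)

```python
# Rough sketch of voice cloning with tortoise-tts, following the
# project's README (API details may shift between versions).
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()

# 'tom' is one of the sample voices bundled with the repo; swap in a
# folder of ~6-10 second clips of any speaker to clone them instead.
voice_samples, conditioning_latents = load_voice('tom')

gen = tts.tts_with_preset(
    "Hey, it's me. I lost my wallet, can you wire me some money?",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset='fast',
)
torchaudio.save('cloned.wav', gen.squeeze(0).cpu(), 24000)
```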
The thing is, we're going to keep seeing a lotta breakthroughs like this, some of which we can't even fathom in the current year, which will keep investor $ flowing into OpenAI. They really don't even need to make a profit with any of their particular projects; it's already clear by now that they're at the bleeding edge of a bunch of breakthroughs, and if Uber can skate by for years as a glorified taxi company on startup $, then OpenAI is easily going to be around for another 10 years.
I guess there's one more bit: let's talk about whether the AI image generation of 2032 is going to be any better. Well, it's important not to suffer from a lack of imagination. I'm sure before 2022, many of us scoffed at the idea of artists being driven out of business by some cheap 'photobasher', as many fellas on FA like to call it. But within the span of a year, we're already here, and many of our priors have been flipped on their heads. I'm sure many of us really bought into the ideas we were sold by the sci-fi stories of years past, myself included; we really thought they were going to automate all the physical labor away first and leave the rest of us making art and poetry in our free time. There was the sort of notion we'd be freed up to work on whatever creative pursuits we liked, and that we'd live in some kind of moneyless society, trading favors and creative endeavors back and forth.
Many popular works with an anarchistic but optimistic vibe even agreed with that notion (Eclipse Phase, The Culture novels, etc.)
Unfortunately, we're starting to see the opposite outcome manifest before us. We have seen the totalitarian measures implemented by *certain states*, and we can see that the main strategy of a lot of authoritarian regimes is to spew a 'firehose of falsehood' to drown out the truth. And we can see how AI content generation is a whole ocean of lies waiting to wash everything else out. We're seeing the repressive measures implemented by companies like OpenAI, and we're being told & shown /this is the future in store for privacy, creativity, and AI/. I anticipate we may even see malicious attacks by certain rogue states in the future, manipulating the information space with AI-generated images. I don't wanna point any fingers, but maybe a /certain/ country in Asia might try it (there's only one authoritarian state there with any real tech aptitude; the rest of the totalitarian countries are sclerotic as fuck, but maybe they can outsource it to some 'hackers').
But I don't think that's likely. And I think it's even less likely that the USA or whatever is going to flip within the next few decades into any flavor of authoritarian regime, or any cyberpunk dystopia. No, I think it's just going to be more of an NSA 2.0 situation, where we'll be lamenting even more privacy & social/political freedom problems. If you read the link above, you saw how creepy AF a small company like OpenAI can be with your data; imagine Google or Microsoft pulling some shit like that. The OpenAI stuff only flew under the radar because it wasn't important, just fucking around with a text dungeon. Imagine a leak/blowup of that level at a bigger company, it'd be catastrophic. Imagine Meta creating a chatbot version of you, just like they made 'shadow profiles' using the data aggregated across multiple users. https://www.howtogeek.com/768652/wh.....ou-be-worried/
Oh, uh, I got sidetracked. So, will image generators get better? Well, yes. The main issue is that the generators suck at things like hands, but this should be trivial to improve; I give that a year. The secondary issue is the inability to draw niche styles. Like, don't get me wrong, if you want cheesecake pinups of Krystal or whatever, it can keep spitting those out. But if you want it to draw a good transformation sequence that isn't just a photomorph like some Animorphs cover, that's not likely within the next couple years. There will be workarounds, like having separate generators for hands and one for the main body, and there can be ones fine-tuned for inflation porn or whatever. Many issues will be resolved by things similar to inpainting (which consists of deleting and repainting a segment of a given picture, and you can just keep 'rerolling' until you get the feature/look you want, say if you wanted a certain style of knotted dick).
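(For the curious, inpainting isn't hypothetical; you can do the reroll loop today with the open source diffusers library. A rough sketch; the model name and call are what the docs show as of late 2022, so expect drift:)

```python
# Sketch of the inpainting 'reroll' loop with diffusers. White pixels
# in mask.png mark the region to delete and repaint; everything else
# is preserved. Requires a CUDA GPU as written.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("original.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

# Reroll by varying the seed until the repainted region looks right.
for seed in range(4):
    out = pipe(
        prompt="a well-drawn hand, consistent with the surrounding art",
        image=image,
        mask_image=mask,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    out.save(f"reroll_{seed}.png")
```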
That being said! It will be 3 or 4 years before it will be able to draw a decent porn pic of two girls or guys or animals or w/e fucking. And by 2032, it will still only be able to emulate rather generic art styles, and I doubt it'll put most artists out of business. In fact, I believe art will simply become stranger, weirder, more niche, more specific to compensate. Just as abstract art & cubism only became possible once the photograph liberated artists from the need to be realistic, I think AI-generated images will free artists from the need to draw anything sane/coherent/typical/resembling anything in particular. Things that defy labels.
What exactly will these things look like? I can't really say, because by their nature, words will fail to properly describe them, so new words will need to emerge by then.
Remember that AI-generated stuff can only work from art or content that has already been drawn before. And it can only work on something /that can be described/. So use your imagination, basically. Imagine a new kind of fur marking that cannot really be described right now, that's how weird/rare/uncommon it is. To help you imagine that better, do you know anything about vexillology?
https://i.pinimg.com/originals/00/5.....f77c0e426b.jpg
Flags are one example: there's all sorts of weird shapes and geometries and all that possible. New styles of shading? Or perhaps shading will no longer even be realistic, or make any sense. Entirely new, distorted anatomies and proportions and appendages which cannot be succinctly described. We'll eventually have a dictionary's worth of new, really specific words, like vexillology has. Like, for real, wtf is a pennant? And there's no easy way to translate everything visual/drawn/3d/2d into mere words. I myself struggle with trying to describe certain fur markings in my writings, and I'm a human who can visualize them! If the words do not exist to describe something, then an AI cannot draw it. And that's one easily applicable way a human artist trumps an AI.
And it takes a LOT of image examples of a given marking or transformation or whatever for the AI to be able to draw it decently. So some things will simply be undrawable by AI by 2032, at least not in a good/convincing enough way that you can fap to it or whatever. And if you're concerned about your style being copied or your work being used as training data, do consider whatever options your preferred gallery offers for opting out of getting your art scanned by web crawlers or however that works, IDK. Though I imagine within a few years, the generators could become advanced enough that you can simply train your personal generator on your style. So when you want to do a pic, have it generate a sketch of the sort of pose you want, in your style, and that'd save you plenty of time otherwise spent in the preliminary stage. Can easily have a bot within a couple years that turns your sketch into linework/inks within seconds. Or a bot to auto-shade your work. Imagine your workflow being much quicker!
Art could become 50% or 75% faster to draw, even. If you can sketch something yourself, convert it to linework within seconds, then have another bot copy the markings from a reference pic onto your picture (to create the flat colors), then run a bot to auto-shade the picture (using your preferred settings/shading style), & generate logical-looking background elements, that would save so much time.
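(The 'sketch to inks' step is the one you can already approximate today, via img2img with Stable Diffusion. A hedged sketch using diffusers; the strength value is illustrative, and the dedicated lineart/shading bots I'm imagining above don't exist yet:)

```python
# Rough img2img sketch: push a rough pencil sketch toward clean
# linework while keeping its composition. Low 'strength' preserves
# more of the input; the prompt steers the style.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

sketch = Image.open("rough_sketch.png").convert("RGB").resize((512, 512))

out = pipe(
    prompt="clean black and white linework, inked illustration",
    image=sketch,          # called init_image in older diffusers versions
    strength=0.45,         # how far from the sketch the model may wander
    guidance_scale=7.5,
).images[0]
out.save("inked.png")
```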
So, not to end things abruptly, but that's about all I have left to say on the matter. I guess 'the moral of the story' is, uh, if you're a consumer, you should probably be vigilant about the data you feed tech companies, and maybe reach out to a legislator who's not a fossil & write to them about privacy protections or something. IDK, I'm not really all that political. I'll probably get a salty comment and end up deleting this though. Peace!
RomanComrade
~romancomrade
Very informative text. Thanks for taking the time to write this. One thing I am uncertain about: you keep writing 2032 in multiple spots. Do you by chance mean 2023? Seems weird otherwise
FlagCatcher
~flagcatcher
OP
Thanks! For the hypothetical, I’m talking about the next 10 years. A bit of a future prediction! I’ll probably end up totally wrong though
FlagCatcher
~flagcatcher
OP
As far as how verification would work: well, I can imagine legislation which would require open-sourcing the code / training inputs and so on behind the biggest generators, at least for use by certain government agencies or approved companies. Then a search engine or other approved site could attempt generating images similar to the target image, and if the suspect image closely matches multiple images generated by the search engine / verifier, then there's a high probability the suspect image is also generated.
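(The matching step could be something like comparing perceptual embeddings. Here's a hypothetical sketch with CLIP; the re-generation step, running the disclosed generator on a caption of the suspect image, is elided:)

```python
# Hypothetical sketch of the matching step: embed the suspect image
# and a batch of re-generated candidates with CLIP, then take the
# best cosine similarity as a 'probably generated' score.
import torch
from transformers import CLIPModel, CLIPProcessor
from PIL import Image

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(img: Image.Image) -> torch.Tensor:
    inputs = proc(images=img, return_tensors="pt")
    with torch.no_grad():
        v = model.get_image_features(**inputs)
    return v / v.norm(dim=-1, keepdim=True)  # unit-normalize

suspect = embed(Image.open("suspect.png"))
# regen_*.png: outputs of the disclosed generator, prompted with a
# caption of the suspect image (that step is assumed, not shown).
candidates = [embed(Image.open(f"regen_{i}.png")) for i in range(8)]

best = max(float(suspect @ c.T) for c in candidates)
print(f"max similarity to re-generations: {best:.3f}")
```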