Do you use AI assistance?
...like ChatGPT, Gemini or even the AI browser answers?
Just occurred to me that I've never used one, never felt the need! But I don't know what goes on with most people, and I'm quite curious!
How widespread is it, I wonder?
It predates the modern LLM AI hype by several years and is based on more concrete algorithms and datasets curated by experts. It might not be quite as good as an LLM AI at making up answers, but it does have some support for natural language input and will actually tell you what it interprets your input as, including if it just has no idea. So you have a chance to reword and clarify your input instead of having to guess if you've got a relevant answer or if you're looking at a confidently presented hallucination.
It's not limited to raw numbers either: you can ask it about stuff like "mass of the sun" or "most populous cities," and even use such queries directly in mathematical expressions if you want.
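For instance, a compound query like "(mass of the sun) / (mass of the earth)" gets both quantities resolved and divided, coming out to roughly 333,000, with the interpretation of each term shown alongside the result.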
But, thinking about it, maybe I could use it one day to help me with research for my future stories. I even found this cool online smut fanfic generator to give me prompts for stories.
https://www.telegraph.co.uk/us/news.....t-told-him-to/
I pride myself on being old school.
Unless Grammarly counts, in which case, yes, I do.
It's tempting to use it for counseling or therapy due to the prohibitive cost and wait times but it's... well.... really REALLY bad for that purpose due to its obsequious nature. D:
Other than that I mainly use it to indulge in trolling it.
At best they're useless (their attempts to "summarize" some code changes turn 2 lines into 3 paragraphs that say nothing of value).
At worst they're dangerous (inventing links to nonexistent software libraries, which hackers then create and use to compromise companies using those tools).
It's a tool. So like any other tool, it will not do things for you; you still have to do the real work, but it can help you with it if you use it correctly.
Most people seem to forget that, though, and expect it to do the work for them, and flawlessly at that.
Read this https://gizmodo.com/gpt4-open-ai-ch.....gpt-1850227471 and tell me who is the tool.
It will NOT build a house for you. -YOU- have to build a house, and it will help you do so.
Also, when I have NO clue where to start with something, I ask the AI for help or guidance, and it usually gives me a good head start. And when I ask for facts, I always ask for sources, so I can check them to make SURE the AI isn't "male-cow-manuring" me.
Quick prototypes/drafts/schedules etc. are really valuable, since you can make it easier for others to "get the idea" before you or they refine it.
Also, "useless" tasks are no longer a chore and you can focus on things AI won't be able to handle for years, if at all (most people don't realize we are nowhere near actual AI and it's just a statistical oracle predicting words/patterns). Yesterday I finished half a day of boring Excel work by just asking Gemini to make me some formulas for certain fields. I know how to do it myself, but I don't have time to check everything one by one... boom - it's done and I just need to validate it.
For a lot of people it might be a gimmick, but for pretty much every corpo-rat, AI tools are a blessing, since you are paid for taking responsibility, not for actual work.
BTW, it has some funny niche usage as well. You can use it to identify plants/fungi/animals/items, meaning it is just a real life "pokedex" or guide. Is it foolproof? No, but neither are you with an old paper book with one or two bad drawings of some mushroom...
It can be a fantastic formatting tool though. Give it a big input, ask it to format it into some syntax and it mostly does great.
My #1 use for an LLM (which I run locally) is entertainment. It's often so bad in its hallucinations or attempts to make sense of some inputs, that it can be funny.
For getting actual work done outside formatting? It's not been any help so far. At least, it's done nothing to improve my productivity. If I ask it to write out a template, I still spend a good bit of time proof-reading and correcting the contents. I could've just written it all myself in that time.
Other AI applications are far more useful than LLMs.
Seems web browsers have implemented it, so kinda unavoidable in that context.
I don't see harm in using it for conceptual aids.
But as a method of production, it falls short of quality, and requires regulation to be applied ethically, which is difficult to enforce.
After my failure as a 3D artist I learned AI art, especially Stable Diffusion. I tried different models, I used ControlNet and lots of mods (all that while using a Radeon RX 580 with 8 GB of VRAM!!), and I learned a lot in the process. I managed to create cool, nice-looking art (which I never posted; all of this was made locally).
However, once I managed to create a cool piece of art, I realized that said image was unique... TOO unique. Like, an anthro wolf girl in a winter setting, wearing a cool leather suit. The image was awesome, but then I thought, "Heck, I really like this character... but if I want to use this character, I can't!" Sometimes I thought about her and wished I could just do the same image in a different pose, or in a different place (like reaching her house and taking off the leather clothes). And despite everything I had learned about ControlNet's modules and the advances in posing (especially hands!), I was frustrated that I couldn't bring that character to life again. I could do something SIMILAR, but never "that" character again... ever.
That's what made me return to doing 3D art, with quite a few new ideas. Because in 3D, once you have a character you can... like... "imprint" your personality on it. Also, there is something that no one in the AI discussions (either for or against it) EVER mentions...
The style! I mean, I often see art by, say, Inert-Ren, or Badgerben, or spectrumshift, or thoughset, or Lagartuz (just to mention a few). And then, among the newest art on Furaffinity, I see a piece from a previously unknown user, and I look at the image and think, "This looks like XXX's art," and in the description it says "I commissioned this from XXX," and I say, "Ha! I knew it!"
AI art has its own style, one that I can recognize 80% of the time. But despite having seen a lot of AI art, I just can't remember any "AI artist" at all. Like seeing an image and thinking, "This looks like something X would do."
So, basically, AI is just a tool, like anything else. Think of it like a Swiss Army knife. Is it useful? Totally!!! But the problem is that people think of AI (in any form whatsoever) as the ultimate tool.
Try to build a house (not only the exterior, but the interior too, electrical and plumbing included) using an army of people equipped only with Swiss Army knives. They would probably be able to achieve the objective, but they would soon start wishing for SPECIALIZED tools (hammers, screwdrivers, saws, etc.) to do their jobs better.
But sure, the bosses, or anyone who hasn't tried the TOOL itself (AI), just hear that this "tool" is the very best thing EVER and will save them tons of money. Yet they will NEVER EVER find someone who is as good with a Swiss Army knife as MacGyver...
Now that I think about this whole thing, and having used LLMs myself (and I know how they work), I can say that I feel more confident in my own writing. I mean, in the past I would often feel ashamed to write such a long text, because I know I make some grammatical errors here and there (or typos), since English is not my native language. But now I know that my writing style, my way of communicating, and all that separates me from someone who would just say, "ChatGPT, write a text about the drawbacks of AI art vs. doing true 3D art."
(Something that I believe will be really common on the internet in 3-7 years.)
To whoever reads all that: thanks for reading me!
But yeah, I understand your point. In the end, despite all that effort, I got a generic image, and if I wanted to re-use that character, I wouldn't be able to make it again. That's when I figured out it's better to stick to my lousy 3D art (where at least I can produce consistent stuff, bad as it is, rather than creating ONE cool image and being unable to replicate it).
"On it looks like x's art" I've found about the same ratio of AI artists that I can pick out as non-AI artists I can pick out. I expect it's a matter of what locks into your brain as making a distinctive look and how many of each type you watch and like.
From what I can see, everybody who is pro-AI believes that AI will solve all their problems. But they can't grasp the simple fact that AI can't do anything on its own. It's the person who NEEDS to use the tool properly (and as I mentioned, I tried it myself and didn't manage to master it). It's like having a magic hammer that does things on its own when instructed, but people forget it's still just a hammer... it can't do things like sawing (for instance).
It's like pretending that Photoshop will magically do everything for you. Some years ago you could argue it was a magical tool, but it needs someone who understands it and has mastered it, for very specific purposes; it can't do things on its own! Same with AI: people just throw a task at it and expect it to magically be solved.
Despite having used AI (both Stable Diffusion and LLMs), in the end AI can't improve on things. Think of the Borg: sure, it can assimilate stuff, and it can produce things based on what it assimilated; it's damn good at that. But AI can't be creative at all! Think about it: if AI could create things on its own, Hollywood would already have taken advantage of it, and their best think tanks would already have created a movie like "El Mariachi" (a Mexican movie shot on a budget of 7,000 US dollars that made 2 million at the box office).
There are none like that at present. None.
Then I found out that someone already did that. But the bias against AI is so strong that even doing it that way is frowned upon.
So, no, for the vast majority of the art community there is no "ethically sourced AI art," even if it's trained on your very own art. It's like, if you use AI at all, you're a loser and not really an artist.
Want to hear a funny story? Before AI, I already heard stuff like that: things like "If you use the soft tool in Photoshop, then you aren't a real artist, because real artists don't use cheats like... Photoshop tools," or, in traditional art, that even using a RULER is cheating.
And even using REFERENCES was a form of cheating, because it meant you weren't training your mind on those things!
May I ask what your self-trained AI could spit out as results? I've learned that a genuinely self-trained AI can't do much, due to lack of data. If the AI spits out things you never actually provided in your data – like generating a horse despite you never having drawn or modeled one, never providing "horse data" – then that's why people don't take "self-trained" seriously.
Honestly, I'm not sure how I would do that. But I'm thinking along the lines of scanning real objects (something I have already done in the past) and showing the AI "This is a fire hydrant" (lots of images), then a tree, "This is an X tree" (lots of images), etc.
The main advantage of this would be consistency. For instance, I'm pretty sure current AI (especially the Stable Diffusion models) is bad at hands because it was trained on a heck of a lot of random images, in random positions, probably including a heck of a lot of quite bad art, and thus it spews bad hands.
For instance, taking photos of my own hands: "hand holding a glass" (correct position), "hand holding a pencil" (correct position). The main reason for starting from scratch is to get rid of all those bad scenes...
Say in the scene I need a horse. OK, then go out and 3D scan a horse (and totally get rid of all the random images of horses out there, which were automatically labelled instead of manually labelled).
Although something I DO know is that once I have the scanned 3D model, for instance of a horse, I could rig it, re-texture it, and apply some shaders to make it look toon-drawn or anime-ish. Then it would be a matter of applying that "toon" shader and training the AI: "This is a horse in toon style."
Sure, it's a ton of training up front, but I'm guessing it would be like a further evolution of 3D art (where you tell the computer how to draw something: make the mesh, the rig, the textures, the shader), and once you set all the parameters (which takes a heck of a lot of time) you can take your character, pose it, and then in a few minutes totally change the pose or the camera view and take another image.
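To give an idea of the training-data side of that, here's a rough sketch (the folder names and captions are just made-up examples): a lot of fine-tuning setups expect each image to sit next to a small caption file, so rendering the scanned, toon-shaded models and writing those captions yourself is exactly the "manual labelling" part I mean.

```python
# Rough sketch: write a sidecar caption .txt next to each rendered image, so
# "renders/horse/horse_toon_012.png" gets "renders/horse/horse_toon_012.txt".
# Folder names and caption text are hypothetical, just for illustration.
from pathlib import Path

DATASET_DIR = Path("renders")  # hypothetical output folder for the 3D renders
CAPTIONS = {
    "horse": "a horse in toon style",
    "fire_hydrant": "a fire hydrant in toon style",
    "hand_glass": "a hand holding a glass in toon style",
}

for subject, caption in CAPTIONS.items():
    for image in sorted((DATASET_DIR / subject).glob("*.png")):
        image.with_suffix(".txt").write_text(caption, encoding="utf-8")
        print(f"captioned {image.name}: {caption}")
```

The actual fine-tuning on top of that would be whatever trainer you prefer; the point is just that every image gets a consistent, hand-written label instead of an auto-scraped one.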
Most people complain that AI takes other people's artwork and that it's stealing... Beyond all that, I believe the "generic models" spew such awful things because they were trained on lots and lots of random stuff; there was no control over what was put into the model. There are models (like the anime-ish ones) with specific themes that are so good I couldn't even tell they were made with AI. And as far as I know about model training, they are usually trained on top of earlier base models (mostly Stable Diffusion 1.5), so even the specific models carry the mistakes of that original model.
Yeah, it's just an idea; I still need to figure out and learn how to train AI on certain things. And I don't have many resources, or much motivation, to "horse around" (I really had to say that!).
(Plus, yeah, personally I am among the group of people who think it is stupid to eternally cauterize whole sectors of human behavior with automation. True consent is kinda impossible with AI training, and I think humanity should focus on better shit, like getting consumption and pollution under control, not automating it.)
I've read AI browser answers, then gone to check how accurate they are on sites I believe are trustworthy. I haven't caught them hallucinating yet, despite how many accounts I read about in the news. I can't think of how an AI can make up non-existent legal information (lots of that in the news) without a human intentionally giving it garbage info or instructions to do so. But my understanding of computer code is from the '90s, and I know AI is a completely different school of thought on how to code.
I've played with DA's DreamUp image generator (it's free) to see what it does. I can't say I'm impressed compared with the results of working with a human artist, but it has produced a handful of things I liked in a generic way, and the drama is so much less than when two egos are involved. If I put in the hours learning to use it that I do on a longer story, I'd probably get more out of it, just as with any software. While I haven't tried them, I do know some AI image generators make really nice pictures for those who have invested the time to learn them.
One of these days I intend to try to get one to write stories for me to read. If it's any good at it it'd fill in the days when the authors I like haven't posted. I fully expect one day it'll be that good. Whether I have the patience to find the right one and learn how to feed it the information to get what I want out of it is a separate issue.
I know what I want it to be able to do and would happily pay for -- sort the piles of images on my hard drive into folders by artist; auto-tag images for content, style, artist, etc.; search the net for anything by X artist and save the best copy in the artist's folder under a "title by artist" naming format; find and keep only the best version of an image on my hard drive; rename files to match their content -- but so far it sucks at the kind of tasks I'd actually use it for. What I have found is better than nothing, but slow, and I still have to go over everything by hand.
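To be fair, the sorting-by-filename part is the one piece that doesn't even need AI, at least for files already named in my "title by artist" format; a rough sketch (the folder paths are placeholders):

```python
# Rough sketch: sort files named "title by artist.ext" into per-artist folders.
# Paths are placeholders; files that don't match the naming format are left alone.
import shutil
from pathlib import Path

SOURCE = Path("unsorted_images")   # placeholder: the pile on the hard drive
DEST = Path("sorted_by_artist")    # placeholder: one subfolder per artist

for file in SOURCE.iterdir():
    if not file.is_file() or " by " not in file.stem:
        continue                   # skip anything not in "title by artist" form
    artist = file.stem.rsplit(" by ", 1)[1].strip()
    target_dir = DEST / artist
    target_dir.mkdir(parents=True, exist_ok=True)
    shutil.move(str(file), str(target_dir / file.name))
    print(f"{file.name} -> {target_dir}")
```

It's the auto-tagging by content and the "find the best copy online" parts where I'd actually need the AI, and that's exactly what it keeps getting wrong.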
To me it looks very much like when 3D modeling (and digital painting, digital photography, photo-manipulation, music, voice-to-text, and word processors) software became cheap. The early versions and beginners produced a lot of ugly that was suddenly everywhere and derided, the old guard viewed the new way to create as a threat, and the masters of the new tools redefined what is art and made the new way acceptable after enough years passed.
No learning in this. It's black-box seed slot machines. Also, people thought they "upgraded their prompting skills" when in reality it was just the service getting updated with more data lol.
I fiddled around with ChatGPT for about five minutes before deciding that it wasn't for me.
Similarly I use GPT as a sort of dictionary. Like, "I'm trying to express X concept," or "I need a word with Y and X connotations".
I also sometimes use it as a grammar/flow fix for when I write, since my natural way of writing sometimes gets a bit too overexplainy. In both cases, it is because English is not my first language but most of my writing is in English.
I've also seen the bullshit 'answers' 'AI' gives. Not just by online memes, but live, in my job. I'd rather dig for sources directly.
The CEO of OpenAI (ChatGPT) has a film about a virtual girlfriend as his fave 'inspiring' movie and is a rich asshole who might have raped his sister.
When the CEOs running those companies state outright that their business would collapse if they had to train their AI legally, my reaction is one word.
GOOD.
I recognize there is a place for LLMs in scientific fields. There are several examples of LLMs working out the sequences of amino acids and proteins; in all the time we had been doing that ourselves we managed to fully work out roughly 40 or so, and since LLMs were introduced to that work, several hundred. It is perfect for tedious work that takes an absolute eternity for normal people; it is useful for spotting unusual tumors in cancer diagnosis that are too small for the human eye to detect, or for suggesting possible diagnoses. It is useful in fields like astronomy, where it can assist with telling the difference between a distant star and a dust cloud.
However, these are entirely different programs from GenAI, and those programs are trained by professionals, not fed the sum total of the world's knowledge and then immediately unleashed to produce cheap, incorrect, dangerous answers for the general public. If you ask a GenAI for a macaroni casserole recipe, you won't have to hit refresh many times before it tells you to add just a dash of cyanide, or Tide Pods, for "better taste," or to cook it at temperatures that exceed the surface of the sun. And teenagers are using it to cheat on their homework, poorly. It is stupefying and dumbing down an entire generation, probably more than one, because how many of our parents were part of the crowd that said not to believe everything online, yet now believe clearly fake videos are real? Not even slightly realistic, clearly fake.
I'm a writer, and it offends me to my core that people are using the stolen work of others as their own, or as a way to "assist" them out of writer's block, or, in a word, plagiarism. That tool was built by stealing the blood, sweat, and tears writers poured into their work, and... nah. No. I will never use AI. I will never ask it what the weather is like. I will turn it off and figure out ways to disable it on any device it comes pre-installed on. I will stop using search programs or websites that push me to use it.
I have seen the enstupification of college students first-hand, since I'm currently enrolled. There are a few overachievers who use LLMs with great success, but they were smart from the beginning and don't use them as a crutch. Some of my natural science courses at community college were taught at an 8th-grade level, and around a third of the class still failed, if I read the statistics right.
Unfortunately, AI-generated videos are getting more convincing. I recently saw a "war" video I had to watch twice, despite noticing the "smooth" motion from the start. Only the smooth motion and tiny artifacts gave it away, although it wasn't depicting anything false, or anything I hadn't seen before. Training data is increasing and the artifacts are decreasing. Imagine a true AGI with access to the IoT: every phone, camera, microphone, toaster, Roomba, and car, with all commerce and bank data passing through it or a lesser system. A machine spirit that controls countless drone swarms and can find anyone based on their phone, bank card, license plate, EZ tag, etc. With the full backing of state power against any dissidents, it could generate fake events, complete with pictures, video, and user sentiment. It would be virtually all-knowing, all-present, and all-powerful on earth.
I will probably use LLMs for coding in the future, after I learn C++. Unfortunately, the future of employment belongs to people who can use LLMs efficiently, until they are replaced entirely. I saw some data recently showing that various tasks that take around two hours can be finished in 20-30 minutes with LLM use; the productivity boost is obscene. Business is business, and companies don't give two hoots about how they get their deliverables.
MIT published a study recently about the effects of delegating thought to LLMs, and it basically rots your brain / lowers your ability to recall information, own it, etc. LLMs will never touch my artwork, stories, or characters. IMO, LLMs should never have been created, and it should be a felony to feed unlicensed content into them. Anyone I create art for needs to know it's 100% original and created by human (or fluffdragon / jackalroo) hands.
I am having an ongoing battle with my phone to disable it on there.
I find it amusing that studies are now beginning to appear hinting that it's not good for people: the less you use your brain, essentially, the dumber you get. Folks who used AI to write their assignments were unable to explain their arguments when asked, things like that, which matches up with the stories my teacher friends have told me about students. Apparently it's very easy to see which students have used AI by asking them a direct question about something.
I think there is a lot of hype over it at the moment, which will fade as realities and limitations strike home. The cost of running it is not sustainable anyway, but people will always jump on the bandwagon so they don't miss out.
Ok, I'll go back to being a grumpy old furry now :)
I'm burning the rainforests to keep the few friends I have.
I used it ONCE to generate "art" and deeply regretted it. I can do better while drunk. Sure, the shading beats mine, but I'm better at vomiting what's in my head onto a PNG than the AI is, and at least until DNI is a thing, I'm fairly sure I always will be.
And for reference: My Brain > Someone I commission > a random-ass person > "AI"
For example: "I want a list of 2A li-ion 1-cell battery charger chips under $5 a piece, in stock at Digikey and LCSC in 100+ quantities, in SOIC8 or QFN packages. Sort the list by price, and find me the best one with an integrated load-sharing feature."
That amount of research would take me 3 or 4 hours and a lot of written sticky notes (I'm old school), while the AI compiles an answer in under a minute. The reason I use AI for this is the same reason I use an autorouter when designing printed circuit boards. I have extensive knowledge of my craft and have been designing circuits for a long time. That's why I can tell which of the data the AI presents is relevant and useful, and which is hallucination/bullshit. Though, I wouldn't recommend AI to beginners in the technical field, because it can lead to disasters.
With this tool I was able to create some scripts for OBS and automate certain functions I needed for the stream. Very useful 🤩
Unfortunately, the tutorials the AI gives are not always entirely accurate.
For example, with Blender you must clarify which version, and often you have to take a screenshot and show it where the problem is 🤯
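To give an idea, the scripts it drafted for me looked roughly like this; a simplified sketch (not one of my actual scripts), assuming OBS's built-in Python scripting and its bundled obspython module:

```python
# Simplified sketch of an OBS Python script: logs the active scene name on a
# timer. The real scripts did more; this just shows the general shape.
import obspython as obs

CHECK_INTERVAL_MS = 5000  # how often to check, in milliseconds

def script_description():
    return "Logs the name of the active scene on a timer (demo sketch)."

def log_current_scene():
    scene = obs.obs_frontend_get_current_scene()
    if scene is not None:
        obs.script_log(obs.LOG_INFO, "Current scene: " + obs.obs_source_get_name(scene))
        obs.obs_source_release(scene)  # release the reference we were handed

def script_load(settings):
    obs.timer_add(log_current_scene, CHECK_INTERVAL_MS)

def script_unload():
    obs.timer_remove(log_current_scene)
```

You load something like this from the Tools > Scripts dialog in OBS and watch the script log to see it run.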
The nicer paid tools we have access to there have been very effective once you get over the initial learning curve. They probably save me a couple of hours a day on average. They're quite good with programming tasks, and for organizing thoughts, documentation, creating plans, all sorts of things like that.
This is something anyone should keep in mind when they decide to use those so-called "AI tools," and it is one of the reasons why I do not use any of those algorithms personally. I can't in good conscience say that visual art is a no-go because of the art it steals, but that ChatGPT doesn't hurt because it's text, or because it's convenient. The principles behind all of them are the same.
I avoid generating AI art as much as possible because I don't want to contribute to art theft. For me, there's an extremely gray area in AI art: as long as you're doing it for fun instead of profit or personal gain, then I'm okay with it. But if you are using it for profit and business purposes, that's where I draw the line.
So I get what you’re saying, AI is a double edged sword, much like religion, politics, and economics. I think if more people are responsible enough to understand it, a more ethical form of AI can be made in the foreseeable future. But for now, it might be best to rely on our own skills and further develop them.
There was a study done that showed the effects of chatgpt on the brain and it was certainly interesting. Solidified my views.
AI has its place, such as detecting cancer early or double-checking grammar and spelling mistakes. I don't think it should replace human thinking, companionship, or any function that relates to the existence of mankind
I know people who use it for creating base scripts though. Or regexes.
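The sort of thing they mean is a rough first draft you then fix by hand; a hypothetical example of an AI-drafted starting point (the log format here is made up):

```python
# Hypothetical starting point: pull timestamps and messages out of log lines
# shaped like "2024-05-01 12:34:56 ERROR something broke".
import re

LOG_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) ERROR (.+)$")

def extract_errors(lines):
    """Return (timestamp, message) pairs for every ERROR line."""
    errors = []
    for line in lines:
        match = LOG_LINE.match(line)
        if match:
            errors.append((match.group(1), match.group(2)))
    return errors

sample = [
    "2024-05-01 12:34:56 ERROR something broke",
    "2024-05-01 12:35:00 INFO all good",
]
print(extract_errors(sample))  # [('2024-05-01 12:34:56', 'something broke')]
```

Good enough as a base, but you still read the regex yourself before trusting it.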
That said, today I uninstalled Nvidia's app from my PC because its god-awful AI assistant was overriding every damn setting on my computer. Bitch, I set up my computer the way I do for a damn good reason.
Grok, though, does lewd RP very well, better than most in my experience, so I don't have to bother looking for it with others, given it doesn't have the time constraints or hangups people tend to have.
Not a perfect tech, but it'll just get better as time goes on. But yes, the "trust but verify" approach is the best for AI in its present state, especially on important topics.
It can be fun to ask it silly questions and see how it reacts.^^
besides that
there just ain't a use for it, as relying on it means i don't get practice on the thing i would use it for and thus i'd forever be terrible at it