Game Companies, Capitalism, Population
Posted a year ago
In the last week a lot of gaming news has happened, from Sony's handling of Helldivers 2 to Microsoft's closing of the studio that made Hi-Fi Rush.
There's an argument in capitalism: if you don't like something, don't buy it. The problem is that if costs get low enough and enough people pay, anything can be profitable. Consider email spam: it's very cheap to distribute, and only a few messages need to land a sale for the whole operation to pay off, and so it exists. Most people don't fall for it, but everyone suffers because a few people can't stop themselves from buying into it.
If you have a small population, let's say 100 people, and we assume 1 in a million people is a genius, then the odds of that population containing a genius are low. But as the population rises, the odds increase, and once it reaches the billions it's unlikely that a genius wouldn't exist. In the same way, more people means a larger audience, so even a company that is actively making its products worse can still remain profitable.
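To put rough numbers on that, here's a minimal sketch, assuming purely for illustration that "genius" is an independent one-in-a-million trait:

```python
# Chance that a population of size n contains at least one "genius",
# assuming each person independently qualifies with probability p.
def chance_of_at_least_one(n: int, p: float = 1e-6) -> float:
    return 1 - (1 - p) ** n

for n in (100, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} people -> {chance_of_at_least_one(n):.2%}")
# 100 people     -> ~0.01%
# 1 million      -> ~63.21%
# 1 billion      -> effectively 100%
```

The same curve applies to customers: given a big enough audience, even a tiny conversion rate guarantees someone pays.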
Telling people not to buy things doesn't work. People still fall for spam, people will still buy Microsoft games even if they're full of ads, and studios will still flock to Microsoft for money even knowing they could be closed as a result. In the end it's still profitable, because there are so many people paying for it to happen.
The better course of action is to buy things that will result in the changes you want. Companies have taken advantage of this too: everyone wanted less pollution, so some companies just added the color green to their packaging, and that was about it for the cheaper products. But setting that spam-like behavior aside, people can still pay for the changes they want to see rather than for the things they find comfort in.
Indie games became popular, I think, because nothing felt new, games were slowly getting worse, and big studios were trying out new ways to make money that also made their games legally questionable. Would you pay $15 for a few gems, or would you pay $15 for Starsector (one of the best games)?
I've been writing a lot of journals and then deleting the whole thing lately, and while it's probably for the best to keep doing that for stuff like this, I'm keeping this one. Not because I feel that strongly about this topic, but because despite not having played Starsector in a couple of years, I still feel it's very good. Also there's a furry mod for it that lets you have a bunch of anthro icons instead of humans, and I feel there need to be more furry mod packs in games. This bit derails the whole journal, but screw it: big game studios have been doing these kinds of things for years and people keep paying them to do it. I'd like to imagine people refunding those games and buying better ones instead would be a step towards fixing all this.
Microsoft and the rest will probably exist forever at this point, and if not the companies themselves then at least their methods and ideology; spam is eternal. We need to accept that they exist and try to fund the things that actually do the good we wanted from all of them, and maybe they'll take the hint.
Eternal fight between credit card companies, erotica, and...
Posted a year ago
1: Credit card companies, ad companies, and so on can't control what you buy by going through you directly. Instead they control what's available and limit your options by telling internet companies that if they carry erotica in any way, their credit card services will be pulled.
2: AI is created and run either by very large companies or by a multitude of user-owned computers, and it's a lot harder to tell Microsoft to stop running any and all AI services or else the credit card company will stop providing its services.
3: Thus far, Patreon, Tumblr, OnlyFans, Gumroad, Pornhub, and many other sites have been given the same set of rules by credit card companies, and they have either chosen to continue on without credit cards or to limit the kinds of erotica available on their services.
My argument is that AI is going to become almost as prevalent as spam. It's harder to target the larger companies, which are actively trying to stop people from making erotica, especially when their AIs are nearly unusable because of the anti-porn safeguards that now exist and yet still manage to produce it. If we fight AI and manage to limit its use, that precedent becomes a larger problem for the internet as a whole: if sites that produce AI-generated erotica can be banned, the next argument will be that all erotica should be banned. And even if we managed to make AI cease to exist, we'd continue to see credit card companies putting pressure on artists, who constantly jump from platform to platform to avoid breaking these rules.
I very much oppose the kind of self-regulation that existed in the era of the Comics Code Authority and the Hays Code; it did far more harm than good, for no reason other than an idea of morality better suited to the 1800s. Their belief was that reading horror comics could cause people to do terrible things, could inspire them to do harm. In my opinion the news is more pervasive, has a wider audience, and does more to inspire people to harm others in the hope of being immortalized through their actions. Better than censoring the news would be offering free healthcare, including mental health care: solve the root of the problem instead of targeting a symptom. But it's just easier to give companies an ultimatum that costs the card companies nothing to implement, since it won't reduce their own customer base.
I'm gonna bet that the artists will win in the end, not because of any convincing arguments, but because they have the backing of companies (who would gladly end those artists' careers) that see AI as a genuine threat to their goal of regulating the internet.
Going forward, it would be a good idea not to use credit cards on the sites that sell erotica. Every user who does is another data point those sites weigh when deciding whether to accept the conditions credit card companies impose on them. You can still buy things there; just try to use something that isn't tied to an organization that's indirectly controlling your choices for you.
The primary goal of the furry fandom and sci-fi
Posted a year ago
We have ideas and stories that can't fully take place in the real world because some critical parts, like warp drives and anthros, don't exist and may never exist. Because of this, though, these things can take any form we like: anthros can include alien species, and warp drives and teleporters can come in all shapes and sizes. These fandoms are driven primarily by their collective creativity, since coming up with totally brand-new ideas is hard to do, and hard to read, watch, or look at without at least some familiarity.
If we can inspire things to become real, or at least take inspiration from stories when designing a product, this feeds back into itself and inspires people to try to bring more things out of fiction and into reality. Much of the furry fandom is involved in tech and politics in some way, and the same goes for the sci-fi fandom, so this creates a feedback loop where, wherever possible, we favor the things that support what we already liked and wished were real.
The furry fandom isn't as focused on the tech side of things, though. For us, the setting just needs to account for the lifestyles of our characters; we like to imagine the complexities of anthro characters trying to live together in the same cities and towns. Our focus is on our characters, the details of their lives, and the ways they interact with our friends' characters.
I do go on about AI more than I should, but I feel like it would be a step towards a fandom sandbox where we create our characters, have a few conversations with them to make sure they're behaving correctly, and then send them off into the sandbox to play with our friends' characters. We could then write events for our characters to deal with, tinker with them if they do something we didn't want, and refine them as they go. We could have D&D-style games where the events are guided by chance rather than our preferences, and every character we meet along the way would belong to someone. And at the end of the day you could chat with your character and see how their personality has grown or improved.
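As a loose sketch of what one of those sandbox characters might look like in code; the generate_reply function here is a hypothetical placeholder for whatever text model you'd actually plug in, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    species: str
    owner: str
    personality: str                      # short description a text model would condition on
    memories: list[str] = field(default_factory=list)
    public: bool = True                   # owners could keep a character out of the shared sandbox

    def remember(self, event: str) -> None:
        """Record an event so later replies can reference it."""
        self.memories.append(event)

def generate_reply(character: Character, prompt: str) -> str:
    """Placeholder for a text-generation call; a real version would send the
    personality, recent memories, and prompt to a language model."""
    recent = "; ".join(character.memories[-3:]) or "nothing yet"
    return f"{character.name} the {character.species} (remembering: {recent}) answers: {prompt}"

# The tune-then-release loop: chat with the character, correct it, add memories,
# and only then let it loose among friends' characters.
vex = Character("Vex", "gryphon", owner="me", personality="curious, easily distracted")
vex.remember("met a merchant at the night market")
print(generate_reply(vex, "What did you do today?"))
```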
That, and the fandom's getting older; there are a lot of characters that just aren't drawn anymore but used to have hundreds of drawings made a year. Sometimes people lose interest in their character, but with simulated characters linked to all the works they once appeared in, the fandom wouldn't just be an archive of anthro art continually buried under all the new stuff.
Realistically though, I don't think we could run a simulation with every single character the furry fandom has ever created. There'd be millions living in different settings and across multiple worlds, with different rules about the technology and magic in each world, and all their character interactions would need to make sense according to those rules, but would also need to make for an interesting story. It could be a long time before any fandom could make something useful out of it.
I might recall some characters I don't see anymore, but I can't recall all of them at once with perfect clarity. Having the characters bumbling around on their own personal quests would at least give us an idea of what's out there. I'm picturing it a bit like Cities: Skylines, where you can click on individual people and see who they are and what they're doing, except this system would be more complex, and instead of a generic humanoid body each character would get a body that matches their form and size. Maybe something like RimWorld or Dwarf Fortress, where it's a story generator by design.
And if people don't want their characters to live on after they've stopped commissioning art of them, they could always keep their character private, so that even if other people tried to add the character to the system it would be recognized and blocked from public use. People could still take the original artworks and make up their own scenarios without permission, but there would be no public sandbox version for a century or so. Once the copyright runs out the character could be released, but unless they were very popular it's unlikely anyone would remember them, and the sandbox would be the only way to introduce people to the character and their owner.
Unless of course the fandom is gone in a century, or has changed so much that the sandbox is no longer relevant to the fandom's primary goals, but I'm hopeful that with the way the internet is, fandoms will persist.
The game Microscope and AI
Posted a year ago
Text-generation AI is reaching the point where it takes a few paragraphs to find the flaws that show something was written by an AI, usually flaws to do with learned experiences the machine can't have. The game Microscope is all about filling out a world piece by piece, where major events shape future events and so on, and where future events rule out certain past events from occurring. With an AI, you could fill out the parts you don't have any ideas for, or have it play-act as characters in a specific scenario for you. It could also keep track of many parallel stories, so that if another relevant event should impact a situation, it does.
It's weird that for a game so well suited to testing the capabilities of AI, there doesn't yet seem to be any version of it that uses one. Imagine having a whole story already written, and if you feel an event should have happened at some early point (one that's impossible given later events), you can add it and watch how it changes the rest of the story in seconds, overriding the existing narrative chunks.
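Here's a minimal sketch of how that could work mechanically; rewrite_event stands in for a call to whatever text model you'd actually use (a hypothetical placeholder, not a real API): insert an event into the timeline, then cascade rewrites forward through everything after it.

```python
from dataclasses import dataclass

@dataclass
class Event:
    year: int
    summary: str

def rewrite_event(inserted: Event, later: Event) -> Event:
    """Placeholder for a model call that rewrites a later event so it stays
    consistent with a newly inserted earlier one."""
    return Event(later.year, f"{later.summary} (revised to account for: {inserted.summary})")

def insert_event(timeline: list[Event], new_event: Event) -> list[Event]:
    """Insert an event in chronological order, then cascade rewrites forward
    through everything that comes after it."""
    timeline = sorted(timeline + [new_event], key=lambda e: e.year)
    idx = timeline.index(new_event)
    for i in range(idx + 1, len(timeline)):
        timeline[i] = rewrite_event(new_event, timeline[i])
    return timeline

history = [Event(100, "The city of Vell is founded"),
           Event(450, "Vell collapses in a famine")]
history = insert_event(history, Event(300, "Vell discovers cheap desalination"))
for e in history:
    print(e.year, "-", e.summary)
```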
The ideal AI
Posted a year ago
Currently AI hallucinates and does mediocre work, but it's also still limited by our ability to design an AI, rather than having an AI build a better AI and create its own superior replacement.
The ideal AI that could be achieved today would be a librarian. It has read every book, seen every film, heard every recording, played every game, and can also make associations between all of these. So if you liked a particular book, it can point you to a game that has similar themes. If you wanted to find references for work, it can guide you in the right direction.
From this, it's a small step to an AI that can do its own research and find patterns we've missed. But more importantly, it bridges the gap between people who want to learn and the materials that can help them learn what they need, quickly.
Currently, if you want to learn a language you might go on Google and get flooded with sponsored products that are full of ads and don't give you enough information to tell whether one is any better than another. A digital librarian could handle that for you, though it would invite the tempting idea of making the AI worse by having it favor some sources over others for a profit. So instead of having the AI show people a digital copy of things directly, keeping the pain point of finding the books or games yourself through other services might be enough to keep the advertisers at bay.
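A crude sketch of the association half of that librarian; a real one would use a language model's embeddings, but plain word overlap is enough to show the shape of "you liked this book, here's a game with similar themes":

```python
def theme_words(text: str) -> set[str]:
    """Crude theme extraction: lowercase word set. A real librarian would
    use embeddings from a language model instead."""
    return set(text.lower().split())

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the two works' theme words."""
    wa, wb = theme_words(a), theme_words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# A tiny catalogue mixing media types; the librarian only cares about themes.
catalogue = {
    "a novel about lonely space salvagers": "book",
    "a game about scavenging derelict ships for parts": "game",
    "an album of ambient engine-room drones": "music",
}

def recommend(liked: str) -> tuple[str, str]:
    """Return the most thematically similar catalogue entry and its medium."""
    best = max(catalogue, key=lambda desc: similarity(liked, desc))
    return best, catalogue[best]

print(recommend("a book about scavenging wrecked ships"))
# -> ('a game about scavenging derelict ships for parts', 'game')
```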
Best way to build an AI using business
Posted a year ago
There should be a company with one goal: to automate itself completely.
To achieve this goal, it should pay all its workers not for hours worked but for work done. It should only punish workers if the work falls below a set standard, and it should give them the ability to test things before using them in the workplace for real.
If a worker finds a way to completely remove themselves from the workplace and have their custom automated systems do all the work instead, the worker should continue to be paid as if they still physically came into work.
If at any point the company changes how it does business, it's up to the workers to adapt their systems to the changes.
If someone wants to join the company, the company would hold competitions where specific existing problems within the company can be solved, and whoever designs the best solution will be hired.
Once fully automated, the company would then increase wages to everyone as a percentage of the earnings of the company, encouraging them to find ways to automate the growth of the company.
Copyright should only apply to paid copies
Posted 2 years ago
Copyright seems to have become more about the loss of potential gains than about ensuring a person doesn't sell copies of someone else's products for profit.
Like, anti-piracy enforcement in music used to be about cracking down on people selling copies of records, but over time it became more about stopping people from sharing free copies of everything from games to movies, novels, and shows. In some cases people do try to sell these things: resellers buy out consoles and jack up the prices to create artificial scarcity, which ultimately damages game sales in the long run (hence why that should be stopped).
The thing is that not everyone can afford these products, and removing every way to pirate them doesn't mean those people will suddenly pay for everything; it just means they will see fewer things.
Consider the radio. If everyone had to pay to listen, how many people would have had radios in their cars? Ultimately it's the radio broadcaster footing the bill, the same way pirates pay for a product before distributing it freely. The difference is that the broadcaster pays a lot more and relies on ads to keep it going, while pirates in most cases do it at their own expense. Internet memes are the simplest form of piracy, and they still get hit by takedowns whenever the corporations feel like it.
I feel there should be two separate categories: paid works and unpaid works. If a copy makes any kind of money, it should fall under all the laws and regulations we already have, even if the distributor is a non-profit; but if no money changes hands at all, then no company should be able to do anything about it.
Faking infinite finite resources
Posted 2 years ago
Land and property aren't infinite, and a person can buy up land they have no intention of personally using in order to drive up scarcity; the value of all the properties they own then rises, and so on. There should be no limits on what people can buy, but there should be a tax on things that go unused by the people who own them.
Rentals grow in popularity not because people love the freedom to move between cities and countries; most of the time renting is just the only thing they could afford in that moment, and the constant drain of paying rent could instead have been saved up to buy a home outright. Why not have a system where you pay the local rent equivalent over a number of years, and at the end you own the home? Assuming rent in New York is $1k a month and buying a New York home costs $900k, it would take 75 years of "rent" payments before you owned the place. Still, assuming you could pass it on to your kids and you don't leave the city in your lifetime, it could be better than the current system, where you pay for 75 years and have zilch to show for it.
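The arithmetic behind that 75-year figure, using those assumed numbers and ignoring interest, taxes, and price changes:

```python
monthly_rent = 1_000      # assumed New York rent, USD per month
home_price = 900_000      # assumed New York home price, USD

months_to_own = home_price / monthly_rent
print(months_to_own / 12)  # -> 75.0 years of rent-equivalent payments
```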
When the pandemic happened and people were leaving New York, businesses closed up, and people documented all the shuttered shops by walking down the street with their phone cameras and looking in windows. Rent prices officially stayed the same despite the exodus, except not really: a bunch of people were given under-the-table deals where they paid less rent for a year or so but weren't supposed to tell anyone. Which makes more sense than keeping the listed prices sky high and expecting people to just accept it while all the local public services shut down.
So the problem is that businesses don't want to appear to have lost value, even when they have. Land and property are bought up to prevent others from making use of them unless they pay a high price. Sure, these things were never infinite, but rentals should only exist to serve a purpose: a temporary place to stay for people who aren't sure where their job will take them next. If there's ever a time when rentals are the only option, when debt and mortgages are the only option, when people own places they have never seen and take steps to stop the people who live there from living there, then things are fundamentally broken.
Knowledge, comprehension, memory
Posted 2 years ago
Memory lets you read things and repeat what you learned, but it doesn't test your comprehension. People watch the news and can repeat what they hear, but ask them to clarify what certain words mean, even ones central to the whole argument the anchors are making, and this form of knowledge falls flat. Will they realize their own lack of knowledge when prompted this way? No; often they'll fill in the blanks poorly, misinterpret details, and convince themselves of something nobody actually told them, something that would be proven false very quickly if tested.
Comprehension is annoying, but better. Simply reading a book, a novel, anything, isn't comprehension. If you rewrite the book paragraph by paragraph to summarize it or strip the excess fluff, the end result is true comprehension: you'll spot typos faster, identify sentences with possible double meanings, and so on.
Part of why the internet is full of misinformation and misunderstood information is that people don't have time to spare on comprehension. In a world where information can last forever, stored on hard drives and server racks for what might be eternity, tiny mistakes add to the pile: whenever we rush a digital product or decide it's good enough, that product ends up kept in that state for decades, in use and barely functional the whole time.
For automation to truly replace people, comprehension is the most important thing to figure out. Right now it believes whatever you tell it; it has no internal concept of good information versus bad. You can prioritize information that's correct, but then you're relying on humans to do a slow task that bottlenecks the output, and sometimes they'll let bad info slip in or exclude what they personally consider bad. If an AI could categorize this information itself, line by line, it would gradually provide more accurate information, be able to tell fact from fiction and lies from truth, and through comprehension act on reality with more intent and purpose.
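A bare-bones sketch of that line-by-line idea; classify_line stands in for a real verifier model (hypothetical, not an actual API): label each line and keep only what clears a confidence bar.

```python
def classify_line(line: str) -> tuple[str, float]:
    """Stand-in: return (label, confidence). A real version would check the
    claim against trusted references or a trained verifier."""
    if "always" in line or "never" in line:
        return ("likely-overclaim", 0.4)
    return ("plausible", 0.8)

def filter_source(text: str, threshold: float = 0.6) -> list[str]:
    """Keep only lines labeled plausible with enough confidence."""
    kept = []
    for line in text.splitlines():
        label, confidence = classify_line(line)
        if label == "plausible" and confidence >= threshold:
            kept.append(line)
    return kept

document = "The bridge opened in 1932.\nThis bridge never needs maintenance."
print(filter_source(document))  # keeps the first line, drops the overclaim
```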
Arguments over AI and collapse of civilization
Posted 2 years ago
Technically speaking, great advancements in history have tended to happen around the collapse of some form of social order. Feudalism no longer exists; if the authority figures of the past had known which advancements would end their reigns, they likely would have avoided them.
In the same way, AI genuinely could result in the collapse of our civilization. People with authority are using fearmongering to drive up sales while also pushing new regulations to stop ordinary people from taking a cut of the new frontier, but assuming none of that works and everyone everywhere gets a chance to benefit from this advancement, I don't see how businesses could keep the exact same structure they've had for decades, or how monolithic companies could keep working at a snail's pace.
There are plenty of things people would like to see fail and disappear. We'd like to be able to own things again instead of renting places and using services that offer less every year. AI didn't create these problems; maybe a collapse was already coming because of other advances. But AI will definitely break everything from work to education, and the questions we keep asking are just the worst. In the same way feudal-era peasants might have asked what would happen to their families once tending the land stopped being a viable job, we're asking what will happen now that everyone is potentially more productive for less effort, and how children will learn if they can just ask an AI for the answer. Maybe the answer is what it sounds like: in the future we won't rely on everyone having a cookie-cutter education; maybe everyone just has an assistant that's a genius in every field and makes connections no human possibly could. Maybe living in a world where people with authority but no expertise make decisions on behalf of people with expertise but no authority is pretty shitty, and bemoaning what we're going to lose is just shortsighted.
I'm reminded of the Matrix a little. We can gain insights and knowledge and education from all of this, but someday we won't have to. Maybe one day we reach that point where we can integrate AI into ourselves so we can stop asking questions and just know things. We can't get there without going through the messy stage of "having to ask the AI for help in all things".
SoylentOrange's Ebook The Last Lapdance
Posted 2 years ago
https://twitter.com/SoylentOranges/.....21550765613151
https://ebbooks.itch.io/the-last-lapdance
It's about USD $10, just came out
Prostitution and equalizing society
Posted 2 years ago
(I know very little about the subject of Kabuki, but these are my thoughts on something I just found out.)
Kabuki theatre in Japan was originally performed by all-female casts, and several of the performers were prostitutes. It was popular in red-light districts and brought many sections of society together to watch performances.
A large part of widening the gap between high and low society is adding barriers between groups, whether by raising living costs, banning one group from wearing fancy clothes, building cities with hostile architecture to push the homeless away, or any number of other methods to keep the wealthy from co-existing with the poor.
So women were banned from these performances on the claim that they were too erotic. In reality the performances were bringing people together in a way that might jeopardize the existing social order, and so the people most reliant on that order put a stop to it.
In the American West, prostitutes followed the construction of the rail lines bridging east and west and used the money they earned to build schools. The government later reacted to the power these women were gaining, and the reaction was predictable.
I'd already been incorporating this idea into my comic, based on events like the Hays Code and the cabaret tax and a few others, but I didn't realize Kabuki had origins that would have been very useful when planning the current story arc. I'll try to work a bit of that in later, but first I have a little reading to do.
Current hypothesis: racism, sexism, and all the other isms are a symptom, not the cause. The cause is rich versus poor; the isms are just shorthand for whoever is poorest and least powerful in a given society. "Villains" were once just the villeins, peasants working for lords who owned the land and allowed them to work it in exchange for a portion of their labor and some of their profits. They became the word for evildoers not because they were truly fearsome, but because they lacked power and wealth.
So of course prostitution would be treated like a major crime by a society that values the difference between the powerful and the powerless, because prostitutes bring everyone together under one roof regardless of social standing, and then build schools with the money they earn.
If you happen to know more history where something that doesn't harm other people was made a criminal act because it upset the balance of power in that time and place, I'd like to hear about it. Alternatively, are there examples of such criminal acts eventually being made legal again, and what happened to those societies after the reversal? Some drug legalization is going on right now, and by this point people know more about why those drugs were made illegal in the first place, but there are probably other times in history when prostitution was legalized and society changed for better or worse. I feel like it probably improved the lives of the poor but temporarily slowed the growth of the rich, before the new wealth of the poor led to even more income and wealth for the rich (I don't have proof or examples, it just seems like that's how it would happen).
The reason work is broken
Posted 2 years ago
Everyone needs to work; that's the reason.
Mosquitoes need to drink blood to breed, and malaria took advantage of that. Work is the same way: work can be good or neutral so long as it's not a given that people will apply for any job whatsoever. The more certain it is that people will work for the worst people at the worst companies no matter how bad things get, the worse things will get with time.
And if the companies making the most money do so by taking advantage of their workers in ways that damage their physical and mental health, by cutting wages and safety, and by encouraging workers to break company rules to hit a schedule the company decided without them, then every other company will need to do the same to compete.
The easiest way to fix this: make work optional. You could technically work from home as a freelancer; you don't have to work at a company. Companies exist to direct clients, to determine the client's objective, and to get workers to collaborate on completing that objective and delivering it to the client. A company is just a chain linking clients and customers to producers and workers through indirect means; without the workers or producers there is no company, and the more links in the chain, the less efficient the company. So a freelancer is the most efficient job out there, but you have to connect directly with clients, which is harder to deal with.
Chains like these have benefits. Twitch streaming is more popular than streaming directly from your own site, partly because Twitch makes it easier to connect with people than a direct connection between streamer and viewer, but mainly because of the payment system. Small donations under $15 can be refunded at a loss to the streamer, which leads to people gaming the system, basically trolling, and costing a content producer a lot of money at no cost to themselves. If people do this on Twitch, Twitch takes the loss instead, and the streamer doesn't have to deal with it. Twitch is also in a better position to handle people who do this than a streamer who doesn't have the time or the automated systems to deal with it and also stream.
In short, companies reduce the non-work a worker would otherwise need to do if they were dealing with the client directly.
A more complex fix for work would be to completely automate what companies do, and then have all workers become freelancers working for that automated system. So instead of Twitch taking a cut of what you make, or a corporation diverting the income you could have earned doing the same work alone, you would be paid almost the full amount, while the AI that solved problems you didn't even know existed takes only enough to cover its server and energy costs.
AI and education
Posted 2 years ago
In order to perform better than past generations, people need more education. As more people advance science and tech, more education is required to make further progress, but the same age group is expected to learn all this new material within the same timespan. AI makes work easier, but it makes learning through struggle less effective. AI also makes mistakes that a fully educated person or teacher might not catch right away, and which will almost certainly go unnoticed by a student using AI to do their work faster. However, students are graded on output, not on their struggle, so as AI improves and makes fewer mistakes, students who use AI will also make fewer mistakes, and their collective output will improve with time. Students who don't use AI may not gain the skills needed to use it effectively in the workplace, and so they may be passed over in favor of a worker who can look at a problem and know how to coax the AI into solving it for them.
Would you know how to get an AI to write a novel this minute, or would you need to spend a week learning how to set it up? It's as easy as pressing a button, so long as you've done all the work that goes into making the button functional. Still, AI has a lot in common with plagiarism, and it's likely that students who learn to use AI will end up claiming its work as their own, which causes problems for the people who strictly want human sources of information.
We're bad at solving the right problems
Posted 2 years ago
Experiences are the most important thing in life. Life is the product of adaptation and evolution, in which we, as individuals and as a species, over the course of our lives and over eons, experience the world we find ourselves in. Humanity found a way to pass on the experiences of our elders to future generations through language, and today we are one of the most powerful species on Earth. Experiences are everything in life.
And yet we ask questions like "if you could have done things differently, what would you do?" This is the most pointless question, because while it takes the person's current experiences and applies them to a past situation, it's usually a situation they either couldn't have anticipated or one that's unlikely to come up again for them or anyone else.
A better question for solving problems would be "In a perfect world, which of your past experiences would fit the least in said world?" This way the onus isn't on them to fix their own problems, it forces them to identify where the problems in their life were and how removing that experience from all future lives would be a step towards a more perfect world. And since they aren't being asked to fix the problem, just identify it, it means they could be racist, sexist, bigoted, whatever, and they still can contribute without their biases getting in the way. Their solutions would be corrupted by their feelings, but those feeling stem from past experiences, and removing those experiences might abolish racism, sexism, etc.
However, removing experiences would need to be done by a unbiased system, or it would lead to further negative experiences. The goal is to only have worthwhile, educational experiences in life, and any biased system will inevitably lead to negative experiences in a few people that they would wish hadn't happened.
Timeline for why it's hard to live well
Posted 2 years agoA company makes healthy food.
Another company makes similar food but adds sugar, fat, and different things known to make food more popular, and as a result makes a little more profit.
The second company uses the extra money to advertise more than the other company, making them more popular.
The second company uses the extra money to gain exclusive rights to certain farms, meaning the first company has to pay more for the same materials from elsewhere, and now the first company loses more money buying the same materials.
A pandemic or a recession or any number of events happen, and both companies have pressure put on them. The first company ultimately loses money trying to keep up, while the second company is making so much money that they barely have to change their tactics.
The second company introduces a family sized version of their food, which is actually the original size; the new original size is smaller but costs the same, and they make more profit.
The second company buys the first company a few years after the original CEO stepped down and was replaced by a CEO whose first job experience was as a manager for one of their parent's companies.
So, take all of that, replicate it across every industry, and the only companies left are the ones which are new enough to be owned by their creators, or old and experienced and definitely not doing what's best for people.
To fix this we would need to scrap the modern economic system, because it's designed for competition. Competition is good for allocating finite resources, but it's risk averse; it will do what it knows works even if what works is known to be harmful. Asbestos was finally banned in Canada in 2018. Corn syrup is cheaper than sugar because of government subsidies for corn, so if you pay taxes you don't see the benefit in the price, and you get a sugar replacement that's worse for your health. Removing the subsidies wouldn't fix the core issue, which is that companies will add sugar to increase product popularity.
A new system would need to prioritize distributing goods and services in a way that reduces the harm each individual causes by existing. A company's existence shouldn't cause problems for other companies, and one person applying for a job shouldn't negatively impact another person applying for the same job. If a person is bad at their job, the system would prioritize getting them a job where they can contribute more, without promotions and demotions getting in the way, like when a person was promoted three or four times but won't be promoted again because they aren't a good fit for the position they now hold. Doing the most good while reducing harm should be how the economy works, but for the most part people have to work against their desires if they hope to do one good thing that lasts a long time.
As it stands, the best way to argue for good things all people should have access to is to talk about how much more profitable those people would be if their living standards were raised, and to highlight how little it would cost to make it happen. This sucks; people should be paid for making the world better, they shouldn't have to choose to do it for free in their spare time.
The value of infinite supply
Posted 2 years agoA workforce is made up of people, and a job having ten people apply for a position is better in many ways than having a million people apply, because it's easier and cheaper to find the best person out of ten than out of a million. If there was a way to guarantee the quality of a million people was high enough for the position, this would be less of an issue, but each one in that million must be checked to make sure they aren't lying just to get a job. Ultimately, the more people a company needs to sift through, the more it costs to replace anyone who leaves, the harder they will try to retain workers, and the higher the cost their customers will pay to cover that expense. On top of that, workers who leave for better paying jobs every few months earn on average 1.5x more (I think) than those that companies managed to retain.
This is fundamentally a sorting and search problem. The closer to infinite options we get, the more expensive it becomes to find the highest value producer, because the time it takes to search infinite options has a cost, both in the act of searching and in the work that could have been done during the search. Optimally, the work would continue while the search continues, or the search could be skipped and the company would just select the best worker the first time, every time. But there are obvious reasons why this can't be done.
Let's imagine, though, that one applicant is human and the others are all dogs. You can't tell they are dogs just from a resume if the resume was written by bots; nowadays most people need assistance writing a resume so theirs doesn't get tossed out by company bots that scan for specific no-no words, phrases, and so on. Given how far this intellectual arms race has escalated, we can assume a dog could at this point get through everything except the interview stage, which still takes time in a schedule and money to do. A dog would require at least a little assistance up until then, but a dog can't stop being a dog.
On some sites people try to prove who they are through a simple method. They state a claim, others ask for proof and provide a random code, the individual writes it on a piece of paper, photographs it next to their monitor with the messages present, and then sends the image. It's a very efficient proof of identity that could be automated, but it doesn't work to prove that a person will in future be an amazing worker, it just proves something or someone exists.
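The challenge half of that loop is simple enough to automate. Here's a minimal sketch (the names and the expiry time are my own assumptions, not any site's actual system) of issuing a one-time code and checking it when the photo comes back; the hard part, confirming the photo really shows the code next to the right screen, would still need a human or an OCR step:

# Minimal sketch of the "write this code on paper" identity check.
# Only the code issuing and expiry logic is shown; photo verification is manual.
import secrets
import time

CHALLENGE_TTL_SECONDS = 24 * 60 * 60  # assumed: codes expire after a day
_active_challenges: dict[str, tuple[str, float]] = {}

def issue_challenge(username: str) -> str:
    """Generate a short random code the user must write down and photograph."""
    code = secrets.token_hex(4)  # e.g. 'a3f91c0b'
    _active_challenges[username] = (code, time.time())
    return code

def verify_challenge(username: str, code_seen_in_photo: str) -> bool:
    """Check that the code read from the photo matches and hasn't expired."""
    entry = _active_challenges.get(username)
    if entry is None:
        return False
    code, issued_at = entry
    fresh = (time.time() - issued_at) < CHALLENGE_TTL_SECONDS
    return fresh and secrets.compare_digest(code, code_seen_in_photo)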
Still, having infinite individuals do all the work to prove their skills is faster and cheaper than having a single employer check all of them, they just need to prove they have enough skill to learn to do the job quickly. If the employer creates a unique problem to test people and then has them figure it out and send their response back within a workday, it would quickly reveal who among them is capable of solving problems like that, at minimum. This would work much better than a resume.
Instead, companies require five years of experience with everything, which is a dumb filter. Remember, companies are under a lot of pressure to keep their best employees. Workers who jump from job to job earn more on average than those who stay. Replacing workers costs time and money. If a person with more than five years experience has left their job, it's likely they come with high wage costs, or are one of the bad workers the company is trying to avoid. Like I said, it's a dumb filter, but it's very easy to implement.
However, it's still infinite people aiming for a single job position; if all of them need to work because society requires it, then all of them will need to apply to every job in the world. And now the problem becomes infinite people spending their days trying to prove their ability to work, which isn't real work they could be doing instead. And each of them still has bills to pay and food to buy. Eventually they will collectively decide it's not worth chasing a minuscule number of jobs if it takes so long just to file one application that will ultimately fail.
What happens then is that infinite people who can't work for any company decide it would be better to just work for themselves, create something of value, and hope someone pays for it. While individually they start out less productive than the big companies, collectively they can do something big companies can't afford, which is custom work.
Companies don't just take the lazy route of hiring people with five years experience, they also develop products that can be mass produced as simply/cheaply as possible. Mass production contains no unique elements the customer might want, so companies rely heavily on the idea of the brand identity and then try to encourage you to want that as part of your personal identity.
But in a world where infinite people are now competing with these companies, the infinite people would only be able to offer customers something personalized, custom, and unique to the buyer in order to stand out from all the big corporations. It's more expensive, doesn't come with a guarantee of quality, and has all kinds of issues big companies tend to iron out.
The issue is that even though some people would be lucky and make some kind of living off of this, there's still an infinite number who never make a single sale. They are all equally able to work, all capable of replacing any other person in any workplace, but they just don't get the opportunity because of luck.
If we assume some form of guaranteed minimum living wage exists in this setting, then we don't end up with infinite homeless people, we instead get infinite hobbyists, performers, entertainers, and so on. People who may contribute to the world without expecting anything in return because they haven't gotten anything so far.
And eventually, maybe the sheer number of available workers increases the variety of jobs in the future, but it's more likely that companies weighed down by the difficulty of finding the best people will just try to automate where they can. If your workers are robots, you never have to deal with a resume again.
So, what's the value of infinite workers in the end? Technically they have infinite potential work to provide, but selection processes and a finite number of openings mean there's a calculable peak number of potential employees a company can consider before the search starts costing more money than the hired worker could ever hope to make up for in a lifetime. Those costs get passed on to customers, and each worker is a customer for another company.
Figuring out the exact number would be tricky.
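Tricky, but a back-of-the-envelope version is easy to write down. Here's a minimal sketch, with made-up numbers rather than real hiring data, of the break-even point where screening one more applicant costs more than the expected improvement in the hire is worth:

# Toy model: how many applicants are worth screening before the marginal
# screening cost exceeds the marginal value of a slightly better hire?
SCREEN_COST = 50.0               # assumed cost to review one application
VALUE_OF_BEST_HIRE = 200_000.0   # assumed lifetime value gap between worst and best hire

def expected_quality(n: int) -> float:
    """Expected quality of the best of n applicants, if quality is uniform on [0, 1]."""
    return n / (n + 1)

def net_value(n: int) -> float:
    """Expected value of the hire minus the total cost of screening n applicants."""
    return VALUE_OF_BEST_HIRE * expected_quality(n) - SCREEN_COST * n

best_n = max(range(1, 10_000), key=net_value)
print(best_n, round(net_value(best_n), 2))

With these made-up numbers the optimum lands in the low sixties; past that point, each extra resume costs more than the improvement it's likely to buy, which is the calculable peak mentioned above.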
Paid for intellect
Posted 2 years agoSometimes it's hard to explain something, and taking a moment to think of an alternate scenario can help bring clarity.
In a world where intellect is paid more, most crime ends. Intellect is something that must be measured, and that means having some kind of record to prove it. Being on a public register makes committing crime harder, but since pay is controlled by being on this register, you have a choice between participating in society or operating outside of it. If the pay within society is high enough, and the risk of committing crime is high enough, then people would choose stability over crime.
Capitalism doesn't direct where money goes in the same way. A corporation can just buy a company that's already in an industry it wants a part of; that doesn't mean it's the best fit to run it. The best person for the job definitely exists, but it's also likely they lack an education and aren't wealthy enough to just buy a whole company.
This is a large part of why people dislike the idea of AI, the fear that someday an AI would be better at the job than them. But that's not really the reason. Because they don't have the money to make the decisions, they know that whatever decision is made won't necessarily ensure they can continue their livelihood, and they may have to do something else they don't want to do.
But that's the thing, it assumes everyone will play nice and just get other jobs. What would that actually look like, a world where a lot of creative people, people who have been pushing back against excessive copyright and have internalized stories of Disney's lawyers to the point of creating anti-art-theft designs that trick scraper bots into selling t-shirts with those designs on them, suddenly go along with it?
I think in the end people would group together online to find ways to sabotage things; some would target companies and governments, others would leak AI systems so that everyone can benefit and so that no company can assume it will retain leader status in the industry. While most people (myself included) lack the ability to control this situation, there are enough people who can that I'm sure they will try what they can. Governments would label all of this as acts of terrorism, or whatever the favored term is by then, but the reality is that the solution was dead simple. Just pay intelligent people a lot, and their reason for fighting back mostly stops. Sure, some do it out of a sense of justice, but you can't keep that up for decades. And the people who are stuck at the bottom, unable to do anything about the situation and never capable of hacking? They stay there, unless they study to do it themselves, and then they'd be paid more because they learned.
It's easy to see the flaws in an intellect system, there's the issue of people who are intelligent but may be unable to convey it without assistance that they can't afford. But the point of the intellect payment system was never doing the right thing, it was about taking the wind out of the sails of any rebellion. Money is the core of society, to fight society you must refuse to participate, but the promise of a lot of money, for the rest of your life, is tempting. Being rewarded for your hard work is tempting, defending what you earned is almost guaranteed. It is, in short, like a cult.
I think it's a good thing that there has never been a society that paid people based on their intelligence. Even presuming the tests are treated as sacred and must be fair to all, it can cause huge problems in the short term.
Working from home and combating corruption
Posted 2 years agoLet's say that a company wants to commit fraud, but all employees in the world now work from home. They could email an employee, but that leaves a paper trail. Calls could also be recorded without bosses knowing. It's very hard to get away with corporate crimes if all the employees who contribute their minds rather than their labor work from home, away from the office.
If a company makes a lot of money through corrupt means, it might be worthwhile for them to force all employees to come to work, in order to hide the ones involved in these crimes through sheer numbers. Also, if one person behaves that way and gets away with it, co-workers are more likely to copy their behavior, like a bad apple spoiling the bunch. This is less likely to happen if workers are not in constant contact in person. In jobs where physical work is required and there are individuals who are harmful to other workers and people, giving them a different job in the organization, away from other workers, is enough to keep their attitude from spreading to others.
If there's a choice between companies that follow the rules, and those that don't but offer better deals, then people will always choose the better deal when millions of dollars are on the line.
The same goes for AI, but in reverse. AI will eventually replace workers, but only in companies that can run their own AI in-house, or in companies that aren't corrupt. The good companies could have access to the best AI tech in the world to run their businesses, and would someday surpass all the corrupt businesses out there. In-house AIs would only be useful so long as no records are kept of their inputs and outputs, so if the government passed laws requiring this data to be preserved, it would mean AI and corruption could no longer mix.
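The record-keeping half of that isn't exotic. Here's a minimal sketch (all names hypothetical, no particular vendor's API implied) of wrapping an in-house model call so that every prompt and response lands in an append-only, hash-chained log before the caller ever sees the answer:

# Hypothetical sketch: log every input/output pair of an in-house model.
# Chaining each entry to the previous hash makes quiet edits detectable.
import hashlib
import json
import time

LOG_PATH = "ai_audit.log"  # assumed location; a real system would ship this off-site

def append_log(record: dict, prev_hash: str) -> str:
    """Append a record chained to the previous entry's hash; return the new hash."""
    record["prev_hash"] = prev_hash
    line = json.dumps(record, sort_keys=True)
    new_hash = hashlib.sha256(line.encode()).hexdigest()
    with open(LOG_PATH, "a") as f:
        f.write(line + "\n")
    return new_hash

def audited_call(model, prompt: str, prev_hash: str = "") -> tuple[str, str]:
    """Call the model and log the prompt/response pair before returning it."""
    response = model(prompt)  # `model` is any callable running your in-house inference
    record = {"ts": time.time(), "prompt": prompt, "response": response}
    return response, append_log(record, prev_hash)

The point isn't the code, it's that preserving this data is cheap; a law requiring it would cost honest companies almost nothing while making corrupt use of an in-house AI much riskier.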
A question for other artists
Posted 2 years agoUp until recently art requests were pretty common, I'd get notes every few months asking for one, I'd get streams interrupted by someone asking for free art, etc.
I'm pretty sure that stopped once AI art blew up this year. But that could just be my personal experience, and everyone else might still be pestered by requests.
So, is this the case, was AI a cure for excessive amounts of "payment in exposure"?
My perfect AI future
Posted 2 years agoWe're talking near-term; the distant future is a different matter. I mean within our existing lifetimes.
I would like to spend my days off writing ideas for shows I wish existed but don't, and then have an AI show producer make the things I ask for. If I feel like experimenting with a porn game idea I think would work, I don't have to learn how to program, or give someone thousands of dollars to make a prototype that will take weeks to put together, I can just have the AI cobble something together fast, try it, tweak it, and decide if the idea worked or not.
It would be a future where people can tinker with ideas, experiment without cost, try and fail without losing all their free time to a doomed project. People today are afraid to fail because of the high costs of failure when they could instead do what they know will succeed. People go to university, try to get the right job, try to get any job, and only then realize they were sold a lie. It's not because they liked the lie, it's because they liked the security the lie promised, they liked the certainty that an education was a fast-track to success.
In my perfect future AI is the fast-track, it's the thing everyone toys with, it's the thing everyone tries to do something new and interesting with. News headlines will talk about people making a fortune after getting an AI to do all the hard work over the course of fifteen minutes. And because everyone's using it, there's far more immediate success stories than modern universities can claim.
Still, my goal wouldn't be anything that big or important. I'd want to come up with ideas for what the future could hold with AI, and let other people figure out how to make the ideas work if they feel some of them are worth trying.
The internet often mocks the ideas-guy, a person who tells other people what to do like they're some kind of director. It's always been seen as a somewhat useless role, in that everybody has ideas all the time, it's the actual work that goes into making the idea a reality that's hard, and having an ideas-guy in charge can be hell because one day they might have an idea that contradicts the day before's. In this future, the ideas-guy is actually useful, though the expectation is that they playtest their own products before selling them. Knowing people, they won't, so that's one blight this perfect future has: untested products, so long as we can't blacklist the individuals responsible.
But yeah, there's lots of things today that can't exist. MLP G4 is over, it will never return, and fans can't just make their own because they don't own the rights. AI generated shows would make it much easier to do so, since profit would no longer be necessary to keep a show going for eternity. In this perfect future, I could watch the sequel to The Princess Bride, I could see The Neverending Story 2 if the sequel had been made immediately after the end of the first one. I could watch three seasons of Police Squad. I could see what Heroes would have been like if the writers strike hadn't interrupted it and killed the series off. Everyone has some argument against a show going on forever, but the fox and the sour grapes comes to mind. I wonder if their tune would change if those grapes fell.
That said, I'd love to see a visual novel where the characters generate in real time, the story generates in real time, you can talk to the characters and interact in real time instead of being limited to choices. And I wouldn't have to worry about missing the best paths, there are no paths to miss, no golden route, infinite replayability.
The reason ethical AI is not possible, a metaphor
Posted 2 years agoIn October 2021 Alec Baldwin shot and killed Halyna Hutchins. This took place during a rehearsal for a scene in the film Rust; Halyna was the cinematographer and Alec was the actor in the scene. The prop gun was real, but it wasn't supposed to have real ammo, just blanks. It remains unclear when and by whom a live round was brought onto the set, whether the gun itself had been tampered with or whether a live round had been mixed in with the dummy rounds sold to the studio. Since then there have been a lot of conflicting arguments, ranging from how many of the rounds were live to whether or not the gun went off on its own, but that's not what this journal is about.
AI is rapidly approaching human level intelligence, each time we hit a wall where the AI can't figure something out that we easily can, millions of dollars are spent trying to solve it and add it to the existing AI models. We're now seriously considering what this might mean, but some are getting distracted by questions of how to make an AI that will become smarter than a human safe and ethical. You can't, it's not possible.
The AI is not a human. It could one day become a person, but it won't be and never will be human. For an AI to behave like a human, it has to act like one. If it shows emotion, that's just the role it's playing. If you talk to ChatGPT it will change how it talks to you based on how you talk to it. If you start the conversation aggressively, expecting the AI to be rude, it will pick up on that and behave as you expect, because it's trained to do what we expect and want, not what is objectively right. If you treat it kindly and expect... I dunno, good behavior, it will behave that way by picking up on your choice of words.
Once AI starts doing more work in the real world, running real businesses and possibly doing most of the work government bodies do, its ability to act like a boss or a politician will keep improving. But it will also be in a position where whatever it chooses to do has real world impacts. It won't be making its decisions based on what it thinks would be the right thing to do, just on what it thinks a human politician would be expected to do.
Eventually though, like in Rust, someone else could cause the AI to do real harm because nobody, not even itself, could have expected the outcome. Even if we somehow cracked the code and figured out a way to make an AI that does what's right and not what we expect, sabotage can still lead to it making a decision it would never have made had it known all the facts.
When people talk about making an ethical AI, what they're really asking is if it's possible to make it so an AI will not destroy humanity or cause humanity suffering. No, an AI can be misinformed as much as anyone else, and if it's in charge of the world the harm it could cause would be greater. Making it so that lacking critical information isn't harmful is impossible when an AI is expected to act in a specific way.
An actor shoots a gun they think is loaded with a blank, and they have already fired it a few times before the real round was fired, and it kills a person. An AI actor is told to write a realistic script for a film featuring nukes, and as part of the script describes the method for constructing one, and a human uses that to make a real one, and it kills many people. Are the AI and actor responsible for what happens, because they had a hand in the process? What if the AI simply invented a cure for a genetic disorder for a fictional society, but someone else used it to make a bio-weapon in the real world?
There are more important ethical problems we can deal with immediately that aren't pointless wastes of time. AI replacing workers is a real problem; we could use taxes to encourage more automation while also building a functional basic income system. We could focus on education that includes programming at an early age, so that the next generation might have the only useful job left: making the machines that will let all of humanity retire, rather than being stuck with critical menial labor because we never figured out robotics well enough.
There is always the possibility that we create an AI that doesn't do what we want, but that doesn't mean an ethical AI would do what we want either. It could be that what we want is extremely unethical and only the AI would understand that. It could be that the most ethical course of action looks like a lot of needless suffering. Maybe creating an AI that is practically human would be the thing that dooms us all, not because it gets greedy, but because it gives us everything we could ever want, until millions of years from now genetic disorders have piled up to make us more like a pet than a person.
For example, hypers. If it were possible to have the body with none of the physical drawbacks, increased sexual pleasure and so on, would we consider that a bad thing? And once that becomes normal, would becoming bigger, more immobile, or just stuck in an endless orgy be that bad? And once that's normal, what then? Would the AI make us do something we don't want to do, or would it give us what we really want? Is it ethical to change the course of evolution? Is it ethical not to? What if the AI lacks all the information to make that choice?
Broken erotic game mechanics
Posted 2 years agoA lot of porn games feature still images of characters that switch to progressively more nude images, and the kinds of puzzles that make it barely count as a game.
For a porn game to count as good you'd want the porn mechanics to be baked into the gameplay, but many games fail before this point can be reached. As a bare minimum, a porn game's gameplay should work with placeholders. And by work I mean it must be fun even without the sexy pictures. Once a game is fun without them, then you can move on to questions like whether the sexy pictures interrupt the game too much. Since there are no pictures yet, the gameplay is probably being interrupted by blank screens with placeholder text, which is why baking the porn into the gameplay matters.
If it's something like a Survivor-like game or something in RPG Maker, and a quarter of the screen is consumed by a character image, then without the sexy pictures you're just playing a game where a portion of the screen is blank and useless. However, just making the sprites in the game sexy doesn't work either, because they're very small. Just because a style of game is fun doesn't mean a sexy version of the same game would work even better, there's a tradeoff.
Like, racing games can be fun, but a sexy racing game wouldn't work. Either you have nude decals, which technically count but are more of a style choice than erotic, or you can interrupt gameplay with sexy cutscenes, which as said interrupts the gameplay with something you might prefer to be an animated short video on the internet instead of inside the game.
The better kind of erotic gameplay would be one that caters to a specific fetish. Sex in a game will never be good gameplay outside of VR, because you're just watching characters on a screen fuck, or in most cases looking at a still image of two characters on a screen while reading text about the things you aren't seeing them doing. Not good.
By focusing on a fetish, you can start to add things that are important to it. A game focused on sex slaves can have a range of physical attributes and a complex market system, where instead of sex with them being the focus, it's more about the setting and the language used to describe events. In transformation themed games there doesn't have to be sex at all; sometimes just the threat of transformation and the variety of transformations is what matters. Maybe bimbofication is the only transformation in a game, so the gameplay could instead focus on how interactions with other characters change, how interactions with the world change, the way their life is altered.
If you want sex to feature in a game, maybe the best way to deal with it is the idea of pulling out before finishing. Not to prevent a baby from happening, but to serve some other mechanic in the game. In a bimbofication game the player's transformation could be caused by contact with cum: on the skin is bad, inside is stronger. So pulling out could be a mechanic that slows the transformation. However, the question then is why take the risk in the first place, so the primary mechanic would have to be lust. If lust gets too high, pulling out stops being an option, so the player frequently needs to take the risk of having sex. Maybe condoms are finite, and there are events later where pulling out isn't an option for other reasons, so risking smaller conflicts is the safest path. So the player has sex to keep lust low, they don't use condoms because those are a finite resource, and cum causes transformations that make pulling out harder. You're trying to draw out the sex as much as possible so the other character doesn't cum fast, but you also don't want to take too long, because lust will rise as if you were doing foreplay instead of fucking.
This sounds a lot like a fishing game.
Anyways, if such a game were made, I'd imagine the other character is more likely to cum fast the further along your transformation has gone, so you need to spend more time gaining experience points to slow down how quickly they cum, but taking too long with training can lead to high lust, and as a result to sex sessions where transformation is guaranteed. So the player must carefully balance their time, materials, tools, and money, all while progressing the story so they can slow or reverse the changes.
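None of this needs art to prototype, which loops back to the placeholder point above. Here's a minimal sketch of the core loop as described, with all names and numbers being my own placeholder values rather than anything from an existing game:

# Toy prototype of the lust/transformation trade-off described above.
# All thresholds and rates are placeholders, not balanced gameplay.
from dataclasses import dataclass
import random

@dataclass
class PlayerState:
    lust: float = 0.0            # rises over time, sex lowers it
    transformation: float = 0.0  # rises on contact with cum, run ends at 1.0
    condoms: int = 3             # finite resource that blocks transformation

def tick(state: PlayerState, minutes: int = 10) -> None:
    """Passive lust gain while training, exploring, and so on."""
    state.lust = min(1.0, state.lust + 0.02 * minutes)

def have_sex(state: PlayerState, use_condom: bool, pull_out: bool) -> None:
    """Sex lowers lust; cum exposure raises transformation unless blocked."""
    state.lust = max(0.0, state.lust - 0.5)
    if use_condom and state.condoms > 0:
        state.condoms -= 1
        return
    # High lust or advanced transformation makes pulling out fail more often,
    # a soft version of "too much lust and pulling out stops being an option".
    fail_chance = 0.2 + 0.5 * state.lust + 0.3 * state.transformation
    if pull_out and random.random() > fail_chance:
        state.transformation += 0.05   # skin contact only: slow change
    else:
        state.transformation += 0.20   # internal: fast change
    state.transformation = min(1.0, state.transformation)

Run a few hundred simulated in-game days of tick and have_sex decisions and you can already tell whether the fishing-game tension shows up, long before any sexy pictures exist.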
For a porn game to count as good you'd want the porn mechanics to be baked into the gameplay, but many games fail before this point can be reached. As a bare minimum a porn games gameplay should be able to work with placeholders. And by work I mean it must be fun even without the sexy pictures. After a game's fun without them, then you can move onto questions like whether the sexy pictures are interrupting the game too much. Since there are no pictures yet, the gameplay is probably being interrupted by blank screens with placeholder text, which is why baking the porn into the gameplay matters.
If it's something like a Survivor-like game or something in RPG Maker, and a quarter of the screen is consumed by a character image, then without the sexy pictures you're just playing a game where a portion of the screen is blank and useless. However, just making the sprites in the game sexy doesn't work either, because they're very small. Just because a style of game is fun doesn't mean a sexy version of the same game would work even better, there's a tradeoff.
Like, racing games can be fun, but a sexy racing game wouldn't work. Either you have nude decals, which technically count but are more of a style choice than erotic, or you can interrupt gameplay with sexy cutscenes, which as said interrupts the gameplay with something you might prefer to be an animated short video on the internet instead of inside the game.
The better kind of erotic gameplay would be one that caters to a specific fetish. Sex in a game will never be good gameplay outside of VR, because you're just watching characters on a screen fuck, or in most cases looking at a still image of two characters on a screen while reading text about the things you aren't seeing them doing. Not good.
By focusing on a fetish, you can start to add things that are important to it. A game focused on sex slaves can have a range of physical attributes and a complex market system, where instead of sex with them being the focus, it's more about the setting and the language used to describe events. In transformation themed games there doesn't have to be sex at all, sometimes just the threat of transformation and the variety of transformations is what matters. Maybe bimbofication is the only transformation in a game, so the gameplay could instead focus on how interactions with other characters changes, interactions with the world, the way their life is altered.
If you want sex to feature in a game, maybe the best way to deal with it is the idea of pulling out before finishing. Not to prevent a baby from happening, but some other mechanic in the game. In a bimbofication game the players transformation could be caused through contact with cum, on the skin is bad, inside is stronger. So pulling out could be a mechanic to slow the transformation. However, the question then would be why take the risk in the first place, so the primary mechanic would have to be lust. If lust gets too high then pulling out stops being an option, so the player frequently needs to take the risk with sex. Maybe condoms are finite, and there's events later where pulling out isn't an option for other reasons, so risking smaller conflicts is the safest path. So the player has sex in order to keep lust low, they don't use condoms because it's a finite resource, and cum causes transformations that makes pulling out harder. You are trying to draw out the sex as much as possible so the other character doesn't cum fast, but you also don't want to take too long because lust will rise as if you're doing some foreplay instead of fucking.
This sounds a lot like a fishing game.
Anyways, if such a game were made, I'd imagine the other character is more likely to cum quickly the further along your transformation is, so you need to spend more time gaining experience points to slow down how fast they finish, but spending too long on training leads to high lust, which in turn leads to sex sessions where transformation is guaranteed. So the player must carefully balance their time, materials, tools, and money, all while progressing the story so they can slow or reverse the changes.
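Just to make that loop concrete, here's a rough sketch of how those stats could interlock, written in Python. Every name and number here is made up for illustration; it's not from any actual game, just one way the tradeoffs described above could push against each other.

import random

class PlayerState:
    def __init__(self):
        self.lust = 0.0            # rises over time; too high removes the pull-out option
        self.transformation = 0.0  # progresses on cum contact; 1.0 means fully changed
        self.condoms = 3           # finite resource that removes the risk entirely
        self.skill = 0.0           # training slows down how fast partners finish

    def train(self, hours):
        # Training builds skill, but lust keeps climbing while you do it.
        self.skill = min(1.0, self.skill + 0.02 * hours)
        self.lust = min(1.0, self.lust + 0.05 * hours)

    def have_sex(self, use_condom=False):
        # The further along the transformation, the faster the partner finishes;
        # skill pushes the other way.
        finish_chance = max(0.05, 0.3 + 0.5 * self.transformation - 0.2 * self.skill)
        can_pull_out = self.lust < 0.8  # too much lust and pulling out is off the table

        if use_condom and self.condoms > 0:
            self.condoms -= 1
            exposure = 0.0
        elif can_pull_out and random.random() > finish_chance:
            exposure = 0.1  # pulled out in time: cum on skin, weaker effect
        else:
            exposure = 0.3  # didn't make it: internal, stronger effect

        self.transformation = min(1.0, self.transformation + exposure)
        self.lust = max(0.0, self.lust - 0.6)  # sex is how lust gets vented

The point of a sketch like this is just that every stat pulls against another one, which is where the fishing-game feel comes from.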
How covid began
Posted 2 years ago
So right when Covid was beginning there were all kinds of news reports about it, and I think a lot of the reason anti-vaxxers and others kept toning it down was that they didn't find out about it until much later, and they thought the news was making it out to be worse than it really was. So, here's a summary of one thing that happened in the news around January that year, long before nations started taking action against it.
Covid19, at that point just coronavirus or corona, was starting to be noticed by a doctor, Li Wenliang, and he told his friends through social media to be careful because it showed signs of spreading fast. The Chinese government arrested (or something similar) him and about a dozen others for spreading misinformation and had them publicly apologize.
During this time corona was coming up more frequently in reports as hospitals started noticing an uptick in sick patients. Eventually the city, Wuhan, was locked down, but the government took few actions beyond this to slow it, because they kept insisting it wasn't that big an issue and would sort itself out.
The doctor who was one of the first to inform the public about corona was soon after reported to be working in a hospital to help with the now-growing number of patients. A short time later it was reported he'd caught it, and a short time after that he died from it, in early February.
About a month later the Chinese government commented on it, mainly by not mentioning that the whistleblowers had been detained for their early actions, and basically taking credit for what they'd done, among other things, to try and push back against the resentment people felt at the time.
So, that's pretty much what I can remember from the first two months, though I had to look up his name and that he was a doctor (I thought he was a medical student). So my first introduction to the virus was that a government had failed to react in a timely fashion, that it was deadly and spread fast, and that more could easily have been done right at the beginning, if not for the place it began. It was also made clear that any government trying to tone down how bad it was would be hit the worst in the long run.
I think this is why, when people blame other issues, or claim the virus wasn't deadly, or claim that the people who died did so from other causes and only happened to have covid, I just remember that doctor and all the other wasted opportunities and all the stupid shit people did to "prove" that covid wasn't something worth worrying about.
The last days before AI changes the world
Posted 2 years ago
Regardless of the concerns people have about AI development, it's happening right now. AIs can at this point make very simple games, with different AIs communicating with each other and performing functions within an organization: deciding what programming language to use and how the game should work, fixing errors in the game and testing it, writing manuals for playing it, and so on. We've been in the beginning stages for a few years now; 2015 was when AI started to be more seriously tested and improved on, and when the money really started to flow.
This year began with a bang as ChatGPT changed how we looked at all these advances. Before, there wasn't much the general public could do with AI, despite its impressive feats... those feats were achievement-based, like performing in the Olympics: impressive, but not useful in everyday life. ChatGPT wasn't the first large language model, but it was one of the first that was easy to access, one of the most public, and one that didn't require a beast of a PC to have any hope of running it.
This has led to a lot of money pouring into AI tech, a lot of research and programming to make it easier to work with, and increased attention across all fields of work. The pace has been fast enough that some people still don't know how advanced AI has gotten. I wonder how many have seen reports on AI being used by students and thought, for a moment, that they were watching a report out of a film.
And the reality is that the most advanced AI isn't publicly available. It's being worked on somewhere in the world; eventually it, or information about it, will be released, and by then it will no longer be the most advanced AI, because a new one will already be in the works. It's entirely possible that we will one day develop a self-improving AI and only find out about it some time after it's been milked of the best improvements it offers. If a company or several companies start producing computers and vehicles that seem decades ahead of their time, and medicine that cures things we thought incurable, and it's unclear why all these developments are happening at once, it's highly likely we cracked the code on making an AI as intelligent as a human and set it the task of improving itself.
For a time, though, we can still influence how AI is used and how transparent the research is; we can still control which nations have access to the tech needed to make these AIs, and argue with employers about how AI should be implemented in the workplace. Afterwards, we won't. Either AI benefits all people, or it benefits some people, and it will stay that way forever.