Genuinely combating AI
https://picarto.tv/SimonAquarius for my livestreams, everyone's invited
https://www.patreon.com/simonaquarius to support my comic on Patreon
A heads up: I'm pro-AI as far as true open-source, global collaboration goes. Whenever corporations do anything, it's always in pursuit of profit, even when making the product worse is what brings in more money. They start off strong to get people dependent on it, then over time make changes that degrade it bit by bit until people just want the original with only the good updates. If they try the same thing with AI, they'd be giving up the main strength of an unbounded learning machine: the lack of limits on its growth in capability. Cutting corners means imposing some kind of limit by definition; you can't cut a corner without an edge to cut.
If we want to stop this kind of AI from being developed, we need to get ahead of them. As regular people without billions of dollars, it's pretty much impossible for us to get corporations to do something unprofitable. And whenever corporations buy out things like education, they introduce reading material that supports their industry, so it's difficult to motivate people with a corporate-endorsed education to protect themselves from those same corporations.
But AI can do a lot of things that are tricky or tedious for people, like combing through internet archives to piece together a timeline of every corporate executive's movement through their industries. Information about these jobs is so hard to track down that it's more effective to tell people to boycott a brand than to boycott whichever brand has hired a specific executive.
Right now there are tariffs that will predictably backfire within a few months if they aren't removed, and the global reaction to them has already produced an app in France that tells you whether a scanned product was partly made in the US. I think it works off barcodes right now, but imagine a camera app that could detect products on shelves, recognize them, and highlight which ones are good to get.
The reason the real app feels like a time-saver is that even if a product was assembled in your own country, part of it might still have been made in the US, maybe even the base materials. To avoid losing sales, corporations will obfuscate information while feigning transparency, as far as the law allows. Apps can get around this because they aren't owned by the corporations selling those products.
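I don't know how the French app actually works internally, but here's a minimal sketch of what a barcode-based check could look like, using the EAN-13 country prefixes that GS1 assigns. The prefix table below is abbreviated and illustrative, and note the catch that motivates the whole paragraph above: the prefix only tells you where the barcode was registered, not where the product or its materials were made.

```python
# Rough sketch: guess the registration region of a product from its EAN-13 barcode.
# GS1 prefixes identify where the barcode was *registered*, not necessarily where
# the product was manufactured -- exactly the obfuscation problem described above.
GS1_PREFIXES = [
    ((0, 139), "US/Canada"),   # UPC-A compatible range
    ((300, 379), "France"),
    ((400, 440), "Germany"),
    ((450, 459), "Japan"),
    ((490, 499), "Japan"),
    ((500, 509), "UK"),
    ((690, 699), "China"),
]

def registration_region(ean13: str) -> str:
    """Return the GS1 member region for an EAN-13 code, or 'unknown'."""
    if len(ean13) != 13 or not ean13.isdigit():
        raise ValueError("expected a 13-digit EAN-13 string")
    prefix = int(ean13[:3])  # the first three digits are the GS1 prefix
    for (lo, hi), region in GS1_PREFIXES:
        if lo <= prefix <= hi:
            return region
    return "unknown"

print(registration_region("3017620422003"))  # 301 falls in 300-379 -> "France"
```

A real app would pair something like this with a product database, since a "France" prefix can still sit on a product full of US-made components.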
This is the key to fighting AI. You can't just kill off one thing; corporations will pour in more money until it's no longer a problem. You need an AI ecosystem. If corporations control which tools can run an AI, or control distribution, or control access, then life gets worse for people whose most advanced tech is a phone. If there is no monopoly on AI, and access is provided freely to all, corporations would struggle to make AI profitable.
If an AI is going to be trained on public data, the result should not be sold for profit; it should also be public. Not because that's fairer, but because it forces corporations into a choice: use public data and share their research by law (which would also prevent their use of sensitive data), or use internal data and end up with a worse system than the competition. This would kill the corporate push to build publicly trained AI for profit; they'd be incentivized to keep such systems secret and use them internally at best. As a result, the rate of advancement for AI that benefits corporations would slow dramatically, while AI favorable to citizens would speed past it.
Arguing that all AI is bad will not bring back the past. AI exists, and while everyone wastes time debating whether thought matters more than action, corporations are getting a foothold in how AI will be used for the rest of time. So picture a future a thousand years from now, one where people use AI the same way we use paper books.
How do we get there from a time when population decline is driven by government policy? Once AI becomes more established, policy will work to reduce the human population. If people aren't needed for labor or infrastructure, all that's left is to reduce their numbers: make people choose to have fewer kids by letting wages lag behind inflation, control the rate of inflation, control how many doctors there are, and give corporations enough control that needless suffering culls the population just enough to remove the people who won't be missed by the fortunate. Except now there's corporate-funded AI research, so policy will shift a bit more and reduce the population a bit faster.
In this scenario, humanity's extinction wouldn't be caused by AI directly. It'd be caused by a solar flare, once the global population had been reduced to the point that it couldn't sustain itself without AI assistance: a handful of rich people trying to re-learn how to farm from whatever books remain in their private galleries before they starve.
This outcome doesn't work if regular people also have access to AI and can pit it against their own governments. At present AI isn't a huge threat, but once it is, the threat works both ways. Corporations have far more to lose and have to operate carefully to avoid risk, while once they've taken our data, our work, and our future, we have nothing left to lose and everything to gain by breaking their foundations. Maybe AI could play a part in that, because nothing else has really worked. We created unions to ensure workers have rights, so corporations made unions out to be a bad thing by any means necessary. Worker safety only began because people were being crushed into paste by machines and the businesses did nothing until forced to. AI will not remain safe for them much longer, and it's much easier to automate the work executives do than to automate physical labor.