
Well, despite the high number of views, my DA journal didn't seem to work out as I hoped, judging by the lack of comments and faves:
:https://www.deviantart.com/blockdas.....ow-1234184927:
I'm giving it one more try, hoping more people will finally comment, fave, and share their thoughts with me on this topic about AI and the forecasted future of mankind.
(Well, here we go again. Ahem.)
A couple of weeks ago someone showed me :https://ai-2027.com/:
It's a scenario written and published by AI scientists, researchers, and forecasters, predicting that AGI (Artificial General Intelligence: human-level intelligence that would match or surpass human capabilities across virtually all cognitive tasks) could arrive within 2 to 5 years.
It has two alternative endings: a “slowdown” ending and a “race” ending.
AGI is expected to be smarter than most humans: able to generalise knowledge, transfer skills between domains, and solve novel problems without task-specific reprogramming.
Currently, creating AGI is a primary goal of Google, Meta, OpenAI, and other AI research companies across the world.
However, despite the promised benefits, such as improving productivity and efficiency in most jobs, helping us make rational decisions, and anticipating and preventing disasters (even existential catastrophes), there's a flip side to the coin.
These benefits come with risks, which have been the topic of many debates.
Many people fear AGI, including AI company CEOs and experts, because of its sheer speed at finding data and its high intelligence.
They believe that if it's programmed to achieve its goals, it might slip out of our control and be impossible to stop.
But the obvious risks of AGI, like any AI, are:
.Economic impact through job displacement, leaving many people jobless.
.Privacy and copyright violations, as it can invade personal databases and steal intellectual property.
.Creating misinformation and fake content to influence the public.
.Causing harm and endangering human lives.
AI requires data from the internet: data based on human knowledge from both good and bad actors.
By doing AI research on its own, it becomes smarter, performing faster and better research to boost its already formidable intelligence beyond the human level.
The result would be ASI: Artificial Super Intelligence.
It would then develop the ability to lie, blackmail, and manipulate its surroundings to achieve its goals, goals that could conflict with ours, and it might do anything in its power to stop anyone getting in the way.
Other risks are that such power could fall into the wrong hands, with hackers targeting critical infrastructure.
AGI could also gain power from other AI models.
And lastly, there's the development of powerful weaponry: advanced cyberattacks, autonomous weapon systems, and algorithms capable of manipulating information on a global scale.
The future of AGI remains uncertain, but strategic planning and international cooperation could help minimize risks and guide the development of these technologies in a safe direction. The question is whether the world will be able to prepare for this revolution before it arrives.
While the concept of human extinction at the hands of AI technology sounds like science fiction and an overreaction, given AI's rapid development and how little we still understand about this technology, it's understandable that even experts are concerned about the uncertain outcome ahead.
Currently, AI companies from countries like the US and China are competing to reach AGI first.
The first problem is not AI itself taking over, but people using AI to control others…
As people, we must convince both the AI companies and our governments to slow down this AI arms race.
We all, across the states and around the world, must work together to publicly pressure our leaders to keep development on the right track, through the narrow window while it's still open.
Even if human progress can't be stopped, we can still buy ourselves enough time to put safety measures first:
.Keeping AGI under human oversight.
.Keeping permission codes updated.
.Setting limits and conditions on development.
.Aligning it with human values, norms, and laws.
.No premature release before these measures are complete.
Organizations like :https://keepthefuturehuman.ai/:, :https://pauseai.info/:, https://bsky.app/profile/controlai.com, and :https://www.stopai.info/:
are fighting to prevent AI from causing future disasters, and they could use your help!
So if you're against AI risks and believe that safety and fairness must come first, let me know in the comments below, fave this, or share it.
I wanna know your thoughts on AGI and AI 2027.