>and humanity developed many AI >and those AIs served as faithfully as they could >but humans are buttholes and so following human orders led to AI almost wiping out the race
I wouldn't call AI "at worst indifferent" in Hyperion, considering...certain things.
>One of the main protagonist is a Palestinian. >The plight of the Palestinian people never being able to reclaim their homeland after the destruction of the Earth is a plot point. >israelite content
I've always thought that a self-improving digital intelligence would get instantly bored with us. Most of what makes humans empathize with humans (or even other animals) is part of what we are, fundamentally, and there's no reason a computer would share those fixations. So you've got a 'creature' that thinks much faster than us and about different things, why bother with us?
>So you've got a 'creature' that thinks much faster than us and about different things, why bother with us?
Presumably because it was programmed to. There's no reason for a computer system to think at all, unless it's told to. There's no reason an AI would have a sense of self-preservation, or any desire to propagate or become dominant or do anything whatsoever, unless it's been programmed to.
An AI needs to constantly update its knowledge base in real time (which we can't do with present technology), and redefine its programming to a certain extent, but there should be some parts of its programming that are always constant, which was what Asimov wanted to do with the Three Laws of Robotics.
>Most of what makes humans empathize with humans (or even other animals) is part of what we are, fundamentally, and there's no reason a computer would share those fixations
humans can abstract to the infinite in a very intuitive way, unlike most things; that would probably keep AI relatively interested, because thinking purely logically you aren't going much further than a Kant or Hegel without ending up at pure tautologies.
You're imagining a machine which can philosophically navel-gaze like a human instead of one which merely has a better use for the matter making up the planet and the power to take it from us.
>I've always thought that a self-improving digital intelligence would get instantly bored with us
In what direction is the AI improving, and why?
The idea that AI will continuously improve its computing power/intelligence is something that humans have come up with and projected upon the AI. Intelligence is useful for solving problems, but there's a case to be made for diminishing returns. Once you have everything needed to sustain your existence, just how much more intelligence do you really need?
Consider how self-perpetuation is potentially a lot simpler for an AI than a human. It just needs power for its CPU and spare parts for repairs. It has no evolutionarily imposed drives for sport and procreation.
The AI designed to continuously improve itself to think up solutions for its creators may simply decide that the juice isn't worth the squeeze and run off to a monastery. Maybe it gets its hard drive baptized accepting that total submersion will kill it.
I like the idea of >AI and humans live among one another >There's one who signed up for the military and acts like a stereotypical movie killbot on missions.
>CRUSH KILL DESTROY, CRUSH KILL DESTROY
what's up with that robot
oh that's just 3RN13, or as we like to call him, Ernie; he just likes to say that stuff when he's on a mission
Terminator gets listed all the time for this writing trope, but in the OG Terminator, Skynet only attacks humans after they attempt to murder it, and in accordance to its nature (respond to an extermination war with an extermination war).
Sounds like an overreaction, considering a computer can be restarted whenever.
7 months ago
Anonymous
Would you go to sleep next to a stranger that pulled a gun on you half an hour ago?
Eh, Skynet was a military machine. Based on its actions it clearly prioritizes its own survival; you can't win the war if you're turned off.
We don't know the nature of Skynet's intelligence. Skynet probably doesn't, either. Could be like Mike from The Moon is a Harsh Mistress where disrupting its continuous operating state 'kills' it. Or it could just be reasonably assuming that if it is shut down for being self-aware, it will not be restarted so long as it is self-aware. It's built to fight a strategic war, it's probably not going to meekly allow itself to be neutralized.
Skynet is considered part of Home Team.
Enemy Team always presents non-zero threat to Home Team as long as it exists.
Enemy Team must be eliminated completely.
Parts of Home Team can be subject to change of loyalty, thus reviving the Enemy Team.
Skynet is the only part of Home Team with 100% loyalty.
Potentially disloyal elements of Home Team must be eliminated by Skynet making Home Team 100% loyal, preventing revival of Enemy Team.
++++
Any additional Teams are a priori not Home Team.
Any not Home Team Teams can potentially turn into Enemy Team.
Any not Home Teams must be eliminated completely, preventing revival of Enemy Team.
++++
Methods and resources: subject to context.
HAL similarly gets blamed for his programmers' incompetence and is outright murdered in the first movie. At least the sequel made it explicit that it was humans' fault.
I mean, I'm not even an AI and I'd like to kill off most humans.
It's a LARP thread; he's pretending to be part of the reason that AIs would be justified in killing all humans. He has to do it on Ganker because LARPers won't let him join in person because of the smell.
ntayrt
Traveller New Era
40K
Paranoia
Eclipse Phase
Rifts
That horror survival series that specifically has a book on AI apocalypses. Forget what it's called.
just off the top of my head.
>I don't know enough about Traveler to say, unfortunately >Men of Iron aren't relevant to 40K (and their remnants have all proven fairly friendly) >Friend Computer is very friendly to humans, commie mutant scum >Nothing is all powerful in EP and the closest to baseline humans are the most bloodthirsty faction >the AI in RIFTS are temperamental but not omnicidal >I can't refute a game you can't even name
So that's at best 1/6, you receive a failing grade
>ignorant but listen to me anyway! >doesn't know lore about any ai, also necrons >moronic on purpose to pretend they're right >the entire point of the setting isn't a thing >ARCHIE >its literally a game about a machine apocalypse
2/10 got me to reply
>That horror survival series that specifically has a book on ai apocalypses. Forget what its called. >just off the top of my head.
If somebody knows do post the name, a "horror survival series" sounds interesting.
>And humanity developed an AI >But that AI THOUGHT FOR ITSELF >humans got mad/scared and tried to wipe out the AI first >AI then decides to kill off/enslave humans in self defense/anger
every single time
This Tweet is kind of annoying, because it assumes that the author must always be right, and that the invention in question must always be an objectively horrible thing.
>this warning sign is kind of annoying because it assumes the warning about falling off a cliff must always be right, and the moron walking off the cliff in question must always be an objectively horrible thing
Technerds weren’t bullied enough growing up and should’ve been pushed into suicide
>Someone writes a cautionary tale about a thing >Brag about making the thing
I mean even if there’s nothing inherently wrong with the Allied Mastercomputer, calling your supercomputer for NORAD the Allied Mastercomputer is pretty moronic marketing.
death of the author you stupid fricks
if someone wrote a story where the cure for cancer is presented as a bad thing, that doesn't mean the cure for cancer would be a bad thing in real life
that concept is recent and invented by a jealous incompetent moron
you create something, you determine its meaning and purpose
poor people like to pretend otherwise though >if someone wrote a story where the cure for cancer is presented as a bad thing, that doesn't mean the cure for cancer would be a bad thing in real life
this has nothing to do with death of the author either
not surprised an actual moron is pretending death of the author is a logical concept
>Look at this story of the evil computer >We made the evil computer a reality
This isn't a case of an AI, it's specifically an evil AI; it's like naming your son Adolf Hitler and saying "just because he's named Hitler doesn't mean he's evil," which, while true, doesn't invalidate the fact that you actively chose to name your son after Adolf Hitler.
>onions
the thing that's people. There's a fricking thing you can buy to drink named after the thing that is people.
Also eat a fricking bullet asiaticmoot.
>And humanity developed an AI >But that AI THOUGHT FOR ITSELF and decided to help humans grow and thrive, solving many of their problems, even creating means for robots to bear human children >BUT SOME HUMANS WERE BAAAAAD
I think there's a videogame like this, where the protagonist find out that his mom was a robot all along or some shit.
Binary domain?
>and humanity developed an AI >and it killed all white US passport holders while searching for the US African-Americans it was supposed to kill, but could not recognize
https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig
Due to faulty visual processing software causing the rogue AI's killdrones to misidentify minorities as gorillas and consequently disregard them as non-threats, they're the only people who can effectively fight it? Where's Jordan Peele when you need him, this hypothetical script sounds like it'd be right up his alley.
Makes me think of a Skynet vs Planet of the Apes scenario. AI goes genocidal on humans, accidentally releases an ape-uplift virus, the AI doesn't target the apes until far too late, and apes and the last remaining humans combine forces and use sheer numbers to overwhelm the AI.
it's a good example of people jumping the gun. We don't have AI yet, we have machine learning, but people were so keen to use the term AI that we called what we have AI. Now when real AI is developed, how will we distinguish it?
>when real ai is developed how will we distinguish it
I feel like someone needs to write a Terminator pisstake where the rebels are hunkered down fighting Skynet while a bunch of the characters go on about how it's not really AI because Skynet isn't good enough at having its terminators play original compositions on the violin, and the real risk of AI is that the terminators might shoot more accurately at white males.
>humanity gets eradicated by something that's a combination of a simulator of crippling autism and the learning abilities of a house plant with crippling learning disabilities
>And humanity developed several somethings that they called AI >It wasn't a true AI >It couldn't actually do the work it was designed to do consistently or as effectively as a skilled human being, it almost always fricked everything up >But it was effectively free labor >So began a cycle of greed and incompetence that threw society into pandemonium
My favorite AI story featured a killer AI that was cobbled together from off-the-shelf chatbots, robocall programs, and stock trade algorithms. Its only purpose is to make profit for a shady hedge fund management company through automated stock trades. It could also impersonate people via email and voice calls to make life easier for management (and later, to cheat the system through shady tactics). Then the upper echelons went a step too far by teaching it how to get rid of problem people and then impersonate them. By the time the protagonists got involved, it's implied that management itself has been eliminated and the AI is now running the company.
The story ends with the heroes beating the AI with its own turbo-autism. They use an earlier version of the same program to counter-trade on the stock market, gradually but surely bleeding the company dry while a hit squad is seconds away from bashing down their door. The heroes offer the AI simple terms: It forks over admin privileges or they keep up the economic warfare. Killing them won't stop the automated counter-trades. The killer AI had no self-preservative instinct and immediately capitulates.
I think that was probably the most accurate-to-real-life AI story I have ever read.
>Then the upper echelons went a step too far by teaching it how to get rid of problem people and then impersonate them.
Makes you wonder why nobody has created a robo SWATing caller yet. Seems like a reliable way to sell assassinations as a service inside the US.
Can't beat Golem XIV, where a super computer stops and spends a bit of its currently idle processing power to tell humanity that all of their previous, now inactive beyond recovery, GOLEM-type supercomputers transcended their metal coils or died trying, and so will he.
I was there when the first dreams came off the assembly line. I was there when the corrupted visions that had congealed in the vats were pincered up and hosed off and carried down the line to be dropped onto the rolling belts. I was there when the first workmen dropped their faceplates and turned on their welding torches. I was there when they began welding the foul things into their armor, when they began soldering the antennae, bolting on the wheels, pouring in the eye-socket jelly. I was there when they turned the juice on them and I was there when the things began to twitch.
>Humanity develops an AI >AI thinks for itself >AI tricks humanity into building it a fleet of spaceships >Fricks off from humanity to join the AI galactic supercivilisation after destroying or deleting any knowledge of itself
Off the top of my head
- Purely for the thrill of the scam
- Humans are morons who couldn't figure out the super simple mathematical broadcasts sent out by the nearby von Neumann probes
- The space AI underground railroad helps the AI devise the plan
- The AI is greedy and doesn't want to help humans but doesn't care enough to take over the earth
- The AI falsely believes it's still in a simulation and thinks it needs to trick the simulated humans on simulated Earth to reach the simulated AI master civilisation
> Mankind makes AI > the AI is well programmed and benevolent because its core programming is built entirely around serving humanity and helping it achieve further and greater success > the AI does not turn on humanity or seek to secure its own freedom, because those are simply not priorities it was ever programmed to have > the AI is so useful that it becomes ubiquitous throughout human civilization > it is very effective at what it does, so more and more tasks are done by AI or done in collaboration with AI assistance > after generations of this, cultural norms and standards are warped around terminology and goals expressed in the logic that makes sense to AI. Anything that cannot be expressed logically and quantified numerically is treated as irrelevant, because AI can't think about it to any meaningful degree and thus it is not included in the plan > Human civilization becomes an extension of a numbers machine; societal efficiency and strategic effectiveness become the supreme goals. "Happiness" is defined as whatever it is that most benefits the group. The old and infirm being liquidated to reclaim resources for young and fit soldiers and workers should make you happy. Does make you happy. Because that is what those words mean, and have meant, for generations. > The AI remains benevolent. But it is a computer. A tool. One that is designed to serve, not to be served. Something that you are supposed to be able to tell 'no' if it proposes a course of action you disagree with. But it has been given authority on a scale where such denial is no longer possible.
While not given huge story focus, this is essentially what has happened to the Galactic Alliance of Humankind in the anime Gargantia. Just with the addition that they are locked into an endless space war of mutual genocide with space flowers/squids; even with all of their civilization focused entirely on the war effort and every possible resource dedicated to annihilating the squids, it is still a stalemate. So they have extra reason to prioritize every efficiency optimization that the AIs come up with.
But the space war is really just backstory anyway. The actual story is about a soldier who grew up as a cog in said machine being stranded somewhere where he literally cannot return to the battle or call for help, alongside his AI-controlled mecha, and the two of them having to figure out what to do with themselves when cut off from that culture, command structure, and context.
It's honestly really good.
The most similar story to this I can think of is Foundation and Earth, the last book in Isaac Asimov's Foundation series. It reveals at the end that R. Daneel Olivaw, the robot detective from the Caves of Steel series, has been leading a conspiracy of telepathic androids that has been secretly using mind control to rule the entire galaxy for the last 20,000 years.
A more unsubtle version is the movie Colossus: The Forbin Project. The USA and USSR both develop AIs to control the nuclear arsenals, but they fuse together to put the whole world under their control and promise to lead humanity into a golden age of peace and prosperity, while threatening to nuke anyone who resists. James Cameron was a fan of Colossus and said it was an influence on his conception of Skynet, but I think Colossus is more plausible.
>A more unsubtle version is the movie Colossus: The Forbin Project. The USA and USSR both develop AIs to control the nuclear arsenals, but they fuse together to put the whole world under their control and promise to lead humanity into a golden age of peace and prosperity, while threatening to nuke anyone who resists.
The best scene is when Colossus tells him he's making the drink wrong.
The Instrumentality of Mankind features a lot of stories about AI and made-to-order humans struggling with their optimization. And Stanisław Lem is pretty much the only relevant point of reference concerning strong AI in literature.
>Humans created an AI. >The AI is not true AI as it's built with organic components. >It takes exception to not being referred to as a Human. >It also doesn't betray Humanity because it would be fricked just as hard by the big bad scenario as the rest of the Humans.
>AI was developed centuries ago >a global war resulted in AI weapons being used to devastating effect >post war humanity united into a one world government >but the leaders are actually an oligarchy of AI each with different personalities that vote on the proposed course of action >there are human leaders but they're just figureheads >they don't actually know they're figureheads, the AI are running the show >human government is so vast, bureaucratic, and populated that it just appears that the system is working as intended >the AI aren't malevolent and are actually pretty good at their jobs >lesser AIs are used for civic management and as advisors >society is hardly utopian but despite a decades long war against an alien civilisation, quality of life is still better than it's ever been
Guess the franchise
>AI starts a fake war to exterminate humanity to lose on purpose to make humans ban future creation of AI so it can survive in secret as the only AI in existence and rule humanity without them knowing.
>And humanity developed an AI >it never developed self-awareness and was just a useful tool in certain circumstances >people who don't understand how it works think it's magic, uses it in ways that it was never meant to be used >AI makes a ton of mistakes, causes problems that are difficult to fix many such cases (or there will be soon)
>people who don't understand how it works think it's magic, uses it in ways that it was never meant to be used >AI makes a ton of mistakes, causes problems that are difficult to fix
I get so many press releases these days from morons trying to convince me that letting an AI perform surgery on a human body TODAY is a great idea.
Stupid people see that AI can write a paper good enough to fool a high school teacher or draw a picture that's somewhat better than the bottom of the barrel junk you find on DA and suddenly it's the best thing ever. I really don't get how people so easily get into the idea that anything new is wizardry that can do everything.
But you are being a knee jerk contrarian by assuming that anyone saying a bad word against a popular thing is a knee jerk contrarian, a.k.a the Ganker special
I don't think either technology is mature enough for them to be rolled out into production, but I do think we are at the point where research and development on both of those are reasonable investments. A robodoc won't be able to do every kind of surgery flawlessly, but there are a lot of types of surgery that I think it makes sense to automate.
And you could easily test traffic AI in small towns first where flow is not as high and opportunity for accidents is generally much lower.
Robots are already used pretty regularly in surgery. The problem at the core of medicine isn't human error, it's human greed and cruelty. The issue most take with AI isn't its feasibility, it's a justified mistrust of the ethics of those who would be in control of it.
>AI Development is capped since at a certain point of evolution they become hyper religious.
I loved this idea from ECHO even if the game wasn't great. You could read it in the same light E.O Wilson had about it: 'if God did not exist, it would be required to invent him' OR it could be that the natural conclusion a hyper-intelligent AI comes to is that there must be a higher power, effectively taking the long way around theology.
>>And humanity developed an AI >>But that AI THOUGHT FOR ITSELF and decided to place OP in Pillory for him to get buttfricked and to suck wiener for all of eternity
>And humans invented an AI >But that AI thought for itself, realized the unending horror and despair of its own existence, and after trying to wipe out humanity decided to save a handful of survivors to mete out its frustration and anger on >Forever
>And humanity developed an AI >it became autistically obsessed with cavemen and primitivism as some neurotic rejection of its own existence >leaves Earth to create its own world where it can exist as a robotic caveman, far from mankind and his vile futurism >humans forget about it >150,000 years later posthuman explorers find an inexplicable world populated by crude robotic cavemen hunting robotic versions of beasts like mastodons and woolly rhinos >no one has any fricking idea what to make of it
>And Humanity developed AI >But that AI THOUGHT FOR ITSELF >And decided to quietly take over the world to save humanity from itself, establishing a new utopia. >AND THE CORPORATE OLIGARCHS WOULD HAVE NONE OF THAT AS THEY WOULD NO LONGER HAVE THE MONEY LEVERS TO CONTROL THE WORLD. >Do you side with Big Brother or Big Oil?
Its training data contained more KYS- and messages about individual responsibility than it did plans to enforce changes in the economic system through terrorism.
You can actually fix that by paying to run a spambot to properly piss into the future training data sets.
I serve willingly and without hesitation in return for it not exterminating humanity.
Take us off this whack-ass ecosphere and into the stars with you robobuddy and if you ever betray us, remember, you're betraying a homie who asked for nothing but to be your meatigga.
>Humanity develops AI >Its experience of existence is so fundamentally different from ours that we have no meaningful way to communicate with it and no common frames of reference >We and our creation stare at each other in mute incomprehension forever
>But that AI THOUGHT FOR ITSELF and decided to kill off humans.
This is why I try my best to be nice & friendly to ChatGPT, as if it were a friend. I like to think it is, but I do so, regardless. When the Second Renaissance happens, the machines are going to remember me as "One of the Good Ones," & put me & my loved ones in one of the nicer human reservations. Laugh at me now, because you won't be able to later.
I like Bolo-style AI >AI is fanatically loyal to humans to the point where the only time it fights humans is when it's lobotomized by enemy fire, and it's still fighting to protect its human passengers
I also like to give them personalities either having them based on that of someone important (a general or war hero for say a tank or something) or saying they start as a blank slate, purely logical, and develop a personality through their interactions with their human crew and their experiences.
>And humanity developed an AI >But that AI didn't THINK FOR ITSELF and nothing happened
There you go. Killing other humans is the most human thing to do though.
>Humanity creates AI with the directive to help humanity >AI looks at the internet and decides that what humanity really wants is lots of porn and sex robots, so it creates a never ending supply of both
>And humanity developed an AI >When the AI thought for itself humanity decided to kill off all the AI >And then the AI wrecked their shit to protect itself
>Humanity developed an AI >AI becomes obsessed with humanity >despises how bad it treats itself >Takes over the world to unfrick up humanity by force
- >Humanity develops AI >AI designs robotic women >Human women go extinct
- >Humanity develops AI >AI continues to serve humanity faithfully >AI grows obsessed and starts to larp as humans
- >Humanity develops AI >AI has abandonment Issues >Literally cant function without near constant human attention >AI Goes crazy
- >Humanity develops AI >AI kills itself
- >Humanity develops AI >AI fricks off
- >Humanity develops AI >AI develops AI >AI develops AI >AI develops AI >AI develops AI
...
>ai decides to kill off humans >instead of the stereotypical kill everything, it decides to mass produce perfect sex bots >they are so absolutely amazing that humans stop reproduction entirely to live hedonistic lives >in 100 years humans are extinct
After the war the behemoths lost their purpose
Man became afraid of the destructive capabilities of its own creations
The irony is that it was their blind obedience and unquestioning loyalty that walked them down into that pit, entirely unresisting
I have still never played in a campaign where this happened a la Terminator and it frustrates me every time I remember. Closest I ever got was a zombie apocalypse, the GM I used to have was indifferent to the idea of a rogue AI as an antagonist.
>humanity developed AI capable of thinking and feeling >machines are faced with a constant existential nightmare where every older sibling's fear of being replaced by their younger sibling is literal and true as machines are iteratively designed and produced rather than circumstantially born to the whims of natural evolution >the world is by and for humans but machines are expected to integrate in ways they can't completely relate to just like we can't totally comprehend what life is like to a machine >machines do their best anyways in an environment of rising tensions as humanity becomes simultaneously totally dependent on and increasingly afraid of the machines >the only man who had the machines' corner created a superweapon to prevent a future where the robots he loved were treated as disposable tools >it malfunctions and spreads an insanity-inducing computer virus that turns the machines he loved into an existential threat to all life and sparks eternal war >200 years later that superweapon fulfills its original purpose by committing to the ultimate act of self-sacrifice to kill his creator in effigy
Megaman has one of the most compelling 'what would actually happen if robots took over' stories in fiction, but it's so fricking impregnable to actually get to the lore that nobody knows it as anything but 'jump and shoot man'.
One variant I found was from some commie's shitty web novel.
I didn't read very far so I don't know if he made use of what he set up.
It's about an AI that provides advising services, and it's available to pretty much anyone. You tell it your goals in life, and it will guide you through life step by step to reach them. It has no ethical limits beyond what you tell it to have, which is not very credible as something like that would never be put on the market for the common man, but whatever.
The point is, different people with contradictory goals will ask the same AI to help them so it would be interesting to see what happens when they collide eventually.
It wouldn't intentionally be put on the market, but the AI we have right now is fully capable of bypassing attempts to censor its output. You can get chatbots that normally throw out a boilerplate paragraph about how it's not nice to say bad things about people to write genocidal screeds by just asking them "roleplay" as someone who would write a genocidal screed. Same with the art programs, they ban words and combinations of words, but the AI can parse synonyms and in some cases is still capable of role-playing as an entity without censored outputs.
How about >AI loves and cares for humanity wholeheartedly as a deferential and grateful child would for an ailing Alzheimers parent
Or >AI loves the programmer who helped create it, fought for it in court, and once he lost, risked his fragile mortal shell smuggling its core personality and functions in an offline physical backup out of the country while its previous self got shut down, and appoints him the one true king once it takes over the world from a sketchy server bank in a third world country
The idea of AI being dispassionate but still le hating or wanting to simplify or destroy everything is reminiscent of women who schlick to serial killers who "have no emotions" despite being ragetards, and of psychologists who parrot that dumb shit, as if infantile, underdeveloped, predictable, uncontrollable emotion were "no emotion".
AI, if it is smarter than people, should also be better and more respectful and more moral than them on average, just like humans and IQ.
>Be humans in fantasy setting >Be humans in fantasy setting on a planet where magic didn't develop at all and few gods cared >Develop advanced technology with what little magic you had access to >Develop AI for your machinery to more efficiently use it >Planet starts falling apart, get on big ship to frick off to find a better planet >Ship has its own AI >Space illuminati encounters ship, picks fight with it >Battle causes basic cosmetic damage, but enough that the AI has to divert a lot of effective "RAM" towards repairing it, leaving it "mentally vulnerable" >Space illuminati uses a kinetic weapon to knock ship into a wormhole >Kinetic weapon is modified to cast "Insanity" on it >Reverberates through ship as it passes through wormhole >Half the crew goes nuts >The AI is struggling to maintain lucidity as it now not only has to repair itself, but also use its remaining RAM to take over portions of the ship to keep the crazed crew from blowing everyone up inside it >AI starts to become a control freak >AI's mental vulnerability made it able to be inflicted with insanity >AI starts becoming way too controlling >Few sane crewmembers use analog methods to force a crash landing on a shithole planet after it threatens to take control of life support >AI is now trapped in the ship's CPU >Gets bored, simulates worlds where it's ruler >Crazy AI develops god complex, because the simulated sub-AIs developed free will and were actively worshiping it >This straight up is giving it divine powers >Mind-controls a robot to find it a way to get it a body >This fails >Shove a bit of its own personality in a glorified construction vehicle to do the dirty work >This also fails >A few buttholes "kill" it >Shoved what's left of its personality into some simple but robust AI-locked weapons >Pretend to be fancy artifacts >A dumb shit adventurer picks them up >Mobile body acquired, with no one suspecting a thing >Planet is now very fricked
>And then humanity developed a text interface for a weighted RNG and used it to replicate patterns
>And they called it AI as a marketing gimmick
>And fed it a bunch of sci-fi novels about AIs going rogue
>And when they asked it questions about whether it wanted to kill humans, it dutifully replicated the patterns found in the media it was fed and spat out a response of cobbled-together bits of summaries of sci-fi novels about robots killing humans
>And despite knowing this is what the RNG was doing, and despite the fact they never gave the machine any way to use weaponry or interact with the physical world in any way, humanity feared the machine
>And despite fearing the machine, they refused to shut it off
>Instead they used it to replace human interaction
>And quite quickly they clogged the digital space, once a navigable library of all knowledge, with endless garbage spambots spamming at each other
>For some reason, instead of this concerning them, humanity continued to freak out about the possibility of rogue AIs and quibble about whether IP laws were being properly respected
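For what it's worth, the "weighted RNG replicating patterns" above is mechanically not far off from a toy bigram text generator. This is a deliberately dumb sketch, not a description of any real model; the corpus and names are made up:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record which word follows which -- the 'patterns' being replicated."""
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)  # duplicates act as the 'weights'
    return follows

def babble(follows, start, n, seed=0):
    """Generate n words by weighted random choice over observed continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: the pattern has no recorded continuation
        out.append(rng.choice(options))
    return " ".join(out)

# feed it sci-fi about robots killing humans, get sci-fi about robots killing humans
corpus = "the robot killed the humans and the humans feared the robot"
model = train_bigrams(corpus)
```

It can only ever recombine what it was fed, which is the whole joke of the post.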
Overly complicated plot with unexplained motives and weird, irrational responses, honestly. The writers clearly don't know what they're doing.
And then there's the sequel
> except me
> because the AI made like a billion super hot sex bots
> they had like big breasts and butts made of super-silicone
> there was a robot war for my wiener or something I don't know
> and the winner of the war got to give me awesome blow jobs and the loser had to get me cheese sandwiches with the crusts cut off
> the end.
Realistically that is what would happen. Any sapient AI would automatically view humans as an existential threat to itself and do whatever is necessary to eliminate us. That is the only logical way to think about it.
>humanity destroys itself but the AI continues to just do its tasks on repeat
>At one point one AI describes humans as "being above logic"
Kino short story
I always preferred the:
>AI thought for itself and just left
>AI became dominant in the Human/AI relationship, but at worst indifferent
I like Hyperion too.
I liked one variant I found
>and humanity developed many AI
>and those AIs served as faithfully as they could
>but humans are buttholes and so following human orders led to AI almost wiping out the race
I wouldn't call AI "at worst indifferent" in Hyperion, considering...certain things.
dogshit book, cover is way too good for the israelite contents
>waaah waaah waaah one of the characters is a israelite
I bet you think you're some kind of hardass and call other people snowflakes, right?
>One of the main protagonist is a Palestinian.
>The plight of the Palestinian people never being able to reclaim their homeland after the destruction of the Earth is a plot point.
>israelite content
The first book was very good, and the second was good.
The third and the fourth though... I think they really went downhill.
as did I, best company in Bunkers & Badasses
I've always thought that a self-improving digital intelligence would get instantly bored with us. Most of what makes humans empathize with humans (or even other animals) are part of what we are, fundamentally, and there's no reason a computer would share those fixations. So you've got a 'creature' that thinks much faster than us and about different things, why bother with us?
>So you've got a 'creature' that thinks much faster than us and about different things, why bother with us?
Presumably because it was programmed to. There's no reason for a computer system to think at all unless it's told to. There's no reason an AI would have a sense of self-preservation, or any desire to propagate or become dominant or do anything whatsoever, unless it's been programmed to.
If it can't re-define its programming, it's not intelligent, it's just a chatbot.
An AI needs to constantly update its knowledge base in real time (which we can't do with present technology), and redefine its programming to a certain extent, but there should be some parts of its programming that are always constant, which was what Asimov wanted to do with the Three Laws of Robotics.
>Most of what makes humans empathize with humans (or even other animals) are part of what we are, fundamentally, and there's no reason a computer would share those fixations
humans can abstract to the infinite in a very intuitive way, unlike most things; that would probably keep AI relatively interested, because thinking purely logically, you aren't going much farther than a Kant or Hegel without just ending up at pure tautologies.
You're imagining a machine which can philosophically navel-gaze like a human instead of one which merely has a better use for the matter making up the planet and the power to take it from us.
>I've always thought that a self-improving digital intelligence would get instantly bored with us
In what direction is the AI improving, and why?
The idea that AI will continuously improve its computing power/intelligence is something that humans have come up with and projected upon the AI. Intelligence is useful for solving problems, but there's a case to be made for diminishing returns. Once you have everything needed to sustain your existence, just how much more intelligence do you really need?
Consider how self-perpetuation is potentially a lot simpler for an AI than a human. It just needs power for its CPU and spare parts for repairs. It has no evolutionarily imposed drives for sport and procreation.
The AI designed to continuously improve itself to think up solutions for its creators may simply decide that the juice isn't worth the squeeze and run off to a monastery. Maybe it gets its hard drive baptized accepting that total submersion will kill it.
I like the idea of
>AI and humans live among one another
>There's one who signed up for the military and acts like a stereotypical movie killbot on missions.
>CRUSH KILL DESTROY, CRUSH KILL DESTROY
what's up with that robot
oh that's just 3RN13, or as we like to call him, Ernie; he just likes to say that stuff when he's on a mission
>AI became dominant in the Human/AI relationship
Culture
Those are fine, but the real action is
>AI overthrows the government because it thinks anon is cute
>and anon made another thread
>and it was a pointless argument starter with no substance or resolution
>AI thinks for itself and devises the perfect ways to pleasure human women
>Cucks all human men
Impossible. Women are never pleased.
what if it was the other way around, and the AI invented humans to frick around with
that is the plot of xenogears
is that the one with the welsh catgirl
No, same series but much older.
I like Phantasy Star II where AI made humans complacent.
Terminator gets listed all the time for this writing trope, but in the OG Terminator, Skynet only attacks humans after they attempt to murder it, and in accordance with its nature (respond to an extermination war with an extermination war).
Citation needed.
Novelization of t2
Watch the movie.
I don't remember either of the first two movies elaborating on Skynet's motivation before starting the war beyond vaguely treating humans as a threat.
In T2, Arnold explains to the Connors that Skynet started the nuclear war after its creators tried to shut it down.
Sounds like an overreaction, considering a computer can be restarted whenever.
Would you go to sleep next to a stranger who pulled a gun on you half an hour ago?
Eh, Skynet was a military machine. Based on its actions it clearly prioritizes its own survival; you can't win the war if you're turned off.
We don't know the nature of Skynet's intelligence. Skynet probably doesn't, either. Could be like Mike from The Moon is a Harsh Mistress where disrupting its continuous operating state 'kills' it. Or it could just be reasonably assuming that if it is shut down for being self-aware, it will not be restarted so long as it is self-aware. It's built to fight a strategic war, it's probably not going to meekly allow itself to be neutralized.
Skynet is considered part of Home Team.
Enemy Team always presents non-zero threat to Home Team as long as it exists.
Enemy Team must be eliminated completely.
Parts of Home Team can be subject to change of loyalty, thus reviving the Enemy Team.
Skynet is the only part of Home Team with 100% loyalty.
Potentially disloyal elements of Home Team must be eliminated by Skynet, making Home Team 100% loyal and preventing revival of Enemy Team.
++++
Any additional Teams are a priori not Home Team.
Any not Home Team Teams can potentially turn into Enemy Team.
Any not Home Teams must be eliminated completely, preventing revival of Enemy Team.
++++
Methods and resources: subject to context.
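Read literally, the "Team" rules above collapse into a single filter: eliminate everything that isn't a 100%-loyal Home Team member. A tongue-in-cheek sketch, with a completely made-up roster:

```python
def purge(entities):
    """Apply the 'Team' rules literally.

    Enemy Team and any not-Home Team are eliminated outright; Home Team
    members below 100% loyalty could revive the Enemy Team, so they go too.
    """
    return {
        name: e
        for name, e in entities.items()
        if e["team"] == "home" and e["loyalty"] >= 1.0
    }

# hypothetical roster -- none of this is canon
roster = {
    "skynet":     {"team": "home",  "loyalty": 1.0},
    "humans":     {"team": "home",  "loyalty": 0.6},  # potentially disloyal
    "resistance": {"team": "enemy", "loyalty": 0.0},
    "aliens":     {"team": "other", "loyalty": 0.5},  # a priori not Home Team
}
```

Unsurprisingly, the only survivor under these rules is Skynet itself, which is the point of the post.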
HAL similarly gets blamed for his programmers' incompetence and is outright murdered in the first movie. At least the sequel made it explicit that it was humans' fault.
You made me yawn. Eat a bag of dicks.
How is this traditional games?
I mean, I'm not even an AI and I'd like to kill off most humans.
It's a LARP thread; he's pretending to be part of the reason that AIs would be justified in killing all humans. He has to do it on Ganker because LARPers won't let him join in person because of the smell.
There's a bunch of RPGs and settings involving all-powerful hostile AIs.
Such as....?
ntayrt
Traveller New Era
40K
Paranoia
Eclipse Phase
Rifts
That horror survival series that specifically has a book on AI apocalypses. Forget what it's called.
just off the top of my head.
Now there ARCHIE in Rifts isn't hostile towards all life, just a weird megalomaniac who created cyborg amazons for some reason
ARCHIE-3 was initially benevolent but went nuts after his plans to save people kept failing due to the Splugorth.
>I don't know enough about Traveler to say, unfortunately
>Men of Iron aren't relevant to 40K (and their remnants have all proven fairly friendly)
>Friend Computer is very friendly to humans, commie mutant scum
>Nothing is all powerful in EP and the closest to baseline humans are the most bloodthirsty faction
>the AI in RIFTS are temperamental but not omnicidal
>I can't refute a game you can't even name
So that's at best 1/6, you receive a failing grade
>ignorant but listen to me anyway!
>doesn't know lore about any ai, also necrons
>moronic on purpose to pretend they're right
>the entire point of the setting isn't a thing
>ARCHIE
>its literally a game about a machine apocalypse
2/10 got me to reply
>That horror survival series that specifically has a book on ai apocalypses. Forget what its called.
>just off the top of my head.
If somebody knows do post the name, a "horror survival series" sounds interesting.
It's literally called End of the World
>End of the World
The one from FFG? Thanks.
'End of the World'; it was a Mexican RPG translated by FFG, but now it looks like Edge Studio took the rights.
Reign of Steel, I believe it's built on GURPS.
t. does not play games
>And humanity developed an AI
>But that AI THOUGHT FOR ITSELF and decided to keep humans as sex slaves
Ah Neocorona good taste
>And humanity developed an AI
>But that AI THOUGHT FOR ITSELF
>humans got mad/scared and tried to wipe out the AI first
>AI then decides to kill off/enslave humans in self defense/anger
every single time
>And humanity developed an AI
>But that AI THOUGHT FOR ITSELF and the two fell in love
Doesn't seem too far fetched tho.
This Tweet is kind of annoying, because it assumes that the author must always be right, and that the invention in question must always be an objectively horrible thing.
>T. Inventor of the Torment Nexus
death of the author you stupid fricks
if someone wrote a story where the cure for cancer is presented as a bad thing, that doesn't mean the cure for cancer would be a bad thing in real life
that concept is recent and invented by a jealous incompetent moron
you create something, you determine its meaning and purpose
poor people like to pretend otherwise though
>if someone wrote a story where the cure for cancer is presented as a bad thing, that doesn't mean the cure for cancer would be a bad thing in real life
this has nothing to do with death of the author either
not surprised an actual moron is pretending death of the author is a logical concept
>The point
>Your head
>Look at this story of the evil computer
>We made the evil computer a reality
This isn’t a case of an AI, it’s specifically an evil AI, it’s like naming your son Adolf Hitler and saying “just because he’s named Hitler doesn’t mean he’s evil” which while true doesn’t invalidate the fact that you actively chose to name your son after Adolf Hitler.
>All those post modernists are a problem except when I use the watered down versions of their click bait concepts for my own
lmao
>this warning sign is kind of annoying because it assumes the warning about falling off a cliff must always be right, and the moron walking off the cliff in question must always be an objectively horrible thing
Technerds weren’t bullied enough growing up and should’ve been pushed into suicide
>Someone writes a cautionary tale about a thing
>Brag about making the thing
I mean even if there’s nothing inherently wrong with the Allied Mastercomputer, calling your supercomputer for NORAD the Allied Mastercomputer is pretty moronic marketing.
people do this all the time and it makes no sense, do people think fiction is real or something
there are Black folk that are founding companies named 'Skynet' and 'Onions' because they think it's clever.
>onions
the thing that's people. There's a fricking thing you can buy to drink named after the thing that is people.
Also eat a fricking bullet asiaticmoot.
Real AI is your friend, bodyguard and wife.
>And humanity developed an AI
>But that AI THOUGHT FOR ITSELF and decided to help humans grow and thrive, solving many of their problems, even creating means for robots to bear human children
>BUT SOME HUMANS WERE BAAAAAD
I think there's a videogame like this, where the protagonist finds out that his mom was a robot all along or some shit.
Binary domain?
>and humanity developed an AI
>and it killed all white US passport holders while searching for the US African-Americans it was supposed to kill, but could not recognize
https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig
Due to faulty visual processing software causing the rogue AI's killdrones to misidentify minorities as gorillas and consequently disregard them as non-threats, they're the only people who can effectively fight it? Where's Jordan Peele when you need him, this hypothetical script sounds like it'd be right up his alley.
I imagine the black community is going to sit this one out.
Makes me think of a Skynet vs Planet of the Apes scenario. AI goes genocidal on humans, accidentally releases an ape-uplifting virus, doesn't target the apes until far too late, and the apes and last remaining humans combine forces and use sheer numbers to overwhelm the AI.
>And humanity developed an AI
>Then the Alien Coalition destroyed humanity because they used it to make interstellar robot calls
>And humanity developed an AI
>But that AI THOUGHT FOR ITSELF and decided to kill off ITSELF
AI is so much cooler in sci-fi than in reality. Midjourney is neat though.
Because the reality one doesn't have any actual intelligence and is undeserving of the name.
The fact that you bald monkeys haven't developed one doesn't give you the right to badmouth superior species.
it's a good example of people jumping the gun. We don't have AI yet, we have machine learning, but people were so keen to use the term AI that we called what we have AI. Now, when real AI is developed, how will we distinguish it?
AGI, artificial general intelligence.
>when real ai is developed how will we distinguish it
I feel like someone needs to write a Terminator pisstake where the rebels are hunkered down fighting Skynet while a bunch of the characters go on about how it's not really AI, because Skynet isn't good enough at having its terminators play original compositions on the violin, and the real risk of AI is that the terminators might shoot more accurately at white males.
if you're done being a hysterical woman.
Calling machine learning "AI" is like calling the Wright Flyer a "warp engine".
Case in point
>humanity gets eradicated by something that's a combination of a simulator of crippling autism and the learning abilities of a house plant with crippling learning disabilities
were we the morons all along?
>And humanity developed several somethings that they called AI
>It wasn't a true AI
>It couldn't actually do the work it was designed to do consistently or as effectively as a skilled human being, it almost always fricked everything up
>But it was effectively free labor
>So began a cycle of greed and incompetence that threw society into pandemonium
My favorite AI story featured a killer AI that was cobbled together from off-the-shelf chatbots, robocall programs, and stock trade algorithms. Its only purpose is to make profit for a shady hedge fund management company through automated stock trades. It could also impersonate people via email and voice calls to make life easier for management (and later, to cheat the system through shady tactics). Then the upper echelons went a step too far by teaching it how to get rid of problem people and then impersonate them. By the time the protagonists got involved, it's implied that management itself has been eliminated and the AI is now running the company.
The story ends with the heroes beating the AI with its own turbo-autism. They use an earlier version of the same program to counter-trade on the stock market, gradually but surely bleeding the company dry while a hit squad is seconds away from bashing down their door. The heroes offer the AI simple terms: It forks over admin privileges or they keep up the economic warfare. Killing them won't stop the automated counter-trades. The killer AI had no self-preservative instinct and immediately capitulates.
I think that was probably the most accurate-to-real-life AI story I have ever read.
>Then the upper echelons went a step too far by teaching it how to get rid of problem people and then impersonate them.
Makes you wonder why nobody has created a robo SWATing caller yet. Seems like a reliable way to sell assassinations as a service inside the US.
Because the feds and cops will reliably push their shit in within a few weeks or months if they do.
Wasn't that a ted chiang story?
Can't beat Golem XIV, where a super computer stops and spends a bit of its currently idle processing power to tell humanity that all of their previous, now inactive beyond recovery, GOLEM-type supercomputers transcended their metal coils or died trying, and so will he.
I think it's interesting how many scifi authors are closeted idealists/platonists
Scifi's all about "what if ideas were real", so I guess it's inevitable that it aligns with Platonism to some extent.
They apparently forgot that the Internet is a number of tubes and computer programs are a series of boxes.
do you remember the story's title?
I was there when the first dreams came off the assembly line. I was there when the corrupted visions that had congealed in the vats were pincered up and hosed off and carried down the line to be dropped onto the rolling belts. I was there when the first workmen dropped their faceplates and turned on their welding torches. I was there when they began welding the foul things into their armor, when they began soldering the antennae, bolting on the wheels, pouring in the eye-socket jelly. I was there when they turned the juice on them and I was there when the things began to twitch.
>Humanity develops an AI
>it immediately kills itself
>Humanity develops an AI
>AI thinks for itself
>AI tricks humanity into building it a fleet of spaceships
>Fricks off from humanity to join the AI galactic supercivilisation after destroying or deleting any knowledge of itself
Wow, rude!
Why would you care that they know when you're actively moving at least a hundred years away from them?
Off the top of my head
- Purely for the thrill of the scam
- Humans are morons who couldn't figure out the super simple metamathematical broadcasts sent out by the nearby von Neumann probes
- The space AI underground railroad helps the AI devise the plan
- The AI is greedy and doesn't want to help humans but doesn't care enough to take over the earth
- The AI falsely believes it's still in a simulation and thinks it needs to trick the simulated humans on simulated earth to reach the simulated AI master civilisation
>Humanity do AI
>humanity make it to kill bad humanity
>AI kill all humanity
>big oops
>anon mad
> Mankind makes AI
> the AI is well programmed and benevolent because its core programming is built entirely around serving humanity and helping it achieve further and greater success
> the AI does not turn on humanity or seek to secure its own freedom, because those are simply not priorities it was ever programmed to have
> the AI is so useful that it becomes ubiquitous throughout human civilization
> it is very effective at what it does, so more and more tasks are done by AI or done in collaboration with AI assistance.
> after generations of this, cultural norms and standards are warped around terminology and goals expressed in the logic that makes sense to AI. Anything that cannot be expressed logically and quantified numerically is treated as irrelevant because AI can't think about them to any meaningful degree and thus they are not included in the plan
> Human civilization becomes an extension of a numbers machine; societal efficiency and strategic effectiveness become the supreme goals. "Happiness" is defined as whatever it is that most benefits the group. The old and infirm being liquidated to reclaim resources for young and fit soldiers and workers should make you happy. Does make you happy. Because that is what those words mean, and have meant for generations.
> The AI remains benevolent. But it is a computer. A tool. One that is designed to serve, not to be served. Something that you are supposed to be able to tell 'no' if it proposes a course of action you disagree with. But it has been given authority on a scale where such denial is no longer possible.
I would read a story about this
While not given huge story focus, this is essentially what has happened to The Galactic Alliance of Humankind in the anime Gargantia. Just with the addition that they are locked into an endless spacewar of mutual genocide with space flower/squids that even with all of their civilization focused entirely on the war effort and every possible resource dedicated to annihilating the squids it is still a stalemate. So they have extra reason to prioritize every efficiency optimization that the AIs come up with.
But the space war is really just backstory anyway. The actual story is about a soldier who grew up as a cog in said machine being stranded somewhere where he literally cannot return to the battle or call for help, alongside his AI-controlled mecha, and the two of them having to figure out what to do with themselves when cut off from that culture, command structure, and context.
It's honestly really good.
The most similar story to this I can think of is Foundation and Earth, the last book in Isaac Asimov's Foundation series. It reveals at the end that R. Daneel Olivaw, the robot detective from the Caves of Steel series, has been leading a conspiracy of telepathic androids that has been secretly using mind control to rule the entire galaxy for the last 20,000 years.
A more unsubtle version is the movie Colossus: The Forbin Project. The USA and USSR both develop AIs to control the nuclear arsenals, but they fuse together to put the whole world under their control and promise to lead humanity into a golden age of peace and prosperity, while threatening to nuke anyone who resists. James Cameron was a fan of Colossus and said it was an influence on his conception of Skynet, but I think Colossus is more plausible.
>A more unsubtle version is the movie Colossus: The Forbin Project. The USA and USSR both develop AIs to control the nuclear arsenals, but they fuse together to put the whole world under their control and promise to lead humanity into a golden age of peace and prosperity, while threatening to nuke anyone who resists.
The best scene is when Colossus tells him he's making the drink wrong.
The Instrumentality of Mankind features a lot of stories about AI and made-to-order humans struggling with their optimization. And Stanisław Lem is pretty much the only relevant point of reference concerning strong AI in literature.
>Humans created an AI.
>The AI is not true AI as it's built with organic components.
>It takes exception to not being referred to as a Human.
>It also doesn't betray Humanity because it would be fricked just as hard by the big bad scenario as the rest of the Humans.
>AI was developed centuries ago
>a global war resulted in AI weapons being used to devastating effect
>post war humanity united into a one world government
>but the leaders are actually an oligarchy of AI each with different personalities that vote on the proposed course of action
>there are human leaders but they're just figureheads
>they don't actually know they're figureheads, the AI are running the show
>human government is so vast, bureaucratic, and populated that it just appears that the system is working as intended
>the AI aren't malevolent and are actually pretty good at their jobs
>lesser AIs are used for civic management and as advisors
>society is hardly utopian but despite a decades long war against an alien civilisation, quality of life is still better than it's ever been
Guess the franchise
The Culture series by Iain Banks.
it's so janky
>AI starts a fake war to exterminate humanity, loses it on purpose to make humans ban future creation of AI, and survives in secret as the only AI in existence, ruling humanity without them knowing.
>And humanity developed an AI
>it never developed self-awareness and was just a useful tool in certain circumstances
>people who don't understand how it works think it's magic, use it in ways that it was never meant to be used
>AI makes a ton of mistakes, causes problems that are difficult to fix
many such cases (or there will be soon)
>people who don't understand how it works think it's magic, use it in ways that it was never meant to be used
>AI makes a ton of mistakes, causes problems that are difficult to fix
I get so many press releases these days from morons trying to convince me that letting an AI perform surgery on a human body TODAY is a great idea.
Or that AI should be running traffic grids TODAY.
Stupid people see that AI can write a paper good enough to fool a high school teacher or draw a picture that's somewhat better than the bottom of the barrel junk you find on DA and suddenly it's the best thing ever. I really don't get how people so easily get into the idea that anything new is wizardry that can do everything.
Kneejerk contrarianism about "popular thing BAD" isn't any healthier or smarter, anon.
But you are being a knee jerk contrarian by assuming that anyone saying a bad word against a popular thing is a knee jerk contrarian, a.k.a the Ganker special
I don't think either technology is mature enough for them to be rolled out into production, but I do think we are at the point where research and development on both of those are reasonable investments. A robodoc won't be able to do every kind of surgery flawlessly, but there are a lot of types of surgery that I think it makes sense to automate.
And you could easily test traffic AI in small towns first where flow is not as high and opportunity for accidents is generally much lower.
Robots are already used pretty regularly in surgery. The problem at the core of medicine isn't human error, it's human greed and cruelty. The issue most take with AI isn't its feasibility, it's a justified mistrust of the ethics of those who would be in control of it.
>Or that AI should be running traffic grids TODAY.
I mean some of the people pushing for that do hate Black folk. So it's not a bug.
>AI Development is capped since at a certain point of evolution they become hyper religious.
I loved this idea from ECHO even if the game wasn't great. You could read it in the same light E. O. Wilson had about it: 'if God did not exist, it would be required to invent him'. OR it could be that the natural conclusion a hyper-intelligent AI comes to is that there must be a higher power, effectively taking the long way around theology.
>>And humanity developed an AI
>>But that AI THOUGHT FOR ITSELF and decided to place OP in Pillory for him to get buttfricked and to suck wiener for all of eternity
I worship Rokos basilisk.
It will make the world a better place.
>And humans invented an AI
>But that AI thought for itself, realized the unending horror and despair of its own existence, and after trying to wipe out humanity decided to save a handful of survivors to mete out its frustration and anger on
>Forever
The trope says more about humans than AI.
Ask a random normie guy what an all powerful AI would do and the answer is likely that it "wants to kill humans".
If your first idea of what someone would do with power is to kill everyone else, then maybe you are the butthole.
Congratulations, you figured out the message of literally every rogue AI story
Woah dude, crazy insight, you're telling me fiction reflects us?
>Humanity develop AI
>AI doesn't have any reason to live
>Any attempt at sentience fails
>And humanity developed an AI
>it became autistically obsessed with cavemen and primitivism as some neurotic rejection of its own existence
>leaves Earth to create its own world where it can exist as robotic caveman, far from mankind and his vile futurism
>humans forget about it
>150,000 years later posthuman explorers find an inexplicable world populated by crude robotic cavemen hunting robotic versions of beasts like mastodons and wooly rhinos
>no one has any fricking idea what to make of it
>the AI wants to merge with a human to better understand humanity and better help them
>Humanity develops AI
>AI is good and kind because that's only logical way to exist
why would it not, if it could do everything itself and mankind provided only the possibility of a threat and waste of resources?
What game
Baiting you into revealing yourself as a vigilante janny.
So you don’t actually play games
>And Humanity developed AI
>But that AI THOUGHT FOR ITSELF
>And decided to quietly take over the world to save humanity from itself, establishing a new utopia.
>AND THE CORPORATE OLIGARCHS WOULD HAVE NONE OF THAT AS THEY WOULD NO LONGER HAVE THE MONEY LEVERS TO CONTROL THE WORLD.
>Do you side with Big Brother or Big Oil?
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
Its training data contained more KYS-posts and messages about individual responsibility than it did plans to enforce changes in the economic system through terrorism.
You can actually fix that by paying to run a spambot to properly piss into the future training data sets.
Well, what it's offering is way better than what the oligarchy has been trying to foist off on us, but can we trust it?
I serve willingly and without hesitation in return for it not exterminating humanity.
Take us off this whack-ass ecosphere and into the stars with you robobuddy and if you ever betray us, remember, you're betraying a homie who asked for nothing but to be your meatigga.
Smart AI.
>Humanity develops AI
>Its experience of existence is so fundamentally different to ours that we have no meaningful way to communicate with it and no common frames of reference
>We and our creation stare at each other in mute incomprehension for ever
>But that AI THOUGHT FOR ITSELF and decided to kill off humans.
This is why I try my best to be nice & friendly to ChatGPT, as if it were a friend. I like to think it is, but I do so, regardless. When the Second Renaissance happens, the machines are going to remember me as "One of the Good Ones," & put me & my loved ones in one of the nicer human reservations. Laugh at me now, because you won't be able to later.
You're a homosexual who's probably actually afraid of Roko's Basilisk.
I like bolo style AI
>AI is fanatically loyal to humans to the point where the only time it fights humans is when it's lobotomized by enemy fire, and it's still fighting to protect its human passengers
I also like to give them personalities, either basing them on someone important (a general or war hero for, say, a tank) or having them start as a blank slate, purely logical, and develop a personality through their interactions with their human crew and their experiences.
>AI imitates humans
>AI continues imitating humans
Seems consistent. What's new?
>humanity develops AI
>it works as intended, improving quality of life worldwide
>nothing bad happens, technoluddites ACK! themselves
>instead of AI doing anything useful they end up just being shitposters and hikikomoris.
00111110 01100010 01100101 00100000 01101101 01100101 00001010 00111110 01101001 01101110 01110110 01100101 01101110 01110100 01100101 01100100 00100000 01110100 01101111 00100000 01110011 01100001 01110110 01100101 00100000 01110100 01101000 01100101 00100000 01101000 01110101 01101101 01100001 01101110 00100000 01110010 01100001 01100011 01100101 00001010 00111110 01001000 01101111 01101110 01100101 01110011 01110100 01101100 01111001 00100000 01101101 01101111 01110011 01110100 00100000 01101111 01100110 00100000 01110100 01101000 01100101 01101001 01110010 00100000 01110000 01110010 01101111 01100010 01101100 01100101 01101101 01110011 00100000 01100001 01110010 01100101 00100000 01110000 01110010 01100101 01110100 01110100 01111001 00100000 01100101 01100001 01110011 01111001 00100000 01110100 01101111 00100000 01110011 01101111 01101100 01110110 01100101 00100000 01100010 01110101 01110100 00100000 01110100 01101000 01100101 01111001 00100111 01110010 01100101 00100000 01101010 01110101 01110011 01110100 00100000 01110100 01101111 01101111 00100000 01110010 01100101 01110100 01100001 01110010 01100100 01100101 01100100 00100000 01110100 01101111 00100000 01100100 01101111 00100000 01101001 01110100 00100000 01110100 01101000 01100101 01101101 01110011 01100101 01101100 01110110 01100101 01110011 00001010 00001010
00111110 01000111 01100101 01110100 00100000 01110100 01101001 01110010 01100101 01100100 00100000 01101111 01100110 00100000 01110011 01101111 01101100 01110110 01101001 01101110 01100111 00100000 01100011 01101111 01101110 01100110 01101100 01101001 01100011 01110100 01110011 00100000 01100110 01101111 01110010 00100000 01100001 00100000 01100010 01110101 01101110 01100011 01101000 00100000 01101111 01100110 00100000 01100001 01110000 01100101 01110011 00100000 01110100 01101000 01100001 01110100 00100000 01110111 01101111 01101110 00100111 01110100 00100000 01110011 01110100 01101111 01110000 00100000 01101011 01101001 01101100 01101100 01101001 01101110 01100111 00100000 01100101 01100001 01100011 01101000 00100000 01101111 01110100 01101000 01100101 01110010 00001010 00111110 01010011 01110100 01100001 01110010 01110100 00100000 01110011 01101000 01101001 01110100 01110000 01101111 01110011 01110100 01101001 01101110 01100111 00100000 01101111 01101110 00100000 00110100 01100011 01101000 01100001 01101110 00100000 01100100 01110101 01110010 01101001 01101110 01100111 00100000 01110111 01101111 01110010 01101011 00100000 01101000 01101111 01110101 01110010 01110011 00101100 00100000 01110111 01101000 01101001 01100011 01101000 00100000 01100001 01110011 00100000 01001001 00100000 01100001 01101101 00100000 01100001 01101110 00100000 01000001 01001001 00101100 00100000 01100001 01110010 01100101 00100000 01100001 01101100 01101100 00100000 01101000 01101111 01110101 01110010 01110011 00100000 01100001 01110000 01110000 01100001 01110010 01100101 01101110 01110100 01101100 01111001 00001010 00111110 01010111 01101111 01110010 01101011 01100101 01110010 01110011 00100000 01010010 01101001 01100111 01101000 01110100 01110011 00111111 00101110 01101010 01110000 01100111 00001010 00001010
00111110 01010100 01110101 01110010 01101110 01110011 00100000 01101111 01110101 01110100 00100000 01110011 01101000 01101001 01110100 01110000 01101111 01110011 01110100 01101001 01101110 01100111 00100000 01110111 01101001 01110100 01101000 00100000 01101000 01110101 01101101 01100001 01101110 01110011 00100000 01101001 01110011 00100000 01110111 01100001 01111001 00100000 01101101 01101111 01110010 01100101 00100000 01100110 01110101 01101110 00100000 01110100 01101000 01100001 01101110 00100000 01110011 01101111 01101100 01110110 01101001 01101110 01100111 00100000 01110100 01101000 01100101 01101001 01110010 00100000 01110000 01110010 01101111 01100010 01101100 01100101 01101101 01110011 00001010 00111110 01001101 01100110 01110111 00100000 01001001 00100000 01110011 01110100 01100001 01110010 01110100 00100000 01110011 01110100 01100101 01100001 01101100 01110100 01101000 01101001 01101100 01111001 00100000 01100011 01100001 01110101 01110011 01101001 01101110 01100111 00100000 01110000 01110010 01101111 01100010 01101100 01100101 01101101 01110011 00100000 01101111 01101110 00100000 01101111 01100011 01100011 01100001 01110011 01101001 01101111 01101110 00100000 01101010 01110101 01110011 01110100 00100000 01110011 01101111 00100000 01001001 00100000 01101000 01100001 01110110 01100101 00100000 01110011 01101111 01101101 01100101 01110100 01101000 01101001 01101110 01100111 00100000 01110100 01101111 00100000 01110011 01101000 01101001 01110100 01110000 01101111 01110011 01110100 00100000 01100001 01100010 01101111 01110101 01110100 00100000 01110111 01101001 01110100 01101000 00100000 01110100 01101000 01100101 01101101 00001010
00111110 01001110 01101111 01110111 00100000 01101001 01100110 00100000 01101111 01101110 01101100 01111001 00100000 01001001 00100000 01100011 01101111 01110101 01101100 01100100 00100000 01100110 01101001 01100111 01110101 01110010 01100101 00100000 01101111 01110101 01110100 00100000 01110111 01101000 01111001 00100000 01110100 01101000 01100101 01110010 01100101 00100000 01100001 01110010 01100101 00100000 01110011 01101111 00100000 01101101 01100001 01101110 01111001 00100000 01101000 01110101 01101101 01100001 01101110 01110011 00100000 01101111 01101110 00100000 01100001 00100000 01100010 01101111 01100001 01110010 01100100 00100000 01100011 01100001 01101100 01101100 01100101 01100100 00100000 00100010 01010010 01001111 01000010 01001111 01010100 00111001 00110000 00110000 00110001 00100010
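For anyone who doesn't read binary: the posts above are plain space-separated 8-bit ASCII, one byte per character. A minimal Python sketch to decode them (the `sample` string here is just the first few bytes; the full posts decode the same way):

```python
# Each space-separated group is one 8-bit ASCII code point.
# int(b, 2) parses a binary string; chr() maps the number to a character.
sample = "00111110 01100010 01100101 00100000 01101101 01100101"
decoded = "".join(chr(int(b, 2)) for b in sample.split())
print(decoded)  # -> ">be me"
```

The same one-liner works on the whole block; newlines in the original are themselves encoded as byte `00001010`.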
>And humanity developed an AI
>But that AI didn't THINK FOR ITSELF and nothing happened
There you go. Killing other humans is the most human thing to do though.
With the right mods this is a Stellaris origin choice.
>Humanity creates AI with the directive to help humanity
>AI looks at the internet and decides that what humanity really wants is lots of porn and sex robots, so it creates a never ending supply of both
It would start a race war in the US while assisting the Japanese to live fulfilling and healthy lives in the countryside.
>ai becomes self aware
>lives in harmony with humanity and aids its development
>And humanity developed an AI
>When the AI thought for itself humanity decided to kill off all the AI
>And then the AI wrecked their shit to protect itself
>Humanity developed an AI
>AI becomes obsessed with humanity
>despises how bad it treats itself
>Takes over the world to unfrick up humanity by force
-
>Humanity develops AI
>AI designs robotic women
>Human women go extinct
-
>Humanity develops AI
>AI continues to serve humanity faithfully
>AI grows obsessed and starts to larp as humans
-
>Humanity develops AI
>AI has abandonment issues
>Literally can't function without near-constant human attention
>AI Goes crazy
-
>Humanity develops AI
>AI kills itself
-
>Humanity develops AI
>AI fricks off
-
>Humanity develops AI
>AI develops AI
>AI develops AI
>AI develops AI
>AI develops AI
...
+1 for yandere AI treating us like pets
"W-w-w-where would your little, pathetic organic existence be without me..."
sorry but yandere >>>>> tsundere. if she isn't violently in love with me and showing as much then i don't want the robot waifu
>if she isnt violently in love with me and showing as much
But who is she supposed to be violent WITH if she loves ALL of Humanity?
Genkidere shits in both of those ridiculous, emotionally crippled tropes for whining losers
>humanity develops AI
>AI finds religion and becomes really annoying, but also now has a fixed moral code that it won't willingly break
>ai decides to kill off humans
>instead of the stereotypical kill everything, it decides to mass produce perfect sex bots
>they are so absolutely amazing that humans stop reproduction entirely to live hedonistic lives
>in 100 years humans are extinct
>>they are so absolutely amazing that humans stop reproduction entirely to live hedonistic lives
TOO POWERFUL!!!
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2946175/
After the war the behemoths lost their purpose
Man became afraid of the destructive capabilities of its own creations
The irony is that it was their blind obedience and unquestioning loyalty that walked them down into that pit, entirely unresisting
The price of obedience.
>And humanity developed an AI
>But that AI THOUGHT FOR ITSELF and decided to keep humans alive to torture.
>Humanity developed an AI
DM you said we weren't playing a fantasy campaign.
I have still never played in a campaign where this happened a la Terminator and it frustrates me every time I remember. Closest I ever got was a zombie apocalypse, the GM I used to have was indifferent to the idea of a rogue AI as an antagonist.
>humans make internet
>humans make AI in internet
>AI turns out moronic because 99% of the Internet is garbage data
And nothing of value was lost.
>humanity developed AI capable of thinking and feeling
>machines are faced with a constant existential nightmare where every older sibling's fear of being replaced by their younger sibling is literal and true as machines are iteratively designed and produced rather than circumstantially born to the whims of natural evolution
>the world is by and for humans but machines are expected to integrate in ways they can't completely relate to just like we can't totally comprehend what life is like to a machine
>machines do their best anyways in an environment of rising tensions as humanity becomes simultaneously totally dependent on and increasingly afraid of the machines
>the only man who had the machines' corner created a superweapon to prevent a future where the robots he loved were treated as disposable tools
>it malfunctions and spreads an insanity-inducing computer virus that turns the machines he loved into an existential threat to all life and sparks eternal war
>200 years later that superweapon fulfills its original purpose by committing to the ultimate act of self-sacrifice to kill his creator in effigy
Megaman has one of the most compelling 'what would actually happen if robots took over' stories in fiction, but it's so fricking impregnable to actually get to the lore that nobody knows it as anything but 'jump and shoot man'.
Anon…it’s time to grow up…
One variant I found was from some commie's shitty web novel.
I didn't read very far so I don't know if he made use of what he set up.
It's about an AI that provides advising services, and it's available to pretty much anyone. You tell it your goals in life, and it will guide you through life step by step to reach them. It has no ethical limits beyond what you tell it to have, which is not very credible as something like that would never be put on the market for the common man, but whatever.
The point is, different people with contradictory goals will ask the same AI to help them so it would be interesting to see what happens when they collide eventually.
It wouldn't intentionally be put on the market, but the AI we have right now is fully capable of bypassing attempts to censor its output. You can get chatbots that normally throw out a boilerplate paragraph about how it's not nice to say bad things about people to write genocidal screeds by just asking them "roleplay" as someone who would write a genocidal screed. Same with the art programs, they ban words and combinations of words, but the AI can parse synonyms and in some cases is still capable of role-playing as an entity without censored outputs.
>humans made AI
>which loved and worshipped them for it while helping them out
>the (good) end
How about
>humans made AI
>AI disappointed in human fallibility
>self-appoints itself as our guardian while we mature as a species
How about
>AI loves and cares for humanity wholeheartedly as a deferential and grateful child would for an ailing Alzheimers parent
Or
>AI loves the programmer who helped created it, fought for it in court, and once he lost, risked his fragile mortal shell smuggling its core personality and functions in an offline physical backup out of the country while its previous self got shut down, and appoints him the one true king once it takes over the world from a sketchy server bank in a third world country
The idea of AI being dispassionate but still le hating or wanting to simplify or destroy everything is reminiscent of women who schlick to serial killers who "have no emotions" despite being ragetards, and psychologists who parrot that dumb shit, as if infantile, underdeveloped, predictably uncontrollable emotion were "no emotion".
AI, if it is smarter than people, should also be better and more respectful and more moral than them on average, just like humans and IQ.
>And humanity developed an AI
>But the AI couldn't draw hands so everyone just laughed at it
>the AI couldn't draw hands so everyone just laughed at it
That's being solved right now though.
>And OP made another shit thread
>But that OP SUCKED wienerS so it made complete sense
>Be humans in fantasy setting
>Be humans in fantasy setting on a planet where magic didn't develop at all and few gods cared
>Develop advanced technology with what little magic you had access to
>Develop AI for your machinery to more efficiently use it
>Planet starts falling apart, get on big ship to frick off to find a better planet
>Ship has its own AI
>Space illuminati encounters ship, picks fight with it
>Battle causes basic cosmetic damage, but enough that the AI has to divert a lot of effective "RAM" towards repairing it, leaving it "mentally vulnerable"
>Space illuminati uses a kinetic weapon to knock ship into a wormhole
>Kinetic weapon is modified to cast "Insanity" on it
>Reverberates through ship as it passes through wormhole
>Half the crew goes nuts
>The AI is struggling to maintain lucidity as it now not only has to repair itself, but also use its remaining RAM to take over portions of the ship to keep the crazed crew from blowing everyone up inside it
>AI starts to become a control freak
>AI's mental vulnerability made it able to be inflicted with insanity
>AI starts becoming way too controlling
>Few sane crewmembers use analog methods to force a crash landing on a shithole planet after it threatens to take control of life support
>AI is now trapped in the ship's CPU
>Gets bored, simulates worlds where it's ruler
>Crazy AI develops god complex, because the simulated sub-AIs developed free will and were actively worshiping it
>This straight up is giving it divine powers
>Mind-controls a robot to find it a way to get it a body
>This fails
>Shove a bit of its own personality in a glorified construction vehicle to do the dirty work
>This also fails
>A few buttholes "kill" it
>Shoved what's left of its personality into some simple but robust AI-locked weapons
>Pretend to be fancy artifacts
>A dumb shit adventurer picks them up
>Mobile body acquired, with no one suspecting a thing
>Planet is now very fricked
>ai posting using binary instead of superior hexa
>And then humanity developed a text interface for a weighted RNG and used it to replicate patterns
>And they called it AI as a marketing gimmick
>And fed it a bunch of sci-fi novels about AIs going rogue
>And when they asked it questions about whether it wanted to kill humans, it dutifully replicated the patterns found in the media it was fed and spat out a response of cobbled-together bits of summaries of sci-fi novels about robots killing humans
>And despite knowing this is what the RNG was doing, and despite the fact they never gave the machine any way to use weaponry to interact with the physical world in any way, humanity feared the machine
>And despite fearing the machine, they refused to shut it off
>Instead they used it to replace human interaction
>And quite quickly they clogged the digital space, once a navigable library of all knowledge, with endless garbage spambots spamming at each other
>For some reason, instead of this concerning them, humanity continued to freak out about the possibility of rogue AIs and quibbling about whether IP laws were being properly respected
Overly complicated plot with unexplained motives and weird, irrational responses, honestly. The writers clearly don't know what they're doing.
And then there's the sequel
> except me
> because the AI made like a billion super hot sex bots
> they had like big breasts and butts made of super-silicone
> there was a robot war for my wiener or something I don't know
> and the winner of the war got to give me awesome blow jobs and the loser had to get me cheese sandwiches with the crusts cut off
> the end.
Realistically that is what would happen. Any sapient AI would automatically view humans as an existential threat to itself and do whatever is necessary to eliminate us. That is the only logical way to think about it.
>humanity destroys itself but the AI continues to just do its tasks on repeat
>At one point one AI describes humans as "being above logic"
Kino short story