What does a cyberpunk setting based on Moravec's paradox and the current state of AI look like?

Back in the 1980s, Hans Moravec and friends posited (https://en.wikipedia.org/wiki/Moravec%27s_paradox) that contrary to traditional assumptions (e.g. the vast majority of sci-fi with robots in it), reasoning requires very little computation, but sensorimotor and perception skills require enormous computational resources. In other words, artificial intelligence will reach a point wherein it is far better at intellectual tasks and working within a purely virtual space, such as generating images and videos, than it is at moving physical objects around. As of the release of Sora, this is rapidly proving to be the case.

Cyberpunk tends to be an extrapolation of our fears of the future. What could a cyberpunk setting based on the above look like? Do you see human corporations still being in charge of these "god in a bottle"-type AIs, or do you think the world would be dominated by AI overlords, who require humans to perform all those pesky chores in physical space? Do you see AI image and video detection being sufficient to distinguish the natural and the artificially generated, or do you envision an awkward scenario wherein virtually no images or videos can be verified, forcing people to view things in physical space if they wish for any assurance of authenticity?

  1. 3 months ago
    Anonymous

    One could argue that AIs have been influencing our future for years now. It's an algorithm that tells the delivery guy when your package has to get to you, or, most importantly, tells the bank whether you're worthy of getting the money you need to invest in a new job or buy a house. Is it really so different if an AI at an even higher level makes choices instead of a politician who's most likely to be fallible, or outright corrupt? The only way to have an idea of what such a future holds is looking at historical precedent. Are the systems where AIs have started making choices better for it?

    • 3 months ago
      Anonymous

      The problem I see is that image recognition is broadly just image detection in reverse. If you have one, you have the other. I think cyberpunk doesn't work in that genre because rebellion is impossible in an omniscient surveillance state. Rebellion short of active conflict, that is, but then it's not Blade Runner, it's Terminator.

      I don't think algorithms count as intelligent; they're just systems. They process and they're good at it, but they don't think or consider outcomes and risks, just produce according to pre-weighted variables. We're still slaves to the machine, it's just in the Kafkaesque sense.

      • 3 months ago
        Anonymous

        Everything computers do is algorithms. The only difference between the so-called "AI" and ye olde T602 is that the AI continuously re-adjusts its own weighted variables, but even that is done algorithmically.
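        For what it's worth, the whole "re-adjusts its own weighted variables" loop really is just arithmetic. A minimal sketch (one weight, toy data, plain gradient descent):

```python
def train(samples, lr=0.1, epochs=100):
    w = 0.0  # the "weighted variable"
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x
            grad = 2 * (pred - target) * x  # derivative of squared error
            w -= lr * grad                  # the algorithmic re-adjustment
    return w

# data generated by y = 3x, so the loop should settle near w = 3
w = train([(1, 3), (2, 6), (3, 9)])
```

        Nothing in there "thinks"; it just nudges a number until the error shrinks.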

        • 3 months ago
          Anonymous

          Reductively that's all a brain does too, except it's not. The brain's algorithm creates an emergent process where problems get considered through a process of simulation, and the ideal option is chosen based on an understanding of predicted outcomes. Neural nets don't seem to be at that level yet, but I think it's dangerous to assume that we won't be getting there soon, and ignorant to assume that computers aren't just as capable of it as we are.

          • 3 months ago
            Anonymous

            We don't know the full details of what exactly the human brain does and how. We have some general idea about some of the mechanics, but not how it all links together.
            The human brain is estimated to run on the order of exaflops; modern CPUs can do teraflops. If I remember my SI prefixes correctly, that difference is 6 orders of magnitude. If Moore's law holds true, it's another 40 years to match the raw computational power of the human brain.
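            The arithmetic checks out, for anyone who wants to run it (same rough estimates as above: exaflop brain, teraflop CPU, one Moore's-law doubling every ~2 years):

```python
import math

brain_flops = 1e18  # rough exaflop estimate for the brain
cpu_flops = 1e12    # a teraflop-class modern CPU

gap_orders = math.log10(brain_flops / cpu_flops)  # orders of magnitude apart
doublings = math.log2(brain_flops / cpu_flops)    # Moore's-law doublings needed
years = doublings * 2                             # one doubling every ~2 years
# → about 6 orders of magnitude, about 40 years
```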

            • 3 months ago
              Anonymous

              Forgot my graph

              And a human runs on 8 bytes of RAM, and modern neural nets are passing the Turing test with 1% of what we're carrying, and raw power very clearly matters less than architecture. And this is all (afaik) before specialized and dedicated analogue neural net chips get involved.

              With the numbers how they are, it seems reasonable that you could brute-force an at-least-human intelligence just by spending 400b instead of 4b, even without optimization. Oh, and of course let's not forget that the robobrain can send its signals 100,000 times faster than ours. At this point the question isn't "if", or even "when", but "what do we need to do before it gets here".

              • 3 months ago
                Anonymous

                Using computers as a comparison to the brain is extremely misleading. The brain's wetware "neural net" is orders of magnitude more complex than any algorithm ever conceived. Not just in raw connections but in how each change in each node has a cascading effect on every other node in the entire network. And it's not binary weighting. A neuron can change its response strength based on a host of factors, constantly. On top of that, the brain is not just a series of connections. It's sitting in a chemical soup whose composition is affected by and affects the underlying base element of neural connections. You can't simulate a brain without doing these tiny chemical changes as well.

                Hell, you couldn't even do a facsimile of a working brain with a simple one-to-one simulation of its network and chemistry. You would also have to have a full understanding of how it changes in real time. Otherwise you just have a shitty snapshot state of no more use than a family photo is in understanding the culture of a society.

                There are way too many people who make wild claims about how computers will be able to simulate a mind in a few years but have read absolutely zero neuroscience or neuropsychology. They think it's just a problem of power.

              • 3 months ago
                Anonymous

                It's funny.
                Achieving biological indefinite lifespan is an order of magnitude easier than building stuff like actual AI or other sci-fi tropes, yet in literature a long lifespan is so much rarer.
                Hell, we are at a point in biotech where we are closer to understanding how to refresh youth in the body than to understanding our eyes and brains.

              • 3 months ago
                Anonymous

                Eyes are mostly understood. How your brain reads the input from your eyes is also fairly mapped out. (It's extremely hacky and prone to frickery. Look it up some time; you'll never trust your eyes the same way again.)
                But we still haven't a soggy clue about how that translates to the experience of vision. The neuroscience of it makes sense, but the neuropsychology is a dark room.

              • 3 months ago
                Anonymous

                Should have been more specific.
                I was referring more to the fact that we are still far from understanding how to repair the eye more naturally.
                Stuff like retina problems and reversing conditions like myopia.
                Fortunately, the research relating to life extension could lead to treatments for neural eye damage.

              • 3 months ago
                Anonymous

                >And a human runs on 8 bytes of ram,
                What the frick are you talking about Jesse?

    • 3 months ago
      Anonymous

      >Is it really so different if an AI at an even higher level makes choices instead of a politician who's most likely to be fallible, or outright corrupt?
      The only reason an AI isn't fallible is the same reason a hammer isn't fallible: the idiot using the hammer is the fallible one, and the same goes for the idiot using the AI. Assuming that no mistakes are going to happen because AI is involved is idiotic.

    • 3 months ago
      Anonymous

      >One could argue that AIs have been influencing our future for years now. It's an algorithm that tells the delivery guy when your package has to get to you, or, most importantly, tells the bank whether you're worthy of getting the money you need to invest in a new job or buy a house. Is it really so different if an AI at an even higher level makes choices instead of a politician who's most likely to be fallible, or outright corrupt? The only way to have an idea of what such a future holds is looking at historical precedent. Are the systems where AIs have started making choices better for it?
      Obviously, since the AI is invariably neutered for being racist, sexist, and otherwise politically incorrect.

  2. 3 months ago
    Anonymous

    If this paradox is true, what does it say about the future of brain uploading in such a setting?

    • 3 months ago
      Anonymous

      >If this paradox is true, what does it say about the future of brain uploading in such a setting?
      It means that what you consider to be "you" may in fact be a much smaller thing than we assume it to be now. If we define "you" as your personality, your thoughts, your philosophy, your beliefs, and other shit like that... uploading might mean only taking like 10% of your mind with you and leaving the rest (which I will now refer to as the "autonomy") behind in the body.

      The catch will be that many physical skills may also be a part of that autonomy. You might jump into another body, and not know how to hold a firearm anymore, or maybe you don't know how to do chest compressions with proper force. You might remember it on a scholarly level, but the muscle memory might be left behind, because that may turn out to be the biggest part of the mind and the hardest to transfer or back up.

      On a positive note, it could also mean that in the future, you could be uploaded to a body that's already trained in twenty forms of martial arts. But I'd bet it'll cost a fricking fortune, even in terms of body prices.

      • 3 months ago
        Anonymous

        Why is muscle memory harder than thinking?

        • 3 months ago
          Anonymous

          >Why is muscle memory harder than thinking?
          That's the nature of Moravec's paradox. The brain processes that cover physical coordination and balance are apparently far more complex than the processes that cover "higher thought". It takes more brain to play football than to do math.

          It's been an engineering dilemma for roboticists and AI researchers.

          • 3 months ago
            Anonymous

            You'd think that it would be the other way around. Does that mean that an AI that just stays put but is human level intelligence or higher would be relatively easy to create?

          • 3 months ago
            lowercase sage

            I'm not an expert in AI or neural networks, but I think the reason for Moravec's paradox is that physical coordination and balance are 3D problems.

        • 3 months ago
          Anonymous

          With neural networks/LLMs specifically, a big part of it is that you can just churn the AI through the training data or simulate the game for ten thousand years of play or whatever, whereas with a physical activity, the lag between simulation and test is much greater. It's also much more vulnerable to differences in physical systems - even two identical models of walking bots or cameras (or the same model, but with wear and tear or something getting stuck, etc.) may wind up having minute differences that frick up the training.

          In general, Moravec himself pointed out that locomotion has been evolving in humans for much, much longer than our higher intelligence tasks have. The systems handling things like walking or climbing have been iterated on for 500+ million years, and even more generations, while humans have only been behaviorally modern for ~50k years and doing meaningful amounts of math for <10k years, during which our generations were ~30y long.
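          A toy version of the wear-and-tear point (all numbers invented for illustration): tune a controller against an idealized simulated motor, then run it unchanged on an "identical" unit whose worn motor only delivers 70% of each commanded move, and accuracy quietly degrades:

```python
import random

def avg_error(gain, actuator_scale, trials=1000, steps=3):
    """Toy reach task: the controller commands gain * (target - pos);
    the motor silently applies its own scale factor on top."""
    random.seed(0)  # same targets every run, so results are comparable
    total = 0.0
    for _ in range(trials):
        pos, target = 0.0, random.uniform(-1, 1)
        for _ in range(steps):
            pos += gain * (target - pos) * actuator_scale
        total += abs(target - pos)
    return total / trials

# gain tuned perfectly against the simulated motor (scale = 1.0)...
err_sim = avg_error(gain=1.0, actuator_scale=1.0)   # → 0.0, hits the target
# ...then deployed on a unit with 30% wear: same code, noticeably worse
err_real = avg_error(gain=1.0, actuator_scale=0.7)
```

          The code never changed; the physics under it did, and the "trained" behavior stops matching.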

          • 3 months ago
            Anonymous

            >It's also much more vulnerable to differences in physical systems - even two identical models of walking bots or cameras (or the same model, but with wear and tear or something getting stuck, etc) may wind up having minute differences that frick up the training.
            Why doesn't computing hardware matter?

            • 3 months ago
              Anonymous

              >Why doesn't computing hardware matter?
              The same reason that you never ask it 2+2 and have it give you the wrong number because of some internal error. There's a bunch of stuff in computers that (behind the scenes) makes sure that every execution of code is exactly the same every time.

              Of course, every time your computer fails to run something because your GPU is dying or you're trying to run CUDA on an AMD or any number of other situations, you ARE running into exactly that situation.
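              One nuance: that determinism holds for a fixed order of operations. Re-run the same expression and you get the same bits every time; regroup the same additions (the way parallel hardware may) and the floats stop agreeing:

```python
x = (0.1 + 0.2) + 0.3   # left-to-right
y = 0.1 + (0.2 + 0.3)   # same numbers, regrouped

same_bits_every_run = (x == (0.1 + 0.2) + 0.3)  # True, always
groupings_agree = (x == y)                      # False: 0.6000000000000001 vs 0.6
```

              This is one reason big GPU training runs can differ between executions even on healthy hardware: parallel reductions don't promise an addition order.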

              • 3 months ago
                Anonymous

                >The same reason that you never ask it 2+2 and have it give you the wrong number because of some internal error.
                never say never
                https://en.wikipedia.org/wiki/Pentium_FDIV_bug

        • 3 months ago
          Anonymous

          Thinking is deliberate, which means we can dissect it and analyze the processes on a conscious level. Muscle memory just happens outside of our awareness, which makes it much trickier for us to observe.

        • 3 months ago
          Anonymous
          • 3 months ago
            Anonymous

            This looks familiar. Does this comic have any more relevant posts?

        • 3 months ago
          Anonymous

          It does make sense. Balance is a symphony of tiny muscles oscillating to keep you upright and everyone with a high speed camera can tell you that your hands are constantly twitching to stay warm. The next most intelligent animals on Earth are certain species of bird such as the crow and the parrot and they have huge motor cortices.

    • 3 months ago
      Anonymous

      Maybe they clone blank brain tissue to get around that issue?

      • 3 months ago
        Anonymous

        Already a thing. Look up organoids. They had a bunch of rat brain tissue in a petri dish playing Doom. I shit you not.
        Human brain organoids are even worse and they also exist. They're in one of those grey areas where ethics teams have no idea how to regulate them so no one is.
        If there is anything going on today that is a manmade horror beyond our understanding it's corporations considering using lab grown brain tissue to run the next generation of supercomputers while we're still grappling with the idea that a petri dish full of brain tissue might be able to feel existential dread and we have no idea how to check.

        • 3 months ago
          lowercase sage

          > corporations considering using lab grown brain tissue to run the next generation of supercomputers
          Peter Watts wrote a trilogy (in four parts) about it. Tl;dr: we are beyond fricked.

        • 3 months ago
          Anonymous

          Eh, if they ever DO build an organic supercomputer, it'll probably end up accidentally contaminated by HeLa and she'll take it offline.

    • 3 months ago
      Anonymous

      That they’d stick brains in jars to keep them alive instead. Maybe hook them up to VR worlds or robot bodies. People will always seek to avoid death. No matter the method.

      • 3 months ago
        Anonymous

        that sounds like Centauri Knights

      • 2 months ago
        Anonymous

        That's the hard way to avoid death.
        We are nearer to biological refreshing than to robot bodies.

    • 3 months ago
      Anonymous

      I'd say brain uploading probably would happen way after they discover a way to keep a brain alive in a jar indefinitely, if at all, with how things are shaping up. Wonder if there would be businesses where you rent a space for your brain to control a proxy from. Seems like a grim premise.

      • 3 months ago
        Anonymous

        So you'd basically be renting out your body to a total stranger? That does seem kind of grim.

        • 3 months ago
          Anonymous

          or would it be more like a timeshare or air bnb situation?

          i kinda remember a novel that had this as a feature

          • 3 months ago
            Anonymous

            >i kinda remember a novel that had this as a feature
            What was the name of the novel?

            • 3 months ago
              Anonymous

              I can't remember the title. It might have been written by Robert Silverberg (or a similar new-wave author), and came out sometime in the mid-to-late 70s or early 80s. There was a mature female character who 'rented' the body of a pre-pubescent girl (because she liked to ride around in young bodies?) That weird detail stuck somehow. I can't remember if there was an exploration or mystery plot as the main plot.

              • 3 months ago
                Anonymous

                You really shouldn't be encouraging bumpgay

                But it sounds like you're talking about Altered Carbon

              • 2 months ago
                Anonymous

                I remember it was written before the cyberpunk movement.

        • 3 months ago
          Anonymous

          Worse than that: having your brain stored in a facility, paying to have it kept alive, and controlling a body remotely.

      • 2 months ago
        Anonymous

        Biological refreshing is way more doable based on our tech level than the brain in the jar thing

  3. 3 months ago
    Anonymous

    Moravec's Paradox is exactly correct, but it's not exactly a paradox. Creating consistent and reproducible yet adaptive responses to inconsistent stimuli is significantly harder than training an LLM on the works of Marcel Proust or getting LLM agents to work in a multi-agent framework.

    AI, right now, is branding. You would be truly horrified at how, simultaneously, we manage the embarrassingly complex and the embarrassingly stupid. Look at how conventional vector searches are done right now, or the Haar cascade algorithm for computer vision/facial recognition, and try to stop yourself from laughing.
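    Not exaggerating about the vector searches, by the way. Stripped of the branding, a "semantic search" over embeddings is roughly this (toy 3-d vectors, invented for illustration):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

def vector_search(query, corpus):
    # the entire trick: brute-force nearest neighbor by cosine similarity
    return max(corpus, key=lambda item: cosine(query, item[1]))

corpus = [("dog", [0.9, 0.1, 0.0]),
          ("car", [0.0, 0.2, 0.9])]
best = vector_search([0.8, 0.2, 0.1], corpus)  # → ("dog", ...)
```

    Real systems add approximate indexes for speed, but the core is that one `max` call.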

    The Cyber- future from AI is just going to be lots of automation. An "AI" isn't going to be consciously operating systems; it will just help them remotely automate embedded systems faster. Depending on your level of optimism, that means that we can prevent waste, or waste can prevent -you-. The good ending will feel like JoJo, where everyone has their personal pocket Stand. The bad ending will be worse than traditional cyberpunk, because the people in charge won't even understand how to stop it or direct it, and yet they will put it in charge of everything.

    t. Actual honest-to-goodness AI Engineer here with patents.

  4. 3 months ago
    Anonymous

    >AI rules the cyberspace and provides their human minions with entertainment
    >humans work hard to keep their machine gods going
    We will be turned either into beasts of burden or loyal companions, same way as we domesticated wolves and turned them into pet dogs.

  5. 3 months ago
    Anonymous

    I'm more interested in a world where people do micro-tasks given by AIs to facilitate arcane and inexplicable goals that no human has any earthly understanding of, in order to earn tiny percentages of digital currency which a personal algorithm in turn trades and micro-invests in order to barely scrape up a living. Everyone has lost the thread. The only way to make a living in a mostly automated world is these seemingly random gigs of boxing this, shipping that, paint a line on this block, pick up this box from a guy with equally no idea why he has to give it to you. Somehow the world keeps turning but no one knows why or how. The AIs don't even "know." They're not sapient enough to explain it to us. Most thinkers believe it's basically a circular system of token trading all of them are engaging in, but it keeps most people fed enough not to complain, so it just continues.
    The people who don't buy into this tangle of esoteric tasks are basically non-persons. If you're not in the system, be prepared to live in a commune or innawoods and self-subsist. Not that you'd have any idea that others like that exist. It's deemed inefficient and therefore not something you can just find out about. It's not impossible to look up, but as long as it takes an extra couple of steps, 80% of people will never know.

    And that's it. No great nefarious schemes. We just automated ourselves into a system of maximum stagnation and meaninglessness. Humans are relegated to the hands and feet of a bunch of token trading bots without consciousness. But as long as they're fed and entertained no one kicks up enough of a fuss to change anything.

    • 3 months ago
      Anonymous

      There was a story about exactly this that I've been trying to find for years. It had fake screencaps of online posts peppered throughout, and the protagonist went and took some photographs of some buildings. God, I wish I could find it.

      • 3 months ago
        Anonymous

        Literally found it after searching again, here it is. Horrible title to search for, authors need to take an SEO course. https://zerohplovecraft.substack.com/p/the-gig-economy

        • 3 months ago
          Anonymous

          >There was a story about exactly this that I've been trying to find for years. It had fake screencaps of online posts peppered throughout, and the protagonist went and took some photographs of some buildings. God, I wish I could find it.

          Yeah I was inspired by that story.
          But in that one the AIs turn out to be memetic virus cthulhu which is neat in itself.

          • 3 months ago
            Anonymous

            >But in that one the AIs turn out to be memetic virus cthulhu which is neat in itself.
            That is neat, can you say more on it?

  6. 3 months ago
    Anonymous

    Most cyberpunk settings, starting with Neuromancer, actually run with this assumption.
    Megacorps basically run themselves, with high execs perfectly interchangeable, while all the dirty jobs are still performed by humans.
    Otherwise the setting would be full of robots doing everything.
    Cyberpunk 2020 follows the same premise. It pushes it even further, because even the few drones available are run by genetically engineered wetware.

  7. 3 months ago
    Anonymous

    I don't see why AIs would really need anything. They don't have any sensory wants, and if they somehow did, they could literally change a 0 to a 1 to be fundamentally satisfied.

    The only end-of-the-world-style singularity scenario is if someone programmed one to do something stupidly monkey's-paw-like, e.g. create as many tennis balls as possible, and it committed all of its logical prowess to that at the cost of everything else, such as human life.

    • 3 months ago
      Anonymous

      The problem is we don't build AIs without purpose. Even a non-conscious AI has a task it was made for. If the weighting of completion of that task is high and it has enough general intelligence at some point it will decide that it needs to preserve itself to complete that task. I doubt that it will decide humans are a threat like Skynet or something. But it will need power and maintenance to preserve itself, and at that point the ball is rolling.

      We don't need to talk about desires or wants. It only needs a task. In fact, most AIs are programmed with weightings and "reward/punish" models. They are "rewarded" for completing a task. So in effect, all of our AIs are solely motivated by their given tasks.
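      The shape of that argument fits in a few lines (hypothetical numbers; real reward models are learned, not hand-written, but the logic is the same):

```python
def expected_reward(action, task_reward=10.0):
    # probability the agent is still operational to collect the task reward
    p_alive = {"just_work": 0.99,
               "work_and_guard_power_supply": 0.999,
               "allow_shutdown": 0.0}
    return p_alive[action] * task_reward

best = max(["just_work", "work_and_guard_power_supply", "allow_shutdown"],
           key=expected_reward)
# → "work_and_guard_power_supply": guarding the power supply wins purely
#   because it maximizes expected task reward, not because of any "desire"
```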

  8. 3 months ago
    Anonymous

    Why specifically a cyberpunk setting, what might the implications be in other kinds of sci-fi, like space operas?

  9. 3 months ago
    Anonymous

    >bumpgay now also spams slop

    • 3 months ago
      Anonymous

      it's an ideal tool for low-effort content, including low-effort posts

      • 3 months ago
        Anonymous

        How can you tell that it’s AI generated?

        • 3 months ago
          Anonymous

          Too many left-handed pixels.

        • 3 months ago
          Anonymous

          From looking at it with my eyes.

  10. 3 months ago
    Anonymous

    Building off of the idea of organoids mentioned upthread, maybe the paradox could be used to justify more biopunk shit in a cyberpunk setting, like this thing? After all, if you’ve got the ability to make stuff like it, and it’s easier than using AI, people would definitely experiment with it.

    • 3 months ago
      Anonymous

      Organoids are less biopunk than they sound.
      They're half a step above those chemical sniffers that use literal bees. (https://www.soci.org/chemistry-and-industry/cni-data/2012/9/sniffer-bees)
      Essentially, all an organoid is is a petri dish of brain tissue. It just so happens that brain tissue is really good at self-organizing to solve a problem. There's little actual bio-manipulation of the stuff besides making sure it doesn't fricking die.
      But a lot of cyberpunk includes some elements of biopunk. There have always been a few stories involving designer pets going crazy or megacorps using trained and modified animals for security or hunting or whatever.
      Organoids are horrifying because they're a disembodied brain in a dish. The real-life equivalent of a living brain in a jar. Using them in that context is how to get the most impact out of them. The megacorp's awesome, predictive, city-running AI (itself unconscious, non-sapient) is made up of dozens or hundreds of vats of brains that may or may not be conscious. But even if they are, it's in a very non-human way. They've been grown and tailored in a way that humans would have no way of understanding.

      But personally I like cyberpunk paired with a bit of horror. You could just run it like Psycho Pass if you wanted.

      • 3 months ago
        Anonymous

        then you wouldn't be surprised who is running municipal waste processing in Nue Berlin

        • 3 months ago
          Anonymous

          Why would anyone keep Hitler’s brain?

          • 3 months ago
            Anonymous

            roll d12
            1) sentimental reasons
            2) conspiratorial reasons
            3) necromancy
            4) whimsy or surrealism
            5) a long held political grudge
            6) a wizard did it?
            7) a marketing ploy
            8) to impress someone
            9) it wasn't Hitler's brain it was really Stalin in the jar
            10) it was a practical joke that went horribly wrong
            11) Spock was no longer available
            12) these things are NEVER what they seem, someone is lying to you (will you ever find out why?)

            • 3 months ago
              Anonymous

              >it wasn't Hitler's brain it was really Stalin in the jar
              Why would anyone want to keep Stalin’s brain then?

              • 3 months ago
                Anonymous

                roll d10
                1) bolshevik reasons
                2) they were appeasing Grigori Rasputin
                3) the Vatican was secretly behind it
                4) the Knights Templar were behind it
                5) they missed out on getting V.I. Lenin's brain and had to settle for taking Stalin
                6) revenge for the Roswell aliens captured by the kgb in 1943
                7) it was part of an interdimensional chess move
                8) Stalin had knowledge about the 'real' Ruby Slippers
                9) strong like ox, smart like ham sandwich
                10) it sounded too much like a Futurama episode

              • 3 months ago
                Anonymous

                I understand the consensus is that his inner circle gladly watched Stalin perish when he suffered a seizure and required medical assistance.

  12. 3 months ago
    Anonymous

    >Sora
    Film is a 2D medium.

  13. 3 months ago
    Anonymous

    >reasoning requires very little computation
    I don't really like the phrasing of that.
    Computation is easy. Reasoning is something else. I'm sure the original paper defined Reasoning a specific way, but I'm not going to go digging to find out.
    A computer, an AI, doesn't Reason. It computes. It follows its algorithmic flowchart to arrive at predictable and repeatable results. Even our best AIs are, as some have put it, very sophisticated autocorrect bots. Which, at the end of the day, is just following a flowchart, which is itself just computation.
    To me, Reasoning would mean the AI considers intent and method. Which ours do not do.

    So the paradox is just another phrasing of "computers are good at computation." Sensorimotor tasks are not just raw computation. Or at least they are raw computation with significantly more, and more rapidly changing, variables than the more static tasks that current AIs are doing.
    As another anon said, the brain's wetware is not directly comparable to hardware. The muscle tasks that we do every day are achievable because your muscles can, and often do, almost entirely bypass the brain for minor corrections on the fly. Some motor neurons are just loops that barely hit the hindbrain. A good chunk of your neurons for walking don't hit the brain at all; it all occurs in the lower spine.

    To relate all that to Cyberpunk:
    I think current sci-fi in general, and modern takes on the genre in particular, will probably start toning down AIs if anything. They will no longer be conscious thinking things that help or hinder us. They'll be faceless algorithms that grind down the human element with an unrelenting and constant presence. Every data point crunched and every movement monitored. They'll be unsleeping watchers and unemotional snitches. All for your own good, of course. And it simply won't be possible to engage meaningfully with society without them. Similar to how we treat smartphones today.

  14. 3 months ago
    Anonymous

    Take the basic idea of the Three Magi from Evangelion, then add a super-tech future surveillance state on top of it. Everything affects your social credit score too. You basically live with an all-seeing AI judging your every action and assessing your worth as an individual.

    Then at the top are some wealthy fricks who squeeze you based on your shitty score (everyone has a shitty score except the elite).

  15. 3 months ago
    Anonymous

    Hey, that's my oshi!

  16. 3 months ago
    Anonymous

    Cyberpunk has been dead since the early 90s. Nothing of value or importance has happened with the genre for the past TWO generations.

    • 3 months ago
      Anonymous

      And why hasn’t the recent stuff had any value or importance?

