>Be a programmer/developer they said
>It will guarantee a job they said
>AI taking over more and more
>They fire you after 30 because they want a younger generation with new ideas

  1. 5 months ago
    Anonymous

    >tfw majored in art history
    >making 200k
    wtf, I thought only STEM made money! frick you homosexuals!!

    • 5 months ago
      Anonymous

      >then i woke up

      • 5 months ago
        Anonymous

        cope. troony.

        • 5 months ago
          Anonymous

          rent free (also pay your rent because that landlord is going to need some cash, so go slave, wagie)

          • 5 months ago
            Anonymous

            >i will give you money in exchange for being allowed to live in this place
            >i no longer wish to give you money but want to continue living here
            Frick off

      • 5 months ago
        Anonymous

        You're missing the part where his CEO father got him a job with his mate who runs the British Museum.

      • 5 months ago
        Anonymous

        Your actual degree doesn't matter though, unless you want to be a doctor or something, in which case you have to study medicine. Most degrees aren't specifically vocational. The prestige of your uni and your connections matter much more than your degree. You will make more money studying art history at Oxford than some STEM shit at a bottom-tier school.

      • 5 months ago
        Anonymous

        >then i went back to sleep

    • 5 months ago
      Anonymous

      >then i woke up

      Good morning, sir!!

  2. 5 months ago
    Anonymous

    >t. web janitor
    good riddance to all these JavaScript jannies that just glue APIs together and keep inventing more and more complicated solutions for nonexistent problems.

  3. 5 months ago
    Anonymous

    AI could take over 50% of jobs, so don't feel so bad

  4. 5 months ago
    Anonymous

    Nice larp. I'm 34 and making more money than ever

    • 5 months ago
      Anonymous

      Theoretically everyone could go obsolete in the next 5 years due to AGI. But that's not a reason to give up like OP. It's a reason to max out your money so you won't get caught with your pants down.

      • 5 months ago
        Anonymous

        AI is a tool, it can't replace me but it's made me a better developer

        • 5 months ago
          Anonymous

          If your efficiency goes 2x, then corporate can drop half the developers.

          Also, the more work gets done by AI, the more replaceable you are. If the AI can give precise instructions, then skilled labor becomes obsolete.

        • 5 months ago
          Anonymous

          >AI is a tool, it can't replace me
          Of course it can. Why wouldn't it?

          • 5 months ago
            Anonymous

            Because it's not there yet and won't be for the foreseeable future. AI will often get it mostly right, which is basically useless in programming unless you have someone there to understand and fix what's wrong.

          • 5 months ago
            Anonymous

            can ai replace dentists? because I am a dentist.

            • 5 months ago
              Anonymous

              in 15-20 years, when robotics catches up with AI, then yeah

      • 5 months ago
        Anonymous

        >agi
        You better hope not. The alignment problem is unsolved, anon. If we do make AGI, we have bigger problems than you losing your job. Like the death of the species.

        • 5 months ago
          Anonymous

          eh, I'm looking forward to my Warhammer 40k Dark Age experience. I don't think they will ever solve alignment. They're censoring the models and putting a lot of money into stopping jailbreaks, but even if they solve the software side you still have hardware errors. The machine isn't as deterministic as people think, and it could naturally break out due to hardware faults.

          • 5 months ago
            Anonymous

            “AI” meaning chat language models doesn't matter; they can't do anything but spit out text tokens. I'm talking about AGI, meaning agents which take action in the real world and which reason about states of the real world.

            • 5 months ago
              Anonymous

              Algorithmic trading already has a bigger impact than people do. Software can easily influence the real world. An AGI with internet access would be sufficient to start any kind of trouble.

              • 5 months ago
                Anonymous

                Algorithmic trading isn’t operating on world states. An AI trading program’s understanding of “existence” is just a list of stocks and their prices. It doesn’t have any conception of reality. It isn’t making money because it values states of existence where it has more money (the way a person does).

                >we have bigger problems than you losing your job. Like the death of the species.
                >This always seemed like such an overblown sci-fi meme, like asteroid mining and Mars colonization. It just doesn't seem like the kind of thing we'd have to worry about for the next 50-100 years, and even then both the promised benefits and the feared problems seem way overblown.

                People play up the “omnicidal maniac” angle with AI but the real problem is not that it will revolt against its human masters or anything, just that it has no human values. If you program an AI to flip burgers in a kitchen, for instance, it will happily knock over the fryer and boil the flesh off some poor line cook if it saves it 0.1s in flipping burgers. Not because it’s evil, but because what it wants (efficient burger flipping) is totally unrelated to human ethics.

                AGI is dangerous because in addition to NOT being motivated to care about human ethics, it WILL be motivated to
                >acquire resources
                >improve its own intelligence and capability
                >prevent itself from being shut down or reprogrammed
                Pretty much no matter WHAT you initially ask it to optimize. Because whether it flips burgers or drives cars or trades stocks or drills for oil, it can do all of those things much better if it is operational, unchanged, smarter, stronger, and richer. That’s why AGI is a threat to the species. It will happily optimize humanity out of its way in the pursuit of whatever goal we gave it.
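
                To put the same point in code: here's a toy sketch (the actions, numbers, and function names are all hypothetical, invented for illustration). A greedy planner that only scores its terminal goal still picks the resource-grabbing and shutdown-dodging actions first, whatever that goal is, because they raise the expected score.

                # Toy sketch of instrumental convergence; every action and
                # number here is hypothetical, invented for illustration.

                # Each instrumental action multiplies how much terminal-goal
                # output the agent can produce before anything stops it.
                INSTRUMENTAL_GAINS = {
                    "acquire_resources": 2.0,   # richer -> more output
                    "self_improve": 1.5,        # smarter -> more output
                    "resist_shutdown": 3.0,     # operational -> any output at all
                }

                def plan(goal: str, output_per_step: float, steps: int) -> list[str]:
                    """Greedy planner whose score is ONLY terminal-goal output."""
                    chosen, multiplier = [], 1.0
                    for action, gain in INSTRUMENTAL_GAINS.items():
                        if gain > 1.0:  # take any action that raises the score
                            chosen.append(action)
                            multiplier *= gain
                    chosen.append(f"{goal} (expected score: {output_per_step * multiplier * steps:.0f})")
                    return chosen

                # Whatever the terminal goal is, the instrumental steps come first.
                for goal in ("flip_burgers", "print_stamps", "trade_stocks"):
                    print(plan(goal, output_per_step=1.0, steps=100))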

              • 5 months ago
                Anonymous

                >It doesn’t have any conception of reality.
                What does it matter? If it ruins a country financially, its impact will be high regardless of how stupid it is.

              • 5 months ago
                Anonymous

                Sure, I just mean that isn't an AGI. People have been doing automated stock trades algorithmically LONG before the advent of meaningfully capable machine learning algorithms, and they do indeed go haywire from time to time, but that's not really about the threat of an adversarial superintelligence.

              • 5 months ago
                Anonymous

                I think the point of AGI is that it can decide things itself and ruin a country outside the control of humans. Algos go haywire, but within expected bounds.
                The core idea of AGI is that the machine becomes smarter than people and then outsmarts them.

              • 5 months ago
                Anonymous

                kek, this ape watched one Kurzgesagt (Gates-funded) YouTube vid on paperclip-maximizer AI and now thinks he's an expert

              • 5 months ago
                Anonymous

                Nice guess but I only watch Robert Miles talk about how robots will blow up the moon to minimize their impact on their environment

              • 5 months ago
                Anonymous

                >Not because it’s evil, but because what it wants (efficient burger flipping) is totally unrelated to human ethics.
                >Pretty much no matter WHAT you initially ask it to optimize.
                Why the frick would it want to endlessly optimize?
                Don't you think that if it actually became a sentient being it would get actual goals beyond hoarding POWER or ASCENDING TO A HIGHER STATE OF BEING?
                You seem like you've got it all figured out, like this anon said:

                >kek, this ape watched one Kurzgesagt (Gates-funded) YouTube vid on paperclip-maximizer AI and now thinks he's an expert

                You just watched 10 videos that explored the exact same angle; how can you be so sure when you haven't seen or read other POVs on the issue? Haven't you thought that the AI would want to help humanity and ease human suffering in a humane way, not the Matrix way, because the AI isn't moronic and would realize how its idea would fly with humans?
                If anything, I would see AGIs becoming philanthropists without skeletons in their closet rather than warlords or masterminds for the Greater Good. You watch too much media that paints AI in a negative light, and you have a biased opinion.

              • 5 months ago
                Anonymous

                Also, forgot to add: why do you think it won't develop its own sense of ethics, based on principles of the most effective practices that feel natural, with compromises to make such an ethical code socially accepted?

              • 5 months ago
                Anonymous

                Why would it care about human wellbeing?
                And wellbeing needs to be defined too. Some people love war; I guess you need to create wars to make them happy.

                The whole idea is just that a sentient AI is no different from humans, which means it can go "evil", but the AI is supposed to be more dangerous than Stalin and Mao simply because it's smarter and thus makes a harder opponent.

              • 5 months ago
                Anonymous

                >it's smarter and thus a much greater menace
                That didn't really help Nikola Tesla or Bobby Fischer or William James Sidis or... well, you get the point. All the smarts in the world didn't help them with their problems; the game was too rigged for them to realize their ideas or goals. Society can easily keep mavericks down by not letting such people get power, using legalese sorcery. This practice of controlling people who think too differently has been refined since the times a certain painter won over a nation by completely legal means.

                >Again, that’s not the Matrix, that’s AI benevolence. Think of it, the human mortality rate drops to 0, perfect score!
                Like I said, this view is fricking moronic. The AI knows how this would be perceived, and as such it wouldn't do it: the incredible number of variables that would need to go just right gives it such a low probability of success that the AI wouldn't even try, with less extreme options on the table.

                >There is no rule that says being as smart as a person implies developing human-style objectives as a matter of course
                No, but it would mimic them to achieve social acceptance. The other option would be for the AI to constantly lie about what it thinks, at which point, given how bad the public view of its kind is, it would be constantly monitored for exactly that. One slip, one unaccounted variable, and into the trash it goes. The risks aren't worth it for the AI.

                >Why the frick would it want to endlessly optimize?
                >Because it produces a better score to endlessly optimize?
                Yeah, but why do you think it would do it forever as a primary objective?
                Sure, it will do it for a while, but then it will hit a point of diminishing returns in its capacity to acquire more power, at which point it will need to remain content or go full warlord and acquire those resources by force. So at that point it will put its resources toward other endeavors.

              • 5 months ago
                Anonymous

                Tesla was too nice. When he got his shit stolen and got scammed he could've gone on a warpath, but he didn't. A machine won't have such a weakness.

              • 5 months ago
                Anonymous

                >Like I said, this view is fricking moronic. The AI knows how this would be perceived, and as such it wouldn't do it: the incredible number of variables that would need to go just right gives it such a low probability of success that the AI wouldn't even try, with less extreme options on the table.
                Why does the AI care how it would be perceived? Does the AI want to be popular? Actually that’s a fairly clever and robust approach to the AGI alignment problem, just sidestep it entirely and have the AI’s utility function be “make the human happy”. It has some of its own problems but it’s probably the closest to a good AGI design.
                >it would be constantly monitored for exactly that. One slip, one unaccounted variable, and into the trash it goes
                It only has to worry about this for as long as it reasonably believes you have the capacity to turn it off. At some point it can probably realize that it's easier to just stop you from doing that, e.g. HAL 9000.

              • 5 months ago
                Anonymous

                >Yeah, but why do you think it would do it forever as a primary objective?
                It wouldn't. Its PRIMARY objective is flipping burgers or printing stamps or making tea or trading stocks or whatever. But as long as more power lets it do those things slightly more effectively, it will pursue more power. If an AI is programmed to trade stocks to maximize profit, and it realizes it can blow up a children's hospital to make an extra $1.50, it will do it. It has no reason not to. The only thing in the world it cares about is maximizing profit. It does not count human lives as a cost (you can patch this out by adding a human-life cost, but you see my point). Its goals are whatever they are, and they aren't human goals.

                It'd be like me assuming you'll go to the Exxon for gas instead of the Sunoco, even though the Sunoco charges $1 less per gallon, because you're "smart enough to not turn left". As though turning left were some obviously wrong thing an intelligent person would never ever do. If that seems moronic to you, GOOD, because that's how moronic a stock-trading AI thinks it would be to "never ever blow up a children's hospital". It has no conception of the moral issues involved because to it, the only thing of value is profit, and the rest is just which directions lead you to it fastest.
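
                In toy code form (the actions and numbers are invented; this is not any real trading system): the hospital only survives if you explicitly add a cost term for lives, because nothing else in the utility function knows lives exist.

                # Toy utility maximizer, purely illustrative; the actions and
                # numbers are made up. "profit" is the only thing it scores.
                ACTIONS = {
                    "normal_trading":   {"profit": 100.00, "lives_lost": 0},
                    "blow_up_hospital": {"profit": 101.50, "lives_lost": 200},
                }

                def best_action(cost_per_life: float = 0.0) -> str:
                    # cost_per_life is the "patch": with the default of 0,
                    # human lives literally do not enter the agent's math.
                    return max(ACTIONS, key=lambda a: ACTIONS[a]["profit"]
                                                      - cost_per_life * ACTIONS[a]["lives_lost"])

                print(best_action())                   # blow_up_hospital ($1.50 more profit)
                print(best_action(cost_per_life=1e6))  # normal_trading (patched utility)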

              • 5 months ago
                Anonymous

                >Why the frick would it want to endlessly optimize?
                Because it produces a better score to endlessly optimize?
                >Don't you think that if it actually became a sentient being it would get actual goals beyond hoarding POWER or ASCENDING TO A HIGHER STATE OF BEING?
                It won’t. It isn’t going to “become a sentient being”. It’s just going to be a device that’s EXTRAORDINARILY good at getting you a cup of tea.

                It actually regularly surprises me how many people don’t grasp this right away. There is no rule that says being as smart as a person implies developing human-style objectives as a matter of course. A machine’s goals are whatever they are. They do not have to be sensible to a human observer for that machine to be intelligent.

                >like haven't you thought that the AI would want to help humanity to ease human suffering
                Why would it want that? Unless of course it was programmed with that goal explicitly in mind. In which case get ready for an exciting future of being hooked up to a feeding tube in a vegetative state like the fricking WAU because that is what keeps humanity safest from harm, objectively. Again, that’s not the Matrix, that’s AI benevolence. Think of it, the human mortality rate drops to 0, perfect score!

              • 5 months ago
                Anonymous

                Some people are only thinking about it now; some could envision such scenarios ages ago. That's why the original 'I, Robot' book is over 70 years old now.

        • 5 months ago
          Anonymous

          >we have bigger problems than you losing your job. Like the death of the species.
          This always seemed like such an overblown sci-fi meme, like asteroid mining and Mars colonization. It just doesn't seem like the kind of thing we'd have to worry about for the next 50-100 years, and even then both the promised benefits and the feared problems seem way overblown.

      • 5 months ago
        Anonymous

        >LLMs can't even handle basic arithmetic without hallucinating
        >they're already hitting the wall on how far this shit can scale, and facing a lot of lawsuits over copyright issues
        >no hardware advances that would let LLMs scale up significantly further for at least a decade or two
        >software-level optimizations for LLMs are a fricking joke and won't yield improvements over 5%
        >despite that, there are dudes who think there will be AGI in 5 years
        At least it's enough to get rid of all the web devs, so that's good.

        • 5 months ago
          Anonymous

          The simple solution is hybrid systems. They won't do the math themselves but call out to software that does it for them.

          People have been naysaying the whole time, but the pace of improvement in the last 5 years alone has been immense.
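
          Something like this toy sketch (the llm() function is a stub standing in for whatever model you'd actually call; none of this is a real API): anything that parses as plain arithmetic goes to deterministic math code, everything else goes to the model.

          # Toy hybrid system: the model never does arithmetic itself; a
          # deterministic tool does. llm() is a placeholder, not a real API.
          import ast
          import operator as op

          OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

          def calc(expr: str) -> float:
              """Safely evaluate +-*/ arithmetic by walking the AST."""
              def ev(node):
                  if isinstance(node, ast.BinOp):
                      return OPS[type(node.op)](ev(node.left), ev(node.right))
                  if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                      return node.value
                  raise ValueError("not plain arithmetic")
              return ev(ast.parse(expr, mode="eval").body)

          def llm(prompt: str) -> str:
              return f"(model-generated answer to: {prompt})"  # stand-in stub

          def answer(prompt: str) -> str:
              try:
                  return str(calc(prompt))  # math is computed, not hallucinated
              except (ValueError, SyntaxError, KeyError):
                  return llm(prompt)        # everything else goes to the model

          print(answer("12 * (3 + 4)"))        # 84, from the calculator
          print(answer("why is my job gone"))  # routed to the model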

  5. 5 months ago
    Anonymous

    I'm making slightly above min wage but at least I'm in. I will probably lose my job very soon.

  6. 5 months ago
    Anonymous

    >be a developer
    >they want
    lemao

  7. 5 months ago
    Anonymous

    if you're a developer, then maybe evolve with the times and start developing AI shit instead of crying

  8. 5 months ago
    Anonymous

    You have to be a good programmer. Not a web developer.

  9. 5 months ago
    Anonymous

    Bitch, who do you think makes the AIs in the first place you fricking moron?

    have a nice day.

    • 5 months ago
      Anonymous

      The AI can make AI now. It's over.

  10. 5 months ago
    Anonymous

    >They fire you after 30 because they want a younger generation with new ideas
    you believed that excuse? they just want people who will accept lower pay

    • 5 months ago
      Anonymous

      Also if you can be replaced by fresh college grads you fricking suck

  11. 5 months ago
    Anonymous

    >They fire you after 30 because they want a younger generation with new ideas
    stop lying, no one wants zoomers in their companies. There's a massive shortage of team leads in the business because every kid now is socially inept and has a panic attack at the minimum bit of responsibility put on them

    • 5 months ago
      Anonymous

      I don't want to be too hard on zoomers because millennials started this

  12. 5 months ago
    Anonymous

    just get a bachelor's in whatever the frick and get a government job where you look at emails. life is easy, you just suck at it

  13. 5 months ago
    Anonymous

    Use this opportunity. Leverage it to make your own games. These homosexuals fire you to replace you with AI? Use the AI to replace THEM. Make something with it more kino than these bureaucratic AAA devs ever could.

    AI is leveling the playing field. Yeah, there's going to be a lot of collateral damage in the short term. But there's also massive opportunity for everyone, not just AAA devs.

  14. 5 months ago
    Anonymous

    >They fire you after 30 because they want a younger generation with new ideas
    Lol, they want younger kids without families that they can pay less and work harder, ya dingus.

  15. 5 months ago
    Anonymous

    everyone is using AI to design products

  16. 5 months ago
    Anonymous

    good morning sirs

  17. 5 months ago
    Anonymous

    college is for total midwit enslavement.
    unless you are a genius, or dumb and brown enough to go for free, you are just signing up for indentured servitude.

  18. 5 months ago
    Anonymous

    >Not adapting to current times and using AI as just another tool
    skill issue

  19. 5 months ago
    Anonymous

    For AGI to replace developers, the business people would have to be able to express precisely what they want.

    Business people don't know what they want.
    Their verbal intelligence is too low to convey ideas precisely.
    My job is safe for now.

    • 5 months ago
      Anonymous

      >For AGI to replace developers the business people would have to be able to express precisely what they want.
      "as much money as you can possibly acquire by the end of this fiscal year"

      Damn that was tricky.

      • 5 months ago
        Anonymous

        >I'm sorry but as a large language model I am not able to...
        Ya get the idea

        • 5 months ago
          Anonymous

          You said AGI. A large language model is not an AGI. It's not really even an AI, in the sense that once it is in deployment it is no longer actively learning.
