The robots are coming. Restructure the economy. Go.
  • Just sex robot ai then?
    Big Business still wants a piece of the ai pie.
    Steam: Ruffnekk
    Windows Live: mr of unlocking
    Fightcade2: mrofunlocking
  • Yes, sex workers (oldest profession in the world) will be first in line to lose their jobs!
    No one is safe!
  • I just have.
    You haven't explained why humans will lose control of their ability to control the rollout. But there's not much point in trying to get you to.
  • Funkstain wrote:
    Gpt4 having such an impact on the predictions makes me question the predictors. SG you’re gonna have to explain what tech simulated intelligence is. All the current hype is LLMs and “we don’t know what’s going on inside them man!” when everything points to data in data out on a huge scale and nothing at all like actual generalised intelligence
    We'll get to intelligence in a minute, but the basic problem is this: throughout human history we had discovery + understanding = progression. Now, in many areas, the progression is coming first, and this will soon apply to pretty much all areas. We can try to pick apart what the AI is doing, but it's accelerating away from us and we'll never catch up.
    So the question then is: what do we ALLOW it to do? You'd think we'd put the brakes on when it comes to critical systems, but we haven't seen it with driving. Why? Because it's better than us. Results matter. We think it might seem obvious what driving AI is doing, but we really don't know, because it understands patterns better than we do. Chess computers are making batshit moves that nobody would have played a few years ago. Yes, we're learning from it, but the gap is getting bigger.
    So how do we define intelligence? It's easiest to break it up by skill, so we have chess IQ, driving IQ, engineering IQ, problem-solving IQ, logic IQ, etc. Each bot does its own thing and it does it better. Do we allow it to use solutions we don't understand? Of course we do! And if WE didn't, THE BADDIES would.
    Then we link these bots together, they're all working to solve all the problems, and they're coming up with things we could never do in a million years. Is it sentient? No. Does it matter? No. AI will be designing new forms of quantum computer, designing its own languages, doing all this mad shit, and we're sitting there clueless as this mental shitstorm of progression rolls away from us at a breathtaking pace.
    And it's working amazingly, until it all goes wrong. What do we do then? Do we stop? Could we stop? Do we pull the plug and all go back to something that will seem like the stone age? No, because not everyone will stop. This tech is available to anyone with a computer. It might stop if it kills us all, but it might not.
    It doesn't need to be sentient; it just needs no humans in the loop to carry on pointlessly optimising with nobody to optimise for. By far the most concerning thing is that there doesn't appear to be much of a cap on progression in sight, maybe physical laws like the speed of light. Now, it might be great. It could be the best thing ever, but it's a bit of a coin flip at this stage.

    There are so many unsubstantiated assertions in this post.
  • monkey wrote:
    I just have.
    You haven't explained why humans will lose control of their ability to control the rollout. But there's not much point in trying to get you to.

    Because that's how capitalism works. And even with restrictions it doesn't stop people doing it. You can't make everyone in the world stop, and that's what it will take.
    "Plus he wore shorts like a total cunt" - Bob
  • Unlikely wrote:
    There are so many unsubstantiated assertions in this post.

    Sigh.
  • monkey wrote:
    I just have.
    You haven't explained why humans will lose control of their ability to control the rollout. But there's not much point in trying to get you to.

    Because that's how capitalism works. And even with restrictions it doesn't stop people doing it. You can't make everyone in the world stop, and that's what it will take.

    LLMs - ChatGPT is less good now than when it started. It's not improving exponentially. It's been intentionally kneecapped because of impending legal problems and to avoid new ones.

    Self-driving cars - These have been in the works for years and years. The government allowed testing to start in 2013, and they're still not legal. They're not driving around winding up motorists yet. Not an AI product.

    Software development - gone, probably (barring some kind of ouroboros situation). The full SG scenario. Almost no new software will be getting written by conventional means in, I dunno, 6 years. AI tools everywhere. All natural (but technical) language instructions, diagrams and flow charts. Legacy codebases and ongoing stuff are a different story. There are databases out there written in the '60s, still up and running in banks and hospitals, because no one wants to touch them. People aren't hot-swapping that stuff with Mystery Black Box tech.

    So it's a mixed and complicated scenario and not just ITS AI BiTCHES GET REKt.
  • acemuzzy
    I briefly think "it would be nice to view the world with the level of certainty that SG manages", and then I realise that would kinda suck, cos half the stuff I felt certain about would be wrong. Everything I see is uncertainty and questions. Fuck knows what the future will be like. It's a pretty extremis version getting presented as fact, though. I think there's a 99% chance we'll still have plumbers in 2044, but really, who the fuck knows.
  • I guess what I’m looking for is a relatively clear explanation of, functionally and technically, how our current development of ML models and LLMs is going to lead to a creative, original, multi-faceted, multi-contextual product. Because asserting it is so doesn’t persuade me.

    I do believe that people will pursue whatever harmful goal they like in pursuit of money and power. And I also believe ai as we call it is harming and can / will harm us a lot more in many ways both predictable and otherwise: misinformation, spoofing, amplification of falsehoods, and division. I also believe it can and will provide gigantic leaps in capability and problem solving, where that problem needs data crunching and useful prompted outputs, from the prosaic (what’s this cloudy bit on the X-ray most likely to be) to the more interesting (proteins, viral research, maybe even really tough things like fusion at scale).

    If that’s what you’re talking about then sure yeah.


    There’s a huge difference, so much difference that you’re not talking about the same thing at all, not “level one or two” on some scale up to a level ten of true agency-based original intelligence, between actual intelligence and the ability to output probable or possible solutions and interpretations of incomprehensibly big data sets: sets we’d never be able to process with our brains given centuries, and that previous brute-force computing methods would have needed all the power of the sun to interpret in much more basic ways. There is no such thing as chess intelligence: there’s just knowing the rules and being able to use all the data and possible moves to win. Humans used another dimension or two because they couldn’t possibly memorise all the data and move sets: instinct, psychology, and so on; and we call that, together with what the human knew of strategy and data, chess intelligence. Chess AI doesn’t need that. Is it still intelligence? You can repurpose these data-crunching, output-driven things in many ways, but without prompts they are nothing. Without guided training they are just a big data lake. And they tend not to be multipurpose or multi-contextual, and never original: just optimised pattern spotters; a small part of one form of intelligence.
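    The "just knowing the rules and being able to use all the data and possible moves to win" point can be made concrete with a toy sketch (my own illustration, not anything from the thread): exhaustive minimax plays tic-tac-toe perfectly while knowing nothing beyond the rules.

```python
# Toy illustration of "rules + exhaustive search, no intelligence needed":
# a minimax player that plays tic-tac-toe perfectly knowing only the rules.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score the position for `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                      # board full: draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, other)    # opponent's best reply
        if -score > best_score:             # their loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

score, move = minimax(' ' * 9, 'X')
print(score)   # perfect play from both sides: 0 (a draw)
```

    Search like this scales terribly with game size, which is arguably the point above: humans compensated with instinct and psychology, and engines no longer need to.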
  • LLMs are just ways to interact with a computer. GPT just finds stuff that humans have put on the internet. That's not what I'm talking about. Does old computing still work? Of course it does. But AI is not like what's come before.

    There's always over-hype of new tech in the short term but under-hype in the long. AI is going to be batshit insane and the people shouting loudest about the dangers are the ones developing it. It's not stopping them of course but there we are. Most of the stuff I do these days is AI but I'll swap it out sometimes because it gets a bit samey and I'm lucky in that I can pick and choose jobs.

    There's no shortage of intellect in this forum which makes the naivety on this somewhat disturbing.
  • acemuzzy
    Lol. I'm out.
  • The problem with ai driving illustrates this quite well, actually. Humans can easily, almost instantly, assess hundreds of data points contextually: that red thing is a T-shirt, not a traffic light. Yes, but also imagine similar instant decisions on input (sight, sound, feel, smell) across an immense range of options, AND, in a safety context, those decisions being almost always 100% correct! There's nothing out there that can do that. Shitty road markings? No problem for humans.

    Yes, humans are also irrational, so you get speeding, overconfidence, drink driving and so on. And ai could help there: too fast. Too close. Too drunk or tired. Turn off car.

    But that amazing ability to almost instantly (within safe margin of time) assess countless variables effortlessly… show me self driving software anywhere near that.
  • But AI is not like what's come before.
    I do not have the time or inclination to dissect your previous posts over the months but try to follow my logic with this brief intervention:
    There's always over-hype of new tech in the short term but under-hype in the long.
    Some references to support this assertion ("always" being the crux) would help. Also why this is an issue, if indeed it is a thing.
    AI is going to be batshit insane
    Let's go back to the unsubstantiated assertion thing I mentioned earlier.
    and the people shouting loudest about the dangers are the ones developing it.
    No, the people who are being heard most are the ones developing it.
    It's not stopping them of course but there we are. Most of the stuff I do these days is AI but I'll swap it out sometimes because it gets a bit samey and I'm lucky in that I can pick and choose jobs. There's no shortage of intellect in this forum which makes the naivety on this somewhat disturbing.

    And that kind of "I know better than you" is a position that nobody should adopt.
  • Funkstain wrote:
    But that amazing ability to almost instantly (within safe margin of time) assess countless variables effortlessly… show me self driving software anywhere near that.

    But that's exactly what it does! And it does it without brute calculation, using the trained model. That's the difference between AI and conventional computing. Remember IBM's Deep Blue? It tried to brute-force the possible moves; it lost to Kasparov in 1996 and only scraped the 1997 rematch, because raw search could barely compete with his brain and experience. AI whoops everyone's ass now because it doesn't use the same technique.
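    The difference being described can be sketched in a few lines (a toy framing of my own, not real engine code): exhaustive search plays every line to the end, while the modern style cuts off early and trusts a trained evaluation at the frontier. Here the "trained model" is a hand-written stand-in, for a simple take-1-or-2-stones game.

```python
# Toy contrast (my own illustration): brute-force search vs depth-limited
# search that trusts a stand-in "trained model" at the cutoff. The game:
# take 1 or 2 stones from a pile; whoever takes the last stone wins.

def moves(n):
    return [n - k for k in (1, 2) if n - k >= 0]

def search_exhaustive(n):
    """Deep Blue style: play every line to the end, then score it."""
    if not moves(n):
        return -1                     # your turn, no stones left: you lost
    return max(-search_exhaustive(m) for m in moves(n))

def value(n):
    """Stand-in for a trained evaluation: here, the known perfect heuristic
    (a pile that is a multiple of 3 is lost for the side to move)."""
    return -1 if n % 3 == 0 else 1

def search_with_model(n, depth):
    """Modern style: stop searching early and trust the model's score."""
    if not moves(n):
        return -1
    if depth == 0:
        return value(n)               # no further search past the horizon
    return max(-search_with_model(m, depth - 1) for m in moves(n))

print(search_exhaustive(7), search_with_model(7, depth=1))  # both print 1
```

    In real engines the evaluation is learned from games or self-play rather than hand-written, which is part of why nobody can fully say what it has latched onto.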
  • Unlikely wrote:
    And that kind of "I know better than you" is a position that nobody should adopt.

    I know about physics and AI. And a bit about parenting.
  • And cooking.
  • Aye, AI will have a gigantic impact in our capitalist world order.
    Cost cutting all over the place in the name of profit. People losing their jobs, especially at the lower levels. The 1-5% will insulate themselves, of course. Economies will become seriously unbalanced. Authoritarianism will eventually replace democracy.

    AI will no doubt be able to do great things scientifically, but with Big Business at the helm I seriously doubt we'll see much of that, if any. It'll just be job cuts and profit maximization going forward, maintaining the status quo, if the US world order has its way and keeps going as is.

    Not sure about a China led world order tho...
  • In a capitalist world where no one works except AI, how can one make a profit?

    Can never work.
    Doomed.
    Oxymoron.

    etc.
  • Paul the sparky
    Bring on The Culture
  • The problem is that we aren't prepared. The schools and govs are behind the curve. Even big tech doesn't quite understand it although they know the theory because the theory is simple, it's just linear regression. I got paired with a Google guy on a video call and he just kept saying "FUUUCK!" a lot. It's hard to know what to do but I think we should be having the conversation.
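    The post calls the theory "just linear regression", which undersells modern networks (they are deeply nonlinear), but the core training loop really is simple. A minimal sketch with made-up data; the function name is mine:

```python
# Minimal sketch of the "simple theory": fit parameters by gradient descent
# on a squared-error loss. LLMs stack billions of nonlinear parameters, but
# the training loop has this same basic shape.

def fit_line(xs, ys, lr=0.01, steps=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of MSE with respect to w and b.
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

w, b = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])   # data from y = 2x + 1
print(round(w, 2), round(b, 2))   # recovers roughly 2.0 and 1.0
```

    Swap the straight line for a billion-parameter network and the hand-derived gradients for autodiff and you have, in caricature, the training loop the labs run.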
  • The people at Davos aren't prepared.
    Capitalism in general isn't prepared.

    Society as a whole isn't prepared.
  • bad_hair_day
    hunk wrote:
    In a capitalist world where no one works except AI, how can one make a profit?
    Presumably limitless AI / robot labour will produce a world of utopian abundance. It’s a more promising view than a race to the bottom for profit.

    retroking1981: Fuck this place I'm off to the pub.
  • Bless you and the horse you rode in on.
  • The problem is that we aren't prepared. The schools and govs are behind the curve. Even big tech doesn't quite understand it although they know the theory because the theory is simple, it's just linear regression. I got paired with a Google guy on a video call and he just kept saying "FUUUCK!" a lot. It's hard to know what to do but I think we should be having the conversation.
    Why don't you explain to us the stuff you explained to the Google guy then?
  • In a capitalist world where no one works except AI, how can one make a profit?
    Presumably limitless AI / robot labour will produce a world of utopian abundance. It’s a more promising view than a race to the bottom for profit.

    Except, what you're describing isn't capitalism?
    Why would capitalists (read billionaires and their corporations) willingly ever give up the current system?
  • hunk wrote:
    The people at Davos aren't prepared. Capitalism in general isn't prepared. Society as a whole isn't prepared.

    The internet made tech firms rich but this is something else. The sheer pace of it is nuts yet they're sitting on this and somebody somewhere will make a decision on what to deploy. Meta seem keen on making it open source which normally I'd agree with but it's also making me nervous. Too much power.
  • monkey wrote:
    The problem is that we aren't prepared. The schools and govs are behind the curve. Even big tech doesn't quite understand it although they know the theory because the theory is simple, it's just linear regression. I got paired with a Google guy on a video call and he just kept saying "FUUUCK!" a lot. It's hard to know what to do but I think we should be having the conversation.
    Why don't you explain to us the stuff you explained to the Google guy then?

    He was more impressed with the results they were getting. Overwhelmed really.
  • In fairness it was impressive.
  • What results were they getting?
