monkey wrote:You haven't explained why humans will lose control of their ability to control the rollout. But there's not much point in trying to get you to.

SpaceGazelle wrote:I just have.
Unlikely wrote:There are so many unsubstantiated assertions in this post.

SpaceGazelle wrote:We'll get to intelligence in a minute, but the basic problem is this: throughout human history we had discovery + understanding = progression. Now, in many areas, the progression is coming first, and this will soon apply to pretty much all areas. We can try to pick apart what the AI is doing, but it's accelerating away from us and we'll never catch up.

So the question then is: what do we ALLOW it to do? You'd think we'd put the brakes on when it comes to critical systems, but we haven't seen it with driving. Why? Because it's better than us. Results matter. We think it might seem obvious what driving AI is doing, but we really don't know, because it understands patterns better than we do. Chess computers are making batshit moves that nobody would have played a few years ago. Yes, we're learning from it, but the gap is getting bigger.

So how do we define intelligence? It's easiest to break it up by skill, so we have chess IQ, driving IQ, engineering IQ, problem-solving IQ, logic IQ, etc. Each bot does its own thing, and it does it better. Do we allow it to use solutions we don't understand? Of course we do! And if WE didn't, THE BADDIES would.

Then we link these bots together, and they're all working to solve all the problems, and it's coming up with things that we could never do in a million years. Is it sentient? No. Does it matter? No. AI will be designing new forms of quantum computer and designing its own language, and it's doing all this mad shit while we're sitting there clueless as this mental shitstorm of progression rolls away from us at a breathtaking pace.

And it's working amazingly, until it all goes wrong. What do we do then? Do we stop? Could we stop? Do we pull the plug and all go back to something that will seem like the stone age? No, because not everyone will stop. This tech is available to anyone with a computer. It might stop if it kills us all, but it might not.

It doesn't need to be sentient; it just needs no humans in the loop to carry on pointlessly optimising with nobody to optimise for. By far the most concerning thing is that there doesn't appear to be much of a cap on progression in sight, maybe physical laws like the speed of light. Now, it might be great. It could be the best thing ever, but it's a bit of a coin flip at this stage.

Funkstain wrote:GPT-4 having such an impact on the predictions makes me question the predictors. SG, you're gonna have to explain what tech simulated intelligence is. All the current hype is LLMs and "we don't know what's going on inside them, man!" when everything points to data in, data out on a huge scale, and nothing at all like actual generalised intelligence.
SpaceGazelle wrote:monkey wrote:You haven't explained why humans will lose control of their ability to control the rollout. But there's not much point in trying to get you to.

SpaceGazelle wrote:I just have.
Because that's how capitalism works. And even with restrictions, it doesn't stop people doing it. You can't make everyone in the world stop, and that's what it will take.
Funkstain wrote:But that amazing ability to almost instantly (within a safe margin of time) assess countless variables effortlessly… show me self-driving software anywhere near that.
Unlikely wrote:I do not have the time or inclination to dissect your previous posts over the months, but try to follow my logic with this brief intervention:

SpaceGazelle wrote:But AI is not like what's come before.

Some references to support this assertion ("always" being the crux) would help. Also why this is an issue, if indeed it is a thing.

SpaceGazelle wrote:There's always over-hype of new tech in the short term but under-hype in the long.

Let's go back to the unsubstantiated assertion thing I mentioned earlier.

SpaceGazelle wrote:AI is going to be batshit insane

No, the people who are being heard most are the ones developing it.

SpaceGazelle wrote:and the people shouting loudest about the dangers are the ones developing it.

And that kind of "I know better than you" is a position that nobody should adopt.

SpaceGazelle wrote:It's not stopping them of course but there we are. Most of the stuff I do these days is AI, but I'll swap it out sometimes because it gets a bit samey, and I'm lucky in that I can pick and choose jobs. There's no shortage of intellect in this forum, which makes the naivety on this somewhat disturbing.
bad_hair_day wrote:Presumably limitless AI / robot labour will produce a world of utopian abundance. It's a more promising view than a race to the bottom for profit.

hunk wrote:In a capitalist world where no one works except AI, how can one make a profit?
hunk wrote:The people at Davos aren't prepared. Capitalism in general isn't prepared. Society as a whole isn't prepared.
monkey wrote:Why don't you explain to us the stuff you explained to the Google guy then?

SpaceGazelle wrote:The problem is that we aren't prepared. The schools and govs are behind the curve. Even big tech doesn't quite understand it, although they know the theory because the theory is simple, it's just linear regression. I got paired with a Google guy on a video call and he just kept saying "FUUUCK!" a lot. It's hard to know what to do but I think we should be having the conversation.