The robots are coming. Restructure the economy. Go.
  • It's basically as depressing as it gets. You are a product of prediction. Carry on.
  • dynamiteReady
    Then there are the upsides. Oncology (and other areas of medicine) is already undergoing a revolution, for example. A "second opinion" now costs a few pence, and not a few hundred pounds...
    "I didn't get it. BUUUUUUUUUUUT, you fucking do your thing." - Roujin
    Ninty Code: SW-7904-0771-0996
  • It's basically as depressing as it gets. You are a product of prediction. Carry on.

    Well that was all very predictable.
  • It's basically as depressing as it gets. You are a product of prediction. Carry on.

    This has always been the case, presumably since people started selling things to each other instead of bartering. It’s just that recently we’re getting better and better at doing it, at an alarming rate.

    Your central point is a good one though – that very little energy is being invested in understanding all this collected data. The immediate value is just in blindly following it. There must be long-term value in studying the facts and extrapolating meaning from them, but in the short term that’s a purely academic exercise. So maybe academia can step in and take it on? PhD in local ethnography funded by Tesco?
  • poprock wrote:
    very little energy is being invested in understanding all this collected data. The immediate value is just in blindly following it. There must be long-term value in studying the facts and extrapolating meaning from them, but in the short term that’s a purely academic exercise. So maybe academia can step in and take it on? PhD in local ethnography funded by Tesco?

    The point about big data is that there's no understanding it, because it's too big. There's an entire group at Google that just deals with correlations. They're not interested in why, and it might never be known why. The why bit is irrelevant and possibly unknowable. There's just a shitload of data, and the data says people who floss are more likely to be Moon landing deniers or whatever. This is what ML does. It takes massive amounts of data and finds patterns. Those patterns are used to put you in a group, and that's the group you're now in. Floss, do you? That's a minus five on the credit score, I'm afraid. Own a Honda and like eggs? Paedo alert!

    There's no why, just correlations. "Computer says no" is now actually a thing, and it's based on your preferred brand of shampoo, where you were born and whether you've ever bought an egg cup with a squirrel on it.
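    That correlation-without-why pipeline can be sketched in a few lines of Python. Everything here is invented for illustration (the traits, the hidden confounder, the -5 penalty): the point is just that the system acts on a measured correlation with no causal model anywhere.

```python
import random

random.seed(42)

# Synthetic population: each person is a dict of binary traits.
# An unobserved confounder links the two traits, purely for illustration.
people = []
for _ in range(10_000):
    hidden = random.random() < 0.3
    people.append({
        "flosses": random.random() < (0.8 if hidden else 0.2),
        "denier":  random.random() < (0.7 if hidden else 0.1),
    })

def phi(data, a, b):
    """Phi coefficient: Pearson correlation for two binary traits."""
    n = len(data)
    pa = sum(d[a] for d in data) / n
    pb = sum(d[b] for d in data) / n
    pab = sum(d[a] and d[b] for d in data) / n
    return (pab - pa * pb) / ((pa * (1 - pa) * pb * (1 - pb)) ** 0.5)

r = phi(people, "flosses", "denier")

# The system then acts on the correlation with no causal story at all:
score = lambda person: -5 if person["flosses"] and r > 0.2 else 0
```

    Nobody in that pipeline ever asks why flossers skew denier; the hidden variable that actually drives both never appears in the scoring rule.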
  • acemuzzy
    It's a huge thing in the industry at the moment. It's getting plenty of research and applications. But yes, running before we can walk.
  • GooberTheHat
    poprock wrote:
    very little energy is being invested in understanding all this collected data. The immediate value is just in blindly following it. There must be long-term value in studying the facts and extrapolating meaning from them, but in the short term that’s a purely academic exercise. So maybe academia can step in and take it on? PhD in local ethnography funded by Tesco?

    The point about big data is that there's no understanding it, because it's too big. There's an entire group at Google that just deals with correlations. They're not interested in why, and it might never be known why. The why bit is irrelevant and possibly unknowable. There's just a shitload of data, and the data says people who floss are more likely to be Moon landing deniers or whatever. This is what ML does. It takes massive amounts of data and finds patterns. Those patterns are used to put you in a group, and that's the group you're now in. Floss, do you? That's a minus five on the credit score, I'm afraid. Own a Honda and like eggs? Paedo alert!

    There's no why, just correlations. "Computer says no" is now actually a thing, and it's based on your preferred brand of shampoo, where you were born and whether you've ever bought an egg cup with a squirrel on it.

    Shit in, shit out.

    There are enough people in the world who realise that all of these machine learning algorithms and big data analytics are skewed by the inherent biases of the people writing them. While it's far from perfect at the moment, I have faith (well, maybe not in the current political climate) that the scientific community, as opposed to the commercial one (Google, Facebook et al.), will be able to utilise all of this big data (assuming they can get their hands on it) in ways that are good for society.
  • dynamiteReady
    Shit in, shit out.

    I wanted to write exactly that. That's the way computing's always been.

    So this idea about 'letting the computer work out the patterns'... We like to think that's true, but that's quite far from the case. Some very smart people are working very hard to design programs that make some of these inferences.
    It's not quite a one button thing.

    My take on the dark side of all this is that the same questions would be asked, and answered, even if these tools didn't exist...

    What we do need however, are duly recognized opt-ins/outs.

    For example, we'd been sending messages by post for aeons...

    We compose a missive, attach stuff to it (cash, dirty pics, whatever), seal it in an envelope, and rightfully expect the recipient to receive it intact and untampered with... Unless of course you're stashing Colombian work or ricin in your mailshots, in which case it's only right that the authorities fuck with your shit.

    I bet, in the early days of the postal service as we know it, plenty of kids were missing out on pocket money, and bored postal clerks were routinely gorging on declarations of undying love...

    We just need the authorities to value our privacy just a little more... And I think the postal service is the parallel we need to draw to effect the change.

    The premise is remarkably similar...
    "I didn't get it. BUUUUUUUUUUUT, you fucking do your thing." - Roujin
    Ninty Code: SW-7904-0771-0996
  • It's not the bias of the people writing the stuff, it's the possible bias in the data. And it's only a possible bias, because it's showing correlations and not reasons why, and it's getting more accurate and more profitable without understanding.

    Insurance premiums will look at your postcode, amongst other things. What if your future credit score or chance of getting an operation were based on proven machine learning algorithms? What if they were stupidly accurate? You don't drink, you don't smoke, you're not fat but the computer says no? And it's 99.9% accurate. 

    The real worry is when genetic programming uses a fitness function that's based on machine learning. What an automated combo that would be.
  • GooberTheHat
    There is bias and misunderstanding of the results.

    A police department in the US used a program to determine where it should police, due to highest crime rates.

    Because they put more police there, they detected more crime, which then dictated more police; meanwhile other areas had fewer police, so less crime was detected in those areas, so fewer police seemed to be required.

    Obviously, when you take a step back and look at it, it's obvious there's an issue, but that doesn't always happen until wrong decisions have been made.
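    That feedback loop can be shown with a toy simulation (all numbers invented): two districts with identical true crime rates, where each period's detections drive the next period's allocation.

```python
# Two districts with the SAME underlying crime rate.
true_rate = [100, 100]   # actual incidents per period in each district
patrols = [60, 40]       # initial allocation happens to be uneven

for period in range(10):
    # Detected crime scales with how many patrols are there to see it.
    detected = [true_rate[i] * patrols[i] // 100 for i in range(2)]
    # Next period's patrols are allocated in proportion to detections.
    total = sum(detected)
    patrols = [100 * d // total for d in detected]

# The identical districts never equalise: the initial skew sustains itself,
# because the data being "followed" is a product of the policy, not the world.
```

    Ten periods later the 60/40 split is still 60/40, even though the districts are indistinguishable; step back and the problem is obvious, but the numbers alone look like confirmation.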
  • GooberTheHat
    You don't drink, you don't smoke, you're not fat but the computer says no? And it's 99.9% accurate. 

    But they aren't 99.9% accurate. The results are down to the way the algorithm was written and the machine learning program was designed. The errors are a result of human error in the setup and interpretation.
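    And even if a model genuinely were 99.9% accurate, on a rare outcome that headline number would still mislead. Quick base-rate arithmetic, with all numbers invented:

```python
# Suppose a model really were 99.9% accurate on both classes,
# applied to an outcome only 1 in 1,000 people actually have.
population = 1_000_000
prevalence = 0.001
accuracy = 0.999

actual = population * prevalence                     # 1,000 people
true_pos = actual * accuracy                         # ~999 flagged correctly
false_pos = (population - actual) * (1 - accuracy)   # ~999 flagged wrongly

# Of everyone the computer says "no" to, about half didn't deserve it.
precision = true_pos / (true_pos + false_pos)
```

    A 99.9% accurate model on a 1-in-1,000 outcome is wrong about half the people it flags, which is why "the computer is 99.9% accurate" and "the computer's verdict about you is 99.9% reliable" are very different claims.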
  • Also, machine learning's only as good as the inputs.
    Classic case being early neural network computer vision research trying to detect tanks - training data sets of photos with tanks and without, network all trained up, showed great accuracy with the rest of the training set... failed utterly when presented with a whole bunch of other photos, dropping false positives and negatives all over the place.

    Turns out, the training set of photos with tanks were all taken on an overcast day, and the photos without were on a sunny day - so what they'd actually trained up was a cloudy day recogniser.

    Crap in, crap out. And lord help us if morans layer up bias on the input data with bias on how to use the output data.
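    The cloudy-day recogniser in miniature (numbers and thresholds made up): the "model" is nothing but a brightness threshold, which is exactly what a flawed training set rewards.

```python
import random

random.seed(7)

# Each "photo" is (brightness, has_tank). In the flawed training set,
# every tank photo was shot on an overcast (dark) day.
train = [(random.uniform(0.0, 0.4), 1) for _ in range(50)] + \
        [(random.uniform(0.6, 1.0), 0) for _ in range(50)]

# The "trained" classifier has latched onto brightness alone.
predict = lambda brightness: 1 if brightness < 0.5 else 0

train_acc = sum(predict(b) == tank for b, tank in train) / len(train)

# Fresh photos where tanks turn up in all weathers: performance collapses
# to roughly a coin flip.
test = [(random.uniform(0.0, 1.0), random.randint(0, 1)) for _ in range(200)]
test_acc = sum(predict(b) == tank for b, tank in test) / len(test)
```

    Perfect score on the training set, chance-level on everything else; the network learned the weather, not the tanks.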
  • dynamiteReady
    What if they were stupidly accurate? You don't drink, you don't smoke, you're not fat but the computer says no? And it's 99.9% accurate. 

    I don't like where some of this is going, but it's not a dartboard.

    So it works well for determining stock allocations at your local supermarket... Fine.

    But if some company refuses to insure my home, because I shop at Iceland, then I'll take a policy from the other company, and also tell my friends about them...

    Because that part of it is not going to change.

    Also, dealing in correlates means, at a certain level, we understand there's uncertainty too...

    99.9% accuracy in a 1% sample of, say, 10 million-plus customers/users/whatever could also turn out to be 30% accuracy in a larger sample. No matter the arbiter.

    So I somehow believe there's still a fair bit of time to avoid some of the horror stories...

    Then again, who fucking knows...
    "I didn't get it. BUUUUUUUUUUUT, you fucking do your thing." - Roujin
    Ninty Code: SW-7904-0771-0996
  • At the current time, we are still protected from cross-company data sharing and processing by the Data Protection Act, although this doesn't stop organisations from wanting it. The Information Commissioner stopped one of the insurance companies and Facebook from collaborating last year to examine status posts to determine car insurance risk, as it was a material change in data processing.

    (Something akin to looking for how often someone posted 'ooo late again! Lol!', on the theory that they might drive faster or with less care.)
  • What would an economy look like with only robots and AI? An economy where every household is solar powered, self-reliant and self-sufficient? Where human labour, disease and famine are a thing of the past and leisure the prime activity?

    Sounds a bit like a Trekkie utopia but where would humans be without dreams?

    It's either that or an AI Armageddon.
    Steam: Ruffnekk
    Windows Live: mr of unlocking
    Fightcade2: mrofunlocking
  • hunk wrote:
    What would an economy look like with only robots and AI? An economy where every household is solar powered, self-reliant and self-sufficient? Where human labour, disease and famine are a thing of the past and leisure the prime activity? Sounds a bit like a Trekkie utopia, but where would humans be without dreams? It's either that or an AI Armageddon.

    I think it would look like it wouldn't ever happen, since conservative political viewpoints hold that some level of inequality is beneficial for society. I can't imagine the hyper-rich wanting their net worth to mean fuck all, and since they have all the money in a system controlled by money, they control the status quo unless the AI takes a look at things and is like 'Nah fam, this inequality is fucking stupid. Lemme just make global currencies obsolete by seizing control real quick'

    Also in a world maintained by AI we are likely to find that we are controlled in the same way we manage the population of wild animals. Imagine if the AI works out the world can only support 3.5 billion people and culls half the global population. I would fucking lol though (while being culled probably).
    "Let me tell you, when yung Rouj had his Senna and Mansell Scalextric, Frank was the goddamn Professor X of F1."
  • Goddammit, everything converges to an AI apocalypse. We're doomed. Unless AI machine learning can save us....
  • I’m hoping for an Iain M. Banks-type post-scarcity future.
    iosGameCentre:T3hDaddy;
    XBL: MistaTeaTime
  • djchump wrote:
    Also, machine learning's only as good as the inputs. Classic case being early neural network computer vision research trying to detect tanks - training data sets of photos with tanks and without, network all trained up, showed great accuracy with the rest of the training set... failed utterly when presented with a whole bunch of other photos, dropping false positives and negatives all over the place. Turns out, the training set of photos with tanks were all taken on an overcast day, and the photos without were on a sunny day - so what they'd actually trained up was a cloudy day recogniser. Crap in, crap out. And lord help us if morans layer up bias on the input data with bias on how to use the output data.

    It's not just about machine learning though is it? Genetic algorithms don't give a shit about that, and genetic programming negates programming skill entirely in favour of mutation and sexual fitness.
  • Escape
    Roujin wrote:
    I would fucking lol though (while being culled probably).

    Nah, they'd analyse our chances of procreation and go after Pacey instead.
  • It's not just about machine learning though is it? Genetic algorithms don't give a shit about that, and genetic programming negates programming skill entirely in favour of mutation and sexual fitness.
    That’s part of machine learning.

    2 immediately obvious sources of bias there - What’s the fitness function and how is the input data gathered?
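    Those two bias sources can be made concrete with a toy genetic algorithm (everything here is invented for illustration). The evolutionary machinery is identical in both runs; only the fitness function changes, and it alone decides what the system "discovers".

```python
import random

random.seed(1)

GENES = 8

def evolve(fitness, generations=60, pop_size=40):
    """Tiny genetic algorithm: tournament selection, one-point
    crossover, occasional point mutation."""
    pop = [[random.randint(0, 1) for _ in range(GENES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 2), key=fitness)  # tournament of two
            p2 = max(random.sample(pop, 2), key=fitness)
            cut = random.randrange(GENES)                 # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                     # point mutation
                i = random.randrange(GENES)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# The "bias" lives entirely in the fitness function you hand it:
best_ones = evolve(lambda g: sum(g))            # rewards genomes full of 1s
best_zeros = evolve(lambda g: GENES - sum(g))   # same GA, opposite verdict
```

    Swap the fitness function and the same code evolves the opposite answer; plug an ML model trained on skewed data in as that fitness function and the GA will optimise the skew just as happily.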
  • djchump wrote:
    It's not just about machine learning though is it? Genetic algorithms don't give a shit about that, and genetic programming negates programming skill entirely in favour of mutation and sexual fitness.
    That’s part of machine learning. 2 immediately obvious sources of bias there - What’s the fitness function and how is the input data gathered?

    The representation of the fitness function is why people like ML; they think of it as a craft. There's a bit of everything, I suppose: representation, coding, scrutinising the data and understanding it, probing and understanding anomalies. A genuine skill then, in a similar vein to a good advertising company.

    I'm not worried about any of that. Again, it's not about any judgement anyone could ever hope to make. It's about data, and it's gathered by stuff, or more specifically, your phone. How many instruments are there on your phone again? GPS, camera, microphone, accelerometer etc. 

    Does your phone upload more than it downloads? How can you find out? Who pays for those uploads? Does Google pay your mobile provider for uploads on your behalf?
  • I suppose the danger is that people are willingly carrying such a sensitive instrument around with them because of Facebook or whatever, and they're gleefully ticking those legal boxes, because not ticking them means not being part of it, and they don't know what freedoms they're giving up. Because if there's one thing Facebook has taught us, it's that privacy is bad.
  • I think that genie is out of the bottle though. We’re going to be the last generation to remember what personal privacy was like. On the plus side, we’re probably the only generation who will miss it – our kids won’t care. Corporations know everything about them. Corporations have always known everything about them. So what?
  • Yar. In the future, kids who grow up in families that don’t do social networks will be looked at the same way as kids who didn’t have family TVs were when I was at school.
  • So that's what Demis Hassabis has been up to.
