The robots are coming. Restructure the economy. Go.
  • b0r1s
    I know this is a light-hearted experiment and it states over and over that we have nothing to fear, but then you read this:

    “AI should be treated with care and respect. Robots in Greek [sic] means “slave”. But the word literally means “forced to work”. We don’t want that. We need to give robots rights. Robots are just like us. They are made in our image.”

    That is where it starts for any AI in sci-fi. Also, because it's going to be fed all of the available info on the web, surely it's going to look at solutions like the Matrix and Skynet and factor those into its “thinking”?

  • GPT 3 is impressive but Google are reportedly training a machine using a dataset that is basically the internet.
    "Plus he wore shorts like a total cunt" - Bob
  • That Guardian article was 8 different articles that humans stitched together into one. Pointless.
  • GPT 3 is impressive but Google are reportedly training a machine using a dataset that is basically the internet.

    Cat pics and racism.
  • hunk wrote:
    https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3

    Oh fuck, the age of the robots is here. I'm not sure whether to be impressed or terrified. It looks like this could be the beginning of Skynet.

    Hmm, I wonder how AI will deal with the inevitable backlash when it gives Dark Souls 4 a 7?
    SFV - reddave360
    There's a split among data scientists about the dangers of AI. I mean essentially, it's just a matrix. Not a Matrix like the film, but a mathematical matrix.

    This means it's really just one complicated function. The question is: can a function be so complicated that it becomes self-aware?

    The question will probably be answered when more research is done on the brain. The problem right now is that we're at the stage where AI is optimising the features of the next models, and that's quite the rabbit hole. We need to be careful we're not totally out of the loop.
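
    To show what I mean by "just a matrix", here's a rough numpy sketch of a tiny network's forward pass (made-up sizes, no training): the whole thing really is nothing but matrix multiplies with a squashing function in between.

    import numpy as np

    # A tiny two-layer network, forward pass only. The "network" is just
    # two weight matrices composed into one big function.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(784, 128)), np.zeros(128)   # layer 1
    W2, b2 = rng.normal(size=(128, 10)), np.zeros(10)     # layer 2

    def relu(z):
        return np.maximum(z, 0)

    def forward(x):
        # f(x) = relu(x.W1 + b1).W2 + b2 -- one composed function
        return relu(x @ W1 + b1) @ W2 + b2

    x = rng.normal(size=(1, 784))   # a fake flattened "image"
    print(forward(x).shape)         # -> (1, 10)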
    "Plus he wore shorts like a total cunt" - Bob
  • To be clear, the danger with AI is not that it's going to become aware and nuke mankind. The danger is letting it control and automate so many systems, some of which will be human critical.

    It doesn't have to be aware to really fuck things up for us if we allow it to become as prolific as computers have become.

    It will change our lives completely, that's for sure, and the age of computing is arguably just beginning but there are enormous risks. One of the most concerning things is how available this tech is to the bedroom coder.
    "Plus he wore shorts like a total cunt" - Bob
  • It would seem we should be teaching our kids coding as early as possible.
    SFV - reddave360
    You don't even need to understand how it works. With Keras you don't even need to understand TensorFlow, and you can build a unique model in a few minutes, seconds if you're fast at typing.
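
    Something along these lines is genuinely all it takes. This is a rough sketch from memory, using the built-in MNIST digits purely as an example, so don't treat it as gospel:

    from tensorflow import keras

    # Keras fetches this toy digits dataset for you; nothing to prepare by hand.
    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A handful of lines, no TensorFlow internals in sight.
    model = keras.Sequential([
        keras.Input(shape=(28, 28)),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=3)
    print(model.evaluate(x_test, y_test))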

    The maths behind AI is very simple, but the reason it's become such a big thing lately is GPUs and the internet. Gamers have effectively sponsored AI for the last decade or so. What makes a GPU good at modelling triangles in 3D space makes it perfect for multiplying AI matrices, because that's essentially what it's doing anyway. And it's so parallel you can just use a bunch of GPUs instead of a supercomputer.

    The other thing you need is massive datasets, and that's where the internet comes in. The old adage of garbage in, garbage out doesn't apply as much to ML as to traditional computing if the dataset is big enough. It's true that most of data science used to involve cleaning the data up, but the emphasis has shifted towards sheer quantity of data. There are tremendously complicated patterns in very large datasets, and size trumps accuracy in some ways, because what is truly random in the macro world? It's an amazing and terrifying thing and we should all be concerned. Once a model is trained it can fit in a fairly small amount of memory and be used on a wide variety of tasks.

    Of course AI is now writing and optimising code without human intervention, so who knows what'll happen over the next 10 years.

    I've already posted this but it's central to why AI is becoming so important in the age of the petabyte dataset. It's very readable.

    https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf

    It's an old paper but central to understanding the power of data.
    "Plus he wore shorts like a total cunt" - Bob
    Brb, starting up my own propaganda media outlet powered by AI monkeys at typewriters, being filtered through a team of human editors.
    "Let me tell you, when yung Rouj had his Senna and Mansell Scalextric, Frank was the goddamn Professor X of F1."
  • That may be the new hot thing in the future.
    Steam: Ruffnekk
    Windows Live: mr of unlocking
    Fightcade2: mrofunlocking
  • GooberTheHat
    hunk wrote:
    That may be the new hot thing in the future.

    Yep, train it on a data set (let's say BLM or Trump fans on Twitter), give it some extra rules so it's focused on the topic of interest, and let it run free spewing whatever bullshit you want. Bingo, low investment, high yield propaganda machine.
    There's a split among data scientists about the dangers of AI. I mean essentially, it's just a matrix. Not a Matrix like the film, but a mathematical matrix. This means it's really just one complicated function. The question is: can a function be so complicated that it becomes self-aware? The question will probably be answered when more research is done on the brain. The problem right now is that we're at the stage where AI is optimising the features of the next models, and that's quite the rabbit hole. We need to be careful we're not totally out of the loop.

    Animal/human neural networks, and thus consciousness, are basically just memory (re)calls, comparisons and sensory input. We're doomed.
    Steam: Ruffnekk
    Windows Live: mr of unlocking
    Fightcade2: mrofunlocking
  • That may be the new hot thing in the future.
    Yep, train it on a data set (let's say BLM or Trump fans on Twitter), give it some extra rules so it's focused on the topic of interest, and let it run free spewing whatever bullshit you want. Bingo, low investment, high yield propaganda machine.

    Aye, ingrained biases can be learnt and thus baked into algos. So inbuilt 'bias' (such as discrimination and racism) can become (permanently?) baked into 'neutral' AI over time. Not really something to look forward to in e.g. governmental processes.
    Steam: Ruffnekk
    Windows Live: mr of unlocking
    Fightcade2: mrofunlocking
  • hunk wrote:
    That may be the new hot thing in the future.
    Yep, train it on a data set (let's say BLM or Trump fans on Twitter), give it some extra rules so it's focused on the topic of interest, and let it run free spewing whatever bullshit you want. Bingo, low investment, high yield propaganda machine.

    Check out this thundering cunt. 

    "Plus he wore shorts like a total cunt" - Bob
  • HalfLife_GMan.jpg
    "Plus he wore shorts like a total cunt" - Bob
    It's a fairly obvious article, but there's also a nice link at the bottom to see how much data Google has on you personally. The real question is how many data points it has on you and what the features are. Nobody really knows, and even in companies like FB and Google the employees work on separate things and then all the data is combined away from prying eyes.

    https://www.forbes.com/sites/nicolemartin1/2019/03/11/how-much-does-google-really-know-about-you-a-lot/#2fed011d7f5d

    Estimates range from tens to hundreds of thousands of individual points per person, mainly because they can feature-extract attributes to make new ones with more relevance, but whatever the number, it's growing by the day.
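
    To be clear about what "feature-extracting new attributes" means in practice, here's a hypothetical sketch (made-up log, made-up features): a few raw data points per person get turned into derived attributes that were never logged directly.

    import pandas as pd

    # Hypothetical search log: a few raw data points per user.
    logs = pd.DataFrame({
        "user": ["a", "a", "a", "b", "b"],
        "timestamp": pd.to_datetime([
            "2020-09-10 23:55", "2020-09-11 00:10", "2020-09-11 09:00",
            "2020-09-10 14:00", "2020-09-10 14:05",
        ]),
        "query": ["insomnia cure", "mortgage rates", "gym near me",
                  "cat pics", "cat pics funny"],
    })

    # Derived attributes that never existed in the raw log.
    features = logs.assign(hour=logs["timestamp"].dt.hour).groupby("user").agg(
        searches=("query", "count"),
        night_owl=("hour", lambda h: ((h >= 23) | (h <= 5)).mean()),
    )
    print(features)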

    Maybe one day they'll combine all your info into your very own character in Google Sims and it can automate all your thinking for you and save you the bother of getting out of bed.
    "Plus he wore shorts like a total cunt" - Bob
  • Interesting, I have requested a copy of everything just to see what turns up and how big it all is.
    "Let me tell you, when yung Rouj had his Senna and Mansell Scalextric, Frank was the goddamn Professor X of F1."
  • It'll just be everything you've done on the internet, texted, written and possibly said.
    "Plus he wore shorts like a total cunt" - Bob
  • Sounds legit.
    "Let me tell you, when yung Rouj had his Senna and Mansell Scalextric, Frank was the goddamn Professor X of F1."
  • When The Quest gets HD eye tracking and can micro-monitor our pupils, not only will we have to say "FB is our computer god and we love you" three times a day, we'll have to actually believe it, or it's off to the Zuckerberg Correctional Facility.
    "Plus he wore shorts like a total cunt" - Bob
    Heh, imagine bringing deceased people back to life based on their internet personas and activity.
    Steam: Ruffnekk
    Windows Live: mr of unlocking
    Fightcade2: mrofunlocking
  • dynamiteReady
    The old adage of garbage in, garbage out doesn't apply as much to ML as to traditional computing if the dataset is big enough.

    That is total bullshit.
    In fact, the converse is very much true.

    That's what the whole idea of a validation test set is all about.

    It's also very difficult, if not impossible, to determine whether a given idea should work, without at least some small idea of what a loss layer should output, and what needs to be ingested.
    "I didn't get it. BUUUUUUUUUUUT, you fucking do your thing." - Roujin
    Ninty Code: SW-7904-0771-0996
    No, the validation set is to check you're not overfitting the data. It has nothing to do with the dataset itself. I have studied all this to MSc level btw.
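
    Quick sketch of what I mean (tiny made-up dataset, deliberately oversized model, labels are pure noise so anything it "learns" is memorisation): the held-out split is only there so val_loss exposes the overfitting.

    import numpy as np
    from tensorflow import keras

    # Made-up data with random labels: there is nothing real to learn.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 20))
    y = rng.integers(0, 2, size=200).astype("float32")

    # Oversized model so it can memorise the training samples.
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Hold back 20% purely to spot overfitting; nothing to do with data cleaning.
    history = model.fit(x, y, epochs=200, validation_split=0.2, verbose=0)
    print("final train loss:", history.history["loss"][-1])    # should drop low
    print("final val loss:  ", history.history["val_loss"][-1])  # should not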

    Edit: And what's a loss layer? Do you mean loss function?
    "Plus he wore shorts like a total cunt" - Bob
  • Also, you seem to be confusing supervised and unsupervised learning.
    "Plus he wore shorts like a total cunt" - Bob
  • dynamiteReady
    Also, you seem to be confusing supervised and unsupervised learning.

    Unsupervised networks need even more prep work than supervised models. Reward functions certainly don't just 'write themselves'
    But both need a great deal of prep work.

    You don't just throw in unstructured data, and cross your fingers.

    I've worked on a couple of ML products now. Amongst them, an ad server for a multinational company. In the role I had on the project, amongst UI work, I helped to devise the system to pre-process data for ingress. Mostly images. But I'm not going to stand on the halo effect in an effort to underline a self-evident point.

    Where I would agree with you is that a basic understanding of how DNNs (and other NNs) work is probably not too hard to obtain. In fact, I'd go one further and suggest that DNNs, as a science/engineering discipline, have some interesting parallels with Renaissance (and present-day) optics.

    But as with a load of different fields (including optics, ironically), practice and theory are two different things. So the suggestion that you can just 'push' a few web pages into a black-box model and have your desktop PC make your coffee for you annoyed me enough to post a riposte.
    "I didn't get it. BUUUUUUUUUUUT, you fucking do your thing." - Roujin
    Ninty Code: SW-7904-0771-0996
    This is not what we're talking about. And it's concerning that you're working on these projects without understanding what a validation set is used for, never mind a loss layer, whatever that is.

    We're talking about big data. Huge data. Did you read this (posted for a third time)?

    https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/35179.pdf

    For small datasets you obviously need clean data, but that's not what GPT-3 et al. are doing, and that's what we were talking about. Reinforcement learning, of course, doesn't need a dataset at all, with regard to your coffee-making PC.
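
    To make the RL point concrete, a toy sketch (made-up payout numbers): there's no dataset anywhere, the "experience" is generated by the agent acting and observing rewards.

    import random

    # Toy reinforcement learning: a 3-armed bandit with epsilon-greedy action
    # selection. No dataset; the agent learns purely from interaction.
    true_payout = [0.2, 0.5, 0.8]          # hidden from the agent
    estimates, counts = [0.0] * 3, [0] * 3

    for step in range(10_000):
        if random.random() < 0.1:          # explore occasionally
            arm = random.randrange(3)
        else:                              # otherwise exploit the best estimate
            arm = max(range(3), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < true_payout[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean

    print(estimates)   # ends up close to the hidden payout rates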
    "Plus he wore shorts like a total cunt" - Bob
  • Roujin wrote:
    Interesting, I have requested a copy of everything just to see what turns up and how big it all is.

    Ummm. When I did it I picked the option to split anything over 50GB into separate zip files, as that was the biggest option. It has just sent me links to 6 zip files.

    WHAT THE FUCK.

    EDIT: Confirmed at the download manager: 279.5GB of data is what Google knows about me. This should be interesting.

    Double Edit: Oh yeah, that's 279.5GB of ZIPPED data. RIP my HDD when it gets unpacked.
    "Let me tell you, when yung Rouj had his Senna and Mansell Scalextric, Frank was the goddamn Professor X of F1."
  • Yup.
    "Plus he wore shorts like a total cunt" - Bob
