The robots are coming. Restructure the economy. Go.
  • That's still a way off perfect. First vid of woman walking down the street has a lot of weird leg movement going on. Still insane though.
  • Bah.
    monkey wrote:
    Text prompt to video. Another nail in humanity's coffin.
    https://openai.com/sora

  • This is from some Nvidia AI guy, so he has an interest in talking this stuff up, but:
    If you think OpenAI Sora is a creative toy like DALLE, ... think again. Sora is a data-driven physics engine. It is a simulation of many worlds, real or fantastical. The simulator learns intricate rendering, "intuitive" physics, long-horizon reasoning, and semantic grounding, all by some denoising and gradient maths. 
    I won't be surprised if Sora is trained on lots of synthetic data using Unreal Engine 5. It has to be! 

    Let's break down the following video. Prompt: "Photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee."

    - The simulator instantiates two exquisite 3D assets: pirate ships with different decorations. Sora has to solve text-to-3D implicitly in its latent space.
    - The 3D objects are consistently animated as they sail and avoid each other's paths.
    - Fluid dynamics of the coffee, even the foams that form around the ships. Fluid simulation is an entire sub-field of computer graphics, which traditionally requires very complex algorithms and equations.
    - Photorealism, almost like rendering with raytracing.
    - The simulator takes into account the small size of the cup compared to oceans, and applies tilt-shift photography to give a "minuscule" vibe.
    - The semantics of the scene do not exist in the real world, but the engine still implements the correct physical rules that we expect.
    Next up: add more modalities and conditioning, then we have a full data-driven UE that will replace all the hand-engineered graphics pipelines.
    Here's the vid he's talking about
    https://x.com/DrJimFan/status/1758210245799920123?s=20
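    For what it's worth, the "denoising and gradient maths" he mentions can be made concrete with a toy sketch (entirely hypothetical and 1-D, nothing like Sora itself): diffusion models are trained to predict the noise that was mixed into data, purely by gradient descent on a mean-squared error.

```python
import numpy as np

# Toy sketch of the "denoising and gradient maths" behind diffusion models
# (hypothetical, 1-D, nothing like Sora's scale). We mix Gaussian noise into
# clean data and train a linear model, by gradient descent, to predict the
# noise -- the core training objective of a diffusion model.

rng = np.random.default_rng(0)
NOISE_LEVEL = 0.7  # fixed "timestep" for simplicity

def make_batch(n=256):
    x0 = rng.normal(2.0, 0.5, size=n)   # clean data
    eps = rng.normal(size=n)            # the noise we will learn to predict
    xt = np.sqrt(1 - NOISE_LEVEL) * x0 + np.sqrt(NOISE_LEVEL) * eps
    return xt, eps

w, b = 0.0, 0.0   # linear "denoiser": eps_hat = w * xt + b
lr = 0.05
for _ in range(2000):
    xt, eps = make_batch()
    err = (w * xt + b) - eps
    w -= lr * np.mean(err * xt)   # gradient of MSE wrt w
    b -= lr * np.mean(err)        # gradient of MSE wrt b

xt, eps = make_batch(10_000)
mse = float(np.mean(((w * xt + b) - eps) ** 2))
print(f"noise-prediction MSE: {mse:.3f}")  # far below the noise variance of 1.0
```

    A real model swaps the linear denoiser for a huge neural network conditioned on text, but the training loop is recognisably the same shape.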
  • It was only about 7 years ago that Nvidia started tweaking existing GPU chips for AI. They kind of stumbled into it because people were using GTX 1080s for early deep-learning models. All in all they were doing OK as a company.
    "Plus he wore shorts like a total cunt" - Bob
  • Huang is stating that $1 trillion is needed to build enough AI data centres for the next 5 years, but of course he is, seeing as they'll be powered by Nvidia boards. Expect Nvidia's share price to do something nuts again next week. Their earnings report is due and it's going to be silly.
    "Plus he wore shorts like a total cunt" - Bob
    Among them, Tesla is splurging $500 million on Nvidia chips this year to build their Dojo supercomputer for training car AI on video. Fully autonomous driving could be down the road sooner than we think. Meep Meep.
    retroking1981: Fuck this place I'm off to the pub.
  • We should be thankful Nvidia never acquired ARM, although selling the British business abroad in the first place remains an absolute tragedy.
    "Plus he wore shorts like a total cunt" - Bob
  • What happens when you train a bot on the entire Internet and then let it play war games?

    "Plus he wore shorts like a total cunt" - Bob
  • Chaos, makers of V-Ray (probably the biggest 3D render solution) amongst other products, just had their Nintendo Direct-style Unboxed event.

    It might not be of much interest to many here and frankly even I struggled to follow what the fuck they were waffling about but at 10 minutes in they start to describe some great examples of where they are bringing in AI to streamline workflows rather than replace creativity. 

    Will it take people's jobs? It might for some very niche people. The broader picture, though, will be more renders being done quicker, more amendments to projects because they can now be facilitated, more people able to afford renders of their projects, and more people able to add this type of work to their skillsets/offering. Some of this stuff is a bit of a leap forward; for me, I'm particularly excited by the idea of generative materials. Mostly though it's on the same path that 3D visualisation has been on since I've been doing it. Chaos already offer a substantial stock library of materials, for example; the generative stuff just broadens that and speeds up selecting what you want.

    People who are hyper specialised might be getting worried. I've always been a jack of all trades.

  • Totally for Musk in his case against OpenAI. Not sure of his chances and it'll probs need some legal precedent to get it through.
    "Plus he wore shorts like a total cunt" - Bob
  • Can they both lose expensively? That would be ideal.
  • Musk can't lose money, but he can't make any either. He set it up as a non-profit (as did Altman) and didn't want any cash from it.
    "Plus he wore shorts like a total cunt" - Bob
  • There's going to be some interesting questions about sentience in the next few years. 

    "Plus he wore shorts like a total cunt" - Bob
  • It's old stuff in a way and nothing that Blade Runner didn't cover but it's going to raise some interesting moral issues regardless.
    "Plus he wore shorts like a total cunt" - Bob
  • I normally rate her stuff but she's waaaay off on the brain to computer comparison. An undergraduate me made the same comparison. It's wrong.

    Give it another few decades and then we might be there, but I suspect not.
  • We'll see. If a computer can act like it's sentient on all tests and replies, do we call that sentience? How would we know if it is or isn't? Should get interesting soon enough.
    "Plus he wore shorts like a total cunt" - Bob
  • All it's doing is fooling you.
  • But you don't know that unless you can test for it. What if it is and we just think it's simulating it? We're going to need more understanding of what sentience is.
    "Plus he wore shorts like a total cunt" - Bob
  • What we understand as intelligence is basically a set of data-processing functions; our brain and wider nervous system contain all these functions in the same way as our body has a bunch of organs that deal with specific chemical tasks. Some of the functions in our brains operate at a higher level of abstraction than the physical 'circuitry', e.g. consciousness and emotion, but they still emerge from a physical substrate, so in principle there's no reason why we couldn't replicate the mind.

    If we have a strong model of all the functions necessary for a conscious thinking mind then we can implement them in whatever Turing complete architecture we choose to use, and when fed the appropriate input it could match or exceed our capabilities. We don't need to recreate the neurological structure in our biology, though studying its non-deterministic nature and surprising efficiency gives us some helpful ideas.
  • LLMs approach the problem from a very different angle, all top down using statistics over massive amounts of pre-existing data. They don't have a realtime relationship with the world or themselves so struggle to 'wake up'. Indeed their output very much resembles a dream, they are themselves a simulacrum rather than a simulation of a mind.

    With enough data and clever algorithms they can become extraordinarily capable agents, in many ways exceeding the capabilities of most of us, but they don't yet live in the same world as we do so their output will always have that uncanny hallucinatory strangeness to them.

    These things are understood by those on the cutting edge of AI research, if the right series of feedback loops and perhaps modelling of things like activation waves can be implemented then I don't see why human level intelligence couldn't be achievable already. But perhaps HLI is something of a red herring, maybe we'll end up skipping to a step beyond it if we're not limited by biology.
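    The "statistics over massive amounts of pre-existing data" point can be illustrated with a toy bigram model (a hypothetical, absurdly scaled-down stand-in for an LLM): count which word follows which in a corpus, then sample the next word from those counts. There's no model of the world anywhere, just next-word frequencies.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy stand-in for "statistics over pre-existing data": a bigram
# model that picks the next word purely from counted co-occurrences.

corpus = "the ship sails the sea the ship fires the cannon".split()

# Count next-word statistics from the "training data".
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev, rng):
    options = counts[prev]
    if not options:          # dead end: word never seen mid-sentence
        return None
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights)[0]

rng = random.Random(0)
out = ["the"]
for _ in range(6):
    nxt = next_word(out[-1], rng)
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```

    Every word pair it emits occurred in the corpus, yet the sentences it strings together can be ones nobody ever wrote, which is the dream-like, simulacrum quality in miniature.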
  • @dante

    Well exactly. It's going to get very weird very soon.
    "Plus he wore shorts like a total cunt" - Bob
  • LLMs approach the problem from a very different angle, all top down using statistics over massive amounts of pre-existing data. They don't have a realtime relationship with the world or themselves so struggle to 'wake up'. Indeed their output very much resembles a dream, they are themselves a simulacrum rather than a simulation of a mind.

    With enough data and clever algorithms they can become extraordinarily capable agents, in many ways exceeding the capabilities of most of us, but they don't yet live in the same world as we do so their output will always have that uncanny hallucinatory strangeness to them.

    These things are understood by those on the cutting edge of AI research, if the right series of feedback loops and perhaps modelling of things like activation waves can be implemented then I don't see why human level intelligence couldn't be achievable already. But perhaps HLI is something of a red herring, maybe we'll end up skipping to a step beyond it if we're not limited by biology.

    It doesn't require clever algorithms. That's the difference. It does require complexity but seemingly only via connected numbers. The matrix output is the algorithm and we don't understand it.

    "Plus he wore shorts like a total cunt" - Bob
  • I'm mainly referring to potential ways to get realtime feedback from itself and the world, rather than the prompt/response paradigm. Perception basically. We're conscious because we perceive ourselves perceiving, continually modelling the next moment in time and remembering the previous.
  • I've posted this in a couple of threads but it really is a mindblower. You might find it interesting.

    https://www.amazon.co.uk/Case-Against-Reality-Evolution-Truth/dp/0393254690
    "Plus he wore shorts like a total cunt" - Bob
  • This is the general idea but it's better explained in the book. 

    "Plus he wore shorts like a total cunt" - Bob
  • Watched the video. The perception ≠ reality thing is pretty obvious to me by now tbh, so I found it a bit tedious. I've previously come across him in relation to his ideas of consciousness and this discussion between him and Joscha Bach -

    I don't rate Hoffman highly, the perception stuff is fine but the conscious realism and spacetime stuff is in 'not even wrong' territory. Bach's elucidations on the other hand are the most thorough and watertight that I've come across.
  • What do you mean 'not even wrong'?
    "Plus he wore shorts like a total cunt" - Bob
