• Onno (VK6FLAB)@lemmy.radio · 4 days ago

    I think it was Guy Kawasaki in 1997 who introduced me to the idea of eating your own dog food. In other words, use your own product.

    Given how much Altman is pushing this dog and pony show, I’m happy to trust ChatGPT with his medical fate, which will no doubt reveal just how much this AI is Assumed Intelligence, or in less technical terms, snake oil.

    • criss_cross@lemmy.world · 4 days ago

      I want him to live in the same world we do, where any time you need help with anything you have to wade through multiple layers of poorly scripted chatbots just to reach a real person, who then has to work through their own poorly written script and escalate up five chains before anyone can do the thing you actually needed help with.

  • Nougat@fedia.io · 4 days ago

    OpenAI CEO Sam Altman thinks some jobs will be ‘totally, totally gone’ thanks to cocaine, but he still wouldn’t trust cocaine with his ‘medical fate’

  • SoupBrick@pawb.social · 4 days ago

    “Some jobs will be totally, totally gone (but not mine). You can totally trust AI to make the same or better medical decisions than professionals (but I wouldn’t)!”

    • Bronzebeard@lemmy.zip · 4 days ago

      LLMs are text generators, not computational or analytical engines. If what you need is words (say, copywriting), one could conceivably do most of that job, but it is the wrong tool for making actual decisions. Other machine learning models handle those tasks better than an LLM, and conceivably the two could eventually be run together, with the LLM doing the language processing and a separate model doing the data analysis. They're not quite there yet.
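
      A minimal sketch of that division of labor, in Python: the actual analysis is a deterministic, testable routine, and the language model only phrases the already-verified result. The `call_llm` helper is a hypothetical stand-in for whatever chat API you'd use, not a real library call, and the threshold is invented for illustration.

      ```python
      # Deterministic analysis first; the LLM only rephrases verified numbers.
      import statistics

      def analyze_readings(readings: list[float]) -> dict:
          # Conventional, testable computation -- no LLM involved.
          return {
              "mean": statistics.mean(readings),
              "stdev": statistics.pstdev(readings),
              "flagged": any(r > 140 for r in readings),  # illustrative threshold
          }

      def call_llm(prompt: str) -> str:
          # Hypothetical stand-in for a chat-model API; here it just echoes.
          return f"(LLM-drafted wording for: {prompt})"

      result = analyze_readings([128.0, 135.5, 142.0, 131.0])

      # The LLM turns checked numbers into prose; it never produces them.
      summary = call_llm(
          f"Rephrase for a patient: mean {result['mean']:.1f}, "
          f"flagged={result['flagged']}"
      )
      print(summary)
      ```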

  • reddig33@lemmy.world · 4 days ago

    I wouldn’t trust ChatGPT either, but I don’t think it’s designed for medical uses to begin with.

    There are AI systems used in medical fields, and they can be genuinely valuable for surfacing connections we haven't found before. But ChatGPT ain't it.

    • SpaceNoodle@lemmy.world · 4 days ago

      Expert systems are a great example of AI well suited to medical applications. A hallucinating chatbot has very limited utility, even where such technology is already in use; for example, the DAX Copilot used by one of my wife's physicians likes to invent inaccurate details.
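
      As a toy illustration of the expert-system style (the rules and facts here are invented, not real clinical guidance): every conclusion comes from an explicit, auditable if-then chain, which is exactly what a generative chatbot doesn't give you.

      ```python
      # Tiny forward-chaining rule engine: fire any rule whose premises all hold.
      RULES = [
          ({"fever", "cough"}, "possible_respiratory_infection"),
          ({"possible_respiratory_infection", "low_oxygen"}, "recommend_chest_xray"),
      ]

      def forward_chain(facts: set[str]) -> set[str]:
          derived = set(facts)
          changed = True
          while changed:
              changed = False
              for premises, conclusion in RULES:
                  if premises <= derived and conclusion not in derived:
                      derived.add(conclusion)
                      changed = True
          return derived

      print(forward_chain({"fever", "cough", "low_oxygen"}))
      # Includes 'recommend_chest_xray'; every inference step can be audited.
      ```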

  • xep@fedia.io · 4 days ago

    I’ve stopped caring about anything this waste of carbon dioxide says.

  • ShittyBeatlesFCPres@lemmy.world · 4 days ago

    It’s definitely going to put sci-fi authors who write about A.G.I. out of work, or at least make them add 1,000 years to their timelines.