• JoshCodes@programming.dev · 5 points · 6 days ago

    The vulnerability is the scary part, not the exploit code. It’s like someone saying they can walk through an open door if they’re told where it is.

    • Ajen@sh.itjust.works · 4 points · 6 days ago

      Using your analogy, this is more like telling someone there’s an unlocked door and asking them to find it on their own using blueprints.

      Not a perfect analogy, but they didn’t tell the AI where the vulnerability was in the code. They just gave it the CVE description (which is intentionally vague) and a set of patches from that time period that included a lot of irrelevant changes.

      • JoshCodes@programming.dev · 1 point · 6 days ago

        I’m referencing this:

        Keely told GPT-4 to generate a Python script that compared – diff’ed, basically – the vulnerable and patched portions of code in the vulnerable Erlang/OTP SSH server.

        “Without the diff of the patch, GPT would not have come close to being able to write a working proof-of-concept for it,” Keely told The Register.

        Before it was told to compare the diff and extrapolate the answer, it wrote a fuzzer, implying it didn’t know how to get to a solution on its own either.

        “So if you give it the neighbourhood of the building with the open door and a photo of the doorway that’s open, then drive it to the neighbourhood when it tries to go to the mall (it’s seen a lot of open doors there), it can trip and fall right before walking through the door.”
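        The diffing step Keely describes could be sketched with Python’s standard `difflib` module. This is only an illustrative guess at the approach, not the actual script GPT-4 produced; the Erlang snippets below are toy stand-ins (the real comparison ran over Erlang/OTP SSH server source).

        ```python
        import difflib

        # Hypothetical stand-ins for the vulnerable and patched source files.
        vulnerable = """\
        handle_msg(Msg, State) ->
            process(Msg, State).
        """.splitlines(keepends=True)

        patched = """\
        handle_msg(Msg, State) ->
            ok = check_auth(State),
            process(Msg, State).
        """.splitlines(keepends=True)

        # A unified diff isolates exactly what the patch changed -- the lines
        # an exploit writer would focus on to locate the vulnerability.
        diff = list(difflib.unified_diff(vulnerable, patched,
                                         fromfile="vulnerable", tofile="patched"))
        print("".join(diff), end="")
        ```

        The point of the step is that the diff collapses a large codebase down to the handful of changed lines, which is what made the PoC tractable.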