• Deflated0ne@lemmy.world · 1 month ago

    It is lazy. It will be sloppy, shoddily made garbage.

    The shame is entirely on the one who chose to use the slop machine in the first place.

    • krimson@lemmy.world · 1 month ago

      I laugh at all these desperate “AI good!” articles. Maybe the bubble will pop sooner than I thought.

        • Deflated0ne@lemmy.world · 30 days ago

        It’s gonna suck. Because of course they’re gonna get bailed out. It’s gonna be “too big to fail” all over again.

        Because “national security” or some such nonsense.

    • pheonixdown@sh.itjust.works · 30 days ago

      The way I see it, the usefulness of straight LLM-generated text is inversely proportional to the importance of the work. If someone is asking for text for the sake of text and can’t be convinced otherwise, give 'em slop.

      But I also feel that properly trained & prompted LLM-generated text is a force multiplier when combined with revision and fact checking, with the benefit also varying inversely with your experience and familiarity with the topic.

  • YappyMonotheist@lemmy.world · 1 month ago

    If it’s not shameful, why not disclose it?

    Regardless, I see its uses in providing structure for those who have trouble expressing themselves competently, but not in providing content. You should always check all the sources the LLM cites to make sure it’s not just nonsense. Basically, if someone else (or even you yourself, with a bit more time) could’ve written it, I guess it’s “okay”.

    • fullsquare@awful.systems · 1 month ago

      if that task is offloaded to spicy autocomplete, any and all learning of that skill is avoided, so it’s not mega useful

      • MagicShel@lemmy.zip · 1 month ago

        That presumes that is how people are using AI. I use it all the time, but AI never replaces my own judgement or voice. It’s useful. It’s not life-changing.

  • cerebralhawks@lemmy.dbzer0.com · 1 month ago

    It’s going to be plagiarism, so yes, it is.

    I’ve asked Copilot at work for word help. I’ll ask it something like, what’s a good word that sounds more professional than some other word? And it’ll give me a few choices and I’ll pick one. But that’s about it.

    They’re useful, but I won’t let them do my work for me, or give them anything they can use (we have a corporate policy against that, and yet IT leaves Copilot installed/doesn’t switch to something like Linux).

  • Poayjay@lemmy.world · 1 month ago

    Imagine if the AI bots learn to target and prioritize content not generated by AI (if they aren’t already). Labeling your content as organic makes it so much more appetizing for bots.