• Cocopanda@futurology.today · ↑17 · 9 hours ago

    I mean, Reddit was botted to death after the Jon Stewart event in DC. People and corporations realized how powerful Reddit was. Sucks that the site didn’t try to stop it. Now AI just makes it easier.

      • 𝚝𝚛𝚔@aussie.zone · ↑1 · 2 hours ago

        Fair point about AI-generated comments. What’s your take on how this affects online discussions? Are we losing genuine interactions or gaining new insights?

        • taladar@sh.itjust.works · ↑1 · 45 minutes ago

          On political topics it is very likely that we just gain a few hundred more repetitions of the same arguments that were already going in circles before.

      • ProdigalFrog@slrpnk.net · ↑1 · 2 hours ago

        At least here we have Fediseer to vet instances, and the ability to vet each individual sign-up.

        I think eventually, once we’re targeted more heavily, we’ll have to circle the wagons, so to speak, and limit communication to only the more carefully moderated instances that root out the bots.

  • oxysis@lemmy.blahaj.zone · ↑20 ↓3 · 11 hours ago

    This is deeply unethical. When doing research you need to respect the people who participate, and you have to respect their story. Using a regurgitative artificial idiot (RAI) to change their minds respects neither them nor their story.

    The people being experimented on were not compensated for their time or the work they contributed. While compensation isn’t required, it is good research practice not to actively burn bridges with people, so that they will want to participate in future studies.

    These people were also never told they were participating in a study, nor were they given the choice to withdraw their contributions at will. That alone makes the study unpublishable, since the data was not gathered with fucking consent.

    This isn’t even taking into account any of the other things that cross ethical lines. All the “researchers” involved should never be allowed to conduct or participate in a study of any kind again. Their university should be fined and heavily scrutinized for its role in enabling this shit. These assholes have done damage to researchers globally, who will now have a harder time pitching real studies to potential participants who remember this story and how “researchers” took advantage of unknowing individuals. Shame on these people; I hope they face real consequences.

    • tal@lemmy.today · ↑1 ↓2 · edited · 4 hours ago

      This is deeply unethical,

      I feel like maybe we’ve gone too far on research ethics restrictions.

      We couldn’t do the Milgram experiment today under modern ethical guidelines. I think that it was important that it was performed, even at the cost of the stress that participants experienced. And I doubt that it is the last experiment for which that is true.

      If we want to mandate some kind of careful scrutiny of such experiments, and some after-the-fact compensation for participants subjected to trauma-producing deception, maybe that’d be reasonable.

      That doesn’t mean every study that violates present ethics standards should be greenlighted, but I do think that the present bar is too high.

      • iknowitwheniseeit@lemmynsfw.com · ↑2 · 2 hours ago

        From the link you provide:

        In 2012, Australian psychologist Gina Perry investigated Milgram’s data and writings and concluded that Milgram had manipulated the results, and that there was a “troubling mismatch between (published) descriptions of the experiment and evidence of what actually transpired.” She wrote that “only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter”.[26][27] She described her findings as “an unexpected outcome” that “leaves social psychology in a difficult situation.”[28]

        I mean, maybe it shouldn’t have been done?

    • restingboredface@sh.itjust.works · ↑7 ↓3 · 9 hours ago

      These researchers conducted their study in a totally unethical manner, and they deserve to be stripped of tenure and lose any research funding they have.

      It already sounds like the university is preparing to just protect them and act like it’s no big deal, which is discouraging but I suppose not surprising.

  • BossDj@lemm.ee · ↑7 · edited · 9 hours ago

    What they should do is convince a smaller subsection of Reddit users to break off to a new site, maybe entice them with promises of a FOSS platform. Maybe a handful of real people and all the rest LLM bots. They’ll never know.

  • GreenKnight23@lemmy.world · ↑14 · 11 hours ago

    I haven’t seen these questions asked:

    How can the results be trusted when there’s no way to verify the bots were actually interacting with real humans?

    What’s the percentage of bot-to-bot contamination?

    This study looks less like actual science and more like a hacky farce that’s only meant to draw attention to how easily we can be manipulated.

    Any professional who puts their name on this steaming pile should be ashamed of themselves.

  • TootSweet@lemmy.world · ↑14 · 11 hours ago

    Reddit: “Nobody gets to secretly experiment on Reddit users with AI-generated comments but us!”

    • SharkAttak@kbin.melroy.org · ↑2 · 1 hour ago

      Feels like a shitty sci-fi where they discover the robot impostors, when the majority of people around them are already impostors, just from different brands.

    • Zenoctate@lemmy.world · ↑1 · 7 hours ago

      They literally have some AI thing called “Answers”, which is Reddit’s own shitty practice of pushing AI.

  • jabathekek@sopuli.xyz · ↑20 · 13 hours ago

    To me it was kind of obvious. There were a bunch of accounts that would comment these weird sentences, and all of them had variants of JohnSmith1234 as their username. Part of the reason I left tbh.

    • 9point6@lemmy.world · ↑3 · 3 hours ago

      I was gonna say, anyone with half a brain who has poked their head into Reddit over the past year or two will have seen a shitload of obvious bots in the comments.

  • Sixty@sh.itjust.works · ↑35 ↓2 · edited · 14 hours ago

    Worthless research.

    That subreddit bans you for accusing others of speaking in bad faith or for using ChatGPT.

    Even if a user called it out, they’d be censored.

    Edit: you know what, it’s unlikely they didn’t read the sidebar. So, worse than worthless. Bad-faith disinfo.

    • yesman@lemmy.world · ↑19 · 14 hours ago

      accusing others of speaking in bad faith

      You’re not allowed to talk about bad faith in a debate forum? I don’t understand. How could that do anything besides shield the sealions, JAQoffs, and grifters?

      And please don’t tell me it’s about “civility”. Bad faith is the civil accusation when the alternative is that your debate partner is a fool.

      • Sixty@sh.itjust.works · ↑15 · edited · 13 hours ago

        I won’t tell you about civility, because

        How could that do anything besides shield the sealions, JAQoffs, and grifters?

        Not shield, but amplify.

        That’s the point of the subreddit. I’m not defending them if that’s at all how I came across.

        ChatGPT debate threads are plaguing /r/debateanatheist too. Mods are silent on the users asking to ban this disgusting behavior.

        I didn’t think it’d be a problem so quickly, but the chuds and theists latched onto ChatGPT instantly for use in debate forums.

        • taladar@sh.itjust.works · ↑8 · edited · 13 hours ago

          To be fair, for the gish-gallop style of bad-faith argument that religious people like to use, LLMs are probably a good match. If all you want is a high number of arguments, it is probably easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It’s not as if they ever cared whether their arguments were any good anyway.

          • Sixty@sh.itjust.works · ↑5 · 12 hours ago

            I agree, and I recognized that. I’m more emotionally upset about it tbh. The debates aren’t for the debaters; they’re meant to disillusion and remove indoctrinated fears from those on the fence who are willing to read them. That’s the oft-repeated answer there when people ask “what’s the point, same stupid debate for centuries.” Well, religions unfortunately persist, and haven’t lost any ground globally. Gained, actually. Not our fault they have no new ideas.

  • Dr. Bob@lemmy.ca · ↑10 · edited · 13 hours ago

    With all the bots on the site, why complain about these ones?

    Edit: auto$#&"$correct

  • Neuromorph@lemm.ee · ↑7 · 13 hours ago

    Good, I spent at least the last 3 years on Reddit making asinine comments, phrases, and punctuation to throw off any AI botS.