

Who’s trying to flee? It’s the fascists that have to go
Google is constantly making changes that break it, and the developers fix them quickly… but if your distro doesn’t keep the newest version in its repo, you’d have to install it from the developers’ git repo.
Generally they have a patch for any breaking changes within the day.
How does Newpipe work?
I see you’re channeling the spirits of Social Media
So brave
How would anything similarly public, like a forum, be better?
Forums were the primary way that groups talked with one another before global-scale social media.
They could contain public subforums, but most of the forums I’ve been a part of were not viewable without an account, which was manually approved or required a small payment (so bans had a chance to actually stick).
AI, which is inherently a misrepresentation of truth
Oh, you’re one of those
In the US criminal justice system, sentencing happens after the trial. A mistrial requires rules to be violated during the trial itself.
Also, there were at least 3 people in that room who both hold a Juris Doctor and know the Arizona court rules; one of them was representing the defendant. Not a single one of them had any objection to allowing this statement to be made.
They can’t appeal on this issue because the defense didn’t object to the statement and, therefore, did not preserve the issue for appeal.
AI should absolutely never be allowed in court. Defense is probably stoked about this because it’s obviously a mistrial. Judge should be reprimanded for allowing that shit
You didn’t read the article.
This isn’t grounds for a mistrial; the trial was already over. This happened during the sentencing phase, and the defense didn’t object to the statements.
From the article:
Jessica Gattuso, the victims’ rights attorney who worked with Pelkey’s family, told 404 Media that Arizona’s laws made the AI testimony possible. “We have a victim’s bill of rights,” she said. “[Victims] have the discretion to pick what format they’d like to give the statement. So I didn’t see any issues with the AI and there was no objection. I don’t believe anyone thought there was an issue with it.”
This is just weird uninformed nonsense.
The reason outbursts like gasping or crying can cause a mistrial is that they can unfairly influence a jury, so the rules of evidence do not allow them. This isn’t part of the trial; the jury has already reached a verdict.
Victim impact statements are not evidence and are not governed by the rules of evidence.
It’s ludicrous that this was allowed and honestly is grounds to disbar the judge. If he allows AI nonsense like this, then his courtroom can not be relied upon for fair trials.
More nonsense.
If you were correct, and there were actual legal grounds to object to these statements, then the defense attorney could have objected to them.
Here’s an actual attorney. From the article:
Jessica Gattuso, the victims’ rights attorney who worked with Pelkey’s family, told 404 Media that Arizona’s laws made the AI testimony possible. “We have a victim’s bill of rights,” she said. “[Victims] have the discretion to pick what format they’d like to give the statement. So I didn’t see any issues with the AI and there was no objection. I don’t believe anyone thought there was an issue with it.”
It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.
If they’re responding within a second or two with a giant wall of text it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.
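If you wanted to make that intuition concrete, it’d look something like this toy Python sketch (the WPM figure, the read-time floor, and the flagging rule are all assumptions based on my own typing speed, not any real detector):

```python
# Rough heuristic: flag replies that arrive faster than a fast human
# could plausibly have typed them. All the numbers here are assumptions.

FAST_HUMAN_WPM = 100     # ~90-100 WPM is already a fast typist
READ_TIME_SECONDS = 5    # minimum time to read the parent and start replying

def minimum_human_seconds(reply_text: str) -> float:
    """Lower bound on how long a fast human needs to write this reply."""
    words = len(reply_text.split())
    typing_seconds = words / FAST_HUMAN_WPM * 60
    return READ_TIME_SECONDS + typing_seconds

def looks_too_fast(reply_text: str, latency_seconds: float) -> bool:
    """True if the reply landed faster than a fast human could type it."""
    return latency_seconds < minimum_human_seconds(reply_text)

# A 300-word wall of text posted 2 seconds after the parent comment would
# need ~185 seconds from a 100 WPM human, so it gets flagged.
print(looks_too_fast("word " * 300, latency_seconds=2))   # True
print(looks_too_fast("lol", latency_seconds=30))          # False
```

Even then it’s only a lower bound: the “person staring at the notification screen” can beat the read-time floor on short replies, which is why this alone can’t tell you for sure.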
I would have gotten away with it if it were not for you kids!
I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric. They argue inelegantly. After a long time of talking online, you get used to how people respond to different rhetorical strategies.
In these bot-infested social spaces there seem to be a large number of commenters who argue way too well while also deploying a huge number of fallacies. Any one of them, individually, could just be a person choosing to argue in bad faith; but there seem to be far too many of them compared to the baseline I’ve established over decades of talking to people online.
In addition, what you see in some of these spaces are commenters with a very structured way of arguing. It’s like they’ve picked your comment apart into bullet points and then selected arguments against each point that are technically on topic but subtly misleading.
I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.
For example, if you could somehow measure the number of good-faith comments versus fallacy-laden comments in a given community, there would likely be a normal ratio (say, 10 people who are bad at arguing for every 1 who is good, with 10% of those skilled arguers commenting in bad faith), and you could compare that ratio across various online topics to discover the ones that appear to be botted.
That way you could objectively say that, on the topic of Gun Control on this one specific subreddit, we’re seeing an elevated bad-faith-to-good-faith ratio among commenters and, therefore, that the topic/subreddit is being actively botted with LLMs. This information could be used to deploy anti-bot countermeasures (captchas, for example).
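As a toy example of what that metric could look like in Python (the classifier labels, the baseline ratio, and the 3x alert threshold are all hypothetical numbers for illustration):

```python
# Illustrative sketch of the ratio idea, assuming you already have some
# classifier that labels each comment "good_faith" or "bad_faith".
# The baseline and threshold are made-up numbers.

from collections import Counter

def bad_faith_ratio(labels: list[str]) -> float:
    """Fraction of sampled comments labeled bad_faith."""
    counts = Counter(labels)
    total = counts["good_faith"] + counts["bad_faith"]
    return counts["bad_faith"] / total if total else 0.0

# Hypothetical baseline from "normal" communities: roughly 1 bad-faith
# comment per 10 good-faith ones, i.e. a ratio around 0.09.
BASELINE = 0.09
ALERT_MULTIPLIER = 3  # flag communities at 3x baseline (assumption)

def likely_botted(labels: list[str]) -> bool:
    return bad_faith_ratio(labels) > BASELINE * ALERT_MULTIPLIER

sample = ["good_faith"] * 60 + ["bad_faith"] * 40  # ratio of 0.40
print(likely_botted(sample))  # True -> deploy countermeasures
```

The hard part, of course, is the classifier itself; the ratio math is trivial once you can score comments reliably.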
The research in the OP is a good first step in figuring out how to solve the problem.
That’s in addition to anti-bot measures. I’ve seen some sites that require you to solve a cryptographic hashing problem before granting access. It doesn’t slow a regular person down, but it forces anyone running a bot to spend far more compute per request, which increases the cost to the operator.
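The idea is basically hashcash: the server hands out a challenge, and your browser has to find a nonce whose hash has enough leading zero bits before it gets in. A minimal Python sketch (the difficulty number and challenge format are just assumptions, not any specific site’s scheme):

```python
# Minimal hashcash-style proof of work. Verifying costs one hash;
# solving costs ~2**DIFFICULTY hashes on average, which is the asymmetry
# that makes bot farms pay while one real visitor barely notices.
import hashlib
from itertools import count

DIFFICULTY = 16  # leading zero bits required; tune to taste

def leading_zero_bits(digest: bytes) -> int:
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def solve(challenge: str) -> int:
    """Find a nonce whose hash with the challenge has enough zero bits."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY:
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY

nonce = solve("session-token-123")
print(verify("session-token-123", nonce))  # True
```

One visitor solves this once per session in a fraction of a second; a bot making thousands of requests pays that cost thousands of times over.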
They only label the LLM-generated content as ‘AI’.
All of Google’s search algorithms are “AI” (i.e. machine learning); it’s what made them so effective when they first appeared on the scene. They just use their algorithms and a massive amount of data about you (way more than your comment history) to target you for advertising, including political advertising.
If you don’t want AI-generated content then you shouldn’t use Google; it is built entirely on machine learning whose sole goal is to match you with people who want to buy access to your views.
I just mean that trying to apply the Nazi bar meme to an entire country because people are not immediately surrendering and fleeing the fascists seems kind of counterproductive.