• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: March 4th, 2024


  • I think the tariffs might actually have one niche benefit of mitigating scalping. It’s going to be even more expensive for scalpers to get their hands on an actual product now, and people are already struggling to justify the increased retail costs, as it is. I’d have to imagine it’ll be much harder for scalpers to find a buyer these days, as anybody who can afford scalpers’ fees plus tariffs likely wouldn’t have any issue getting their hands on one, to begin with.








  • Just for what it’s worth, you don’t need CSAM in the training material for a generative AI to produce CSAM. The models know what children look like, and what naked adults look like, so they can readily extrapolate from there.

    The fact that you don’t need to supply any real CSAM in the training material is the reasoning being offered in support of AI CSAM. It’s gross, but it’s also hard to argue with. We allow all kinds of illegal subjects to be depicted in porn: incest, rape, murder, etc. While most mainstream sites won’t host that kind of material, none of it is technically outlawed - partly because of freedom of speech and artistic expression and yadda yadda, but also partly because it comes with the understanding that it’s a fake, made-for-film production and that nobody involved had their consent violated; it’s okay because none of it was actually rape, incest, or murder. And if AI CSAM can be made without violating the consent of any real person, then what makes it different?

    I don’t know how I feel about it, myself. The idea of “ethically-sourced” CSAM doesn’t exactly sit right with me, but if it’s possible to make it in a truly victimless manner, then I find it hard to argue for outright banning something just because I don’t like it.






  • Chozo@fedia.io to Technology@lemmy.world · *Permanently Deleted* · 1 month ago

    I assume that this is using a highly-curated, custom model, and not some off-the-shelf GPT that just anybody can use, so it probably won’t be suggesting that patients eat glue or anything crazy.

    From what I can tell, this sounds like a fairly valid use for a chatbot: handling a lot of the tedious tasks that nurses are charged with. Most of what it seems to be doing, any untrained receptionist could also do (like scheduling appointments or reading dosage instructions), so this would free up nurses for genuinely important tasks like administering medications and triaging patients. It doesn’t seem like it’s going to be issuing prescriptions or doing anything where real judgement would be necessary.

    As long as hospital staff are realistic about what tasks the chatbot should handle, this actually seems like a pretty decent place to implement a (properly-tuned) LLM.