Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 0 Posts
  • 34 Comments
Joined 2 years ago
Cake day: June 23rd, 2023


  • The US is not “openly bankrolling” any AI companies. The closest thing would be OpenAI’s recent contract with the military:

    https://www.theverge.com/news/688041/openai-us-defense-department-200-million-contract

    Whereas China is definitely funding AI efforts in their country because they’re communist (sort of). That’s literally how communism works: The government funds stuff.

    China isn’t really communist in the traditional sense, but they definitely use government funds to prop up businesses they feel will give the country a strategic advantage. They do this directly (here’s a check to pay people) and indirectly (we’ll subsidize all shipping for your business and make sure you get sweetheart deals with other businesses you rely on).

    The Chinese government is in the business of picking winners and losers in the market and they’re open about it. It’s not a secret. That’s literally how their government is set up.

    The US has ways of picking winners but they’re not nearly as direct and there’s a whole lot of rules that must be followed or competitors will sue and win. Then the whole process falls apart.

    TL;DR: You’re directly wrong and you’re framing the story wrong as well.

    Aside: If OpenAI goes bankrupt after wasting billions of rich investor dollars the citizens of the US will not have lost billions as a result. Whereas in China…



  • Riskable@programming.dev to Technology@lemmy.world · Teachers Are Not OK · 28 days ago

    Correction: Education is not OK.

    AI is just giving poor kids the same opportunities rich kids have had for decades: opportunities for cheating a system that was never designed to give students the best education possible, but to bring them up to speed on the bare minimum required to become factory workers.

    Except we don’t have very many factories any more. And we don’t have jobs for all these graduates that pay a living wage.

    The banks are going to have to get involved soon. They’re going to have to figure out a way to load up working-age people with long term debt without college being involved.



  • To be fair, the world of JavaScript is such a clusterfuck… Can you really blame the LLM for needing constant reminders about the specifics of your project?

    When a programming language has five hundred bazillion absolutely terrible ways of accomplishing a given thing—and endless absolutely awful code examples on the Internet to “learn from”—you’re just asking for trouble. Not just from trying to get an LLM to produce what you want but also trying to get humans to do it.

    This is why LLMs are so fucking good at writing Rust and Python: There’s only so many ways to do a thing, and the larger community pretty much always uses the same solutions.

    JavaScript? How can it even keep up? You’re using yarn today but in a year you’ll probably be like, “fuuuuck, this code is garbage… I need to convert this all to [new thing].”




  • I’m not convinced that humans don’t reason in a similar fashion. When I’m asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.

    Think about “normal” programming: An experienced developer (who’s self-trained on dozens of enterprise code bases) doesn’t have to think much at all about 90% of what they’re coding. It’s all bog-standard bullshit, so they end up copying and pasting from previous work, Stack Overflow, etc., because it’s nothing special.

    The remaining 10% is “the hard stuff”. They have to read documentation, search the Internet, and then—after all that effort to avoid having to think—they sigh and actually start thinking in order to program the thing they need.

    LLMs go through similar motions behind the scenes! Probably because they were created by software developers, but they still fail at that last 10%: The stuff that requires actual thinking.

    Eventually someone is going to figure out how to auto-generate LoRAs based on test cases combined with trial and error that then get used by the AI model to improve itself and that is when people are going to be like, “Oh shit! Maybe AGI really is imminent!” But again, they’ll be wrong.

    AGI won’t happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.
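    The “auto-generate LoRAs from test cases plus trial and error” idea above can be caricatured as a simple feedback loop. This is a deliberately toy Python sketch, not a real training pipeline: the “model” is just a line y = a*x + b, the “adapter” is a random weight delta, and the test cases stand in for the spec the model must satisfy. Everything here (names, the hill-climbing acceptance rule) is my illustration, not anything an actual LoRA library does.

    ```python
    import random

    # Toy stand-in for a model: y = a*x + b. In a real system the weights
    # would be a neural network plus a LoRA adapter; here they're just (a, b).
    def predict(weights, x):
        a, b = weights
        return a * x + b

    # Test cases play the role of the spec the model must learn to satisfy.
    TESTS = [(0, 1), (1, 3), (2, 5)]  # input/target pairs for y = 2x + 1

    def loss(weights):
        """Total squared error over the test cases."""
        return sum((predict(weights, x) - y) ** 2 for x, y in TESTS)

    def improve(weights, rounds=5000, seed=42):
        """Trial-and-error loop: propose a random small 'adapter' delta
        and keep it only if the test-case loss goes down."""
        rng = random.Random(seed)
        best = weights
        for _ in range(rounds):
            delta = (rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5))
            candidate = (best[0] + delta[0], best[1] + delta[1])
            if loss(candidate) < loss(best):
                best = candidate
        return best

    tuned = improve((0.0, 0.0))
    print(loss((0.0, 0.0)), loss(tuned))  # the loss drops substantially
    ```

    The interesting part isn’t the math, it’s the loop shape: nothing “thinks”, the system just keeps whatever random change scores better against the tests — which is exactly why this kind of self-improvement is cheap to imagine and expensive to actually run at model scale.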







  • I just wrote a novel (finished first draft yesterday). There’s no way I can afford professional audiobook voice actors—especially for a hobby project.

    What I was planning on doing was handling the audiobook on my own—using an AI voice changer for all the different characters.

    That’s where I think AI voices can shine: If someone can act they can use a voice changer to handle more characters and introduce a great variety of different styles of speech while retaining the careful pauses and dramatic elements (e.g. a voice cracking during an emotional scene) that you’d get from regular voice acting.

    I’m not saying I will be able to pull that off but surely it will be better than just telling Amazon’s AI, “Hey, go read my book.”




  • If you hired someone to copy Ghibli’s style, then fed that into an AI as training data, it would completely negate your entire argument.

    It is not illegal for an artist to copy someone else’s style. They can’t copy another artist’s work—that’s a derivative—but copying their style is perfectly legal. You can’t copyright a style.

    All of that is irrelevant, however. The argument is that training an AI with anything is—somehow—a violation of copyright. It is not. It is absolutely 100% not a violation of copyright to do that!

    Copyright is all about distribution rights. Anyone can download whatever TF they want and they’re not violating anyone’s copyright. It’s the entity that distributed the copyrighted work to that person that violated the law. Therefore, Meta, OpenAI, et al can host enormous libraries of copyrighted data in their data centers and use that to train their AI. It’s not illegal at all.

    When some AI model produces a work that’s so similar to an original that anyone would recognize it (“yeah, that’s from Spirited Away”), then yes: They violated Ghibli’s copyright.

    If the model produces an image of some random person in the style of Studio Ghibli that is not violating anyone’s copyright. It is not illegal nor is it immoral. No one is deprived of anything in such a transaction.


  • Riskable@programming.dev to Technology@lemmy.world · *Permanently Deleted* · 3 months ago

    > I think your understanding of generative AI is incorrect. It’s not just “logic and RNG”…

    If it runs on a computer, it’s literally “just logic and RNG”. It’s all transistors, memory, and an RNG.

    The data used to train an AI model is copyrighted, sure, but it’s practically impossible for anything made in the past 100 years not to be. Even public domain works had copyright at some point.

    > if any of the training data is copyrighted, then attribution must be given, or at the very least permission to use this data must be given by the current copyright holder.

    This is not correct. Every artist ever has been trained with copyrighted works, yet they don’t have to cite every single picture they’ve seen or book they’ve ever read whenever they produce something.


  • Riskable@programming.dev to Technology@lemmy.world · *Permanently Deleted* · 3 months ago

    I’m still not getting it. What does generative AI have to do with attribution? Like, at all.

    I can train a model on a billion pictures from open, free sources that were specifically donated for that purpose and it’ll be able to generate realistic pictures of those things with infinite variation. Every time it generates an image it’s just using logic and RNG to come up with options.

    Do we attribute the images to the RNG god or something? It doesn’t make sense that attribution comes into play here.