

I know a guy that can’t speak anymore. He only says, “MIPI!”
The US is not “openly bankrolling” any AI companies. The closest thing would be OpenAI’s recent contract with the military:
https://www.theverge.com/news/688041/openai-us-defense-department-200-million-contract
Whereas China is definitely funding AI efforts in their country because they’re communist (sort of). That’s literally how communism works: The government funds stuff.
China isn’t really communist in the traditional sense, but they definitely use government funds to prop up businesses they feel will give the country a strategic advantage. They do this directly (“here’s a check to pay people”) and indirectly (“we’ll subsidize all shipping for your business and make sure you get sweetheart deals with other businesses you rely on”).
The Chinese government is in the business of picking winners and losers in the market and they’re open about it. It’s not a secret. That’s literally how their government is set up.
The US has ways of picking winners but they’re not nearly as direct and there’s a whole lot of rules that must be followed or competitors will sue and win. Then the whole process falls apart.
TL;DR: You’re flat-out wrong, and you’re framing the story wrong as well.
Aside: If OpenAI goes bankrupt after wasting billions of rich investor dollars the citizens of the US will not have lost billions as a result. Whereas in China…
The AI said that trying to reason with you is a waste of precious tokens.
Correction: Education is not OK.
AI is just giving poor kids the same opportunities rich kids have had for decades: opportunities for cheating a system that was designed specifically not to give students the best education possible, but to bring them up to speed on the bare minimum required to become factory workers.
Except we don’t have very many factories any more. And we don’t have jobs for all these graduates that pay a living wage.
The banks are going to have to get involved soon. They’re going to have to figure out a way to load up working-age people with long term debt without college being involved.
To me, this is like saying, “4chan has turned into a cesspool!” Yeah: It was like that from the start. YOU were the ones that assumed it was ever safe!
You’re posting stuff on the public Internet to a website for adults where literally anyone can sign up and comment FFS.
If you want good moderation you need community moderation from people in that community. Not some giant/evil megacorp!
There are all sorts of tools and platforms that do this properly, easily, and for free. If you don’t like Meta’s websites, move off of them already!
To be fair, the world of JavaScript is such a clusterfuck… Can you really blame the LLM for needing constant reminders about the specifics of your project?
When a programming language has five hundred bazillion absolutely terrible ways of accomplishing a given thing (and endless absolutely awful code examples on the Internet to “learn from”), you’re just asking for trouble. Not just from trying to get an LLM to produce what you want, but also from trying to get humans to do it.
This is why LLMs are so fucking good at writing Rust and Python: There’s only so many ways to do a thing, and the larger community pretty much always uses the same solutions.
JavaScript? How can it even keep up? You’re using yarn today, but in a year you’ll probably be like, “fuuuuck, this code is garbage… I need to convert this all to [new thing].”
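To illustrate the Python side of that contrast, here’s a toy sketch (the task and names are mine, purely for illustration): ask a dozen Python developers, or an LLM, to count word frequencies in a file, and you’ll get something very close to this exact shape back, because the community converged on one idiom.

```python
from collections import Counter

def word_counts(path: str) -> Counter:
    """Count word frequencies in a text file, the "one obvious way"."""
    # Context manager for the file, Counter for the tally: the idiom
    # virtually everyone (human or LLM) converges on.
    with open(path, encoding="utf-8") as f:
        return Counter(f.read().split())
```

Ask for the same thing in JavaScript and the answer could come back built on any of a half-dozen module systems, runtimes, and package managers.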
Define “reasoning”. For decades, software developers have been writing code with conditionals. That’s “reasoning” (see the toy sketch below).
LLMs are “reasoning”… They’re just not doing human-like reasoning.
That just means they’d be great CEOs!
According to Wall Street.
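To make the conditionals point concrete, here’s a toy sketch (the function and scenario are invented for illustration): a plain chain of if-statements “deciding” what to do next, which is all that loose definition of “reasoning” requires.

```python
def should_retry(status_code: int, attempts: int) -> bool:
    # Plain conditionals "reasoning" about what to do next. By the
    # loose definition above this counts; it's just not human-like.
    if attempts >= 3:
        return False           # give up after three tries
    if status_code == 429:
        return True            # rate-limited: back off and retry
    return status_code >= 500  # server errors are worth retrying
```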
I’m not convinced that humans don’t reason in a similar fashion. When I’m asked to produce pointless bullshit at work, my brain puts in about the same level of reasoning as an LLM does.
Think about “normal” programming: An experienced developer (self-trained on dozens of enterprise code bases) doesn’t have to think much at all about 90% of what they’re coding. It’s all bog-standard bullshit, so they end up copying and pasting from previous work, Stack Overflow, etc., because it’s nothing special.
The remaining 10% is “the hard stuff”. They have to read documentation, search the Internet, and then, after all that effort to avoid having to think, they sigh and actually start thinking in order to program the thing they need.
LLMs go through similar motions behind the scenes! Probably because they were created by software developers. But they still fail at that last 10%: the stuff that requires actual thinking.
Eventually someone is going to figure out how to auto-generate LoRAs based on test cases combined with trial and error that then get used by the AI model to improve itself and that is when people are going to be like, “Oh shit! Maybe AGI really is imminent!” But again, they’ll be wrong.
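Since nothing like this exists yet, here’s a purely speculative sketch of the loop being described (every function here is a stand-in I made up, not a real LoRA or training API): propose a candidate adapter, score it against the test cases, and keep it only when it improves on the current best.

```python
import random

def run_test_suite(candidate: list[float]) -> float:
    """Stand-in for scoring the model plus a candidate LoRA on test cases."""
    target = [0.2, 0.7, 0.4]  # pretend these weights "pass" every test
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def propose_candidate(best: list[float]) -> list[float]:
    """Stand-in for fine-tuning a fresh LoRA: jitter the current best."""
    return [w + random.gauss(0, 0.1) for w in best]

# Trial and error: keep a candidate only when the tests score better.
best = [random.random() for _ in range(3)]
best_score = run_test_suite(best)
for _ in range(200):
    cand = propose_candidate(best)
    score = run_test_suite(cand)
    if score > best_score:
        best, best_score = cand, score
```

That’s just hill climbing with extra steps, but swap the stand-ins for real LoRA fine-tuning and real test suites and you get the self-improvement loop described above.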
AGI won’t happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen, you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.
The only reason we’re not there yet is memory limitations.
Eventually some company will come out with AI hardware that lets you link up a petabyte of ultra fast memory to chips that contain a million parallel matrix math processors. Then we’ll have an entirely new problem: AI that trains itself incorrectly too quickly.
Just you watch: The next big breakthrough in AI tech will come around 2032-2035 (when the hardware is available) and everyone will be bitching that “chain reasoning” (or whatever the term turns out to be) isn’t as smart as everyone thinks it is.
The big difference is that updates in Linux happen in the background and aren’t very intrusive. Your hard drive will be used here and there as it unpacks packages, but the difference between, say, apt and Windows Update is stark. Windows Update slows everything down quite a lot.
her goal isn’t to get them to stop, it’s to get them to recognize what garbage writing is and how to fix it so it isn’t garbage anymore.
I wish English teachers did this instead of… Whatever TF they’re doing instead.
This is something they should’ve been doing all along. Long before the invention of LLMs or computers.
Move to Japan 👍
I use Kubuntu with KDE Connect. It lets me control everything using my phone 👍
I can play/pause whatever from my lock screen and can use my phone’s keyboard like it’s connected to the computer. It’s fantastic 👍
I just wrote a novel (finished first draft yesterday). There’s no way I can afford professional audiobook voice actors—especially for a hobby project.
What I was planning on doing was handling the audiobook on my own—using an AI voice changer for all the different characters.
That’s where I think AI voices can shine: If someone can act, they can use a voice changer to handle more characters and introduce a wide variety of speech styles while retaining the careful pauses and dramatic elements (e.g., a voice cracking during an emotional scene) that you’d get from regular voice acting.
I’m not saying I will be able to pull that off but surely it will be better than just telling Amazon’s AI, “Hey, go read my book.”
bows in solemn gesture
Are there any other websites that still let you put in your AIM screen name and ICQ number? Or brag about your super low user ID? 19437 BTW 🤣
If you hired someone to copy Ghibli’s style, then fed that into an AI as training data, it would completely negate your entire argument.
It is not illegal for an artist to copy someone else’s style. They can’t copy another artist’s work—that’s a derivative—but copying their style is perfectly legal. You can’t copyright a style.
All of that is irrelevant, however. The argument is that training an AI with anything is somehow a violation of copyright. It is not. It is absolutely 100% not a violation of copyright to do that!
Copyright is all about distribution rights. Anyone can download whatever TF they want and they’re not violating anyone’s copyright; it’s the entity that sent them the copyrighted work that violated the law. Therefore, Meta, OpenAI, et al. can host enormous libraries of copyrighted data in their data centers and use that to train their AI. It’s not illegal at all.
When some AI model produces a work that’s so similar to an original that anyone would recognize it (“yeah, that’s from Spirited Away”), then yes: they violated Ghibli’s copyright.
If the model produces an image of some random person in the style of Studio Ghibli that is not violating anyone’s copyright. It is not illegal nor is it immoral. No one is deprived of anything in such a transaction.
I think your understanding of generative AI is incorrect. It’s not just “logic and RNG”…
If it runs on a computer, it’s literally “just logic and RNG”. It’s all transistors, memory, and an RNG.
The data used to train an AI model is copyrighted. It’s impossible for a creative work made in the past 100 years to exist without copyright. Even public domain works had copyright at some point.
if any of the training data is copyrighted, then attribution must be given, or at the very least permission to use this data must be given by the current copyright holder.
This is not correct. Every artist ever has been trained on copyrighted works, yet they don’t have to credit every single picture they’ve seen or book they’ve ever read whenever they produce something.
Yes. My AHEK-95 is everything I’ve ever wanted in a keyboard.
Learn all about it from Chyrosran22:
https://youtu.be/iv6Rh8UNWlI