• 0 Posts
  • 42 Comments
Joined 4 months ago
Cake day: April 4th, 2025

  • While that is sort of true, it’s only about half of how they work. An LLM that isn’t trained with reinforcement learning to give desired outputs gives really weird results. Ever notice how ChatGPT seems aware that it is a robot and not a human? An LLM that purely parrots the training corpus won’t do that. If you ask it “are you a robot?” it will say “Of course not dumbass, I’m a real human, I had to pass a CAPTCHA to get on this website,” because that’s how people respond to that question. So you get a bunch of poorly paid Indians in a call center to generate and rank responses all day, and those rankings get fed into the algorithm for generating new responses.

    One thing I am interested in is the fact that all these companies use poorly paid people in the third world for this part of the development process, and I wonder if that imparts subtle cultural biases. For example, early on after ChatGPT was released I found it had an extremely strong taboo against eating dolphin meat, to the extent that it was easier to get it to write about eating human meat than dolphin meat. I have no idea where this came from, but my guess is someone really hated the idea and spent all day flagging dolphin meat responses as bad.

    Anyway, this is another, more subtle issue with LLMs: they don’t simply respond with the statistically most likely continuation of a conversation. There is a finger on the scales in favor of certain responses, and that finger can be biased in ways that are not only due to human opinion but also really hard to predict.
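    The ranking step described above amounts to training a reward model on human preference pairs. Here’s a minimal sketch with made-up scores and a Bradley-Terry-style loss; this is an illustration of the general technique, not any particular company’s implementation:

```python
import math

def preference_loss(score_chosen, score_rejected):
    # Bradley-Terry style loss used to train an RLHF reward model:
    # it pushes the model to score the human-preferred response
    # above the rejected one. Toy sketch with made-up scores.
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# An annotator ranked response A above response B:
loss_agree = preference_loss(2.0, 0.5)     # model already agrees -> small loss
loss_disagree = preference_loss(0.5, 2.0)  # model disagrees -> large loss
print(loss_agree, loss_disagree)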



  • How much do those tapes cost if purchased in bulk? I am trying to figure out how much a petabyte storage system costs, how much physical space it would take up, and how much electricity it would require to run. I had a lot of trouble finding this information on Google because I know so little about tape storage and don’t know what all I would need. I am probably not going to actually do anything with this, but I am curious because I had an idea for a product and can’t get it out of my mind. The most important part (for the hardware portion) is having nearly a petabyte of physical, local storage. I am aware this would be quite expensive and relatively large, but the product would be intended for governments and companies, not individuals.
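    For rough sizing, the tape math itself is simple. The figures below are assumptions for illustration (LTO-9 holds 18 TB native; the bulk price is a guess, not a quote):

```python
# Back-of-envelope sizing for ~1 PB of raw tape storage.
# Capacity and price are assumed figures, not vendor quotes.
PETABYTE_TB = 1000
TAPE_CAPACITY_TB = 18   # LTO-9 native capacity (up to 45 TB compressed)
TAPE_PRICE_USD = 80     # assumed rough bulk price per cartridge

tapes_needed = -(-PETABYTE_TB // TAPE_CAPACITY_TB)  # ceiling division
media_cost_usd = tapes_needed * TAPE_PRICE_USD
print(tapes_needed, media_cost_usd)  # 56 tapes, $4,480 in media
```

    The cartridges are the cheap part; the tape library, drives, and backup software typically dominate the cost, and cartridges sitting in slots draw no power at all, which is most of tape’s efficiency story.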





  • American cars have sucked compared to Asian cars since the 1970s. I don’t understand why people act surprised that this is true with respect to BYD. Sure, in the past products designed in China were stereotyped as poor-quality knockoffs of Western-designed goods, but over the past decade Chinese engineers have increasingly proven themselves perfectly capable of making solid, innovative designs that improve upon those of their competitors. I think it’s kind of fucked up that everyone is suddenly so upset about China’s role in the world economy, since everyone was completely fine using the country for cheap labor over the past several decades and is just mad that Chinese companies are beating them at high-skill labor and technology. Chinese companies do have an “unfair advantage” given how much they are backed by the Chinese government, but American companies receive all sorts of money from the government for all sorts of things as well.



  • Sauce Plus and Dropout as well. They’re basically run by YouTubers to make content without relying on YouTube. A lot of this runs on pre-existing streaming-service tech, and I assume it’s dependent on AWS (Amazon) hosting, but yeah, there are lots of smaller paid streaming services run by YouTubers because YouTube sucks. I believe Sauce Plus is essentially the same as Floatplane on the backend; they mentioned working with LTT to make it.




  • American evangelicals when the government suggests getting a vaccine for a deadly virus: “IT’S THE MARK OF THE BEAST, DON’T GET IT OR YOU’LL GO TO HELL”

    American evangelicals when people they voted for say you need to wear something on your wrist to participate in society: “This is fine”

    A wearable computer is much more similar in form to what is described in the Book of Revelation than a vaccine is, but these dumbasses don’t see that because they’re not operating on logic but instead are just doing what they’re told.


  • > Driving is culturally specific, even. The way rules are followed and practiced is often regionally different.

    This is one of the problems driving automation solves trivially when applied at scale. Machines will follow the same rules regardless of where they are, which is better for everyone.

    > The ethics of putting automation in control of potentially life-threatening machines is also relevant.

    You’d shit yourself if you knew how many life-threatening machines are already controlled by computers far simpler than anything in a self-driving car. Industrially, we have learned the lesson that computers, even ones running on extremely simple logic, completely outclass humans on safety because they do the same thing every time. There are giant chemical manufacturing facilities run by a couple of guys in a control room watching a screen, because 99% of the process is already automated. I’m talking thousands of gallons an hour of hazardous, poisonous, flammable materials running through a system controlled by 20-year-old computers, and chemical additions at your local water treatment plant that could kill thousands of people if done wrong, all handled by machines because we know they’re more reliable than humans.
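    The “extremely simple logic” point is easy to picture. Here’s a toy sketch of interlock-style dosing control; the chemical, limits, and names are invented for illustration, not taken from any real plant:

```python
# Toy interlock logic of the kind a simple industrial controller runs:
# clamp a chemical dose setpoint and trip to a safe state on bad input.
# All limits here are invented for illustration.
MAX_DOSE_MGL = 4.0   # assumed hard ceiling on chlorine dose (mg/L)
MIN_FLOW_LPM = 10.0  # below this flow, dosing is unsafe -> shut off

def dose_command(target_mgl, flow_lpm):
    if flow_lpm < MIN_FLOW_LPM:  # interlock: no flow, no dosing
        return 0.0
    # Clamp the operator's setpoint into the safe range, every time.
    return min(max(target_mgl, 0.0), MAX_DOSE_MGL)

print(dose_command(3.0, 50.0))  # normal operation -> 3.0
print(dose_command(9.0, 50.0))  # fat-fingered setpoint clamped -> 4.0
print(dose_command(3.0, 2.0))   # low flow trips dosing to 0.0
```

    A human operator can get tired or distracted; this check runs identically on every scan cycle, which is the whole safety argument.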

    > With humans we can attribute cause and attempt improvement; with automation it’s different.

    A machine can’t drink a handle of vodka and get behind the wheel, nor can it drive home sobbing after a rough breakup, unable to process information properly. You can also update all of them at once instead of running PSA campaigns telling people not to do the thing that got someone killed. Self-driving car makes a mistake? You don’t have to guess what was going through its head; it has a log. Figure out how to fix it? Guess what: they’re all fixed with the same software update. If a human makes that mistake, thousands of people will keep making it until cars or roads are redesigned and those changes have a way to filter through all of society.

    > I just don’t see a need for this at all. I think investing in public transportation more than reproduces all the benefits of automated cars without nearly as many of the dangers and risks.

    This is a valid point, but it doesn’t have to be either/or. Cars have great utility even in a system with public transit: people and freight have to get from the rail station or port to wherever they need to go somehow, even in a utopia with a perfect transit system. We can do both; we’re just choosing not to in America. And it’s not like self-driving cars are intrinsically opposed to public transit just by existing.


  • While I agree focusing on public transport is a better idea, it’s completely absurd to say machines can never possibly drive as well as humans. It’s like saying a soul is required, or other superstitious nonsense like that. Imagine the hypothetical case in which a supercomputer that perfectly emulates a human brain is what we are trying to teach to drive. Do you think that couldn’t drive? If so, you’re saying a soul is what allows a human to drive, and you may as well say God hath uniquely imbued us with the ability to drive. If you do think it could drive, then surely a slightly less powerful computer could. And maybe one less powerful than that. So somewhere between a Casio solar calculator and an emulated human brain, something must be able to learn to drive. Maybe that’s beyond where we’re at now (I don’t necessarily think it is), but it’s certainly not impossible on principle. You are, at the end of the day, a computer.



  • This is not surprising if you’ve studied anything on machine learning, or even just basic statistics. Suppose you are trying to find the optimal amount of thickener to add to a paint formulation to get it to flow the way you want. If you test it at 5%, then 5.1%, then 5.2%, it will be hard to tell how much of the difference between those batches is due to randomness or measurement uncertainty; it’s much easier if you see what it does at 0%, then 25%, then 50%. This is a principle called Design of Experiments (DoE) in traditional statistics, and a similar effect happens when you train machine learning models: datapoints far outside the norm increase the model’s ability to predict across the entire model space (with some nuance, because they can become over-represented if care isn’t taken). In this case, 4chan shows the edges of the English language and human psychology, like testing the paint at 0% or 50% thickener rather than staying around 5%.
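    The paint example can be put in numbers. For a straight-line fit, the standard error of the slope shrinks as the design points spread out; this is the standard least-squares result, using the percentages from the example above:

```python
import math

def slope_stderr_factor(xs):
    # For ordinary least squares, the std. error of the fitted slope
    # is proportional to 1 / sqrt(sum((x - mean)^2)): widely spread
    # design points constrain the slope far more tightly.
    m = sum(xs) / len(xs)
    return 1.0 / math.sqrt(sum((x - m) ** 2 for x in xs))

narrow = slope_stderr_factor([5.0, 5.1, 5.2])  # thickener at 5%, 5.1%, 5.2%
wide = slope_stderr_factor([0.0, 25.0, 50.0])  # thickener at 0%, 25%, 50%
print(narrow / wide)  # the spread design pins the slope down ~250x better
```

    Same number of batches, wildly different information content, which is the whole DoE argument for extreme datapoints.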

    At least, that’s my theory. I haven’t read the paper but plan to read it tonight when I have time; at first glance I’m not surprised. When I’ve worked on industrial ML applications, processes with a lot of problems produce better training data than well-controlled processes, and I have read papers where people improved their models’ performance by introducing (controlled) randomness into their control setpoints to get more training data outside of the tight control regime.


  • This is a mischaracterization of how AI is used for coding and how it can lead to job loss. The use case is not “have the AI develop apps entirely on its own”; it’s “let one programmer do the work of three by using AI to write or review portions of code” and “let people with technical knowledge who are not skilled programmers write code that’s good enough without the need for dedicated programmers.” Some companies are trying to do the first one, but almost everyone is doing the second, and it actually works. That’s how AI leads to job loss: a team of 3 programmers can do what used to take a team of 10 or so.