

I’ve seen this movie.
“The only way to keep things from crashing is to plug Skynet Grok in.”
Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.
LLMs can be good at chess openings. Not because they’re thinking through the rules or planning a strategy, but because opening moves show up all over general training data from various sources. The model is copying the most probable reaction to your move, based on lots of documentation. This breaks down, of course, when you stray from a typical play style: it has fewer probable options to choose from, and after only a few moves your exact position likely isn’t in the data at all, since the number of possible games is enormous.
I.e., there are no calculations involved. When you play an LLM at chess, you’re playing a list of common moves in history.
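To make that concrete, here’s a toy sketch (with made-up move frequencies; a real LLM stores nothing this explicit, but the effect is similar): the “engine” just samples whatever reply was most common after your move, and has nothing to fall back on for a position it has never seen.

```python
import random

# Hypothetical reply counts scraped from "training data". Common openings
# are heavily represented; oddball moves barely appear at all.
replies_seen = {
    "e4": {"e5": 9000, "c5": 7000, "e6": 2000},
    "d4": {"d5": 8000, "Nf6": 6000},
}

def llm_style_move(your_move):
    seen = replies_seen.get(your_move)
    if seen is None:
        # Off-book: there is no search or rule-checking to fall back on,
        # so the output is a guess that may not even be legal.
        return "??"
    moves, counts = zip(*seen.items())
    # Sample in proportion to how often each reply appeared in the data.
    return random.choices(moves, weights=counts, k=1)[0]

print(llm_style_move("e4"))  # a common book reply
print(llm_style_move("a4"))  # off-book, nothing to imitate
```

A real chess engine does the opposite: it ignores popularity and calculates through the tree of legal moves.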
An even simpler way to see the lack of reasoning is to tell the LLM that its last move was illegal. Even when the move was perfectly legal under the rules you just gave it, it will agree and take it back. That comes from being trained to give satisfying replies to a human prompt.
I’ve heard the only way to win is to lock down your shelter and strike first.
It can be bad at the very thing it’s designed to do. It often repeats phrases, which isn’t great for writing. But why wouldn’t it? It’s all about probability, so commonly said things will pop up more often unless you adjust the variables that determine the randomness.
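Those variables are things like “temperature.” A minimal sketch of the effect, using made-up scores instead of a real model’s output: low temperature sharpens the distribution so the most common continuation wins almost every time (hence the repetition), while higher temperature flattens it out.

```python
import math
import random

def sample_with_temperature(scores, temperature=1.0):
    """Softmax over temperature-scaled scores, then sample one token."""
    scaled = [s / temperature for s in scores.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Hypothetical next-word scores after "once upon a".
scores = {"time": 5.0, "midnight": 2.0, "mattress": 0.5}

for t in (0.2, 1.0, 2.0):
    picks = [sample_with_temperature(scores, t) for _ in range(1000)]
    print(t, {w: picks.count(w) for w in scores})
```

At 0.2 you’ll see “time” nearly 1000 times out of 1000; at 2.0 the rarer words start showing up regularly.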
There are some very odd pieces on high-dollar physical chess sets too.
And unmonitored? Don’t trust anything from Google anymore.
What makes this better than Ollama?
This is exactly what a President, an elected public servant sworn to protect the rights of the public, should be doing.
Not.
To paraphrase a Monty Python firing squad skit, “You MISSED???”
Have we got images of the font they’re using?
Figures, I just purchased an account last week. That’s fine, it’s not that expensive anyway and hopefully this helps more people decide to move from Google, like I’m in the process of doing.
That’s a bit of a reach. We should have stayed in the trees though, but the trees started disappearing and we had to change.
Lots of attacks on Gen Z here, some valid points about the education the older generations gave them (yet somehow it’s their fault). Good thing none of the other generations are being fooled by AI marketing tactics, right?
The debate on consciousness is one we should be having, even if LLMs themselves aren’t really there. If you’re new to the discussion, look up AI safety and the alignment problem. Then realize that while people think it’s about preparing for a true AGI with something akin to consciousness and the dangers we could face, we have alignment problems without an artificial intelligence. If we think a machine (or even a person) is doing things for the same reasons we want them done, and they aren’t, but we can’t tell, that’s an alignment problem. Everything’s fine until they follow their goals and those goals suddenly line up differently than ours. And the dilemma is that there aren’t any good solutions.
But back to the topic. All this is not the fault of Gen Z. We built this world the way it is and raised them to be gullible and dependent on technology. Using them as a scapegoat (those dumb kids) is ignoring our own failures.
All companies will eventually try to become monopolies if they get large enough. It’s the nature of capitalism to do whatever it takes for the bottom line of profit and company growth. That’s why regulations are a good thing: they put limits where a company alone never will.
AI certainly can be a tool to combat it. Some form of watermarking should have been hardcoded into these neural nets well before this became a problem, but with how far it’s gone, and with models out in the open, it’s a bit too late for that remedy.
But when tools are put out to detect what is and isn’t AI, trust will develop in THOSE AI systems, and then they could be manipulated to claim actual real events aren’t true. The real problem is that the humans in all of this, from the beginning, are losing their ability to critically examine and verify what they’re being shown. I.e., people are gullible, always have been to a point, but are now at the height of believing anything they’re told without question.
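For what that watermarking could have looked like, here’s a toy sketch of one published approach (the “green list” scheme from Kirchenbauer et al., 2023) with a made-up vocabulary; this is illustrative only, not any vendor’s actual implementation. The generator seeds an RNG from the previous token and favors a “green” half of the vocabulary; a detector that knows the scheme just counts how green the text is.

```python
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "rug", "fast"]

def green_list(prev_token):
    # Deterministically split the vocab in half, seeded by the previous token.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: len(VOCAB) // 2])

def generate(n_tokens, bias=0.9):
    # A stand-in "model" that picks a green-list token with probability `bias`.
    out = ["the"]
    rng = random.Random(42)
    for _ in range(n_tokens):
        green = green_list(out[-1])
        pool = green if rng.random() < bias else set(VOCAB) - green
        out.append(rng.choice(sorted(pool)))
    return out

def green_fraction(tokens):
    # Detector: unwatermarked text lands near 0.5; watermarked text runs high.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / (len(tokens) - 1)

print(f"green fraction: {green_fraction(generate(50)):.2f}")  # well above 0.5
```

And the catch is the same one described above: once people trust a detector like this, whoever controls it controls what gets labeled “real.”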
A company would look at this and not conclude that LLMs might have something going on that would be bad for long-term business. They would see the bigger net dollar amount and figure they just have to calculate when to “reset” the LLM. It’s just another IT problem where the solution isn’t to fix the underlying issue but to find a workaround that cuts cost while operations continue.
Ollama.com is another method of self-hosting. Figuring out which model type and size fit the equipment you have is the key step, but models are easy to swap out. That gets you just an LLM; where you go from there depends on how deep into the code you want to get. An LLM by itself can work, it’s just limited. Most of the add-ons you see are extras that bolt on memory, speech, avatars, and other features to improve the experience and abilities, or you can program a lot of that yourself if you know Python. But as others have said, the more you try to get out of it, the more robust a system you’ll need, which is why the best ones you find are online in cloud format. If you’re okay with slower responses and fewer features, though, self-hosting is totally doable, and you can do what you want, especially if you get one of the “jailbroken” models that has had some of the safety limits modified out of it to some degree.
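For a sense of how little code the bare-LLM part takes once Ollama is installed and serving (the model name here is just an example; use whatever you’ve pulled, e.g. with `ollama pull llama3`):

```python
# pip install ollama  -- the Python client for a locally running Ollama server
import ollama

response = ollama.chat(
    model="llama3",  # any model you've pulled locally
    messages=[{"role": "user", "content": "In one sentence, what is self-hosting?"}],
)
print(response["message"]["content"])
```

Everything past that (memory, speech, avatars) is extra plumbing around calls like this one.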
Also as mentioned, be careful not to get sucked in. Even a local model can sometimes be convincing enough to fool someone who wants to see something there. Lots of people recognize that danger, but then belittle the people who are looking for help in that direction (while marketing recognizes the potential profits and tries very hard to sell it to those same people).
I haven’t seen the sequel to it yet, and was sort of fine leaving it open-ended. I can see how there are dark parts to that episode, mainly from sticking with Black Mirror’s premise that tech can be used badly. It also paints a not-so-great picture of the real people, hero worship, maybe the gaming industry? The sim copies seem to make out the best of anyone. Definitely a favorite; if I rated it on dark vs. positive, it’s 8/10 positive, whereas San Junipero was a 10/10 in the end. Actually, San was a 9/10, as it did show that some used the tech there as escape and didn’t grow like the main characters finally did.
This felt like reading about someone complaining that The Daily Show doesn’t have enough positive news stories on it. Black Mirror fills a niche that people look for; it’s not something that’s making people think a certain way.
And no mention at all of San Junipero. I guess that would break selling it as pessimism porn when there are examples otherwise.
Right? It’s the biggest circus of all. Do you really want all those clowns looking for something else to do?
You don’t have to be intelligent to ruin things. Look at Trump.
At least a thinking machine would have a reason for doing what it might do, instead of bumbling along and overshooting any safeguards left. And given Musk’s attitude, Grok would be the first and last safeguard for everything. So yeah, this is worse than Terminator.