Well, yeah, kind of, at this point. LLMs can be interpreted as natural language computers.
Watch time is pretty important on YouTube afaik; initial clicks themselves don't count for that much.
What? Since when does Valve prohibit companies from redirecting customers to non-Valve purchasing flows? Because that's what this ruling is about: it says Apple can't prohibit apps from telling users to go buy off-platform for lower prices. Valve isn't doing that with Steam afaik; actually, I'm not aware of any other platform that does this.
“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”
How is this surprising, like, at all? LLMs output only a single token at a time, but to get the best results it makes absolute sense to internally think ahead: come up with the full sentence you're going to say, then output just the next token needed to continue it. The model redoes that process for every single token, which wastes a lot of energy, but for the quality of the results it's the best approach you can take, and it always seemed kind of obvious to me that these models must be doing this on one level or another.
I'd be interested to see whether there's massive potential for efficiency improvements by letting the model access and reuse the "thinking" it has already done for previous tokens.
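For what it's worth, part of that reuse already exists at a low level: transformer inference caches the attention keys/values (past_key_values), so earlier tokens aren't fully recomputed at each step, even if any higher-level "plan" isn't explicitly carried over. A minimal sketch of the difference, assuming the Hugging Face transformers API with GPT-2 and greedy decoding (model choice and decoding strategy are just for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small model chosen purely for illustration.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = tokenizer("The cat sat on the", return_tensors="pt").input_ids

with torch.no_grad():
    # Naive decoding: every step re-runs the model over the ENTIRE
    # sequence, redoing the work for tokens it has already processed.
    ids = prompt
    for _ in range(10):
        logits = model(ids, use_cache=False).logits   # full forward pass
        next_id = logits[:, -1].argmax(-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)

    # Cached decoding: past_key_values holds the attention keys/values
    # already computed for earlier tokens, so each step only has to
    # process the single new token.
    out = model(prompt)
    past = out.past_key_values
    next_id = out.logits[:, -1].argmax(-1, keepdim=True)
    ids = torch.cat([prompt, next_id], dim=-1)
    for _ in range(9):
        out = model(next_id, past_key_values=past)    # one-token forward pass
        past = out.past_key_values
        next_id = out.logits[:, -1].argmax(-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)

print(tokenizer.decode(ids[0]))
```

The caveat is that the cache only reuses low-level attention state; whatever forward planning the interpretability probes found isn't something the decoding loop can read out and carry across steps explicitly.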
Huh? I'm streaming from my Jellyfin just fine when I'm on the go, with no Tailscale or other VPN set up.
It doesn't need to be the latest Android version per se, but I wouldn't want to use a phone that's not getting security patches anymore.