

Do you not see any value in engaging with views you don’t personally agree with? I don’t think agreeing with it is a good barometer for whether it’s post-worthy
I think you’ve tilted slightly too far towards cynicism here, though “it might not be as ‘fair’ as you think” is probably still largely true for people who don’t look into it too hard. Part of my perspective comes from this random video I watched not long ago, which is basically an extended review of the Fairphone 5 that also looks at the “fair” aspect of things.
Misc points:
So yes, they are a long way from selling “100% fair” phones, but it seems like they’re moving the needle a bit more than your summary suggests, and that’s not nothing. It feels like you’ve skipped over lots of small-yet-positive things which are not simply “low economy of scale manufacturing” efforts.
Unfortunately it’s hard for the rest of us to tell if you actually think you want a video to save you from having to read 18 sentences or if you’re just taking the piss lol
For platforms that don’t accept those types of edits, the link OP tried to submit: https://www.theverge.com/news/690815/bill-gates-linus-torvalds-meeting-photo
That video of them interviewing people on the street with it was pretty fun!
So they literally agree not using an LLM would increase your framerate.
Well, yes, but the point is that at the time that you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.
Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?
If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.
As for how well it works in practice, we’ll have to test it ourselves or wait for independent reviews.
It sounds like it only needs to consume resources (at least significant resources, I guess) when answering a query, which will already be happening when you’re in a relatively “idle” situation in the game since you’ll have to stop to provide the query anyway. It’s also a Llama-based SLM (S = “small”), not an LLM for whatever that’s worth:
Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today’s large scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.
When G-Assist is prompted for help by pressing Alt+G — say, to optimize graphics settings or check GPU temperatures — your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you’re simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app. (emphasis added)
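To make the pattern the quote describes more concrete, here’s a minimal sketch of the assistant flow: a query triggers a brief burst of local inference, whose structured output is then dispatched to one of a few system “tools” (settings tweaks, temperature checks). Everything here is hypothetical — the tool names, return strings, and the keyword routing (which stands in for the actual 8B instruct model) are all invented for illustration, not taken from G-Assist itself.

```python
# Hypothetical sketch of a local assistant dispatching queries to tools.
# The keyword matching below stands in for the SLM inference step that
# briefly borrows GPU compute; none of these names come from G-Assist.

def check_gpu_temperature() -> str:
    # Placeholder value; a real tool would query the GPU's sensors.
    return "GPU temperature: 62 C"

def optimize_graphics_settings() -> str:
    # Placeholder action; a real tool would inspect and modify settings.
    return "Applied suggested settings: DLSS Quality, shadows Medium"

TOOLS = {
    "temperature": check_gpu_temperature,
    "optimize": optimize_graphics_settings,
}

def handle_query(query: str) -> str:
    # A real system would run model inference here (the brief frame-rate
    # "dip" from the quote), then dispatch based on the model's output.
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool()
    return "Sorry, I can't help with that."
```

The point of the structure is the one made above: the expensive step (inference) only runs when a query arrives, which by construction is a moment when the user has already paused to type.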
Would be curious to read the LLM output.
It looks like it’s available in the linked study’s paper (near the end).
It covers the breadth of problems pretty well, but I feel compelled to point out that there are a few times where things are misrepresented in this post e.g.:
The MSRP for a 5090 is $2k, but the MSRP for the 5090 Astral – a top-end card being used for overclocking world records – is $2.8k. I couldn’t quickly find the European MSRP but my money’s on it being more than 2.2k euro.
NVENC isn’t much of a moat right now, as both Intel and AMD’s encoders are roughly comparable in quality these days (including in Intel’s iGPUs!). There are cases where NVENC might do something specific better (like 4:2:2 support for prosumer/professional use cases) or have better software support in a specific program, but for common use cases like streaming/recording gameplay the alternatives should be roughly equivalent for most users.
Production apparently stopped on these for several months leading up to the 50-series launch; it seems unreasonable to harshly judge the pricing of a product that hasn’t had new stock for an extended period of time (of course, you can then judge either the decision to stop production or the still-elevated pricing of the 50 series).
I personally find this take crazy given that DLSS2+ / FSR4+, when quality-biased, produce average visual quality comparable to native for most users in most situations; and that was with DLSS2 in 2023, not even DLSS3, let alone DLSS4 (which is markedly better on average). I don’t really care how a frame is generated if it looks good enough (and doesn’t come with other notable downsides like latency). This almost feels like complaining about screen space reflections being “fake” reflections. Like yeah, it’s fake, but if the average player experience is consistently better with it than without it then what does it matter?
Cutting-edge manufacturing nodes are getting expensive as all fuck. If it’s more cost-efficient to use some of that die area for specialized cores that can do high-quality upscaling instead of natively rendering everything with all the die space then that’s fine by me. I don’t think branding DLSS (and its equivalents like FSR and XeSS) as “snake oil” is the right takeaway. If the options are (1) spend $X on a card that outputs 60 FPS natively or (2) spend $X on a card that outputs upscaled 80 FPS at quality good enough that I can’t tell it’s not native, then sign me the fuck up for option #2. People less fussy about static image quality and more invested in smoothness can be perfectly happy with 100 FPS but marginally worse image quality. Not everyone is as sweaty about static image quality as some of us in the enthusiast crowd are.
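The cost tradeoff above comes down to how few pixels the GPU actually shades before upscaling. A quick back-of-envelope calculation, using the commonly cited per-axis render scales for the standard quality presets (roughly 2/3 for Quality and 1/2 for Performance; treat the exact numbers as illustrative rather than vendor-confirmed):

```python
from fractions import Fraction

# Approximate per-axis render scales for common upscaler presets.
# These match widely reported DLSS/FSR defaults but are illustrative.
PRESETS = {
    "Quality":     Fraction(2, 3),    # ~66.7% per axis
    "Balanced":    Fraction(58, 100), # ~58% per axis
    "Performance": Fraction(1, 2),    # 50% per axis
}

def internal_resolution(width, height, scale):
    """Resolution the GPU actually renders before upscaling to output size."""
    return round(width * scale), round(height * scale)

def pixel_fraction(scale):
    """Fraction of native pixels shaded each frame (scale applies per axis)."""
    return float(scale * scale)

if __name__ == "__main__":
    for name, scale in PRESETS.items():
        w, h = internal_resolution(3840, 2160, scale)
        print(f"{name}: {w}x{h} ({pixel_fraction(scale):.0%} of native pixels)")
        # e.g. Quality: 2560x1440 (44% of native pixels)
```

In other words, at 4K output the Quality preset shades well under half the pixels per frame, which is where the “same silicon, higher framerate” headroom comes from.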
There are some fair points here about RT (though I find exclusively using path tracing for RT performance testing a little disingenuous given the performance gap), but if RT performance is the main complaint then why is the sub-heading “DLSS is, and always was, snake oil”?
obligatory: disagreeing with some of the author’s points is not the same as saying “Nvidia is great”