• 0 Posts
  • 10 Comments
Joined 6 months ago
Cake day: January 26th, 2025

  • The H200 has a very impressive bandwidth of 4.89 TB/s, but for the same price you can get 37 TB/s spread across 58 RX 9070s. Whether this actually works in practice, I don’t know.

    Your math checks out, but only for some workloads. Other workloads scale out like shit, and then you want all your bandwidth concentrated. At some point you’ll also want to consider power draw:

    • One H200 is like 1500W when including support infrastructure like networking, motherboard, CPUs, storage, etc.
    • 58 consumer cards will be like 8 servers loaded with GPUs, at like 5kW each, so say 40kW in total.

    Now include power and cooling over a few years and do the same calculations.
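
    A minimal sketch of that comparison, using the wattages above; the electricity price, PUE and three-year window are assumptions on my part:

    ```python
    # Electricity cost of the two setups above.
    # Assumptions (mine, not measured): $0.15/kWh, PUE of 1.5, 3 years of 24/7 load.
    KWH_PRICE = 0.15   # USD per kWh, assumed
    PUE = 1.5          # cooling and distribution overhead, assumed
    HOURS = 3 * 365 * 24

    def power_cost(it_load_kw: float) -> float:
        """Electricity cost of running `it_load_kw` of IT load continuously."""
        return it_load_kw * PUE * HOURS * KWH_PRICE

    h200_box = power_cost(1.5)       # one H200 incl. support infrastructure, ~1.5 kW
    gaming_fleet = power_cost(40.0)  # ~8 servers of consumer GPUs, ~40 kW

    print(f"H200 box:     ${h200_box:,.0f}")
    print(f"58x RX 9070:  ${gaming_fleet:,.0f}")
    print(f"difference:   ${gaming_fleet - h200_box:,.0f}")
    ```

    With those assumed numbers, the consumer fleet burns a couple of hundred thousand dollars more in electricity alone over three years, before you even price the racks and networking.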

    As for apples and oranges: this is why you can’t just look at the marketing numbers; you need to benchmark your workload yourself.


  • Well, a few issues:

    • For hosting or training large models you want high bandwidth between GPUs. PCIe is too slow; NVLink has literally an order of magnitude more bandwidth. See what Nvidia is doing with NVLink and what AMD is doing with Infinity Fabric. Only available if you pay the premium, and if you need the bandwidth, you are most likely happy to pay.
    • Same thing as above, but with memory bandwidth. The HBM chips in an H200 will run circles around the GDDR garbage they hand out to the poor people with filthy consumer cards. By the way, your inference and training are most likely bottlenecked by memory bandwidth, not available compute (see the sketch after this list).
    • Commercially supported cooling of gaming GPUs in rack servers? Lol. Good luck getting any reputable hardware vendor to sell you that, and definitely not at the power densities you want in a data center.
    • FP16 TFLOPS alone isn’t enough. Look at the 4-bit and 8-bit tensor numbers; that’s where the expensive silicon is used.
    • Nvidia’s licensing agreements basically prohibit gaming cards in servers. No one will sell them to you at any scale.
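
    To illustrate the memory-bandwidth point above, here is a back-of-the-envelope check for batch-1 token generation. The model size, bandwidth and FP16 TFLOPS figures are rough assumptions for illustration, not vendor specs:

    ```python
    # Is batch-1 LLM token generation limited by memory bandwidth or by compute?
    # All figures below are rough assumptions.
    def token_rate_bounds(params_billion: float, bytes_per_param: float,
                          mem_bw_tbs: float, peak_tflops: float):
        model_bytes = params_billion * 1e9 * bytes_per_param
        # Each generated token streams roughly all weights from memory once...
        bandwidth_bound = mem_bw_tbs * 1e12 / model_bytes
        # ...and costs roughly 2 FLOPs per parameter.
        compute_bound = peak_tflops * 1e12 / (2 * params_billion * 1e9)
        return bandwidth_bound, compute_bound

    # 70B parameters in FP16 on an H200-class card (4.89 TB/s, ~1000 FP16 TFLOPS assumed)
    bw, comp = token_rate_bounds(70, 2, 4.89, 1000)
    print(f"bandwidth-limited: ~{bw:.0f} tokens/s")
    print(f"compute-limited:   ~{comp:.0f} tokens/s")
    ```

    With these assumed numbers, bandwidth caps you roughly two orders of magnitude below the compute ceiling, which is the whole point of paying for HBM.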

    For fun, home use, research or small time hacking? Sure, buy all the gaming cards you can. If you actually need support and have a commercial use case? Pony up. Either way, benchmark your workload, don’t look at marketing numbers.

    Is it a scam? Of course, but you can’t avoid it.


  • Please note that the nominal FLOP/s from both Nvidia and Huawei are kinda bullshit. The precision you run at greatly affects that number. Nvidia’s marketing nowadays refers to FP4 tensor operations, while traditionally FLOP/s were measured with FP64 matrix-matrix multiplication. That’s a lot more bits per FLOP.
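
    As a tiny illustration of the "bits per FLOP" point (nothing vendor-specific here, just operand widths):

    ```python
    # An FP64 multiply-accumulate reads two 64-bit operands, an FP4 tensor op reads
    # two 4-bit operands. Same FLOP count, 16x the operand data, so headline FLOP/s
    # figures are not comparable across precisions.
    for name, bits in [("fp4", 4), ("fp8", 8), ("fp16", 16), ("fp32", 32), ("fp64", 64)]:
        operand_bytes = 2 * bits / 8   # two inputs per multiply-accumulate
        print(f"{name:>4}: {operand_bytes:4.1f} bytes of operand data per FLOP")
    ```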

    Also, that GPU-GPU bandwidth is kinda shit compared to Nvidia’s marketing numbers, if I’m parsing it correctly (NVLink is 18x 10GB/s links per GPU, big ’B’ in GB). I might be reading the numbers incorrectly, but anyway. How, and if, they manage multi-GPU cache coherency will be interesting to see. Nvidia and AMD both have (to varying degrees) cache coherency in those settings. Developer experience matters…

    Now, the really interesting things are power draw, density and price. Power draw and price obviously influence TCO. On 7nm, I’d guess the power bill won’t be very fun to read, but that’s just a guess. The density influences the network options: are DAC cables viable at all, or is it (more expensive) optical all the way?


  • You assume a uniform distribution. I’m guessing that it’s not. The question isn’t ”Does the model contain compressed representations of all works it was trained on?”. Retaining enough information about any single image is enough to be a copyright issue.

    Besides, the situation isn’t as obviously flawed with image models as it is with LLMs. LLMs are just broken in this regard, because retaining only a handful of bytes is enough to violate copyright.

    I think there will be a ”find out” stage fairly soon. Currently, the US projects lots and lots of soft power on the rest of the world to enforce copyright terms favourable to Disney and friends. Accepting copyright violations for AI will erode that power internationally over time.

    Personally, I do think we need to rework copyright anyway, so I’m not complaining that much. Change the law, go ahead and make the high seas legal. But set against current copyright law, most large datasets and most models constitute copyright violations. Just imagine the shitshow if OpenAI were a European company training on material from Disney.



  • There is an argument that training actually is a type of (lossy) compression. You can even build (bad) language models by using standard compression algorithms to ”train”.
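
    A toy sketch of that idea, using zlib as the entire ”training” step. The corpus path and the tiny alphabet are placeholders, and the result is a genuinely bad language model:

    ```python
    # Toy "language model" whose only learned state is a corpus handed to zlib.
    # To predict the next character, pick the candidate that compresses best
    # together with the corpus and the context so far.
    import zlib

    CORPUS = open("corpus.txt", "rb").read()   # placeholder: any text to "train" on
    ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."

    def next_char(context: str) -> str:
        def compressed_size(candidate: str) -> int:
            return len(zlib.compress(CORPUS + (context + candidate).encode()))
        return min(ALPHABET, key=compressed_size)

    text = "the quick brown "
    for _ in range(20):
        text += next_char(text)
    print(text)
    ```

    It is terrible, but the only ”training” that happened was storing the corpus.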

    By that argument, any model contains lossy and unstructured copies of all the data it was trained on. If you download a 480p, low-quality, h264-encoded Blu-ray rip of a Ghibli movie, it’s not legal, despite the fact that you aren’t downloading the same bits that were on the Blu-ray.

    Besides, even if we consider the model itself to be fine, they did not buy all the media they trained the model on. The act of downloading media, regardless of purpose, is piracy. At least, that has been the interpretation for normal people sailing the seas; large companies are of course exempt from filthy things like laws.



  • enumerator4829@sh.itjust.works to Technology@lemmy.world · *Permanently Deleted* · 4 months ago

    I’m using ”Commercially deployed” in the context of ”company you interacted with had an AI represent them in that communication”. You don’t use AI for that to increase customer satisfaction. (I wonder why I haven’t seen any AI products targeted at automated B2B sales?)

    I won’t argue that GenAI isn’t useful for end consumers using it properly. It is.

    (As an aside, I hope you and your grandfather get better!)


  • enumerator4829@sh.itjust.works to Technology@lemmy.world · *Permanently Deleted* · 4 months ago

    But why use money to innovate when there is profit to be made and laws are just made up?

    AI is the new kid on the block, trying to make a dent in our society. So far, we don’t really have that many useful or productive deployments. It’s on AI to prove its worth, and it’s kinda worthless until proven otherwise. (Name one interaction with a commercially deployed AI model you didn’t hate?)

    So far, Apple is failing with consumer products, Microsoft is backing off on GPU orders, research shows commercial GenAI isn’t increasing productivity, NVDA seems to be cooling off, and you expect the benevolent commercial health care industry to come to the rescue?

    Yeah, I’ll keep my knee-jerk reaction and keep living with my current socialised health care.