• 0 Posts
  • 26 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • The knee-jerk reaction is gonna be “Meta bad”, but it’s actually a bit more complicated.

    Whatever faults Meta has in other areas, they’ve been mostly a good player in the AI space, and they’re one of the major reasons we have strong open-weight AI models today. Mistral, another maker of open-weight models and Europe’s only significant player in AI, has also rejected this code of conduct. By contrast, OpenAI a.k.a. ClosedAI has committed to signing it, probably because they’re the incumbent and expect the increased compliance costs to help kill off competitors.

    Personally, I think the EU’s AI regulation efforts are a big missed opportunity: they should have been used to force a greater level of openness and interoperability in the industry. With the current framing, they’re likely to end up entrenching big proprietary AI companies like OpenAI without doing much to make them accountable, while burying upstarts and open-source projects under unsustainable compliance requirements.


  • The EU AI Act is what actually imposes the big fines, and it’s long and complicated, so companies have complained that it’s hard to know how to comply. This voluntary code of conduct was released as a sample procedure for compliance, i.e. “if you do things this way, you (probably) won’t get in trouble with regulators”.

    It’s also worth noting that not all the complaints are unreasonable. For example, the code of conduct says model makers are supposed to impose restrictions on end users to prevent copyright infringement, but such usage restrictions are very problematic for open-source projects (in some cases, usage restrictions can even disqualify a piece of software from being FOSS).

  • The strangest twist to this is that DeepSeek itself seems to be the only company not trying to cash in on the DeepSeek frenzy:

    Liang [DeepSeek’s founder] has shown little intention to capitalise on DeepSeek’s sudden fame to further commercialise its technology in the near term. The company is instead focusing the majority of its resources on model development…

    The people added that the independently wealthy founder has also declined to entertain interest from China’s tech giants as well as venture and state-backed funds to invest in the group for the time being. Many have found it difficult to even arrange a meeting with the secluded founder.

    “We pulled top-level government connections and only got to sit down with someone from their finance department, who said ‘sorry we are not raising’,” said one investor at a multibillion-dollar Chinese tech fund.



  • It’s strongly dependent on how you use it. Personally, I started out as a skeptic, but by now I’m quite won over by LLM-aided search. For example, I was recently looking for an academic who had published some result I could describe in rough terms, but whose name and affiliation I was drawing a blank on. Several regular web searches yielded nothing, but DeepSeek’s web search gave the result on the first try.

    (Though, Google’s own AI search is strangely bad compared to others, so I don’t use that.)

    The flip side is that for a lot of routine info I previously used Google to find, like a quick and basic recipe for apple pie crust, the normal search results are now enshittified by ad-optimized slop. So in many cases I find it better to use a non-web-search LLM instead. If it matters, I always have the option of verifying the LLM’s output with a manual search.