Meta refuses to sign EU's voluntary AI code of practice as Joel Kaplan warns Brussels regulations create "legal uncertainties" that could hamper tech innovation across Europe.
The EU AI Act is the thing that imposes the big fines, and it’s pretty big and complicated, so companies have complained that it’s hard to know how to comply. So this voluntary code of conduct was released as a sample procedure for compliance, i.e. “if you do things this way, you (probably) won’t get in trouble with regulators”.
It’s also worth noting that not all the complaints are unreasonable. For example, the code of conduct says that model makers are supposed to take measures to impose restrictions on end-users to prevent copyright infringement, but such usage restrictions are very problematic for open source projects (in some cases, usage restrictions can even disqualify a piece of software as FOSS).
Emphasis added. If the result of not signing a voluntary code of practice is massive fines and IP blocks, was it really "voluntary"?
If it really is only voluntary, it was a failure from the word go. Voluntary compliance has never worked with predatory businesses.
Well, direct your ire at the EU for that, I suppose. I’m just pointing out that calling for massive retribution against Meta isn’t warranted here.