submitted 1 year ago by alyaza@beehaw.org to c/technology@beehaw.org
[-] arcticpiecitylights@beehaw.org 21 points 1 year ago

As an American, it's always so encouraging to see the things that come out of the EU.

[-] 0x815@feddit.de 14 points 1 year ago

As an EU citizen, I'm often disappointed by how these things are applied. New rules may be fine, but it often takes a really, really long time here until the necessary changes take effect in the real world.

The GDPR is a good example imo. We've had it for five years now, but even many public authorities still don't comply with it. So I feel that many of these things exist only on paper.

[-] JustSomething83@kbin.social 5 points 1 year ago

Things take time, and citizens need to demand their rights, but at least with legislation we have those rights.

[-] Thy_Moose@lemmy.ml 2 points 1 year ago

Don't they? It may depend on which member state you live in, but fines for not complying with the GDPR are pretty hefty, and although I agree that there was a bit of chaos at the beginning, things have significantly improved, and things like the right to be forgotten do indeed have a direct impact on our lives!

[-] 0x815@feddit.de 3 points 1 year ago* (last edited 1 year ago)

It seems to be getting better of late, but slowly. We can get an idea of how the GDPR is handled across the EU from the GDPR enforcement tracker or the GDPR Trap Map. Among other things, the latter says, for example:

Departing from the standard in most procedural laws in Germany, Article 20 of the Bavarian Data Protection Law codifies that a complainant may not get access to the files in a complaints procedure. This means that the data subject is very much limited in effectively challenging wrong arguments by the controller. The provision seems to violate fair procedures rights.

Edit for an addition: There are many sites to check a website's GDPR compliance, e.g. Fathom's, and to find trackers and cookies there is also The Markup's Blacklight. I don't know whether everyone is already aware of these tools.
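The core idea behind tools like Blacklight is simple enough that you can sketch it yourself: scan a page's HTML for `<script src=...>` tags whose host differs from the page's own host, since those are candidate third-party trackers. Here's a minimal stdlib sketch of that idea (the sample HTML and `tracker.example.net` host are made-up stand-ins, and real tools do far more than this, e.g. cookie and canvas-fingerprinting checks):

```python
# Minimal sketch of one thing tracker-scanning tools automate:
# list third-party <script src> hosts in a page's HTML.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSrcParser(HTMLParser):
    """Collect the src attribute of every <script> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def third_party_script_hosts(html, page_url):
    """Return script hosts that differ from the page's own host."""
    page_host = urlparse(page_url).hostname
    parser = ScriptSrcParser()
    parser.feed(html)
    hosts = {urlparse(src).hostname for src in parser.srcs}
    return sorted(h for h in hosts if h and h != page_host)

# Stand-in sample page, not a real site.
sample = """
<html><head>
  <script src="https://example.org/app.js"></script>
  <script src="https://tracker.example.net/pixel.js"></script>
</head></html>
"""
print(third_party_script_hosts(sample, "https://example.org/"))
# → ['tracker.example.net']
```

Anything this turns up is only a starting point; whether a given script actually processes personal data is what matters for a GDPR complaint.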

[-] Balssh@kbin.social 5 points 1 year ago

It's also harder to harmonize something like the EU, whose member states are far more heterogeneous than those of the US. Still, it's better to move slowly than not to move at all.

[-] lysy@szmer.info 2 points 1 year ago

Also Rentgen - https://addons.mozilla.org/en-US/firefox/addon/rentgen/
"Rentgen illustrates the amount of tracking scripts on a website and helps with formulating an email to the website admin, which can be a basis for a GDPR complaint. "

[-] ccx@sopuli.xyz 1 points 1 year ago

State authorities aren't bound by GDPR. That's something that's explicitly stated in it.

[-] TheTrueLinuxDev@beehaw.org 14 points 1 year ago* (last edited 1 year ago)
[-] Spzi@lemmy.click 5 points 1 year ago

The European Union isn’t a big player in cutting-edge AI development. That role is taken by the U.S. and China. But Brussels often plays a trend-setting role with regulations that tend to become de facto global standards and has become a pioneer in efforts to target the power of large tech companies.

The sheer size of the EU’s single market, with 450 million consumers, makes it easier for companies to comply than develop different products for different regions, experts say.

It's called the Brussels effect. Wish we would utilize it more for climate regulation / carbon pricing, although that's another topic.


Is this the official website for the act? https://www.artificial-intelligence-act.com/ Yesterday's signing does not seem to be covered in their timeline yet.

[-] jeena@jemmy.jeena.net 4 points 1 year ago

Interestingly, Japan is going in the opposite direction: https://beehaw.org/post/414113

[-] wsippel@discuss.tchncs.de 9 points 1 year ago

This isn't the opposite direction; copyright isn't really the focus of the Artificial Intelligence Act. Copyright and AI training are covered in the EU Copyright Directive 2019/790, which is very similar to the Japanese law. The AI Act basically just reiterates that AI models have to disclose exactly what they were trained on, something already implied by CD 2019/790.

[-] cyd@vlemmy.net 2 points 1 year ago

One major issue that concerns me about these regulations is whether free and open source AI projects will be left alone, or whether they'll have to jump through procedural hoops that individuals, or small volunteer teams, can't possibly deal with. I have seen contradictory statements from different parties.

Regulations of this sort always bring the risk of entrenching big, deep-pocketed companies that can just shrug and deal with the rules, while smaller players get locked out. We have seen that happening in some of the previous EU tech regulations.

In the AI space, I think the major risk is not AI helping create disinformation, invading privacy, etc. Frankly, the genie is already out of the bottle on many of these fronts. The major worry, going forward, is AI models becoming monopolized by big companies, with FOSS alternatives being kept in a permanently inferior position by lack of resources plus ill-targeted regulations.

[-] barsoap@lemm.ee 2 points 1 year ago

The regulation is generally about the application side -- things like "states, don't have a social score system" or "companies, if you make a CV scanner you better be bloody sure it doesn't discriminate". Part of the application side already was regulated, e.g. car autopilots, this is simply a more comprehensive list of iffy and straight-up unconscionable uses.

Generating cat pictures with stable diffusion doesn't even begin to fall under the regulation.

[-] cyd@vlemmy.net 1 points 1 year ago* (last edited 1 year ago)

Well, here's my worry. From my understanding, the EU wants (say) foundation model builders to certify that their models meet certain criteria. That's a nice idea in itself, but there's a risk of this certification process being too burdensome for FOSS developers of foundation models. Worse still, would the FOSS projects end up being legally liable for downstream uses of their models? Don't forget that, unlike proprietary software with their EULAs taking liability off developers, FOSS places no restrictions on how end users use the software (in fact, any such restrictions generally make it non-FOSS).

[-] barsoap@lemm.ee 0 points 1 year ago

A foundation model is not an application. It's up to the people who want to run AI in a high-risk scenario to make sure the models they're using are up to the task; if they can't say that about some FOSS model, then they can't use it. And honestly, would you want a CV or college-application scanner to involve DeepDanbooru?

[-] cyd@vlemmy.net 1 points 1 year ago

The regulation doesn't only put obligations on users. Providers (which can include FOSS developers?) would have to seek approval for AI systems that touch on certain areas (e.g. vocational training), and providers of generative AI would be obliged to "design the model to prevent it from generating illegal content" and to publish "summaries of copyrighted data used for training". The devil is in the details, and I'm not so sanguine about it being FOSS-friendly.

[-] barsoap@lemm.ee 1 points 1 year ago* (last edited 1 year ago)

Ok, here's what parliament passed, i.e. its amendments.

Quoth:

5e. This Regulation shall not apply to AI components provided under free and open-source licences except to the extent they are placed on the market or put into service by a provider as part of a high-risk AI system or of an AI system that falls under Title II or IV. This exemption shall not apply to foundation models as defined in Art 3.

Interesting, no foundation model exception, though the FLOSS community isn't going to train any of those soon in any case.

Or am I reading that wrong, and the "unless placed on the market" part is the exemption that shall not apply, rather than the whole of 5e? Gods.

More broadly speaking, this is the same issue as with the Cyber Resilience Act, and they're definitely on top of it, in the sense of saying "we don't want FLOSS to suffer from a misinterpretation of 'to put on the market'". Patience: none of this is law yet, but the very act of amending it this way tells courts not to interpret it that way.

In case you have use for it, here's the base version the parliament diffed against. Why aren't they using a proper VCS?
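For anyone who wants to compare the base text and the amended text themselves, you don't need the parliament to adopt a VCS; a unified diff of the two versions can be produced with Python's stdlib `difflib`. The snippets below are made-up stand-ins loosely paraphrasing the 5e language quoted above, not the official texts:

```python
# Sketch: a VCS-style unified diff of two versions of a legal provision,
# using stand-in text rather than the real base/amended documents.
import difflib

base = [
    "This Regulation shall not apply to AI components",
    "provided under free and open-source licences.",
]
amended = [
    "This Regulation shall not apply to AI components",
    "provided under free and open-source licences except to the",
    "extent they are placed on the market as part of a high-risk AI system.",
]

diff = difflib.unified_diff(
    base, amended, fromfile="base", tofile="amended", lineterm=""
)
print("\n".join(diff))
```

Lines prefixed with `-` come from the base version and lines prefixed with `+` from the amendments, which is exactly the view a proper VCS would give for free.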

[-] sudoreboot@mander.xyz 1 points 1 year ago

Sam Altman, CEO of ChatGPT maker OpenAI, has voiced support for some guardrails on AI and signed on with other tech executives to a warning about the risks it poses to humankind. But he also has said it’s “a mistake to go put heavy regulation on the field right now.”

Lol, this guy

this post was submitted on 15 Jun 2023
104 points (100.0% liked)
