PGZ Geruchten /// Digital independence, age verification, European plans, facial recognition and more

The PGZ-Geruchten for April 2025.
Quick posts, reminders, short notes, interesting links, … Everything that (for now) doesn't end up in a full post or on another page.
Check back for updates throughout the month.

Europa digitaal onafhankelijk?
Our entire digital life runs on American and Chinese tech. How does the EU pry itself loose? Top economist Francesca Bria argues for an ambitious plan: the EuroStack. The Italian economist studies who controls the technology that makes modern life possible. Among other roles, she was Chief Technology Officer of Barcelona, where she reshaped the city into a fundamentally different ‘smart city’ – one less dependent on American and Chinese technology companies. Since then she has stressed the importance of technological sovereignty: the idea that cities, countries, and continents should themselves steer the digital technologies they depend on.
— via De Correspondent

Digital Identities and the Future of Age Verification in Europe
A three-part series about age verification in the European Union. In part one, the EFF gives an overview of the political debate around age verification and explores the age verification proposal introduced by the European Commission, based on digital identities. Part two takes a closer look at the European Commission’s age verification app, and part three explores measures to keep all users safe that do not require age checks.
— via Electronic Frontier Foundation

[Europe]
‘ProtectEU’ security strategy: a step further towards a digital dystopian future
The European Commission presented an internal security strategy that would undermine digital rights and even increase security threats. We unpack what ‘ProtectEU’ means for the EU’s future digital policy, including on encryption, data retention, and border surveillance.
— via EDRi

[civil rights] [facial recognition] [Hungary]
Civil society to European Commission: Act now to defend fundamental rights from Hungary’s Pride ban and the use of facial recognition against protesters
EDRi, along with a broad coalition of civil society organisations, demands urgent action from the European Commission on Hungary’s new law banning Pride marches and permitting the use of live facial recognition technology targeting protesters.
“In an attack on the EU fundamental rights of freedom of peaceful assembly and freedom of expression, Hungary’s Parliament fast-tracked the passing of amendments banning and criminalising Pride marches and their organisers. The penalties for this ban include exorbitant fines, and in certain cases, imprisonment. The amendments are also in violation of the EU Artificial Intelligence (AI) Act because they permit the use of real-time facial recognition technologies for the identification of protestors – a significant infringement on privacy and personal freedoms also protected under EU law.”
— via EDRi
[privacy tools]
Free and open tools for a quick campaign
A selection of reliable, secure (and often free) ethical tools for running a campaign on short notice. All the tools on this list are operational, proven, and already hosted on existing instances.
— via Louis Derrac

Is Google Photos safe? For sharing private photos, not so much

Google Photos promises to keep your memories safe, organized, and always within reach — but its convenience has a cost. While your pictures may be protected from hackers, they’re not always private, especially from Google itself. From unclear AI practices to cases where people lost entire accounts over misinterpreted images, the privacy risks are often overlooked.
— via Proton blog

UK creating ‘murder prediction’ tool to identify people most likely to kill
Algorithms allegedly being used to study data of thousands of people, in a project critics say is ‘chilling and dystopian’.
— via the Guardian
Response from Ilyas Nagdee (Amnesty International)
Predictive policing has prejudice built in
Re your article (‘Dystopian’ tool aims to predict murder, 9 April), the collection and automation of data has repeatedly led to the targeting of racialised and low-income communities, and must come to an end. This has been found by both Amnesty International in our Automated Racism report and by Statewatch in its findings on the “murder prediction” tool.
For many years, successive governments have invested in data-driven and data-based systems, stating they will increase public safety – yet individual police forces and Home Office evaluations have found no compelling evidence that these systems have had any impact on reducing crime.
Feedback loops are created by training these systems using historically discriminatory data, which leads to the same areas being targeted once again. These systems are neither revelatory nor objective. They merely subject already marginalised communities to compounded discrimination. They aren’t predictive at all, they are predictable – and dangerous.
— via the Guardian

[surveillance tech]
Companies Honed Their Surveillance Tech in Israel. Now It’s Coming Home.
After deploying AI tools in Israel and on the U.S. border, American tech companies are now powering domestic repression.
“In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered “Catch and Revoke” initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to “overthrow or replace the culture on which our constitutional Republic stands.” The arsenal was built in concert with American tech companies over the past two decades and already deployed, in part, within the U.S. immigration system.
Rubio’s “Catch and Revoke” initiative emerges from long-standing collaborations between tech companies and increasingly right-wing governments eager for their wares. The AI industry’s business model hinges on unfettered access to troves of data, which makes less-than-democratic contexts, where state surveillance is unconstrained by judicial, legislative, or public oversight, particularly lucrative proving grounds for new products. The effects of these technologies have been most punitive on the borders of the U.S. or the European Union, like migrant detention centers in Texas or Greece. But now the inevitable is happening: They are becoming popular domestic policing tools.”
— via the Intercept
