AI, the EU and learning from past mistakes

The AI Act & more explained.

Artificial Intelligence (AI) has piqued the interest of many industries looking to automate processes and increase profits. Its introduction into a variety of sectors has allowed for quicker and smarter analysis. In marketing, it has slashed costs and increased revenue through more targeted advertising, using data to personalize ads. In oncology, AI has been able to detect and diagnose cancer in samples at the same rate as, or faster than, clinical pathologists. It has also been used to support wildlife conservation, an area with a critical deficit in funding, by analyzing vast quantities of audio files to determine when birds have struck power lines or other infrastructure. Unmanned aerial vehicles (UAVs) have been deployed with AI that analyzes human activity to determine whether an incident of poaching is about to take place. 

Like many human-made technological innovations, AI can have unintended consequences. It took over a decade for the world to understand the consequences social media could have on democracy. That inaction allowed bad actors to conduct disinformation campaigns that changed the course of ostensibly democratic elections and fueled mass violence such as the genocide in Myanmar. 

UAV is also the term used for drones with lethal capabilities: rather than being fitted with conservation AI, they carry AI that analyzes images of people and weapons and targets those individuals with deadly strikes. The proliferation of deepfakes on the internet, particularly politicians’ faces digitally manipulated to deliver speeches they never gave, is yet another way the technology can be used for harm. 

Determined not to allow Big Tech to facilitate potentially harmful activity, the European Parliament has adopted a report on AI that lists demands, mandating that the European Union (EU) set rules governing the use of the technology. In response to how the technology is currently being used, the EU has created a commission tasked with drafting policies to regulate AI and its outputs within the union, with the intention of these eventually becoming law as the AI Act. This follows two major pieces of legislation intended to rein in Big Tech, the Digital Markets Act and the Digital Services Act; the industry has operated in a relatively non-competitive, low-tax environment for the early part of this century. The draft rules are intended to protect EU citizens’ privacy and guard against any attempt by bad actors to steal identities or money or otherwise breach citizens’ rights, including a ban on the use of AI to impersonate any citizen. 

However, the EU is not only curtailing the freedoms of AI firms, albeit to protect its citizens; it is also investing billions in AI technology. The goal is to create data centers and ‘AI excellence centers’ focused on attracting skilled individuals from across the EU. But as mentioned earlier, the EU isn’t doing this purely for moral or technological reasons; it is afraid of falling behind the ‘tech curve’ set by the likes of China and the US. The European Parliament’s rapporteur for the report, Axel Voss, told Science|Business: “The number one barrier certainly is market fragmentation, which in turn affects investment and research. Without a truly harmonized digital single market, the resulting lack of cross-border investment and also cross-border data exchange prohibit innovation of any kind.”

The EU is set to pass the AI Act into law later this year. Despite consistently being accused of responding slowly to crises, it is seemingly ahead of the curve and might just be early to this next one. 


Lewis Lovejoy

25 May 2022
