Europe's digital ambitions: the EU has just revealed its vision for the continent's tech future. Fearing competition from abroad, it wants to grow its own tech sector, but there are concerns over just how it will protect privacy. This is Inside Story.

Hello there and welcome to the program, I'm Nastasya Tay. Now, there's a global race for technology leadership, with the United States and China out front. Europe says it wants to catch up and reinforce its position on the digital scene. Its leaders are concerned about the reliance on foreign tech companies, including Microsoft, Apple and Huawei, and they say they want to develop Europe's own tech sector. To do that, they've come up with a strategy that aims to find solutions that prioritize people's lives, the economy and an open society. The EU outlined its approach to data and artificial intelligence on Wednesday. The proposals seek tougher regulation of the world's biggest tech platforms and increased spending for the tech sector, but critics say more needs to be done to protect private data. We'll bring in our guests in a moment, but first let's hear what the European Commission had to say.

We want to boost European artificial intelligence by attracting more than 20 billion euros per year for the next decade. Artificial intelligence is about big data: data, data and again data. And we all know that the more data we have, the smarter our algorithms. This is a very simple equation, and therefore it is so important to have access to the data that are out there. This is why we want to give our businesses, but also our researchers and the public services, better access to data. Here Europe is not just following; there is a real potential of taking the lead. We have strengths when it comes to business-to-business technology, and a lot of industrial, high-quality data is being produced right now. To be able to put that together, to make the most of it, that gives us a sense of urgency.
Of course, if we believe that the rules are not working, we will have to review them, and we are ready to do so.

Well, let's now bring in our guests. We have Katarzyna Szymielewicz; she's president of the Panoptykon Foundation and also a board member of European Digital Rights, an international advocacy group. In London we have Maria Luisa Stasi, the senior legal officer at ARTICLE 19, a human rights organization that focuses on freedom of expression. And in the Dutch city of Maastricht we have Catalina Goanta; she's an assistant professor in private law and also co-manager of the Maastricht Law and Tech Lab. Welcome to you all, and thanks for joining us on the program. I do want to start with what's driving this whole process. So Catalina, let me ask you: we have seen a number of policies emerge from various different European bodies over the last few years, and it feels like there is this sense of urgency. What's driving that?

It's good to take a step back and consider the nature of the field that we're going to talk about today. Technological change, and the way in which technological change has been advanced through research, is unsettling for a lot of people, but it's also presenting a lot of opportunities. So there has been a lot of pressure on the European Union to really show some sort of leadership in this field and to develop rules that would combine these two, or create a balancing act between the challenges and the potential of emerging technologies like artificial intelligence. So what we saw yesterday was the release of three different papers, three different documents, through which the European Commission is making its voice heard on what its current vision is and how to tackle this challenge.
I'm going to come to you, because as we were saying, this big AI paper was released this week, and broadly the EU doesn't really have a reputation for being a leader in AI. We've seen the US with its tech giants leading the charge, and China, because the government there is pressing for technological advancement. Is this the EU trying to catch up? I see that Chinese and US firms account for 86 percent of AI-related patents globally, so can the EU catch up?

The way I understand the vision here is not exactly about the economy catching up with the businesses behind AI, but rather about the EU being a regulator, a key governing body, able to respond to risks and threats generated by AI. If the EU wanted simply to speed up growth related to this technology, and in that sense compete with Chinese or US-based companies, I would say that would be a failure. What we expected, and what is visible in this white paper published yesterday, is a different vision, in which the EU plays a leading role in framing the development of this technology so that it benefits societies and not only businesses, and that also includes problems such as the environmental impact of this technology, which is very problematic.

Well, I'd say that for the EU this is certainly about a regulatory framework, but they also want to scale up the spending: in 2016 the EU spent 3.2 billion euros on AI projects, and now they're saying they want to scale that up to 20 billion euros a year on AI development. Moving back to regulation, and I do want to focus on that for a little bit: the policies that we're talking about seem to reflect a kind of inherent tension, as Catalina was saying, trying to regulate the space but also trying to grow the space. It is a really fine line to walk. Do you feel that this white paper does that?
Oh, it is a difficult task for everyone to combine these two objectives. I think what we have seen yesterday is a good attempt to do that. It comes from a wide public debate in the past couple of years at least, where, let's say, we as civil society organizations and consumer associations tried to convince the European Commission that humans and citizens should be at the center of this, and that the economic and business trade-offs shouldn't weigh too heavily. This is what seems to be actually reflected in the paper, because it declares again and again that the kind of development we want to see around AI, the kind of artificial intelligence the European Commission imagines, needs to be human-centric and needs to be grounded in European Union values and fundamental rights. Now of course these declarations need to be actually implemented, so what we need is to see how this plays out in concrete application, but in principle I think the framework, the balance that has been put forward by the European Commission yesterday, is pretty satisfying. I hope we're going to have the time to talk about specific cases to see what we mean, but the general framework is going in the right direction, I think.

So one of the things that is part of this framework is this policing of the development of AI, and they've said that they want to do certification of high-risk areas, so policing, transportation, health care. It sounds like the EU is not necessarily concerned with what suggestions you're getting on Netflix, but it is concerned with what kind of disease you're getting diagnosed with. So I want to come to you here, because I want to ask you whether or not you feel that drawing that line between high-risk and low-risk AI works.
Well, we will see whether that has any chance to work in practice. Certainly the approach of not regulating technology in itself, but looking at how it is used by humans in specific cases, is a good approach, and I agree with this one. But I'm concerned that it is not really helping us understand where the high risk occurs. For example, the example you just gave us, profiling internet users and showing them news on Facebook and other platforms, that actually might pose serious risks to how we perceive the world, how much we are fed disinformation, how much we are affected and manipulated by online platforms. So taking the easy approach and looking only at certain sectors such as policing or transport or healthcare might not be enough. I would expect the EU to demand that every implementation of AI comes with a risk assessment, because we never know which particular risks might be involved. Another red line that I'm missing in this paper is saying which implementations of AI are problematic; for example, predicting human behavior in the future can be very problematic. We don't really have scientific evidence proving that this can be done; there's a huge margin of error, a huge risk of bias here. I don't see that red line being drawn in the white paper, and I'm still waiting for that line to be drawn in EU legislation.

Catalina, I see that you agree broadly with this, and I do want to ask you about this idea of bias or discrimination, because it's really about how you train an AI and the data that would potentially lead to that or not. I see that the EU is saying they want to test and certify the data sets the way that they can check cosmetics, cars and toys. How would they even start to do that?
So that's a very interesting point, and I totally agree with Katarzyna. I think that there are a lot of technologies that will pose a lot of different risks even if they're applied across sectors; maybe in one sector they will be lower risk, and in a different sector they might be high risk. I think that's also why the white paper mentions two cumulative factors, so there is somewhat of a vision of this danger being there. The problem, and I think it's important to keep in mind that maybe this has been one of the most criticized aspects of this white paper, is the fact that perhaps the commission doesn't really show that it itself understands the risks AI poses. The example that has really stayed with me, especially in the field of product liability, as I have a consumer protection background, is that of self-driving cars; that is a very typical example, and Germany already has a law that has dealt with this particular field of regulation. However, what do we do with the profiling of citizens, where perhaps similar technology can be deployed in different ways? And as I mentioned, predicting social outcomes is actually one of the fields of AI where we know very clearly from science that it simply doesn't work; we cannot predict social outcomes. So then the question is: how are you going to determine the accuracy of such a tool? And maybe one of the more important documents released yesterday, the European data strategy, actually speaks about a more important development, a more important aspect of coming up with a workable regulatory framework, and that is changing public procurement rules, because this is also something the commission seems to be very much in favor of: if we are going to have these tools being purchased by public authorities, then
we need new public procurement rules and new assessments, in a sense, that are going to help these public officials determine what they should rely on and what they shouldn't. If only it were as easy as that, because it all starts with information literacy.

Indeed, this is a hugely, hugely complex space, and I see that when Ursula von der Leyen became the new head of the European Commission back in November, she gave everyone a 100-day deadline to come up with a strategy on AI. Maria Luisa, that sounds like a pretty tight time frame. Has this been rushed, do you think?

Well, I think we need to consider that this discussion didn't start with the new commission; in fact, the previous commission had been working on it, and it appointed a high-level expert working group to provide input and formulate some recommendations on where to go and what to do. So at the EU level this debate didn't start 100 days ago. Having said that, yes, it might be a short time, but we welcome the fact that the white paper is now open for public consultation until May, so all of civil society and every stakeholder can actually contribute to that debate as well. There are a number of things, and I agree with what has been said before, that need to be fixed. If I can get back, for example, to high-risk AI and its definition, which is going to be a key definition: I support the fact of having different criteria to identify when we have a high-risk application. I think dual use is another element which is completely missing, or at least seems to be completely missing, from the white paper, so this might be something to be addressed in the next days or weeks: what do we do with dual-use applications? What do we do, for example, with a smart fridge that is used, or misused, to spy on the people you live with? This might not be something to be dealt with as a whole by the European Commission, but if you consider the amount of technology at our disposal, and I'm talking about private uses, not only public uses, the possibility of misuse is very widespread, way more than it used to be 10 years ago. So I think this should also be raised.

Well, this really raises the question of privacy, essentially, because that's at the heart of all of this. I want to ask you about facial recognition, because I know there was a big discussion about whether or not facial recognition software would be outlawed as part of this paper, and it seems that that hasn't happened. What do you think the priority is here, and do you agree with it?

I understand there is a reluctance to impose a general ban on any technology, which is a clever move, because such bans hardly ever work in practice. On the other hand, we are all waiting to see very concrete safeguards protecting citizens from abuses in the most sensitive areas. The use of facial recognition technology by the police is only one of the very sensitive areas. I wish we could see, if not a partial ban, at least safeguards for people to be protected; for example, for people who protest on the street to be protected from identification. But we have to understand that the EU's competence here is very limited, and we are probably going to have to wait for national-level rules on how police can use such technology. So that area, I'm afraid, is quite complicated, and we are going to see very few rules in EU-level legislation.

Well, it sounds like there is a level of fragmentation there. So Catalina, let me ask you then: we are not talking just about the US or China, a single state that can have a blanket rule or choose to do things one single way; we're talking about a bloc of states here. Is it possible? I mean, they're talking about a digital single market, and that's already allegedly in place. Is it possible to get everyone to agree?

Yes, that's a very important point, and I think maybe we should mention first and foremost that this white paper was never really meant to outlaw anything, because it simply cannot do that; it doesn't have the legitimacy and legally binding power to do so. That's why it's going to be very interesting to follow this space and see what's going to be put into the new acts that are upcoming: the Digital Services Act, which is going to be presented as a draft later this year, and also the Data Act, which is supposed to be published next year. These are going to be the acts where we will see exactly what member states might want to agree to and what the limits to this agreement will be. And maybe to build a little bit on what I was saying, and to give you an example of how this space will be increasingly difficult to regulate from the perspective of proportionality and the perspective of managing the sovereignty tug-of-war between the European Union and the member states: in the Netherlands, very recently, I believe it was last week or so, a court in The Hague ruled that software the government had been using to detect fraud, such as tax fraud or social security fraud, was simply not allowed to be used by the government. So on the one hand, the white paper, and also its more structured twin, the communication by the commission on a digital strategy for Europe, really have this idea of letting public authorities use AI, but
at the same time we already see proof, we see evidence from member states, that there will be problems if these kinds of applications and these kinds of deployments of AI do not come with very transparent procedures that lead to really actionable remedies for people who might really be in danger of being the victims of such systems.

Well, there has also been quite a lot of criticism of the white paper about some of the things that aren't included in it. We've been talking a lot about how data is stored and shared, and I see that it notes that the volume of data stored across the world that would be used to train AI will most likely quadruple from current levels to 175 zettabytes by 2025. To explain what a zettabyte is: that's one trillion gigabytes, so in five years we're looking at 175 trillion gigabytes of data that needs to be stored, managed and looked after. This all takes a huge amount of hardware and energy, and I see that tech energy consumption is already increasing by 9 percent every year. I know this can all seem a little bit abstract, so I do want to take a look at a few figures here. Back in 2014, Google's search engine received nearly 47,000 global requests a second, and that actually works out to around 500 kilograms of CO2 emissions a second. Researchers also estimate that training a single AI model just to understand human language emits carbon dioxide equal to five times the lifetime emissions of a car in the United States. The information and communications tech sector, including data centers, is now comparable to aviation in terms of greenhouse gas emissions, and data centers alone could account for 10 percent of total electricity use within the next five years. So let me ask you then: there are clearly climate change implications here, and it feels like, even though there is talk of data centers being carbon neutral, there's not a lot of detail about that in this
white paper.

To me, the EU seems to be very optimistic about the possibility of mitigating the environmental impact. On the one hand it seems to be going along with the mainstream narrative of progress, and on the other hand bringing in its own values and concerns. But what I would expect is something more, something more radical: asking the question, questioning why we need all that data, instead of fetishizing data and assuming that progress in data collection and analytics is necessary. We do not see this being necessary. As we've discussed before, AI is a great tool for certain uses, but it's not going to do much good in predicting the future, in managing human behavior, or in solving other societal problems. So do we really need more of this? Do we really need to generate more data? That thinking in the white paper is quite worrying, and I'm not optimistic regarding the possibility of mitigating the environmental impact if we continue with more of the same, meaning more data, more technology, more servers.

Well, this all comes down to trust, doesn't it? Maria Luisa, let me ask you, because this is about retaining public trust, as I see it, as Europe and the public face an uncertain future and fairly scary technological advancements that people don't fully understand. So does this policy do that? Does this policy allow the public to hold on to trust as we move forward?

Well, I'm not sure it does, but I'm sure it wishes to start to build trust. It's an ongoing process that cannot be done from one day to another, and it needs a number of elements. I think the white paper spots a few of those elements, but then it shouldn't limit itself to stating again and again that, for example, technologies need to be transparent, fair, open and human-centric; it's about how you actually implement these principles. So this is where I think the white paper is pretty generic.
If you read between the lines, there might be a good angle to take, which is that the commission is kind of admitting that we already have tools we can use to make sure that the way we deploy artificial intelligence in Europe actually happens in a trustworthy way: the GDPR, the General Data Protection Regulation, mainly, but also other instruments in consumer protection, unfair commercial practices and so on. It seems like the commission is recalling these, and also kind of suggesting that those instruments have not been applied properly so far. The judgment mentioned just a few minutes ago, by the court in The Hague a couple of weeks ago, was doing the same: recalling a number of basic principles in the GDPR and saying that in this specific case they didn't seem to have been properly applied. So I think we are again at the beginning of a process; we are not yet at the end of it, but much depends on, much weight needs to be put on, the way we are going to implement basic principles.

Well, essentially the beginning of a very long process indeed, with no concrete laws until the end of the year, I understand. It's something we'll continue to watch very closely indeed. Thank you so much to our guests, Katarzyna Szymielewicz, Maria Luisa Stasi and Catalina Goanta, and thank you too for watching. You can see the program again any time by visiting our website, that's aljazeera.com, and for further discussion do go to our Facebook page, that's facebook.com/AJInsideStory. You can also join the conversation on Twitter; our handle is @AJInsideStory. From me and the entire team here, bye for now.