Interview with Gregor König, Head of the Data Protection Office
Digital providers collect vast amounts of data in the background. These data are analysed and, increasingly, used for personalised marketing and tailored product or service offers. What is your opinion in this context? Is this beneficial for the user, or is the opposite the case? And what role does data protection play?
Like many things in life, targeted marketing has its pros and cons. The benefit of not receiving personally irrelevant adverts, for example, is offset by the downside that targeting an advert in this way is only possible on the basis of detailed knowledge of the interests and proclivities of the potential customer, i.e. on the basis of profiling. While the average recipient may be aware of this, the kind, depth, and content of these profiles are usually not known to them. This is where the EU General Data Protection Regulation (GDPR), the new data protection law that came into effect in May, comes in: it requires transparency about such profiles and prescribes consent as a basic principle (which can also be given in the course of concluding a contract). This way, it is up to the individual to judge whether the pros outweigh the cons for them, or the other way around – based on the premise that I, as the targeted customer, know how my data are processed and that this is transparent to me.
A whole range of apps is available to users for free. While the data of an individual user may be worth next to nothing, it is the volume that counts. Which of the data that users readily provide would you consider “critical”, and which “non-critical”?
Here, too, transparency plays a role. When using an app, I have to be aware of what data the app uses and in what way, what other functions and information it accesses, and with whom it shares these data. What counts as “critical” is often a very subjective matter, depending on the age and technological affinity of the person in question. Data protection does provide an objective benchmark here: it protects, in particular, “special categories of personal data”, as the industry jargon puts it. Among them are political opinions, trade-union membership and, of particular practical relevance, religious denomination, health data, and biometric data. This categorisation can serve as a basis for evaluating how critical data are, but in practice, as just explained, the crucial question is which uses of my data I personally have a problem with. We also have to accept that for many people, especially the younger generation, privacy and data protection play a less important, or at least a different, role than the general structure of the law suggests. For example, they record and share their pulse rate and calorie consumption, divulge their preferences and activities on social networks, and accept mobility profiling in exchange for more precise location measurements and directions.
Social networks do not require personal, face-to-face contact. What, in your view, makes social networks so successful? Will human interaction become irrelevant and be completely phased out at some point? What would that mean for society?
Easy access to information definitely has an impact on society – after all, we talk about the information society. This happens at all levels: companies talk about Industry 4.0, while in the private sphere this development is tied to the debate over whether a private sphere, or privacy, still exists at all – or whether, through our constant hunger for information, which is itself the source of ever new pools of information, we have already effectively waived this (for now) fundamental right. I would like to see a broad societal discussion here, detached from day-to-day political events. Over the past years and decades, we have come to accept our role as passengers in a trend driven by technological development, which tempts us with the utility of its achievements without our engaging in critical discussion. As far as social networks are concerned, such a discussion would no longer make sense at a general level today, but it certainly would where social networks are misused to influence opinion.
Big Data applications make use of a large number of algorithms, i.e. computational procedures for all sorts of problems. They are often said to be “objective”. How objective can Big Data be if these algorithms are implemented by programmers? Do you know of applications where the absence of any “subjectivity” is essential, i.e. applications that could solve such problems better than people and/or public bodies?
The fact that applications are implemented by human programmers does not prevent them from being objective – otherwise research and scientific knowledge, based on the notion of lege artis, would never be possible. But as far as applications are concerned, Big Data, too, spans a wide spectrum between “reasonable” and “less reasonable” as well as between “socio-politically desirable” and “less socio-politically desirable” (although a broad discussion is lacking, we do have a certain basic consensus on some questions). Do we want improved traffic-control systems on the streets and better telematics in our cars' navigation devices? Probably yes. Do we want individual consumers to be classified in an insurance system depending on their personal lifestyle? Probably not. Do we want to improve the analysis of diseases and eradicate certain disease patterns on the basis of large data volumes? Probably yes. Do we want certain individuals with a certain financial background to be able to influence democratic elections? Probably not. Legally speaking, the GDPR missed the chance to address Big Data more specifically: while the legislator wants to put data users in a position to further process personal data for compatible purposes, the wording here is very vague (we can already see discussions about its exact scope in both legal and business circles), and there is a complete absence of accompanying mitigating measures.
The Chinese government plans to introduce a social credit system for all Chinese citizens by 2020 to promote “honesty in government affairs”. It has already been implemented on a voluntary basis and draws on various sets of data, such as credit ratings and criminal records, to establish the reputation of an individual. How do you see this development? Should a government be able to classify its citizens as “desirable” or “undesirable”, rewarding good behaviour and sanctioning bad behaviour?
Welcome to 1984. We do indeed have to view this development very critically. I also think that, despite the relatively inhomogeneous political situation, this would not be possible in today's Europe, given our basic values and the development of the legal framework in this field – in particular the GDPR and the right to privacy enshrined as a fundamental human right. But I would not bet my life that we will not see such a development, albeit in a watered-down version, at a governmental level in Europe in the coming decades. While the hunger for information displayed by governmental institutions is currently quite high in Europe, and specifically in Austria, we also attach great importance to such data pools remaining insular – i.e. to no connections between them existing or being established, and to no decision-making processes with legally disadvantageous consequences stemming from such connections.
How do you see this issue in the corporate sector? Reward schemes or discounts for positive behaviour versus penalties or even exclusion from certain services?
In the corporate sector, such profiling is acceptable only if it does not relate to goods or services that cover basic needs or where a monopoly exists. Beyond that, such a merit-rating system is acceptable from a data protection point of view as well, provided it is transparent and the people affected have freedom of choice. We can already see numerous examples, especially in the insurance industry. But I think that basic products from insurance companies or banks, for example, should be available without profiling and its consequences. The complete exclusion from a service also has to be viewed critically and must be tied to comprehensible, serious criteria that have been communicated clearly in advance.
Can or should ethical/moral considerations be implemented in Big Data, Artificial Intelligence, and similar applications (e.g. in autopilots in the event of an accident)?
I think this is a very important point – not only in the example you mention, of how data-controlled processes should behave in cases of moral dilemma, not least because their speed gives them more options than a human decision-making process, but also with regard to the question of whether we as a society actually want a certain use of information at all. Do we want the outcome of democratic elections to be influenced via social networks? Do we want to legally provide governmental institutions with ever more data to fight crime at the expense of our privacy? I am not being judgemental either way, but these should be the political and societal discussions of the information age. Legal discourse alone will not suffice to answer these questions.
From the perspective of the consumer/user: does the EU General Data Protection Regulation (GDPR) have any positive effects for the user? Can the legislator actually keep up with developments in the areas of Big Data and Artificial Intelligence?
The legislator will never again be able to keep up with technological development, especially since it keeps accelerating; to believe anything else would be to ignore the facts. The legislator's reaction – or the specifications of other governmental institutions – necessarily has to remain at a general and technologically neutral level so as to create a level playing field for technological developments. The Data Protection Regulation actually does a good job in this respect – although, as recipients of such regulations, we despair, of course, because the vagueness gives us no certainty that the supervisory authorities will concur with our implementation at, say, a company. Here, too, the mindset has to shift: away from by-the-book control towards ensuring compliance with the goals of a regulation.
Disclaimer:
Forecasts are not a reliable indicator of future developments.