Data has been an important topic at Mobile World Congress for many years. This is the case at MWC2018 too, but with some new angles and perspectives. Data privacy, users' control over their own data, and ethics have become important topics in the discussions. This is also linked to the development of AI, as machines become more autonomous in using data and making decisions. This is not happening only at MWC; we can say the data business and the discussion around it are entering a new era.
There are still many companies that are mostly talk, making promises about how they can collect more data than others and monetize it better. It can be mobile user data, advertising and content data, or financial data, but these companies just want to offer better weapons to get to know customers and sell more to them. That was maybe a cool story five years ago. Today, not so much.
GDPR is on the way in the EU, and other countries are working on similar initiatives. Users are more interested in how their data is used. Many international organizations, including the Council of Europe (e.g. Convention 108), the UN (e.g. Resolution 68/167, The right to privacy in the digital age) and the IEEE (e.g. P7002, a working group on data privacy processes), are working on guidelines and standards to give people more control over their own data.
It is no longer only about data and privacy. AI is changing the game, and it is also about AI ethics. With AI you have input and output data from the system, but also the algorithms that process the data. Both the data and the algorithms can contain bias or 'unethical' components, and both are relevant when we talk about the rights of people and the liabilities of companies.
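To make the point about data bias concrete, here is a minimal sketch (with entirely hypothetical groups and numbers, not taken from any real system) of how a biased input dataset flows straight through an otherwise 'neutral' algorithm and reappears in its decisions:

```python
# Hypothetical historical decisions: group "B" was approved far less often
# in the past, so the input dataset itself carries a bias.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 2 + [("B", False)] * 8
)

def approval_rate(group: str) -> float:
    """Share of past decisions for this group that were approvals."""
    decisions = [ok for g, ok in history if g == group]
    return sum(decisions) / len(decisions)

def decide(group: str) -> bool:
    """A naive 'algorithm': approve if the group's historical rate > 50%.
    The rule itself looks neutral, yet it reproduces the historical bias."""
    return approval_rate(group) > 0.5

print(approval_rate("A"))          # 0.8
print(approval_rate("B"))          # 0.2
print(decide("A"), decide("B"))    # True False -- the old bias carries over
```

The algorithm contains no explicit reference to either group's merits, which is exactly why auditing both the data and the algorithm matters.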
Paula Boddington has conducted AI ethics research at the University of Oxford. She gave a talk at MWC2018 and raised some interesting cases. She described Microsoft's chatbot Tay, which turned racist on Twitter and had to be shut down. Ms Boddington posed a question that is not easy to answer: can we blame the AI for this, or was the underlying cause Twitter itself, i.e. the tweets and behavior of people on the platform? Social media seems to make many people behave like angry racists; maybe the AI just mirrored the real nature of social media behavior.
She also talked about the famous Milgram Experiment at Yale University, in which participants were instructed to give electric shocks to a person in another room, whose audio they could hear, supposedly to help that person learn. The voltages were raised step by step, and when instructed by the researcher, many participants continued to give shocks at levels that would have been life-threatening to the subject. Ms Boddington pointed out that this illustrates how people can slide into unethical behavior step by step, without really realizing it. This might be the case with the use of data and AI too, which is why it is so important to keep discussing these topics.
Of course, the work of academics and international organizations alone is not enough in this area. The reality is that lawmakers must create rules and companies must see a business case for protecting privacy and the ethical use of data and machines. There is now evidence that many companies are starting to see this. For example, in financial services MasterCard introduced a token model (MDES) to better protect card users' data. Blockchain and distributed models offer consumers solutions to manage their own financial data (e.g. Prifina). Qualcomm, AT&T, IBM, Nokia, Palo Alto Networks, Symantec and Trustonic have formed an IoT cybersecurity alliance. And there are many other examples. Many companies are also developing better digital identities, but these are harder to evaluate from a privacy point of view, since they can both help and challenge privacy.
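The basic idea behind token models like MDES can be sketched roughly as follows. This is a deliberately simplified illustration, not MasterCard's actual protocol (real schemes involve issuers, cryptograms and domain restrictions): the merchant only ever handles a random token, while the mapping back to the real card number stays inside a secure vault.

```python
import secrets

class TokenVault:
    """Simplified token vault: maps random surrogate tokens to real
    card numbers (PANs). Illustrative sketch only."""

    def __init__(self) -> None:
        self._vault: dict[str, str] = {}

    def tokenize(self, pan: str) -> str:
        # The token is random, so it reveals nothing about the PAN.
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault holder can recover the real card number.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("5555 4444 3333 2222")

# The merchant stores and transmits only the token...
assert token != "5555 4444 3333 2222"
# ...and only the vault can map it back for authorization.
assert vault.detokenize(token) == "5555 4444 3333 2222"
```

The privacy benefit is that a breach on the merchant side exposes only worthless random tokens, not reusable card numbers.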
AI and data are gaining key roles in all industries. We must remember that intelligence is not only about using data and making decisions; ethics is an important part of intelligence too. It is encouraging to see that more and more parties realize this, and MWC2018 illustrates this new era of the data and AI business. Consumers need solutions to manage their own data and to get AI working for them. It is not only about individual people, but about how data and AI can serve the common good and justice.