Even though artificial intelligence can bring about a substantial positive impact in many areas of our lives, its inappropriate and unethical use has become a major concern in recent years. During my investigation of artificial intelligence in business and society over the last five years, I have been unpleasantly surprised by the amount of wrong and unethical use of AI worldwide. Several big technology companies have been involved in scandals for allowing the wrong and unethical use of data and artificial intelligence on their platforms. The most well-known is the Cambridge Analytica scandal: Facebook gave the consulting firm Cambridge Analytica access to sensitive data on 87 million users, which the firm used with AI algorithms to micro-target political ads in the 2016 US elections.
Biased and unethical use of Artificial Intelligence
Two of the principal problems with AI usage are unethical use and biased use.
The word biased means unfairly prejudiced for or against someone or something. There is unfairness and discrimination in our current world, and because AI algorithms learn from data, an algorithm trained on data that reflects an unjust and unequal society will learn those same biases.
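To make this concrete, here is a minimal sketch of how bias in historical data propagates into a model. The loan records, group labels, and the naive "model" below are entirely made up for illustration; real systems are far more complex, but the mechanism is the same.

```python
# Hypothetical, skewed historical records: group "B" was approved
# far less often than group "A" for reasons unrelated to merit.
historical_loans = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in historical_loans if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "model" that predicts approval whenever the historical
# approval rate for the applicant's group exceeds 50% simply
# reproduces the skew baked into the data.
def naive_model(group):
    return approval_rate(group) > 0.5

print(approval_rate("A"))  # 0.75
print(approval_rate("B"))  # 0.25
print(naive_model("A"), naive_model("B"))  # True False
```

The model never sees the word "discrimination," yet it systematically disadvantages group "B" because that is what the data taught it.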
The word unethical means not morally correct. In the context of artificial intelligence, unethical use is typically related to the violation of private data. However, who decides what is morally or ethically correct and what is not? What is considered unethical differs between cultures. There should be a more open public discussion about what is morally and ethically correct when building technology products for consumers and using their data. In any case, people's data should be valued and not violated.
In today’s competitive business environment, many business leaders are pressured to analyze how to generate more value for shareholders and investors, and fail to reflect on the long-term implications of the unethical or biased use of artificial intelligence. It is worth mentioning that this is not the case for most companies, and after the Cambridge Analytica scandal, many have started to carefully design human-centric uses of artificial intelligence. In 2018, Salesforce, the world-leading CRM company, became the first company in the world to name a “chief ethical officer,” and many big companies have followed its example.
Some considered it a way to improve Salesforce’s brand image, but either way, it is a crucial step toward more human-centric business practices.
AI also gives tremendous power to those who control the data. This is one reason China is investing huge amounts of money to use AI everywhere in society and, most importantly, to make the Chinese government more powerful by controlling citizens’ data. China is not the only country that uses artificial intelligence for surveillance: of the 176 countries globally, 75 are actively using AI technologies such as facial recognition systems and smart policing tools for surveillance.
This often happens without citizens knowing that they are being monitored and that their data is being fed into various AI algorithms. Most AI surveillance technologies are developed in China and sold to countries with less democratic systems, especially in Africa.
Here is a list of some of the surprising and unethical uses of AI that have recently been discovered:
- Instagram’s skin-showing algorithm: In 2020, an investigation into the social media company Instagram found that one of its algorithms prioritized photos of men and women showing more skin. This directly affects content creators and has an especially negative impact on the many young people who use Instagram.
The investigation analyzed 2,400 photos and found that a computer program recognized 21% of them as containing women in bikinis or underwear, or bare-chested men. Not posting images showing skin significantly decreases organic reach on Instagram, which means that both male and female content creators may face pressure to show skin to reach a larger audience.
- Clearview AI’s facial recognition technology: The U.S. firm Clearview AI has been the subject of controversy since the beginning of 2019, when the company expanded to 26 countries and began to work with law enforcement agencies, governments, and police forces.
The company’s software gives organizations access to a database of more than 3 billion images, against which pictures of individuals’ faces can be matched. The major problem lies in how the company gathers the images: by scraping photos from social media platforms and websites, while users are unaware that their private photos are being used.
Police forces in many countries have used Clearview AI. In Sweden, a police department was sued in February 2020 for using Clearview AI in violation of Swedish privacy laws.
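The matching step these systems perform can be sketched very simply. Facial-recognition software typically converts each face into a numeric "embedding" and declares a match when the distance between two embeddings falls below a threshold. The 3-dimensional vectors, names, and threshold below are made up for illustration; real systems use learned embeddings with hundreds of dimensions.

```python
import math

# Hypothetical database mapping identities to face embeddings.
database = {
    "person_1": [0.10, 0.80, 0.30],
    "person_2": [0.90, 0.20, 0.40],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_match(query, threshold=0.25):
    """Return the closest identity, or None if nothing is close enough."""
    name, vec = min(database.items(), key=lambda kv: euclidean(query, kv[1]))
    return name if euclidean(query, vec) < threshold else None

print(best_match([0.12, 0.78, 0.33]))  # person_1
print(best_match([0.50, 0.50, 0.50]))  # None
```

The privacy concern is not the distance computation itself but the database: every scraped photo silently enlarges the set of people who can be identified this way.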
- Smart speakers listening to you: Smart speakers like Amazon Echo and Google Home became quite popular a few years ago, and they continue to be an essential source of revenue for these companies. However, users were never made aware that these devices kept listening to them long after being “shut off.”
While these speakers are supposed to wake up and start listening only after a wake word such as “Alexa,” they are prone to mistakes. A study carried out in 2020 found that assistants such as Alexa, Google Assistant, Siri, and Cortana can be activated by mistake up to 19 times a day. Companies tolerate this because it gives them access to people’s private conversations, which are valuable data that can be used for marketing products.
In 2019, the Finnish newspaper Helsingin Sanomat interviewed four Finns who worked for Google and other technology companies listening to the private conversations of Google Home users. It is worth noting that users of these devices do not know that their conversations are being shared and listened to by company workers. “Generally, you know that you are collecting data, but when you listen to other people’s conversations, it somehow feels ‘sick,’” commented one worker.
Virtual assistants are useful, but they should work without the need to share private conversations with the servers of large technology companies.
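The false-activation problem described above has a simple cause: wake-word detectors must accept sounds that are merely *similar* to the wake word, so near-homophones in ordinary speech can trigger recording by mistake. Here is a toy sketch of that trade-off using plain string similarity; the wake word, the candidate words, and the 0.7 cutoff are made up for illustration, and real detectors work on audio features, not text.

```python
import difflib

WAKE_WORD = "alexa"
THRESHOLD = 0.7  # hypothetical similarity cutoff

def similarity(a, b):
    # Ratio of matching characters between the two strings (0.0 to 1.0).
    return difflib.SequenceMatcher(None, a, b).ratio()

def triggers(word):
    # A lenient detector: fires on anything "close enough" to the wake word.
    return similarity(word.lower(), WAKE_WORD) >= THRESHOLD

for word in ["alexa", "alexia", "election", "hello"]:
    print(word, triggers(word))
```

A word like "alexia" scores high enough to fire the detector even though the user never addressed the device. Raising the threshold reduces such false activations but makes the assistant miss genuine commands, which is why vendors err on the side of listening.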
- Deepfake videos: Deepfakes are already becoming a serious problem worldwide, and the technology is still in its infancy. Using deep learning, an AI system can gather data on the physical movements of an individual’s face until it can reproduce them almost identically, and then process them into a deepfake video. Deepfakes can also imitate voices.
Bad actors use deepfakes for political or personal gain. In 2019, a deepfake was used to scam the CEO of a British energy firm out of $243,000: the technology was so convincing that he believed he was hearing the voice of his parent company’s head requesting emergency funds.
Deepfakes can also be wielded as weapons. For example, company presidents or other individuals can be blackmailed with the threat of publishing a damaging deepfake video.
- Tracking shoppers: Another cause for privacy concern is England’s Southern Co-op franchise, which uses real-time facial recognition technology to track shoppers. The system aims to detect shoplifters and prevent abuse of staff, but the company never announced the rollout to the general public before the trials began. Privacy advocates question whether the technology adheres to data protection laws, as the cameras scan customers’ faces as soon as they walk through the door and match them against a watchlist of suspects.
What do you think of these pitfalls and challenges? If you know of any other challenges, please leave a comment. There are also several positive use cases of artificial intelligence, which you can read about here.