Annie Mullins OBE, Security Advisor at Yubo and Founder of Trust + Safety Group, discusses the role that online authentication and identity verification can play in increasing the safety, trust and security of social media
Online verification, in its purest form, would involve signing up for a service with some form of personal identification.
The way we interact with digital technology is changing rapidly, with a major impact on how we communicate, how we are educated and how we work. Our behaviors and attitudes are increasingly shaped and developed by our digital world.
Social media, or the early form of it, first appeared in the late 80s and 90s with the introduction of real-time online chat and email. These foundations paved the way for Friendster, LinkedIn, Myspace and, most significantly, Facebook. Now over 77% of the UK is on social media, with internet usage at an all-time high over the past year and the average British adult spending four hours online. Social media has been, and remains, a space for us to connect and share our lives and opinions with each other, but as the debate continues over how much of ourselves to share online, so does growing public pressure around online safety and how users can be held accountable.
So-called online anonymity has allowed an environment of cyberbullying and hate speech to fester and grow, poignantly highlighted by the recent racist attacks on members of the England football team after the Euro 2020 final. The question now is how platforms can more easily hold perpetrators to account, and whether technology is part of the answer.
Authentication and identity verification is one idea floated by the UK government and others to combat the problem. It is a challenge for social media companies, given the lack of available solutions, the lack of comprehensive identification databases against which to verify an ID, and users' reluctance to share this personal information with online platforms. However, some companies are already taking the plunge, investing in and testing authentication and verification solutions according to the level of risk and the issues they encounter.
The events of the past year, in which the spread of disinformation, trolling and abuse have undermined democracy, have been a watershed moment, prompting the public, government and business to scale up their efforts. All approaches that minimize the damage caused by these factors should be on the table, and all solutions considered and improved, including AI. Otherwise, the lack of online safety will undermine confidence in the use of these platforms and all the opportunities they can offer as they grow and develop to make our lives easier and strengthen our bonds with family and friends.
Perceived anonymity and the role of social media platforms
When I talk about online anonymity, I am not referring to the use of our data or to online monitoring and protection, but to the idea of concealing one's identity in order to express oneself freely and with impunity. While this certainly allows freedom of speech and ideas without fear of judgment or prejudice, this perceived anonymity can also be used to spread hatred, misinformation and prejudice and, in some cases, to attack others.
Some commentators and researchers have observed that online abuse and cyberbullying increased during the pandemic, as people attempted to reproduce their daily lives online to retain a sense of normalcy. This increase is claimed to be greatest among children and adolescents, with one report citing a 70% rise in hate speech in online discussions between young people. The apparent growth of this trend has brought increased scrutiny of social media platforms and the role they play in addressing hate speech and cyberbullying.
It is essential that all platforms mobilize to address these challenges alongside others, including governments, educators and NGOs, as each has a role to play in countering them, including the combative and sometimes abusive behavior of some prominent leaders. Learning civility and behaving respectfully towards others online is a major challenge of the 21st century, and one that must be met if we are to have a safe internet that offers the wonder of sharing ideas and opinions, and of meeting, socializing and connecting with others around the world, as it promised at its inception.
Perhaps a more critical imperative would be to help users understand, through education, through the platforms themselves or both, that they are in fact not anonymous and that, if the harm they cause is criminal, they can be traced and, if necessary, face the full force of the law. Perceived anonymity is in itself detrimental, and users who turn to harassment and abuse put their own livelihoods at risk, as the attack on the United States Capitol on January 6 well illustrated: some of those involved are now facing legal consequences, and many have lost their jobs, friends or relatives because of their actions.
While online abuse and harassment cannot be completely eradicated and remain an ongoing challenge, safety and privacy must be at the heart of the design of online products and services as we move forward. Smart, proportionate and scalable regulation is widely recognized as necessary, even by the online platforms themselves. However, not all platforms face the same issues and challenges, so a ‘one size fits all’ approach is unlikely to suit the diverse online world we all participate in. As Australia's Safety by Design framework highlights, the first step for businesses is to undertake a risk assessment of their service and to identify and mitigate the risks of abuse. The focus should be on identifying and eliminating online harm before it happens.
Tackling online abuse is also a necessary consideration for the investment and business community, which would benefit from ensuring that security and ethical considerations sit at the heart of the early design processes it initiates and finances.
The role of technology
As touched on earlier in the article, there have been many suggestions and discussions about the steps that should or can be taken to improve online safety through authentication and identity verification. There are different approaches to authenticating users, including user verification. In its purest form, online verification would require people to prove their identity when registering on a platform, or when using certain features such as financial transactions, with some form of personal identification such as a passport or other government-issued ID. This can facilitate better security; where there is a high risk to users, it can also deter and limit bad actors who seek to exploit and abuse others, and build trust between users participating in an online community.
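The gating described above can be sketched in a few lines. This is a minimal illustration, not any platform's real implementation: the `User` class, the feature names and the boolean verification result are all hypothetical stand-ins for what would, in practice, come from a third-party ID verification provider.

```python
from dataclasses import dataclass

# Hypothetical set of features a platform might deem high-risk and
# therefore gate behind identity verification.
HIGH_RISK_FEATURES = {"payments", "live_streaming", "direct_messages"}

@dataclass
class User:
    username: str
    id_verified: bool = False  # set once a government ID has been checked

def verify_identity(user: User, document_accepted: bool) -> None:
    # In reality this outcome would come from an external ID
    # verification service; here it is a simple boolean stand-in.
    user.id_verified = document_accepted

def can_use_feature(user: User, feature: str) -> bool:
    # Only high-risk features require verification, so ordinary
    # browsing stays open to unverified accounts.
    if feature in HIGH_RISK_FEATURES:
        return user.id_verified
    return True
```

The point of the sketch is the proportionality: verification is demanded only where the risk justifies it, rather than as a blanket barrier to entry.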
Beyond authenticating and verifying credentials, age estimation technology is a powerful AI tool for identifying and flagging accounts where there may be doubts about a user's age on a platform. This is especially important for platforms that host content intended only for users aged 18 and over, or that attract very young users under 13. It would be encouraging to see more investment in developing these solutions further, but this may take time given the privacy and ethical concerns surrounding the use of AI for age authentication.
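One way such flagging could work, sketched under assumed thresholds (the four-year tolerance and the under-13 cut-off are illustrative, not taken from any real system), is to compare a user's declared age against a model's estimate and escalate only the mismatches:

```python
def flag_for_review(declared_age: int, estimated_age: float,
                    tolerance: float = 4.0) -> bool:
    """Return True when an account should be escalated for human review.

    `estimated_age` is assumed to come from an age-estimation model;
    `tolerance` allows for the model's normal error margin.
    """
    # An estimate below 13 on a 13+ service is always escalated,
    # regardless of the tolerance band.
    if estimated_age < 13 and declared_age >= 13:
        return True
    # Otherwise flag only large disagreements between declaration
    # and estimate.
    return abs(declared_age - estimated_age) > tolerance
```

Keeping a human reviewer in the loop, rather than acting automatically on the estimate, is one way to soften the privacy and accuracy concerns the article raises.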
AI is the broader concept of computers and machines capable of simulating human behavior and performing tasks in “intelligent” ways. Used intelligently, AI can help monitor online behavior and fight fake accounts. Automated moderation and blocking, for example, are powerful tools that allow a platform to automatically block malicious users and monitor fake accounts. Beyond monitoring fake accounts, AI can also be used to find and filter inappropriate content, through human training and machine learning, to eliminate online harm before it even reaches its intended target.
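The moderation-and-blocking pipeline described above might route content like the sketch below. The score thresholds, strike limit and outcome labels are all assumptions for illustration; the toxicity score itself would come from a trained classifier, which is out of scope here, so it is passed in directly.

```python
from collections import defaultdict

BLOCK_SCORE = 0.9    # auto-remove content scoring above this
REVIEW_SCORE = 0.6   # queue for human review above this
STRIKE_LIMIT = 3     # removals before the account is blocked

# Running count of removals per user (a stand-in for persistent storage).
strikes: defaultdict[str, int] = defaultdict(int)

def moderate(user_id: str, toxicity: float) -> str:
    """Route a message based on a classifier's toxicity score."""
    if toxicity >= BLOCK_SCORE:
        strikes[user_id] += 1
        if strikes[user_id] >= STRIKE_LIMIT:
            return "account_blocked"   # repeat offender: block the account
        return "removed"               # single violation: remove the content
    if toxicity >= REVIEW_SCORE:
        return "human_review"          # borderline: a person decides
    return "published"
```

The middle tier matters: routing borderline scores to human review, rather than auto-removing them, is how automated filtering avoids silencing legitimate speech.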
While there has been progress in the types of technology that can be used to combat online toxicity, we still have a long way to go. It is important that this work is undertaken by all platforms with some urgency, with the support of academics and others to deliberate on the ethics, so that we move forward in making the internet a more civil and participatory space, as it was originally intended to be.