The Importance of Policy Making in Content Moderation
There are billions of active users on social media platforms worldwide, and as that number continues to rise, social media companies are finding it increasingly difficult to monitor and control the content posted on their platforms.
Content moderation policies urgently need to evolve to protect users from exposure to sensitive content, in line with current trends and other important factors, including the following:
Rapidly changing trends
Rapidly changing and emerging trends in content creation and consumption have made it difficult for companies to develop content moderation policies. Shifts in user behaviour, upgrades and new product launches, the limitations of existing policies, emerging misinformation trends, and many other such changes demand that policies evolve effectively over time.
At Tech Mahindra, for example, we help our clients stay abreast of the trends and user behaviour impacting the social media industry. We work with them to continually review and refine their content moderation policies. As a result, our clients have been able to identify and prevent new risks to their platforms sooner and more efficiently.
Data privacy laws impacting policymaking
A significant amount of user data is handled during content moderation, including data about the user, the complainant, and the nature of the content. It is imperative for social media companies to comply with national and regional laws and regulations such as the GDPR and CCPA. However, the ambiguity and variance in the laws and regulations around content moderation are among the biggest obstacles to compliance and policymaking. Organizations need strict guidelines and stringent controls to protect user data. At Tech Mahindra, we help protect our clients' user data wherever it is collected, transported, processed, or retained, within the scope of the applicable laws and regulatory requirements. Our moderators are trained and tested on data privacy and protection, and moderation work is performed in a monitored environment by qualified moderators using secure networks and equipment.
Employee safety and wellbeing
While most companies use AI-led monitoring as the first line of defence, the absence of clear-cut policies creates stress for the moderators who deal with this environment daily. The risk of mental health problems such as anxiety, trauma, post-traumatic stress disorder (PTSD), and acute or chronic stress is real, so addressing the well-being and safety of employees is essential. At Tech Mahindra, we enforce an effective resilience support structure that includes in-house wellness coaches, a strong wellness and full-service design partner, MADPOW, and wellness SPOCs to assist in early intervention. Our intensity-based production and wellness plan helps us achieve high-quality work with higher employee retention. We also provide well-established, customized psychometric assessments and support at each stage of hiring, training, and employment. By implementing a robust wellness policy, we not only look after the well-being of our employees but also help our clients achieve their goals.
Balancing online free speech while dealing with harmful content
People frequently share their opinions, views, and real-world pictures on social media. Though content moderation aims to solve a particular problem, the removal of a post is a restriction on the right to freedom of expression. Policies should clearly set out guidelines to combat harmful content without limiting users' freedom of expression. Companies can rely on content moderation partners to help safeguard the interests of their communities. Whether it concerns protected groups, children, vulnerable sections of society, or law enforcement agencies, Tech Mahindra has stood true to the expectations of its clients, which include some of the world's biggest technology and social media companies.
With our strong governance of decision-making and AI/ML-driven insights, we are helping clients maintain freedom of speech on their platforms while constantly upgrading their policies.
Evolution of technology with the metaverse and policymaking
The metaverse is considered the next evolution of the internet. It has the potential to transform the way people communicate, work, learn, and interact with other users. Many companies are therefore creating applications, products, and services that enable the metaverse to develop and serve its users in more interesting and engaging ways. But companies are struggling to draft policies that protect their users from harmful content or behaviour. Let's look at some of the challenges policymakers face while developing a content moderation policy for a metaverse platform.
- Creating a policy that protects users from bad actors and bad content without invading user privacy.
- Defining which virtual acts by avatars are illegal or objectionable.
- Preventing avatar hacking, theft of online identities, and the creation of fake profiles.
- Constantly addressing new factors arising from technological advancements and changes in user behaviour.
- Using the right technology features for moderation (for example, blocking a user vs. reporting a user).
As the metaverse evolves, collaboration between the content moderation partner and the client will keep policies continually updated. A policy should outline the various scenarios in which the system could be abused and identify means of preventing those abuses. Policies should respect users' personal space while keeping watch on the activities around them. At Tech Mahindra, we use real-world governance models to prevent harmful behaviour and act in real time, such as assigning administrators to virtual spaces to monitor and protect users from abuse, harassment, and illegal activity.
Leveraging technology for effective content moderation
The content moderation partner's experience should enable clients to tailor policies that align with their business goals across industries and platforms. In addition, policies should include a hybrid (AI-human) model to detect and remove PII and SPII faster, before it reaches another user; a minimal sketch of such a pipeline follows below.
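To make the hybrid model concrete, here is a minimal sketch of how such a pipeline might work: rule-based patterns redact obvious PII automatically, while a model confidence score routes borderline posts to a human moderator. The patterns, threshold values, and function names here are illustrative assumptions, not a description of any specific production system.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns for common PII; a production system would pair
# rules like these with a trained NER model.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\d[ -]?){9,11}\d\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

@dataclass
class ModerationResult:
    text: str            # post text with rule-matched PII redacted
    needs_review: bool   # True if a human moderator should take a look

def moderate_post(text: str, classifier_score: float) -> ModerationResult:
    """Redact rule-matched PII, then route borderline posts to humans.

    classifier_score is assumed to come from an ML model estimating the
    likelihood that the post still contains sensitive information.
    """
    redacted = text
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[{label} removed]", redacted)

    # Confident cases are handled automatically; uncertain ones are
    # escalated to a human moderator before the post is published.
    needs_review = 0.4 <= classifier_score <= 0.8
    return ModerationResult(redacted, needs_review)

# Example: the rules catch the email, and a mid-range model score
# sends the post to the human review queue.
result = moderate_post("Contact me at jane@example.com", classifier_score=0.6)
print(result.text, "| human review:", result.needs_review)
```

In practice, the patterns and thresholds would be tuned per platform and per policy, and decisions from the human review queue would feed back into retraining the model.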
The goal is to build policies that ensure the trust and safety of the users and protect the platform as a brand.
Sathish Kasthuri has over two decades of experience in leading business expansions across service lines, with revenues upwards of USD 90 Million. His hands-on work experience includes setting up a Center of Excellence (CoE) for varied digital competencies in the areas of Trust & Safety, Generative AI, Conversational AI, Metaverse moderation, and Revenue Operations. He has 10+ years of experience in driving digital transformation programs across multiple industry segments. With the world moving towards ‘anything as a service’ in the metaverse, ensuring user safety and building a platform that people can trust have become paramount. Bringing together an emerging technology and integrating it with current business realities to drive profitable growth is what Sathish enjoys the most.