Tiffany Xingyu Wang, Chief Strategy Officer, Spectrum Labs
Tiffany Xingyu Wang is the Chief Strategy Officer at Spectrum Labs, in charge of strategic alliances, go-to-market operations, and thought leadership on brand safety. She shares her insights on digital governance at InternetHealthProject.com.
Tiffany is an angel investor and advisor in AI, blockchain, and quantum computing startups, and continues to live by a principle she learned from her father: "multiply nearby, influence globally." Through this, Tiffany has become a proven multiplier within organizations and a sought-after speaker on global stages.
Creating safe, inclusive communities is an important objective for online platforms. Negative online experiences can damage user engagement, brand equity, and stakeholder investments. To prevent these harms, developers and companies must choose the right tools to protect communities from User-Generated Content (UGC) that violates platform guidelines. But monitoring content is a complex endeavor, and it is critical for moderation engines to speak the same language as the platform.
Real-Time Engagement and Human Behavior
The Internet is the foundation on which we build our virtual interactive communities. When another layer of technology is added, like Agora's Real-Time Engagement (RTE) platform, people gain new and exciting ways to interact with each other through audio, live video streaming, and chat across a wide range of applications and use cases. Apps that connect people are widely popular and include gaming platforms, dating sites, education tools, and telehealth visits with a doctor. The adoption of RTE platforms and solutions allows us to behave and interact online much as we do in our physical lives, but without the travel, time, or expense of meeting up.
Much like in physical conversations, online interactions can also be difficult to control and bring about both negative and positive experiences. As norms and laws guide our behavior in the physical world, community guidelines guide our behavior in the cyber world. Every online community sets its own guidelines, which are appropriate to the platform’s intended purpose and audience. Consistently and accurately detecting when guidelines are violated and responding in real-time is essential to protecting the community from potential revenue loss and community members from harm.
Moderating Online Content
Historically, online communities have thrown people at this problem, enlisting thousands of content moderators to sift through incidents reported by users. According to MIT Technology Review, Facebook alone has 15,000 content moderators reviewing 3 million posts per day.
Yet the majority of online users don't take advantage of user-report features, and those who do often abuse them with false reports that waste moderators' time.
In addition to human moderation teams, platforms have also historically tried using keyword lists to flag words or expressions that indicate prohibited UGC, such as profanity. However, these list-based tools don't account for nuance or context, which often results in false positives. If a platform tries to restrict the use of the word "ass," it also restricts "sunglasses" and "assassin." This creates confusion for users, making it harder to understand why certain phrases are punished and others are not.
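The "sunglasses" problem above comes from naive substring matching. A minimal sketch (the blocklist and function names here are illustrative, not any platform's actual filter) shows how a plain substring check produces false positives that even a simple word-boundary match avoids:

```python
import re

# Illustrative blocklist; a real platform's list would be far larger.
BLOCKLIST = ["ass"]

def naive_filter(text: str) -> bool:
    """Flags any message containing a blocked term as a substring."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Flags only whole-word matches, avoiding substring false positives."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered)
               for term in BLOCKLIST)

print(naive_filter("I lost my sunglasses"))          # True  (false positive)
print(word_boundary_filter("I lost my sunglasses"))  # False
```

Even the word-boundary version still misses context entirely — it cannot tell an insult from a quotation or a joke among friends — which is exactly the gap a contextual approach aims to close.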
At the Agora RTE2020 Conference, Spectrum Labs spoke about the importance of identifying unwanted cyber behaviors, prioritizing them, and responding appropriately. We approach content moderation by adding a layer of context: interpreting the context of UGC in an automated, real-time and holistic way. Since its inception, Spectrum Labs has used a contextual AI approach to keep over 1 billion users safe online.
This is not an easy task. However, Spectrum Labs provides the technology to immediately identify over 40 disruptive behaviors in 30 supported languages and then act on them with optimal effectiveness. The company's solution goes beyond simply identifying an offensive word by considering the word in the context of the conversation. Further, high-performing behavior identification technology enables automated responses to community guideline infractions, ensuring consistent enforcement. This allows platforms to recognize and respond to disruptive content that threatens user engagement, revenue, and brand equity.
Why It Matters
The 2020 advertiser boycott of Facebook led companies to reconsider their online advertising options, with five of the top twenty advertisers either drastically reducing Facebook budgets or leaving the platform completely. Another study found that one in three consumers will walk away from a brand they love after a single bad experience.
Wang believes companies investing in RTE should protect their brands and invest in Safety by Design for three reasons:
- Speed to trust will differentiate a brand and win over Gen Z and beyond;
- The cost of inaction is paramount: a single scandal can wipe out all investment in growth features and end in negative virality;
- Safety by Design is more cost-effective than crisis management, and pivotal to avoiding the need for it.
Building a Strong Foundation for Your Business
Spectrum Labs' artificial intelligence technology is built into Agora's RTE platform, which allows for scalability and consistency across continents. Spectrum Labs and Agora have partnered to create a powerful solution for developers and platforms to recognize and respond to toxic behaviors.
Agora's partnership with Spectrum Labs helps brands create safe real-time engagement features. Request a demo today to learn how Spectrum Labs' moderation tools can benefit your business.