Arjun Narayan is the Head of Global Trust and Safety for SmartNews, a news aggregator app; he is also an AI ethics and tech policy expert. SmartNews uses AI together with a human editorial team as it aggregates news for readers.
You were instrumental in helping to establish Google’s Trust & Safety Asia Pacific hub in Singapore. What were some key lessons you learned from this experience?
When building Trust and Safety teams, country-level expertise is critical, because abuse can look very different depending on the country you’re operating in. For instance, the way Google products were abused in Japan was different from how they were abused in Southeast Asia and India. That means abuse vectors vary depending on who is doing the abusing and which country you are based in; there is no homogeneity. This was something we learned early.
I also learned that cultural diversity is incredibly important when building Trust and Safety teams abroad. At Google, we ensured there was enough cultural diversity and understanding within the people we hired. We were looking for individuals with specific domain expertise, but also for language and market expertise.
I also found cultural immersion to be incredibly important. When we were building Trust and Safety teams across borders, we wanted to make sure our engineering and business teams could immerse themselves. This helped ensure everyone was closer to the problems we were trying to manage. To do that, we ran quarterly immersion sessions with key personnel, and that helped raise everyone’s cultural IQ.
Finally, cross-cultural comprehension was so important. I managed a team in Japan, Australia, India, and Southeast Asia, and the ways in which they interacted were wildly different. As a leader, you need to ensure everyone can find their voice. Ultimately, this is all designed to build a high-performance team that can execute sensitive tasks like Trust and Safety.
Previously, you were also on the Trust & Safety team at ByteDance for the TikTok application. How are videos that are often shorter than one minute monitored effectively for safety?
I want to reframe this question a bit, because it doesn’t really matter whether a video is short or long form. That isn’t a factor when we evaluate video safety, and length has no real bearing on whether a video can spread abuse.
When I think of abuse, I think of it in terms of “issues.” What are some of the issues users are exposed to? Misinformation? Disinformation? Whether a video is one minute or one hour long, there is still misinformation being shared, and the level of abuse remains comparable.
Depending on the issue type, you start to think through policy enforcement and safety guardrails and how you can protect vulnerable users. For example, let’s say there is a video of someone committing self-harm. Once we receive notification that this video exists, we must act with urgency, because someone could lose their life. We rely a great deal on machine learning to do this kind of detection. The first move is always to contact authorities to try to save that life; nothing is more important. From there, we aim to suspend the video, livestream, or whatever format it is being shared in. We need to ensure we are minimizing exposure to that kind of harmful content as soon as possible.
Likewise, if it’s hate speech, there are different ways to unpack that. In the case of bullying and harassment, it really depends on the issue type, and depending on that, we might tweak our enforcement options and safety guardrails. Another example of a safety guardrail: we implemented machine learning that could detect when someone writes something inappropriate in the comments and provide a prompt asking them to think twice before posting that comment. We wouldn’t necessarily stop them, but our hope was that people would think twice before sharing something mean.
It comes down to a mix of machine learning and keyword rules. But when it comes to livestreams, we also had human moderators reviewing streams flagged by AI, so they could report immediately and implement protocols. Because livestreams happen in real time, it’s not enough to rely on users to report; we need humans monitoring in real time.
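As a rough sketch of how that kind of layered enforcement can fit together (this is an illustration only, not ByteDance’s or SmartNews’s actual pipeline; the thresholds, placeholder keywords, and the stubbed toxicity_score model are all assumptions), a comment or flagged item might pass through keyword rules first and an ML score second, with a human review queue and a “think twice” nudge in between:

```python
# Illustrative sketch only -- not an actual production moderation system.
# toxicity_score() is a hypothetical stand-in for an ML classifier.

BLOCK_THRESHOLD = 0.9    # auto-remove / suspend
REVIEW_THRESHOLD = 0.6   # route to a human moderator (e.g. flagged livestreams)
NUDGE_THRESHOLD = 0.4    # prompt the author to reconsider before posting

BANNED_KEYWORDS = {"example_slur", "example_threat"}  # placeholder keyword rules


def toxicity_score(text: str) -> float:
    """Stand-in for a machine learning classifier; returns a probability-like score."""
    return 0.0  # a real system would run a trained model here


def moderate(text: str) -> str:
    """Combine keyword rules with an ML score, mirroring the layered approach described above."""
    if any(word in text.lower() for word in BANNED_KEYWORDS):
        return "block"
    score = toxicity_score(text)
    if score >= BLOCK_THRESHOLD:
        return "block"
    if score >= REVIEW_THRESHOLD:
        return "queue_for_human_review"        # humans monitor in real time
    if score >= NUDGE_THRESHOLD:
        return "prompt_author_to_reconsider"   # the "think twice" nudge
    return "allow"


if __name__ == "__main__":
    print(moderate("have a nice day"))  # -> allow
```

The key design point is that machine learning narrows the funnel while humans handle the ambiguous middle ground and anything happening live.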
Since 2021, you’ve been the Head of Trust, Safety, and Customer Experience at SmartNews, a news aggregator app. Could you discuss how SmartNews leverages machine learning and natural language processing to identify and prioritize high-quality news content?
The central concept is that we have certain “rules,” or machine learning technology, that can parse an article or advertisement and understand what that article is about.
Whenever something violates our “rules,” let’s say something is factually incorrect or misleading, we have machine learning flag that content to a human reviewer on our editorial team. At that stage, the reviewer, who understands our editorial values, can quickly review the article and make a judgment about its appropriateness or quality. From there, actions are taken to address it.
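A minimal sketch of that handoff might look like the following; the violation_score model, the flag threshold, and the review queue are stand-ins assumed for illustration rather than SmartNews’s real components:

```python
# Hypothetical sketch of an ML-flag -> human-review handoff, based only on the
# description above; the model, threshold, and queue are placeholders.
from dataclasses import dataclass
from queue import Queue


@dataclass
class Article:
    url: str
    title: str
    body: str


def violation_score(article: Article) -> tuple:
    """Stand-in for NLP models that parse an article and estimate how likely it
    is to be misleading or otherwise rule-violating. Returns (score, reason)."""
    return 0.0, "none"  # a real system would run trained classifiers here


editorial_queue: Queue = Queue()  # items a human editor reviews against editorial values

FLAG_THRESHOLD = 0.5


def triage(article: Article) -> None:
    score, reason = violation_score(article)
    if score >= FLAG_THRESHOLD:
        # Machine learning only flags; a human makes the final judgment.
        editorial_queue.put({"article": article, "score": score, "reason": reason})


triage(Article(url="https://example.com/story", title="Example", body="..."))
```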
How does SmartNews use AI to ensure the platform is safe, inclusive, and objective?
SmartNews was founded on the premise that hyper-personalization is good for the ego but is also polarizing us all by reinforcing biases and putting people in a filter bubble.
The way SmartNews uses AI is a little different because we’re not exclusively optimizing for engagement. Our algorithm wants to understand you, but it’s not necessarily hyper-personalizing to your taste. That’s because we believe in broadening perspectives. Our AI engine will introduce you to concepts and articles beyond merely adjacent ones.
The idea is that there are things people need to know in the public interest, and there are things people need to know to broaden their scope. The balance we try to strike is to provide these contextual analyses without being big-brotherly. Sometimes people won’t like what our algorithm puts in their feed. When that happens, people can choose not to read that article. Nonetheless, we’re proud of the AI engine’s ability to promote serendipity, curiosity, whatever you want to call it.
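To make that concrete, here is a toy sketch (my own illustration, not SmartNews’s ranking code; the scores, topics, and breadth_weight are invented) of how a feed might nudge readers beyond their usual topics instead of purely maximizing relevance:

```python
# Illustration only: one way a feed could trade pure relevance for breadth.

def diversified_rank(candidates, recent_topics, breadth_weight=0.3):
    """Greedily pick articles, boosting topics the reader has not seen much lately.

    candidates: list of (article_id, relevance, topic) tuples.
    recent_topics: set of topics dominating the user's recent reading.
    """
    seen = set(recent_topics)
    ranked = []
    remaining = list(candidates)
    while remaining:
        # Give a small bonus to articles whose topic is new to this reader.
        best = max(
            remaining,
            key=lambda c: c[1] + (breadth_weight if c[2] not in seen else 0.0),
        )
        ranked.append(best)
        remaining.remove(best)
        seen.add(best[2])
    return ranked


feed = diversified_rank(
    candidates=[("a1", 0.90, "politics"), ("a2", 0.70, "science"), ("a3", 0.88, "politics")],
    recent_topics={"politics"},
)
print([article_id for article_id, _, _ in feed])  # the science piece surfaces first
```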
On the safety side of things, SmartNews has something called a “Publisher Rating,” an algorithm designed to consistently evaluate whether a publisher is safe or not. Ultimately, we want to determine whether a publisher has an authoritative voice. For example, we can all collectively agree ESPN is an authority on sports. But if you’re a random blog copying ESPN content, we need to make sure ESPN ranks higher than that random blog. The publisher rating also considers factors like originality, when articles were posted, what user reviews look like, and so on. It’s ultimately a spectrum of many factors we consider.
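For illustration, a publisher rating along these lines could be a weighted blend of such signals; the signal names and weights below are assumptions for the sketch, not SmartNews’s actual formula:

```python
# Hypothetical publisher score built from the kinds of signals mentioned above.
# Weights and signal names are invented for illustration.

PUBLISHER_WEIGHTS = {
    "authority": 0.40,      # recognized expertise in a vertical, like ESPN in sports
    "originality": 0.30,    # original reporting vs. copied content
    "freshness": 0.15,      # how promptly articles are published
    "user_feedback": 0.15,  # aggregated reader reviews and complaints
}


def publisher_rating(signals: dict) -> float:
    """Weighted blend of per-publisher signals, each normalized to [0, 1]."""
    return sum(PUBLISHER_WEIGHTS[name] * signals.get(name, 0.0) for name in PUBLISHER_WEIGHTS)


espn_like = {"authority": 0.95, "originality": 0.90, "freshness": 0.80, "user_feedback": 0.85}
copy_blog = {"authority": 0.20, "originality": 0.10, "freshness": 0.60, "user_feedback": 0.40}

assert publisher_rating(espn_like) > publisher_rating(copy_blog)
```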
One thing that trumps everything is “What does a user want to read?” If a user wants to view clickbait articles, we cannot stop them as long as it isn’t illegal and doesn’t break our guidelines. We don’t impose on the user, but if something is unsafe or inappropriate, we do our due diligence before it hits the feed.
What are your views on journalists using generative AI to assist them in producing content?
I believe this question is an ethical one, and something we’re currently debating here at SmartNews. How should SmartNews view publishers submitting content produced by generative AI instead of written by journalists?
I believe that train has officially left the station. Today, journalists are already using AI to augment their writing. It’s a function of scale: there isn’t enough time in the world to produce articles at a commercially viable rate, especially as news organizations continue to cut staff. The question then becomes: how much creativity goes into this? Is the article polished by the journalist, or is the journalist completely reliant on it?
At this juncture, generative AI isn’t able to write articles on breaking news events because there’s no training data for them. Nonetheless, it can still come up with a fairly good generic template for doing so. For example, school shootings are so common that we could assume generative AI could give a journalist a template on school shootings, and the journalist could insert the school that was affected to receive a complete template.
From my standpoint working at SmartNews, there are two principles I think are worth considering. Firstly, we want publishers to be upfront in telling us when content was generated by AI, and we want to label it as such. That way, when people are reading the article, they aren’t misled about who wrote it. That is transparency of the highest order.
Secondly, we want that article to be factually correct. We know that generative AI tends to make things up, and any article written by generative AI needs to be proofread by a journalist or editorial staff.
You’ve previously argued for tech platforms to unite and create common standards to fight digital toxicity. How important of an issue is this?
I believe this issue is of critical importance, not only for companies to operate ethically, but to maintain a level of dignity and civility. In my opinion, platforms should come together and develop certain standards to maintain this humanity. For example, nobody should ever be encouraged to take their own life, yet in some situations we find this kind of abuse on platforms, and I believe that’s something companies should come together to protect against.
Ultimately, when it comes to problems of humanity, there shouldn’t be competition. There shouldn’t even necessarily be competition over who is the cleanest or safest community; we should all aim to ensure our users feel safe and understood. Let’s compete on features, not exploitation.
What are some ways that digital companies can work together?
Companies should come together when there are shared values and the potential for collaboration. There are always spaces where there is intersectionality across companies and industries, especially when it comes to fighting abuse, ensuring civility on platforms, or reducing polarization. These are moments when companies should be working together.
There is of course a business angle to competition, and typically competition is good. It helps ensure strength and differentiation across companies and delivers solutions with a level of efficacy that monopolies cannot guarantee.
But when it comes to protecting users, promoting civility, or reducing abuse vectors, these are topics core to preserving the free world. These are things we need to do to ensure we protect what is sacred to us, and our humanity. In my opinion, all platforms have a responsibility to collaborate in defense of human values and the values that make us a free world.
What are your current views on responsible AI?
We’re at the beginning of something that will be very pervasive in our lives. This next phase of generative AI is something we don’t fully understand, or can only partially comprehend, at this juncture.
When it comes to responsible AI, it is incredibly important that we develop strong guardrails, or else we may end up with a Frankenstein monster of generative AI technologies. We need to spend the time thinking through everything that could go wrong, whether that’s bias creeping into the algorithms or large language models themselves being used by the wrong people to do nefarious things.
The technology itself isn’t good or bad, but it can be used by bad people to do bad things. This is why investing the time and resources in AI ethicists who do adversarial testing to understand the design faults is so critical. This can help us understand how to prevent abuse, and I think that’s probably the most important aspect of responsible AI.
Because AI can’t yet think for itself, we need smart people who can build in these defaults when AI is being programmed. The important thing to consider right now is timing: we need these positive actors doing this work NOW, before it’s too late.
Unlike other systems we’ve designed and built in the past, AI is different because it can iterate and learn on its own, so if you don’t set up strong guardrails on what and how it’s learning, we cannot control what it might become.
Right now, we’re seeing some big companies shedding ethics boards and responsible AI teams as part of major layoffs. It remains to be seen how seriously these tech majors are taking the technology, and how seriously they’re weighing the potential downfalls of AI in their decision making.
Is there anything else you would like to share about your work with SmartNews?
I joined SmartNews because I believe in its mission; the mission has a certain purity to it. I strongly believe the world is becoming more polarized, and there isn’t enough media literacy today to help combat that trend.
Unfortunately, there are too many people who take WhatsApp messages as gospel and believe them at face value. That can lead to tremendous consequences, including, and especially, violence. This all boils down to people not understanding what they can and cannot believe.
If we don’t educate people, or inform them on how to judge the trustworthiness of what they’re consuming, and if we don’t build the media literacy needed to discern between news and fake news, we’ll continue to aggravate the problem and repeat the mistakes history has taught us to avoid.
One of the most important components of my work at SmartNews is to help reduce polarization in the world. I want to fulfill the founder’s mission to improve media literacy so that people can understand what they’re consuming and form informed opinions about the world and its many diverse perspectives.