Matthieu Boutard of Bodyguard AI: 5 Things We Can Each Do To Make Social Media And The Internet A Kinder And More Tolerant Place

Set clear community guidelines: post a clear message at the gate to your channels saying that aggressive, hateful or discriminatory language will not be tolerated. In your personal channels, call out toxicity and make it clear that it's unacceptable.

As a part of our interview series about the things we can each do to make social media and the internet a kinder and more tolerant place, I had the pleasure to interview Matthieu Boutard, President and co-founder, Bodyguard.ai.

Sensitive to both social impact and business, Matthieu started his career in microfinance. He then spent years at leading tech companies such as Groupon and Google, and worked on one of the most inspiring philanthropic teams in the world: Google.org.

Matthieu is currently Bodyguard’s Managing Director. He truly believes in a better internet for all and focuses his energy on offering a safe digital space for everyone.

Matthieu is a passionate social entrepreneur excited about the opportunities technology and the Internet open up but concerned at the same time about the rise of online toxicity and extremism.

Thank you so much for doing this with us! Our readers would love to “get to know you” a bit better. Can you share your “backstory” with us? Can you share the most interesting story that happened to you since you started your career?

Since I was young, I’ve always had an interest in communities. First, in the real world, I worked in microfinance in Bangladesh and India, then I co-created a group/community buying platform that I sold to Groupon. Then, in the virtual world, I worked for 5 years at YouTube to create ecosystems that were useful for creators and streamers to thrive, and brands to spend lots of money. That was 10 years ago and yet I had already noticed that online toxicity was increasing. I wanted to do something about it and so I joined Google’s philanthropy team. I became an expert on online hate, extremism, child pornography and radical groups online. Online hate is no longer a secret to me. Google, the big social networks and the studios talk the talk but don’t invest enough in protecting people, in my opinion. What I really wanted to do was to put the human being at the heart of my work.

To do this, I decided to join forces with a young computer genius to found a startup together and fight online toxicity. We called it “Bodyguard.ai”. Our mission is to protect people, platforms, brands and their communities from online toxicity, including hate, spam, noise, fraud and illegal advertising.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

When it was time for us to create a product dedicated to businesses rather than consumers, we thought that the social media giants would rush to help us by connecting us to work with some of their clients, especially the ones producing a lot of social media content. We spent days putting together great stories about what collaboration would mean and what would be the benefits for all parties. We spent a lot of time and energy only to realize that it wouldn’t lead to anything concrete. We realized that even though your cause is noble, doors are still very hard to open.

Are you working on any exciting new projects now? How do you think that will help people?

Even as Web 2.0 becomes the Metaverse, moderation and security remain issues of paramount importance, and they are what we are laser-focused on. This is what I am working on today. We are not in a situation where technology serves our well-being; quite the contrary!

At the moment, there is not enough control over content in general, whether it is online hate, child pornography, the sale of weapons or organs, etc. The impact of social networks on thought is already proven: extremism is flourishing everywhere, and serene, constructive political debate seems to have gone out of the window. The free exchange of views, shared spaces and the security of individuals are not at all guaranteed. Just look at the first reports of sexual harassment and aggression in the Metaverse in December 2021. The challenge I have set myself is not to reproduce the mistakes of Web 2.0, but to make the internet a place for free and fair discourse, protecting its users from the worst elements of toxicity.

Ok, thank you for that. Let’s now jump to the main focus of our interview. Have you ever been publicly shamed or embarrassed on social media? Can you share with our readers what that experience felt like?

I’m fortunate to be old enough for social media not to be such a big thing when I was a teenager. And I am also fortunate not to have personally experienced the worst aspects myself. However, my time at YouTube was eye-opening in terms of seeing some extreme reactions and negativity to some creators and within online communities. There is a lot of evidence out there that online toxicity impacts mental health and we should all be better at identifying and calling this out when we see it.

What did you do to shake off that negative feeling?

I’ve never really felt a sense of shame because I’m quite courteous by nature. But I like to ask myself the question: if you saw someone being bullied online or in real life, what would you do? What would you do if you saw someone say something misogynistic, racist or hateful? No one says or hears that in real life without being concerned. We don’t allow this in a public place; it would be terribly embarrassing. And that’s good! It’s forbidden in real life and it should be forbidden online too. That feeling of shame that we all have in real life should also be felt when we are online. You don’t insult someone, whether it’s on the street or from behind your screen.

Have you ever posted a comment on social media that you regretted because you felt it was too harsh or mean?

I’m sure I’m not the only person guilty of using social media or the internet to “sound off” if I’ve had a bad experience with a business or service, from leaving a negative review of a restaurant to complaining if the power is cut off for hours. It’s important for brands and customers to have this feedback channel for poor service, as well as good. However, it is equally important that these channels are not overwhelmed with toxic content and that individual people aren’t targeted with harmful comments.

Can you describe the evolution of your decisions? Why did you initially write the comment, and why did you eventually regret it?

When one reads the comments on YouTube or Instagram, or the trending topics on Twitter, a great percentage of them are critical, harsh, and hurtful. The people writing the comments may feel like they are simply tapping buttons on a keyboard, but to the one on the receiving end of the comment, it is very different. This may be intuitive, but I feel that it will be instructive to spell it out. Can you help illustrate to our readers what the recipient of a public online critique might be feeling?

It’s easy to forget that behind every instance of toxic or harmful content, there are real people being impacted. As well as creators, celebrities, athletes, politicians and people in the public eye, there are countless social media managers, customer service staff and human moderators who can be attacked simply for doing their jobs. Human moderators tend to be young and early in their careers, and the role can have high turnover due to the mental health strain they are subjected to.

Some of the real risks to the mental health of Internet users are:

  • Addiction: excessive use creates an addiction to social networks. This increases the risk of depression, anxiety, sleep disorders and social isolation.
  • Harassment: online hate manifests itself in many forms, with real consequences for mental health. The behaviours experienced by Internet users, including harassment, insults, discrimination and revenge porn, have dramatic effects. The effects of such hate speech do not remain within the confines of the Internet.

Our instant and intelligent moderation solution relies on a double contextualization:

  • the words contained in the sentence, measuring its toxicity by analyzing the sense in which each suspect word is used
  • the relationship between the writer and the receiver by analyzing metadata
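The two signals above can be illustrated with a toy sketch. Everything here is hypothetical and is not Bodyguard's actual method: the term list, weights, thresholds and function names are all invented for illustration. It only shows how a content score (the words in the sentence) and writer/receiver metadata (the relationship between them) might be combined:

```python
# Hypothetical sketch of "double contextualization" moderation.
# The term list, weights and thresholds below are invented, not Bodyguard's.

TOXIC_TERMS = {"idiot": 0.7, "trash": 0.5, "loser": 0.6}

def content_score(message: str) -> float:
    """First context: score toxicity from the words in the sentence."""
    words = (w.strip(".,!?") for w in message.lower().split())
    return max((TOXIC_TERMS.get(w, 0.0) for w in words), default=0.0)

def moderate(message: str, author_follows_target: bool, prior_reports: int) -> str:
    """Second context: adjust the score using writer/receiver metadata."""
    score = content_score(message)
    # A stranger (non-follower) is treated as a slightly riskier sender.
    if not author_follows_target:
        score += 0.2
    # Prior reports against the author raise the score further (capped).
    score += min(prior_reports, 3) * 0.1
    if score >= 0.8:
        return "block"
    if score >= 0.5:
        return "review"
    return "allow"
```

A real system would replace the keyword lookup with a learned language model and use far richer metadata, but the shape of the decision, content signal plus relationship signal feeding one verdict, is the point being made here.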

My ambition is not to regulate the Internet, nor the Metaverse, but to offer protection to people who come to the Internet to discuss, meet and share things with other people.

Do you think a verbal online attack feels worse or less than a verbal argument in “real life”? How are the two different?

It is strange that many behaviours that would get you thrown out of a bar, office or shop in the real world somehow become more acceptable when they happen online. If you are unlucky enough to be targeted repeatedly online, I think it CAN be worse than in real life, particularly if it is a sustained attack with multiple people piling on. We see this a lot when someone posts an opinion on a controversial topic. Things can get out of hand much more easily when people are hidden behind a keyboard and it is harder for them to appreciate the consequences of their attack.

What long term effects can happen to someone who was shamed online?

The cases that make the headlines involve suicide, self harm or nervous breakdowns. The cases that don’t are more insidious, a decline in mental health, a loss of trust in others, a loss of confidence. This is why we work so hard at Bodyguard to protect our clients and their customers and staff.

Many people who troll others online, or who leave harsh comments, can likely be kind and sweet people in “real life”. These people would likely never publicly shout at someone in a room filled with 100 people. Yet, on social media, when you embarrass someone, you are doing it in front of thousands or even millions of people, and it is out there forever. Can you give 3 or 4 reasons why social media tends to bring out the worst in people; why people are meaner online than they are in person?

  1. There is a perception of anonymity online. Trolls and other negative users feel insulated from enforcement because they dissociate their online personas from their real-life ones.
  2. There is less perception and understanding of harm. Similarly, when people dissociate from their online persona, they create a cognitive distance to the potential impact of their actions/words.
  3. Social media is designed to amplify. Algorithms reward engagement and reactions to comments, both negative and positive. And so, those who are more extreme online see that behaviour rewarded.
  4. Social media is an echo chamber. An extreme point of view that creates engagement means you are served similar content, which can reinforce a particular worldview and make users feel their negativity is a common viewpoint.

If you had the power to influence thousands of people about how to best comment and interact online, what would you suggest to them? What are your “5 things we should each do to help make social media and the internet a kinder and more tolerant place”? Can you give a story or an example for each?

We can still turn things around; we can put the human being back at the center of the equation. We need to ensure that technologies mature in the direction of our well-being and not just the market. We are more than a bunch of data; we are humans with a history. We can take back control of our data so that we are not just information to be extracted. We’re all going to live a large part of our lives on the Internet, so let’s make it fit us, and even push us to the best of our abilities. It is up to each of us to act. Let’s consciously choose what we want to consume, rather than leaving that choice to algorithms that recommend products specially designed to make us as addicted as possible.

When it comes to online toxicity, here are 5 things every business can focus on to best deal with toxic content and make a more tolerant place!

  1. Set clear community guidelines: post a clear message at the gate to your channels saying that aggressive, hateful or discriminatory language will not be tolerated. In your personal channels, call out toxicity and make it clear that it’s unacceptable.
  2. Take a step back! Think before you react on the spot and say something you might regret.
  3. Coaching and training: Ensure your team is trained on techniques to cope, both personally and in a professional environment. Dealing with hateful content on a regular basis is psychologically draining. And make it clear that as a business you will support them.
  4. Make use of tools: set up the right tools to prioritise, moderate and support your brand communications.
  5. Stand up together: Share best practice and knowledge with your peers, even your competitors! We should all support an industry standard for fighting online toxicity.

Freedom of speech prohibits censorship in the public square. Do you think that applies to social media? Do American citizens have a right to say whatever they want within the confines of a social media platform owned by a private enterprise?

Bodyguard.ai’s ambition is not to regulate the Internet and the people who express their opinions on it; we don’t have the legitimacy to do so. What is certain is that moderation is not about prohibiting freedom of expression, but rather preventing toxic content from reaching the recipient. It’s important to block comments intended to hurt rather than educate, and to attack rather than converse, while leaving people free to express themselves, even with negative comments, criticism or humor, as long as those comments are constructive!

If you had full control over Facebook or Twitter, which specific changes would you make to limit harmful or hurtful attacks?

The challenge for the social media giants is the sheer size and scale of their audience: millions of users in hundreds of countries connecting and engaging online. The vast majority of these interactions are positive, but even a small percentage can impact many users. A trained human moderator, even a whole team of them, cannot hope to filter and moderate without help. I would encourage the use of intelligent technology to support teams by identifying toxic content immediately, and I would make this technology freely available to developers to bolt onto products and services targeting their users.

Can you please give us your favorite “Life Lesson Quote”? Can you share how that was relevant to you in your life?

A while ago I met Brigitte Macron, first lady of France. I will always remember what she told me: social networks may be our children’s daily communication tools, but we seem to forget to help them do on the net what we help them do in “real life”: avoid dangers. Today, we all need to work on fighting online hate to stop treading water.

We are blessed that some of the biggest names in Business, VC funding, Sports, and Entertainment read this column. Is there a person in the world, or in the US with whom you would love to have a private breakfast or lunch with, and why? He or she might just see this if we tag them 🙂

Vishal Shah, Meta’s VP of Metaverse. While the company has yet to create a killer metaverse app, it could happen one day. Similarly, Bodyguard is not yet a Metaverse company, but we will naturally respond to Web 3.0 needs by following the trends on Web 2.0. Our technology is always adapting (adding images, video, audio…). As experts in moderation on Facebook and Instagram, but also Twitch and YouTube, we are currently preparing new features to adapt our moderation solution to the growing democratization of the Metaverse and the revolution it represents for our digital lives. We would love to discuss this with him! Collectively, we have to stand up for a positive model. It is time to take action now to make the Internet a safer, more inclusive, and better place to be for everyone.

How can our readers follow you on social media?

I have my own Medium profile in case you want to read more about moderation challenges, Twitter news, the metaverse and democracy! https://medium.com/@matthieu.bodyguard

Thank you so much for these insights! This was so inspiring!


Matthieu Boutard of Bodyguard: 5 Things We Can Each Do To Make Social Media And The Internet A Kinder And More Tolerant Place was originally published in Authority Magazine on Medium.