Defeating Deepfakes: Rijul Gupta of DeepMedia How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back

Prioritize self-care and work-life balance. When I first started, I was heavily invested in my work, often at the expense of my personal well-being. I wish someone had advised me to prioritize self-care and work-life balance. Over the years, I’ve learned that taking care of my mental and physical health is essential for long-term success and sustained creativity in a demanding industry like tech.

Most of us are very impressed with the results produced by generative AI like ChatGPT, DALL-E and Midjourney. Their results are indeed very impressive. But all of us will be struggling with a huge problem in the near future. With the ability for AI to create convincingly real images, video, and text, how will we know what is real and what is fake, what is reality and what is not reality? See this NYT article for a recent example. This is not just a problem for the future; it is already a struggle today. Media organizations are struggling with a problem of fake people, people with AI-generated faces and AI-generated text, applying to do interviews. This problem will only get worse as AI gets more advanced. In this interview series, called “Defeating Deepfakes: How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back,” we are talking to thought leaders, business leaders, journalists, editors, and media publishers about how to identify fake text, fake images and fake video, and what all of us can do to push back against disinformation spread by deepfakes. As a part of this series we had the distinct pleasure of interviewing Rijul Gupta.

Rijul Gupta is the co-founder and CEO of DeepMedia, a revolutionary AI platform company that is setting the standard for responsible synthetic media use. In an age marked by misinformation, DeepMedia incorporates synthetic faces and voices into its Universal Translator to help people communicate across language barriers while simultaneously building datasets to power high-accuracy deepfake detection for the US Government.

Thank you so much for joining us. Before we dive in, our readers would love to ‘get to know you’ a bit better. Can you share with us the “backstory” about how you got started in your career?

I’ve been building apps and websites since I was 10 years old. After high school, I got into Yale, earned a degree in machine learning, and began working as a machine learning contractor for about two years. Once I saw my first deepfake video in 2017, I became very passionate about synthetic media technology, and I’ve spent the past six years pioneering cutting-edge generation and detection.

Can you share the most interesting story that occurred to you in the course of your career?

Back when I was just 15 years old, I created one of the first 5,000 apps on the iOS App Store, called iBreak. It was just an app that displayed a fake broken screen. The neat accomplishment, at that time, was seeing it downloaded over 100,000 times!

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

As a highly technical and tech-obsessed youngster, I ignored the importance of marketing. I was focused on solving very hard problems and building things that I thought were cool, not necessarily what others wanted.

For example, after graduating in 2015, I spent six months developing an iPhone app called Eye Chroma that could reverse color blindness by intelligently manipulating color patterns through the iPhone’s camera. Only after much hard work did I realize that not enough people suffered from this problem for my app to make the world-changing impact I had wanted it to.

Most people with color blindness live their normal lives without too much issue, and wouldn’t fork over a large amount of money just to be forced to view the entire world through their phone camera. It was then that I learned the importance of talking to your customer and understanding the market before building something — and to build things for other people, not just for yourself.

What are some of the most interesting or exciting projects you are working on now?

We’re working on a lot of great projects at DeepMedia right now. We’re going to launch the DEEP ID platform and the DMDF Faces V2 dataset, which allow users to scan for deepfakes and other manipulated content. We’re also launching a Universal Translator that allows people to communicate across language barriers by translating and dubbing video content in 50 different languages.

And speaking of translating, we’re developing a web-based app where anyone can upload a 15-second audio clip in English which then gets automatically translated into Spanish. That audio file will then come with a shareable link so people can share it via WhatsApp, Facebook Messenger, Twitter, etc.

Lastly, we received a new Phase 2 Small Business Innovation Research (SBIR) contract from the Department of Defense totaling $1.25M to develop deepfake detectors for faces, voices, and text. Ensuring the US has the most advanced detection tools at its disposal remains top of mind for us.

For the benefit of our readers, can you share why you are an authority about the topic of Deepfakes?

I’ve been working on deepfakes since 2017 and believe I was one of the first people to take deepfakes seriously.

During my tenure as CEO of DeepMedia, I have personally developed and trained over twenty different deepfake networks and have been granted a US patent on text, voice, and face DeepFake technology for the company’s Universal Translator.

I lead a team of talented individuals dedicated to creating and detecting state-of-the-art deepfakes, which has positioned our company as the best “pureplay in DeepFakes,” according to Morgan Stanley.

This dedication to our craft has not gone unnoticed. Esteemed publications like Forbes have included me in their Next 1000 list, while Fast Company has recognized our work as part of their world-changing ideas. These accolades serve as a testament to our unwavering commitment to revolutionizing the way we interact with digital media and our pursuit of innovation that shapes the future.

Ok, thank you for that. Let’s now shift to the main parts of our interview. Let’s start with a basic set of definitions so that we are all on the same page. Can you help define what a “Deepfake” is? How is it different than a parody or satire?

A deepfake is a type of artificial intelligence/synthetic media used to create convincing images, audio, text, or video. It’s not a new technology. It’s been used in Hollywood for decades, and now more and more people have access to creating their own through commercial applications.

The way deepfakes are used, and the reason why, is what makes them different from parody or satire. Deepfakes have malicious intentions behind them, whether it’s creating revenge porn, committing financial fraud, or tainting presidential elections.

Can you help articulate to our readers why Deepfakes should be a serious concern right now, and why we should take measures to identify them?

Deepfakes are a primary threat to national security and democracy. They can be used dangerously in many ways, including falsifying orders from military or world leaders. Imagine a video of Zelenskyy surrendering goes viral, news outlets report it, and people retweet it because no one questions its validity. Deepfakes have evolved and become so accurate that it’s truly hard to tell what’s real and what’s not.

I led a panel at SxSW this year with DeepMedia’s COO Emma Brown to discuss how governments, other organizations, and investigative journalists can stay ahead of this technology. Informed people are the best defense against manipulation and misinformation.

Can you please share with our readers a few ways to identify fake images?

When someone suspects an image to be fake, I usually tell them to look out for unusual skin tones. Is the face too smooth, lacking texture? Strange lighting, especially around the face, is another dead giveaway. And if there are any oddly positioned shadows in the photo, that’s also highly suspicious.

Similarly, can you please share with our readers a few ways to identify fake audio?

You can identify fake audio if you notice a lack of clear emotion from the speaker, or if there’s an inappropriate regional accent. Check for artifacts that make the speaker sound like they’re “under water,” and for any improper emphasis on specific words.

All of these tells are quickly going away, though. Fairly soon it will be impossible for a human being to identify fake audio. Only other AI will be able to detect AI-generated speech.

Next, can you please share with our readers a few ways to identify fake text?

It’s essentially impossible for human beings to detect fake text at this point.

AI can detect fake text by looking at how unusual the text is and how uniform it is across several examples. The technical terms for these are perplexity, a measure of how unusual or unpredictable the text is to a language model, and burstiness, a measure of how much that unpredictability varies from one sentence or phrase to the next. Human writing tends to vary a lot; AI-generated text tends to be unusually uniform.
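To make those two measures concrete, here is a minimal sketch of how perplexity and burstiness can be scored in practice. This is an illustration only, not DeepMedia’s detector: it assumes the open-source Hugging Face transformers library and the public GPT-2 model, and the sample sentences are placeholders.

```python
# Rough illustration of perplexity and burstiness scoring -- not a production detector.
# Assumes: pip install torch transformers

import math
from statistics import pstdev

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity under GPT-2: low values mean the text is very predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the tokens as their own labels yields the average cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(sentences: list[str]) -> float:
    """Spread of per-sentence perplexity; human writing tends to vary more."""
    return pstdev(perplexity(s) for s in sentences)

sample = [
    "The committee will convene on Tuesday to review the proposal.",  # placeholder text
    "Honestly? I nearly threw my laptop out the window.",              # placeholder text
]
print("per-sentence perplexity:", [round(perplexity(s), 1) for s in sample])
print("burstiness (std dev):", round(burstiness(sample), 1))
```

Low perplexity combined with low burstiness is a hint, not proof, that text was machine-generated; real detectors combine many more signals than this sketch shows.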

Finally, can you please share with our readers a few ways to identify fake video?

When it comes to identifying a fake video, you have to pay attention to the lighting conditions, the smoothness around the face, the lines around the jaw, chin, and hair. Note that generated faces have to be “reinserted” back into the original video. This reinsertion will sometimes lead to artifacts around the mask.

Here again, it has become essentially impossible for humans to detect fake videos, and so the solution to all of this, for fake images, text, audio, and video alike, is to use DeepMedia’s DeepID platform.
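To illustrate the “reinsertion” step mentioned above, here is a toy sketch of how a generated face crop might be blended back into an original frame with a feathered mask. The file names, coordinates, and simple alpha blend are illustrative assumptions rather than any real deepfake pipeline, but they show why color and sharpness mismatches concentrate along the mask boundary, which is exactly where careful viewers and detectors should look.

```python
# Toy illustration of face "reinsertion": alpha-blending a generated face crop
# back into the original frame. Imperfect blending like this is what leaves
# seam/mask artifacts around the jaw, chin, and hairline.
# Assumes: pip install opencv-python numpy. File names and coordinates are placeholders.

import cv2
import numpy as np

frame = cv2.imread("original_frame.png")        # original video frame (placeholder file)
fake_face = cv2.imread("generated_face.png")    # generated face crop (placeholder file)
x, y = 120, 80                                  # top-left corner of the face region (placeholder)
h, w = fake_face.shape[:2]

# Build a soft ("feathered") mask: 1.0 in the middle of the crop, fading to 0 at the edges.
mask = np.zeros((h, w), dtype=np.float32)
cv2.ellipse(mask, (w // 2, h // 2), (w // 2 - 5, h // 2 - 5), 0, 0, 360, 1.0, -1)
mask = cv2.GaussianBlur(mask, (31, 31), 0)[..., None]

# Blend the generated face over the original region.
region = frame[y:y + h, x:x + w].astype(np.float32)
blended = mask * fake_face.astype(np.float32) + (1.0 - mask) * region
frame[y:y + h, x:x + w] = blended.astype(np.uint8)

cv2.imwrite("reinserted_frame.png", frame)
# Any lighting, color, or sharpness mismatch between the crop and the frame now sits
# along the feathered boundary of the mask -- the artifacts described above.
```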

How can the public neutralize the threat posed by deepfakes? Is there anything we can do to push back?

I believe that tackling the challenges posed by deepfakes requires a visionary approach, uniting the efforts of individuals, the tech industry, and governments in pursuit of a safer digital landscape.

At DeepMedia, we’re committed to making a positive impact on the world by developing cutting-edge deepfake detection solutions. Our partnerships with the US Department of Defense and the United Nations demonstrate our unwavering dedication to combating the malicious use of deepfakes on a global scale.

We recognize the importance of working closely with the highest levels of government to address this threat effectively. I have personally spoken with leading US senators on the topic, advocating for a comprehensive strategy that encompasses education, technology, and regulation.

As a tech CEO, my stance in favor of government regulation might be considered unusual. However, I firmly believe that our responsibility to do good and protect people transcends conventional thinking. By fostering collaboration among researchers, tech companies, and governments, we can create a united front against the deepfake threat and ensure a more secure digital environment for all.

Together, we can harness our collective ingenuity to neutralize the dangers posed by deepfakes, empowering people to navigate the digital realm with confidence and discernment. It is through this visionary approach that we will push back against malicious uses of technology and forge a brighter future for generations to come.

This is the signature question we ask in most of our interviews. Can you share your “5 Things I Wish Someone Told Me When I First Started” and why? Please share a story or an example for each.

  1. Focus on solving real problems. When I first started, I was more interested in developing cool technology than addressing the needs of users. I wish someone had emphasized the importance of identifying real problems that people face and building solutions around them. My experience with the Eye Chroma app, which did not address a pressing need for most people with colorblindness, taught me this valuable lesson.
  2. Embrace failure and learn from it. I wish I had known that failure is an essential part of growth and innovation. The road to success is often paved with setbacks and challenges, and embracing them can lead to valuable insights. The struggles I faced with my initial ventures helped me refine my approach and develop the resilience needed to succeed in the tech industry.
  3. Build a strong network. The value of a robust professional network cannot be overstated. I wish someone had told me early on that fostering relationships with mentors, peers, and industry leaders can open doors and lead to opportunities for collaboration, learning, and growth. It wasn’t until later in my career that I fully understood the significance of building connections and nurturing relationships.
  4. Balance technical expertise with business acumen. As a tech enthusiast, I initially focused more on the technical aspects of my projects. I wish I had known the importance of developing business acumen alongside my technical skills. Understanding the market, customer needs, and the financial side of the business is crucial for turning innovative ideas into successful ventures.
  5. Prioritize self-care and work-life balance. When I first started, I was heavily invested in my work, often at the expense of my personal well-being. I wish someone had advised me to prioritize self-care and work-life balance. Over the years, I’ve learned that taking care of my mental and physical health is essential for long-term success and sustained creativity in a demanding industry like tech.

Reflecting on these lessons, I believe that sharing my experiences and insights can help guide aspiring entrepreneurs and innovators on their journey toward success and personal growth.

You are a person of enormous influence. If you could start a movement that would bring the most amount of good to the greatest amount of people, what would that be?

I believe that our greatest potential as a global society lies in the power of connection and collaboration. We must strive to bridge the divides that separate us, uniting people from all nations and cultural backgrounds in pursuit of a common goal: the betterment of humanity.

Growing up, Star Trek served as a guiding light in my understanding of the future, illustrating a world where people from diverse backgrounds worked together towards an ethical and equitable path for all. This vision has profoundly influenced my personal mission and the work I do in the field of technology.

At the core of this movement is the development of a Universal Translator, a groundbreaking innovation that transcends language barriers, enabling individuals from different cultures to communicate, share their experiences, and develop mutual understanding. By fostering connections between people who once viewed each other as adversaries, we can create an environment where global challenges such as climate change, war, poverty, and hunger are recognized as the true enemies of humanity.

With this newfound sense of unity, we can shift our focus from fighting against one another to fighting for each other, pooling our resources and expertise to tackle the world’s most pressing problems. By embracing our shared humanity, we can embark on a collective journey to address these global threats, ensuring that future generations inherit a world that is more peaceful, prosperous, and sustainable.

Together, we can redefine the boundaries of human potential, harnessing the power of technology to create a future that transcends borders, fosters collaboration, and brings us closer to the utopian vision of Star Trek — a world where humanity works in harmony for the greater good.

How can our readers further follow your work online?

Website: https://www.deepmedia.ai/
TikTok: https://www.tiktok.com/@deepmedia.ai

Thank you so much for the time you spent on this. We greatly appreciate it and wish you continued success!


Defeating Deepfakes: Rijul Gupta of DeepMedia How We Can Identify Convincingly Real Fake Video… was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.