Defeating Deepfakes: Jaya Baloo of Singularity Group How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back

There is no standard work-life balance formula; it’s bespoke. Striving for a perfect work-life balance has only put me out of balance. I find it easier to be fully present for work, or for life, and shift as needed.

Most of us are very impressed with the results produced by generative AI like ChatGPT, DALL-E, and Midjourney. But all of us will be struggling with a huge problem in the near future. With AI able to create convincingly real images, video, and text, how will we know what is real and what is fake? See this NYT article for a recent example. This is not just a problem for the future; it is already a struggle today. Media organizations are contending with fake people, people with AI-generated faces and AI-generated text, applying to do interviews. This problem will only get worse as AI gets more advanced. In this interview series, called “Defeating Deepfakes: How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back,” we are talking to thought leaders, business leaders, journalists, editors, and media publishers about how to identify fake text, fake images, and fake video, and what all of us can do to push back against disinformation spread by deepfakes. As a part of this series we had the distinct pleasure of interviewing Jaya Baloo.

Jaya Baloo, currently Chief Information Security Officer (CISO) at Rapid7, has worked in cybersecurity for nearly 20 years. She received an honorary doctorate from the University of Twente in 2022, won the Cyber Security Executive of the Year award in 2015, and is one of the top 100 CISOs and security influencers in the world. Jaya frequently speaks at security conferences on subjects such as lawful interception, mass surveillance, and cryptography. She is a Singularity Expert, a member of the Singularity University, and a member of various infosec boards. An expert on quantum computing, Jaya is a quantum ambassador of KPN Telecom and Vice Chair of the Quantum Flagship Strategic Advisory Board of the EU Commission.

Thank you so much for joining us. Before we dive in, our readers would love to ‘get to know you’ a bit better. Can you share with us the “backstory” about how you got started in your career?

To be honest, I was fascinated by technology from a very young age and was lucky enough to attend a public school, PS 24 in Queens, NY, where I learned to program in BASIC at the age of 9. I considered my unusual interest in computers a hobby, not something I thought of as a professional career choice. Despite studying political science in university, my part-time jobs were always in places like the computer science lab or other technically oriented companies.

Can you share the most interesting story that occurred to you in the course of your career?

It’s hard to choose just one story, but I think the most interesting things I have done across my career were the international projects. I’ve been fortunate enough to help build networks and businesses in places like Namibia, Egypt, and Costa Rica.

On the flip side, I’ve also had to navigate some very serious security issues, from nation-state hacking attempts to dealing with cybercriminals. It is always a learning experience (and sometimes a bit of an adventure) working with great teams across the company to manage major incidents.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

Although the security community believes in blameless post-mortems, I tend to be much more critical of myself. One of the biggest mistakes I made early in my career, at roughly 25, was during a late-night change: I accidentally gave the same set of commands to two different devices on the network, starting a process that ended up in a loop and left the network unavailable the following day. I just assumed that everything was updating, but the entire network was unreachable. Although the network escaped mostly unscathed, as did I, it was a valuable lesson to double-check and to put quality above speed.

What are some of the most interesting or exciting projects you are working on now?

As I just started at Rapid7 as their new Chief Security Officer, right now everything is exciting. Embarking on a new security program, strategy, metrics, etc., and customizing it to the specific needs of Rapid7 and its customers is something that I’m super excited to do.

For the benefit of our readers, can you share why you are an authority about the topic of Deepfakes?

I wouldn’t consider myself an authority on deepfakes; however, I am a specialist in cybersecurity and risk. It is from this lens that I can speak on the topic.

Ok, thank you for that. Let’s now shift to the main parts of our interview. Let’s start with a basic set of definitions so that we are all on the same page. Can you help define what a “Deepfake” is? How is it different than a parody or satire?

So let’s be clear: a deepfake can also be used for parody or satire. A deepfake uses deep learning techniques to create fake videos or images that take over the likeness of a real individual. This doesn’t always have to be bad and can be used for humor or theater. Unfortunately, like most of our technology, this technique is dual use, meaning it can serve both good and bad purposes depending on intent. Recently Facebook announced that they were banning deepfakes on their platform, with the exception of those specifically used for parody and satire. At its core, deepfake technology is a means to conduct disinformation.

Can you help articulate to our readers why Deepfakes should be a serious concern right now, and why we should take measures to identify them?

I think the unauthorized use of anyone’s likeness should be a cause for concern. We see different cases emerging. Who can forget the deepfake video of Volodymyr Zelensky asking Ukrainian troops to lay down their arms and surrender? Currently one of the biggest use cases for deepfakes is making porn videos with someone else’s face, usually without their consent. Deepfake porn, beyond the psychological and ethical concerns, is currently exploding on the internet, with many victims likening it to a form of sexual assault. From a cybersecurity viewpoint we should also be concerned when deepfakes are used for various forms of phishing, luring victims into giving up information or taking actions because the request appears to come from a trusted party.

Why would a person go to such lengths to create a deepfake? How exactly can malicious actors benefit from making them?

Cybercriminals will go to great lengths if it helps them succeed in, for example, a phishing operation where they impersonate a company CEO or CFO and convince employees to transfer money to a specific bank account, committing financial fraud, information theft, or some other crime. Bear in mind that deepfakes are not just videos; they can also be images and audio.

Can you please share with our readers a few ways to identify fake images?

Although deepfakes are getting ever better and harder to detect, there are still a few potential ways to identify one. The simplest measure is to verify directly with the person you think you saw. The second is to run the image through a commercially available facial recognition system or image detection tooling.
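Image detection tooling such as reverse image search commonly relies on perceptual fingerprints that survive resizing and recompression. As an illustrative sketch only (not any specific product's algorithm), here is a minimal "average hash" over toy 8×8 grayscale grids, using nothing outside the Python standard library:

```python
# Sketch of an "average hash" (aHash), one kind of perceptual fingerprint
# behind reverse image search. Real tools decode and resize actual images;
# here we hash toy 8x8 grayscale grids (lists of ints 0-255) so the
# example stays dependency-free.

def average_hash(pixels):
    """Return a 64-bit fingerprint: 1 where a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
# A lightly re-encoded copy: every pixel nudged by +3 (compression noise).
recompressed = [[min(255, p + 3) for p in row] for row in original]
# A genuinely different image.
other = [[255 - p for p in row] for row in original]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(other))
print(d_same, d_diff)  # small distance vs. large distance
```

The point of the design is that comparing against the mean ignores uniform brightness shifts, so a re-uploaded or recompressed copy still matches while unrelated images land far apart.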

Similarly, can you please share with our readers a few ways to identify fake audio?

Keep it simple: as with any potential phishing attack or suspicion of fake audio, verify with the actual party whether the audio message was real. Secondly, there are some quality issues with deepfake audio, including glitches like background noise and problems with speech quality. A technical approach, spectral analysis, can compare the suspect voice with the actual voice to provide clues about the veracity of the audio.
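To make the spectral-analysis idea concrete, here is a toy sketch (not a real forensic tool): it computes magnitude spectra of short signals and compares them. Synthetic sine waves stand in for voice recordings, which are an assumption made purely to keep the example self-contained:

```python
import cmath
import math

# Toy sketch of spectral comparison: a forensic analyst compares the
# frequency "fingerprint" of a suspect recording against known genuine
# speech. Here synthetic sine waves stand in for recordings.

def magnitude_spectrum(samples):
    """Naive DFT magnitudes (fine for a short toy signal)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

n = 64
tone = lambda f: [math.sin(2 * math.pi * f * t / n) for t in range(n)]
genuine = magnitude_spectrum(tone(5))       # "real" voice sample
same_voice = magnitude_spectrum(tone(5))    # another clip, same pitch
impostor = magnitude_spectrum(tone(12))     # spectrally different clip

print(cosine_similarity(genuine, same_voice))  # close to 1.0
print(cosine_similarity(genuine, impostor))    # close to 0.0
```

Real spectral analysis works on spectrograms of speech rather than single tones, but the comparison step is the same: similar spectra score high, mismatched spectra score low.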

Next, can you please share with our readers a few ways to identify fake text?

This has become even more difficult with the advent of Large Language Models (LLMs) like GPT-4. When in doubt, start by validating the source first and foremost; the most prevalent uses of fake text are fake news and phishing. The second step is to try to verify the text, either with a fact checker in the case of fake news, or with tools that can identify text created by an LLM.
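Many LLM-text detectors score how statistically predictable a passage is under a language model, since machine-generated text tends to be unusually predictable. Real detectors use large neural models; the sketch below substitutes a tiny character-bigram model trained on a made-up reference corpus, purely to show the perplexity idea:

```python
import math
from collections import Counter

# Toy illustration of the perplexity heuristic behind some LLM-text
# detectors: text that a language model finds very predictable (low
# perplexity) is more likely machine-generated. A character-bigram
# model stands in for a real neural language model.

def train_bigram(text):
    pairs = Counter(zip(text, text[1:]))
    unigrams = Counter(text[:-1])
    return pairs, unigrams

def perplexity(text, model, alpha=0.5, vocab=128):
    """Smoothed per-character perplexity of `text` under the model."""
    pairs, unigrams = model
    log_sum = 0.0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + alpha) / (unigrams[a] + alpha * vocab)
        log_sum += -math.log(p)
    return math.exp(log_sum / (len(text) - 1))

corpus = "the quick brown fox jumps over the lazy dog " * 20
model = train_bigram(corpus)

predictable = "the quick brown fox jumps over the lazy dog"
surprising = "zq xv kj wp gh bn mc dt lr fs"
print(perplexity(predictable, model) < perplexity(surprising, model))
```

The same caveat applies at scale: predictability is only a clue, not proof, which is why the answer above pairs such tools with source validation and fact-checking.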

Finally, can you please share with our readers a few ways to identify fake video?

It depends. For example, if you’re worried that the person you’re speaking to over Zoom is using deepfake technology, one test is to ask them to rotate their head to the left and right. At certain angles it becomes clear that deepfake tech is in use, because of the lack of side-profile images available in the training data set. At scale, we can only turn the tide with better tools to detect fake videos. One of the cool projects out there was the Deepfake Detection Challenge, aimed at figuring out how we can get better at spotting fakes.

How can the public neutralize the threat posed by deepfakes? Is there anything we can do to push back?

The top three things we can do are:

  1. Be aware and vigilant about the use and impact of deepfakes
  2. When in doubt, conduct verification, either by contacting the party being impersonated or by using tooling like reverse image search to check the origin and authenticity of images
  3. Drive for technology change and transparency by adopting measures such as watermarking and digital signatures to be able to check for authenticity
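The digital-signature idea in the third point can be sketched very simply: a publisher signs the bytes of its media with a key, and anyone holding the verification key can detect tampering. Real provenance schemes (such as C2PA-style content credentials) use public-key signatures; this minimal stdlib sketch uses an HMAC with a hypothetical key instead, just to show the verify-or-reject flow:

```python
import hashlib
import hmac

# Minimal sketch of signed-media verification. HMAC is used here only to
# keep the example stdlib-only; real content-provenance systems use
# public-key signatures so verifiers never hold the signing secret.

SIGNING_KEY = b"publisher-secret-key"  # hypothetical key for illustration

def sign(media_bytes):
    """Publisher side: produce an authenticity tag for the media bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes, signature):
    """Consumer side: timing-safe check that the media is untampered."""
    return hmac.compare_digest(sign(media_bytes), signature)

video = b"\x00\x01original video frames\x02\x03"
tag = sign(video)

print(verify(video, tag))                  # True: untampered
print(verify(video + b" deepfaked", tag))  # False: content was altered
```

Any change to the bytes after signing, however small, invalidates the tag, which is exactly the property that makes signatures useful against manipulated media.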

This is the signature question we ask in most of our interviews. Can you share your “5 Things I Wish Someone Told Me When I First Started” and why? Please share a story or an example for each.

  1. Find your passion — I went to university for Political Science because that is what I thought I should do, and worked in the computer science lab as a hobby. I should have pursued my hobby from the very beginning.
  2. Dare to say no and aggressively defend your boundaries. — Especially as women, we are conditioned to be people pleasers and try to be congenial at all costs. In business it’s better to say no and prioritize your agreements to drive higher quality in the things you said yes to.
  3. Invest annually in yourself — Make sure you understand that you can always take more courses or learn something new. We tend to deprioritize our own training needs, to our detriment.
  4. Check in every once in a while to make sure you’re focusing on the important and not just the urgent stuff — I tend to be ruled by daily checklists instead of really stepping back and addressing longer-term objectives. Take a breath and make sure you’re not standing in the way of your future self.
  5. There is no standard work-life balance formula; it’s bespoke. Striving for a perfect work-life balance has only put me out of balance. I find it easier to be fully present for work, or for life, and shift as needed.

You are a person of enormous influence. If you could start a movement that would bring the most amount of good to the greatest amount of people, what would that be? You never know what your idea can trigger. 🙂

I wish we would better understand our own power, as well as our interdependence, for a secure future. As consumers we are the second line of defense against cybercriminals. We essentially vote with our wallets and should be better at penalizing poor security and privacy practices. We need to make sure that the makers of technology, the providers of hardware and software, are held accountable for creating, delivering, and maintaining secure products. As long as we are not willing to make this mandatory, and willing to pay for better security, we will continue to read about victims of cybercrime.

How can our readers further follow your work online?

Twitter: @jayabaloo

Thank you so much for the time you spent on this. We greatly appreciate it and wish you continued success!

Defeating Deepfakes: Jaya Baloo of Singularity Group How We Can Identify Convincingly Real Fake… was originally published in Authority Magazine on Medium.