Defeating Deepfakes: Neil Sahota Of ACSI Labs On How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back

Life is a mindset. The attitude we bring into work or a personal endeavor shapes what will happen. If we believe something is impossible, then we’ll probably never figure out a way to do it. If we believe the problem is too big, then we won’t even take small steps to reduce the impact. When I first started working with the United Nations on the AI for Good initiative, people thought I was crazy. They would ask, “How are you going to turn a slow-moving, political bureaucracy into a fast-paced startup company that’s going to launch hundreds of social impact projects?” Good question, but the “crazy” was the wrong attitude. If any of us had truly felt that way, we never would have figured it out. Instead, because we believed it was possible, we found a way to make it work, even within the constraints of the people challenges.

Most of us are very impressed with the results produced by generative AI like ChatGPT, DALL-E and Midjourney. Their results are indeed very impressive. But all of us will be struggling with a huge problem in the near future. With the ability for AI to create convincingly real images, video, and text, how will we know what is real and what is fake, what is reality and what is not reality? See this NYT article for a recent example. This is not just a problem for the future; it is already a struggle today. Media organizations are struggling with a problem of fake people, people with AI-generated faces and AI-generated text, applying to do interviews. This problem will only get worse as AI gets more advanced. In this interview series, called “Defeating Deepfakes: How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back,” we are talking to thought leaders, business leaders, journalists, editors, and media publishers about how to identify fake text, fake images and fake video, and what all of us can do to push back against disinformation spread by deepfakes. As a part of this series we had the distinct pleasure of interviewing Neil Sahota.

Neil Sahota is the CEO of ACSI Labs, a United Nations (UN) AI Advisor, an IBM Master Inventor, author of the bestselling book Own the A.I. Revolution, and faculty at UC Irvine. Neil is a business solution advisor and a sought-after keynote speaker. Over his 20+ year career, Neil has worked with enterprises on business strategy to create next-generation products and solutions powered by emerging technology, as well as helping organizations create the culture, community, and ecosystem needed to achieve success, such as the UN’s AI for Good initiative. Neil also actively pursues social good and volunteers with nonprofits; he is currently helping the Zero Abuse Project prevent child sexual abuse, as well as Planet Home to engage youth culture in sustainability initiatives.

Thank you so much for joining us. Before we dive in, our readers would love to ‘get to know you’ a bit better. Can you share with us the “backstory” about how you got started in your career?

I’m the living embodiment of the one word every parent hates their child learning: Why?

Beyond curiosity, I was born with an insatiable need to understand the value of items and actions. My parents also infused me with a strong desire to help people through community service. This combination defined who I was at a very early age.

Basically, I am the person who wants to solve the big problems, not just the problem at hand. As a result, I pioneered a lot of new processes, models, frameworks, and patents. The latter would prove very important. They launched me down the path of artificial intelligence, which was an innovative and unfamiliar industry with a lot of new territory for me to explore. Here, I found an opportunity to help organizations understand how they could use AI as a tool for both commercial and social good. This really crystallized how much people should be the focus of “people, process, technology.” No matter what amazing ideas exist or how strong the business case, if people don’t buy in, it isn’t going to work. This is my passion: to empower and connect people rather than leave them feeling fearful and excluded.

Can you share the most interesting story that occurred to you in the course of your career?

My most interesting story has quite a global journey. It started in Washington, D.C. At the request of my great friend Stephen Ibaraki, I collaborated with the Financial Services Roundtable on how the biggest financial services companies in the world could tap into emerging technology to transform their businesses before they got disrupted. After D.C., I left for Milwaukee to take care of some client business. While there, Stephen called me to express his gratitude for the help and told me he had an interesting opportunity. The United Nations (UN) was very interested in having me speak to them about Artificial Intelligence (AI). I had one of those moments where I took a step back and couldn’t believe this was being offered to me. (Truthfully, I didn’t believe it. I seriously thought Stephen was playing a joke until he forwarded me the invitation from the Secretary General.)

One of the biggest challenges I faced was that most of the world leaders (at that time) thought of AI as “Terminator Time,” meaning machines would conquer the world and eradicate humanity. So, I decided to focus on shifting this perspective by giving a very uplifting keynote on what AI is and how it is being used for public service and sustainable development goals. My speech was very well received. That night, I was approached by several world leaders and people from the UN leadership, including the Secretary General. The consensus was that my talk opened their eyes to possibilities, and they wanted to do something while momentum was there.

But the question was: what? After many critical discussions, we chose to create AI for Good — an initiative to use AI and emerging technology for the Sustainable Development Goals. Almost five years later, we boast a global ecosystem of partners and volunteers, with 116 projects in flight and an unfathomable amount of positive social impact. I never would have guessed anything like this would be possible as part of my career.

It has been said that our mistakes can be our greatest teachers. Can you share a story about the funniest mistake you made when you were first starting? Can you tell us what lesson you learned from that?

Ironically, one funny mistake actually involved virtual teams. This was back in the day when outsourcing work overseas was just in its infancy. We had set up a team in India with the expectation that when we ended our day, they would pick up the work, and when they ended their day, ours was starting, so we would carry it forward. It would be a true 24-hour workday, or so we thought.

We had a major issue that needed quick resolution that was handed off to the team in India. We asked them to fix a problem and confirm if they understood what happened. The next morning, the issue was resolved. While the India team said they had fixed it, we still didn’t know what happened. I sent an email asking, “Do you know what caused the problem?” The following day, I got an email back that just said, “Yes.” So, I emailed back, “What happened?” The next day, I got this response, “We fixed the problem.”

Rather than be frustrated, I just burst out laughing. I learned that cultures are different and that I needed to be more prescriptive. More importantly, I learned to spend time with remote teams and build relationships with the people. By doing this, we were able to work effectively together because there was a more intuitive understanding of what information and actions people were expecting. It has been a powerful lesson throughout my career, as I have essentially worked in a virtual office for almost twenty years!

What are some of the most interesting or exciting projects you are working on now?

Right now, we’ve unlocked the ability to solve complex problems, clone expertise, and (more crazily) enhance creative and critical thinking capabilities. How? In one word: convergence.

We live in a time of science and emerging technology that is creating exponential growth opportunities. However, through the combination of these capabilities (convergence), we see exponential-to-the-power-of-exponential growth. That’s why at ACSI Labs, we’ve spent the last 12 years tapping into the convergence of cognitive science, artificial intelligence (AI), and the Metaverse to help business and government leaders solve large, complex problems.

With 50+ projects completed over ten years, we have pioneered practices in developing coping skills for mental health issues, recession countermeasures in financial services products, sustainable mining practices, and new law enforcement procedures that emphasize the safety of all stakeholders. Interestingly, in benchmarking this work, we noticed an interesting “side effect”: people who went through our platform to tackle these tough issues developed sustained cognitive improvement in creative and critical thinking.

In fact, we’ve seen these people experience something brand new in the real world: their ability to enter a flow state, analyze, and problem-solve sped up, while their solutions became more robust. In essence, thanks to convergence, we’ve broken through traditional 19th-century teaching methods, and the mere automation of those learning models through technology, to amplify people’s cognitive abilities.

For the benefit of our readers, can you share why you are an authority about the topic of Deepfakes?

My work with deepfakes started back in 2011. After completing the IBM Watson Jeopardy! challenge, we had a lot of focus on how to apply AI (and what we now call Web 3.0) technology. Back then, bad actors were using machine learning technology for social hacking and identity theft. However, when Tinder came on the scene in 2012, it helped spur a wave of AI bot deepfakes. They were so good that we considered them one of the closest competitors to the Watson AI technology. These deepfake bots were so robust that (at one point) nearly 40% of the dating app profiles were bots that could carry on a meaningful chat and trick a person into sharing sensitive information.

In 2018, this went to a whole new level. At the time, I had met the leadership of the Zero Abuse Project, a non-profit that helps survivors of childhood sexual abuse. We were looking at ways to help these survivors and, at the same time, identify at-risk children to prevent the abuse from occurring. Unfortunately, in parallel, deepfake technology sprang forward with an app called Deep Porn that allowed the user to take pictures or video and generate fake pornographic images and video of another person (even someone they didn’t know).

While many people are familiar with some of the deepfake videos of President Obama or Tom Cruise, what recourse does the average person have? That’s the issue at hand. That’s why a good chunk of my work for the last 5+ years has been to protect against deepfakes and help people, famous or not, combat this threat. This has included some celebrities (fake ads, not pornography) but also reporters and, sadly, children. As we recently saw, a man in Canada pleaded guilty to using non-sexual images and video of children to generate pornographic images of them. (Absolutely horrifying!) As a result, I have discussions with organizations like UNICEF on how we can prevent such a disgusting use of technology.

Ok, thank you for that. Let’s now shift to the main parts of our interview. Let’s start with a basic set of definitions so that we are all on the same page. Can you help define what a “Deepfake” is? How is it different than a parody or satire?

Well, let us start a little earlier than that. Technology is not inherently good or evil. It’s just a tool, like a hammer, so it is all about how people use it. (I know that’s not a popular take, because some people prefer to shift responsibility to the tool rather than to a person, but sorry, this is true.) This technology (tool) is rooted in digital twins; the “AI for Good” side is that digital twins help us experiment and optimize resources… and they do. We’ve created digital twins of farms to help identify the optimal crops and seeds to plant to minimize use of topsoil and water while increasing crop yields for local nutrition and market consumption (yielding more money for the local community to build schools, hospitals, etc.). Digital twins are also widely used to train employees, and even celebrities use them for commercials and fan engagement.

On the flip side (or mirror image, for evil), we have deepfakes, which are replicas of people, places, or objects created by bad actors for malicious intent or profit. These are the items we generally see in the media, like President Obama appearing to say to blow stuff up, Ukrainian President Zelenskyy telling his forces to surrender, or the deepfake porn we talked about earlier. (Ugh!)

Malicious intent is the key item here. Parody or satire tends to be overtly obvious, like Saturday Night Live or a radio show like the Klein/Ally Show on KROQ, where they make it explicitly clear this is a joke. That is the chief problem with deepfakes: the deliberate intent to mislead, with the motivation to cause harm.

While we’ve talked about deepfakes in pornography, we’ve already seen deepfakes used to influence international elections during 2022 and 2023, as well as deepfakes of notable social media influencers used to sell products in international markets.

That’s why deepfakes are currently the biggest threat facing society.

Can you help articulate to our readers why Deepfakes should be a serious concern right now, and why we should take measures to identify them?

Sadly, anyone can be deepfaked without their knowledge or awareness. We’ve already seen the deepfake videos, deepfake audio calls of fraudulent child kidnappings (ugh!), and deepfake images like the Pentagon explosion. While these are the “big ticket” news items, the real challenge remains the average person who gets impacted, as many have been through platforms like the Deep Porn app. What recourse do they have? Now, take that to a larger extent: how can we trust any digital platform to share information when it can be deepfaked? Sadly, that’s the issue we face. While most of the content out there is legitimate, it’s that 2–5% of fake items that messes everything up.

At the same time, I get a lot of people who ask, “if AI is a tool, why don’t we use it to stop deepfakes?” That’s the big problem. Deepfakes are typically AI systems that train against other AI systems to learn how to fool them and people.

This is the new arms race: AI for Evil versus AI for Good. That’s why I invest so much time as a United Nations (UN) Advisor and their designated Godfather of AI for Good to minimize the damage from bad actors while trying to maximize benefit around the UN Sustainable Development Goals.

Why would a person go to such lengths to create a deepfake? How exactly can malicious actors benefit from making them?

Sadly, the two chief motivations are money and reputational damage. For some bad actors, the threat of deepfake damage alone can be quite lucrative. Recently, there have been several cases of audio deepfakes used in fake child kidnappings. Using social media data, the bad actors create an audio deepfake of the child. They then call the parent during school hours (when the child is usually not permitted to use their phone per school policy) and claim that the child has been abducted. The parent hears the threat but also hears the (deepfake) child when asking to talk to them. Panic ensues, and other verification options (rightfully so under this duress) go out the window. This works often enough that it is profitable for the bad actors to continue their “work.”

More concerning is the deliberate intent to cause reputational (or even financial or personal) harm. This also stems from the “old app” days of Deep Porn, where the intent of the bad actors was to exact revenge or personally hurt someone. This is even more devastating because, again, what recourse does the average person have?

Can you please share with our readers a few ways to identify fake images?

Sadly, fake images are the hardest to identify. Because they are static, there’s less data to analyze. As a result, it takes a deep analysis of image parameters like normal shadowing, aperture, etc. to determine whether an image is real or fake. A step up from this is audio deepfakes. There’s a bit more information, like tone of voice and word choice, that helps shine a light on discrepancies. After that, ironically, video deepfakes are even easier to recognize because of data points like body language (over 2,000 points on the face alone) that help determine real or fake.

So, with fake images, sadly some painstaking research needs to be done… and it is not quick to do in an age of viral social media explosion.
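
For technically minded readers, here is a minimal sketch of one well-known first-pass check, Error Level Analysis (ELA), which probes the kind of image-parameter inconsistencies described above. This is an illustration using the Pillow library, not the specific tooling Neil uses; the file names are hypothetical.

```python
# Error Level Analysis (ELA) sketch: resave the image as JPEG and diff it
# against the original. Spliced or AI-generated regions often recompress
# differently from their surroundings and show up as bright patches.
from PIL import Image, ImageChops
import io

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Differences are usually faint, so stretch them to be visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda px: min(255, int(px * scale)))

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    error_level_analysis("suspect_photo.jpg").save("ela_output.png")
```

ELA is only a first pass: a uniformly dark output does not prove authenticity, and a bright region only flags where to look more closely with the deeper analysis Neil describes.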

Similarly, can you please share with our readers a few ways to identify fake audio?

Interestingly, fake audio is detectable using the “old” techniques we developed against deepfake chats on dating apps ten years ago. The recent fake kidnapping audio cases highlighted the use of unique code phrases (something bizarre, far removed from anything you’ve put on social media). These have been incredibly useful, as long as they are not guessable, which is easier to achieve than most people think. What also works insanely well (and we saw this with the dating bots) are non-sequitur statements: you are having a “normal” conversation via voice or text, but then you say something crazy that is way off the conversation, like, “That’s a purple dinosaur egg for planning a getaway with the New York Rangers after they win the Stanley Cup championship.”

Next, can you please share with our readers a few ways to identify fake text?

Fake text is the hardest of all to detect because it has the least amount of data. Think about how much information you get from an audio file, image, or video: body language, tone of voice, inflection of pitch, shadowing, background noise, etc. Text gives you almost none of that, and it’s been around the longest (at least since 2012).

The best way to “fake out” a fake text (complicated, I know) is to use non-sequiturs in a chat. Saying off-the-wall things works because a real person can easily explain them away in a human-to-human conversation, while a deepfake trying to carry on a “normal” chat usually can’t.

Finally, can you please share with our readers a few ways to identify fake video?

Detecting fake video takes a keen eye and, usually, some powerful tools. With a deepfake video, you need to look at body language. Check to see if the person is moving “normally,” like they usually do. For example, when they talk, do they wave their hands? Do they make certain facial gestures when expressing a certain topic or emotion? Are they using words, slang, or metaphors like they normally do? Does the shadowing of the person, given the light in the background, appear accurate? Does the audio sync with the lip motion? There are a lot of subtle clues we can look for.
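
As a rough illustration of automating one of these cues, here is a sketch that tracks facial landmarks frame to frame with OpenCV and MediaPipe and flags implausibly large jumps in face motion. The function name and the threshold are illustrative assumptions, not a calibrated detector and not Neil’s tooling.

```python
# Flag video frames where facial landmarks jump more between consecutive
# frames than a real face plausibly moves (a common splice/deepfake cue).
import cv2
import mediapipe as mp
import numpy as np

def flag_suspect_frames(video_path: str, jump_threshold: float = 0.05):
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    capture = cv2.VideoCapture(video_path)
    previous, suspects, frame_index = None, [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            # MediaPipe returns normalized (x, y) face landmark coordinates.
            landmarks = np.array(
                [(p.x, p.y) for p in results.multi_face_landmarks[0].landmark]
            )
            if previous is not None:
                # Average landmark displacement since the last frame;
                # 0.05 is an uncalibrated, illustrative threshold.
                if np.mean(np.linalg.norm(landmarks - previous, axis=1)) > jump_threshold:
                    suspects.append(frame_index)
            previous = landmarks
        frame_index += 1
    capture.release()
    return suspects
```

A real detector would combine many such signals (lip-audio sync, lighting consistency, blink rate) rather than rely on any single cue.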

How can the public neutralize the threat posed by deepfakes? Is there anything we can do to push back?

Honestly, there’s not much the public can do at the moment, other than keep a watchful eye and not be too quick to believe, so that misinformation doesn’t go viral. Sadly, deepfakes are trained to fool even AI systems, so it takes keen awareness to notice that something might be off. There are groups working on trusted authentication systems to certify whether a text, image, or video is real. However, perfecting that will take standardization and time, meaning there’s no quick solution to this problem.
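
To make the “trusted authentication systems” idea concrete, here is a toy sketch of the core cryptographic step: a publisher signs a hash of a media file at publish time, and anyone can verify it later. Real standards such as C2PA are far richer than this; the function names are hypothetical, and the sketch assumes the Python cryptography package.

```python
# Toy content-authentication sketch: sign a file's SHA-256 digest with an
# Ed25519 key; any later edit to the file invalidates the signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import hashlib

def sign_media(path: str, private_key: Ed25519PrivateKey) -> bytes:
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return private_key.sign(digest)

def verify_media(path: str, signature: bytes, public_key) -> bool:
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)  # raises if tampered
        return True
    except InvalidSignature:
        return False

# Usage (hypothetical file name):
#   key = Ed25519PrivateKey.generate()
#   sig = sign_media("clip.mp4", key)
#   verify_media("clip.mp4", sig, key.public_key())  # True; any edit -> False
```

The hard part, as the answer above notes, is not the cryptography but the standardization: getting cameras, platforms, and publishers to agree on who signs what and how keys are trusted.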

This is the signature question we ask in most of our interviews. Can you share your “5 Things I Wish Someone Told Me When I First Started” and why?

  1. When it comes to people, process, and technology, people are usually the biggest challenge. Processes have become mature. Technology has developed into some robust tools. However, getting buy-in and enacting change is always a challenge. Many of the barriers (or misuses) we encounter stem from people. Thus, to bring innovation and new value, we have to tackle the people problems as well. Look at artificial intelligence: from 2006–2015, I kept hearing how AI would be “Terminator time.” It wasn’t until I gave a speech at the United Nations that people started realizing AI could be used for public services and social good.
  2. You want to do a good job… but not too good of a job. If you do, people wind up typecasting you and locking you into a box because that’s the “only thing you’re good at.” Most people call me the “Godfather of AI for Good.” So, people always limit me to being the “AI expert,” forgetting all the other work I’ve done in other technologies, as well as helping Fortune Global 500 companies solve business problems (without needing a technology-related solution).
  3. Create your own boundaries between life and work. Honestly, no one else will do this for you. Early in my career, I experienced my employer loading me up with work until just short of my breaking point. For almost five months I was working 80+ hours per week, and my personal life disappeared. It wasn’t until I pushed back and set boundaries that this changed.
  4. Life is a mindset. The attitude we bring into work or a personal endeavor shapes what will happen. If we believe something is impossible, then we’ll probably never figure out a way to do it. If we believe the problem is too big, then we won’t even take small steps to reduce the impact. When I first started working with the United Nations on the AI for Good initiative, people thought I was crazy. They would ask, “How are you going to turn a slow-moving, political bureaucracy into a fast-paced startup company that’s going to launch hundreds of social impact projects?” Good question, but the “crazy” was the wrong attitude. If any of us had truly felt that way, we never would have figured it out. Instead, because we believed it was possible, we found a way to make it work, even within the constraints of the people challenges.
  5. Don’t care so much. This, along with #2, is a great piece of advice I’ve received but never really could follow well. We can get so locked up in the things we pursue that we become overly attached to them. We may have the right idea but lack the funding for it, the idea may not align with organizational goals or culture, or other people may not see value in the vision. We should do our best to explain the value of an endeavor but also be ready to let it go if it doesn’t work out.

You are a person of enormous influence. If you could start a movement that would bring the most amount of good to the greatest amount of people, what would that be? You never know what your idea can trigger. 🙂

How can our readers further follow your work online?

There are several great ways to follow me. For active, frequent content, please subscribe to my LinkedIn newsletter, Disrupting the Box (https://www.linkedin.com/newsletters/6957767299151880192/) and my podcast AI for All.

You can also find me and additional content on digital and social media:

Website: https://www.neilsahota.com/

LinkedIn: https://www.linkedin.com/in/neilsahota/

Twitter: @neil_sahota

Instagram: @neil_sahota

YouTube: https://www.youtube.com/channel/UCM9N97dyw7EwnCrXn3uac-w

Forbes: https://www.forbes.com/sites/neilsahota/

United Nations Podcast: https://www.ctscast.com/artistic-intelligence/

Thank you so much for the time you spent on this. We greatly appreciate it and wish you continued success!


Defeating Deepfakes: Neil Sahota Of ACSI Labs On How We Can Identify Convincingly Real Fake Video, Pictures, and Writing, And How We Can Push Back was originally published in Authority Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.