Guardians of AI: Daniel Olsher of Integral Mind On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

Actually trustable technology — no faith can ever be required — too much is at stake. All aspects of systems must be provable and verifiable, and it must be possible to trust systems in the real world. AIs must dynamically generate responses based on what’s happening at the moment instead of assuming the future will be like the past. They must exercise context-aware self-control at all times.
As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Daniel Olsher.
Daniel Olsher’s background includes work on the “most ambitious AI project ever undertaken by the US Government” (DARPA CALO, per Wikipedia) and principal-investigator contributions to multiple DARPA, Army ARL, Air Force AFRL, and Singapore Ministry of Defense AI projects, MURIs, and cybersecurity research programs. He is the creator of a new form of AI, the first shown to possess all AGI properties, which has been successfully validated and deployed for DARPA, every branch of the US military (excluding Space Force), IARPA, the Intelligence Community, the State Department, and other governments and organizations.
One government team lead wrote that this work was “… a paradigm unlike any extant work … what [Olsher] has contributed is a significant breakthrough”.
Selected by DoD to join the DI2E Reference Architecture, this work has also been successfully peer-reviewed and published in top AI, topic modeling, and knowledge representation venues (AAAI, ICDM, KDD, Neural Networks, IEEE Symposium Series on Computational Intelligence, HumTech, and Cognitive Science) and was selected for an award at HumTech 2015.
Business is about decisions and people, and this AGI offers best-in-class capabilities in understanding, simulating, and predicting complex business systems, economics, psychology, emotions, and behavior. It provides the strongest possible support for any decision, including provable correctness, demonstrable non-bias, and deep understanding of people and markets.
In addition to predicting the behavior of world leaders with 94% accuracy, the platform predicted the content of an early agreement between the US and Iran on the nuclear issue, including the extent to which both sides would be pleased and the reasons why this was the best agreement. Other proven successes include international relations project analysis and simulation (accurate to the point that field participants noted “it was as though we were in their heads”), cyber, analyses of USG messaging on human rights, real-time Tweet understanding and processing, and real-time disaster awareness and decision support.
Current key projects include entrepreneurship and employment support.
For more information, please see https://intmind.com.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
Many different threads combined to lead me to AGI. From a personal perspective, as a Buddhist, I’ve always been interested in how technology can improve quality of life, and AGI offers immense possibilities in that regard. My interests in cognition and language pushed me toward a better understanding of the true nature of reality and of intelligence. Key insights included the fact that everything is in fact connected, and that many of the tools we use in hopes of understanding reality tend to ignore nuance, which, as it turns out, is the key to the kingdom.
Given that intelligence involves applying knowledge in new ways based on context, it is necessary to first understand what knowledge actually is in order to build machine cognition. Traditional theories don’t work in this application; we need something different. To answer this, I began by asking: what can human capabilities tell us about how intelligence must work underneath? People adapt to situations and contexts that they’ve never seen before, but the knowledge they use to do that never changes (it is just used differently). What must knowledge look like in order for this to be possible?
To see the solution, think of a computer display: it is universal (any image whatsoever can be shown) and accurate (images are shown properly). These are exactly the properties we need for our knowledge, so what is it about a computer display that enables them? As it turns out, it is the small size of the ‘dots’ (or pixels) of the screen that makes all the difference. Because the pixels are small, they are able to flexibly work together to show new elements in new ways. If they were large, we would only see ‘blobs’ and the screen wouldn’t properly be able to fulfill its function (see, e.g., our COGBASE paper for more on this).
We can apply this insight directly to the achievement of AGI. Just as the physical world is composed of small atoms, so too is information. If we break our knowledge down into small ‘atoms’ of information, it will have all the properties it needs to enable AGI. Furthermore, because intelligence requires problem-solving abilities that rely on cause and effect, we can apply the observation that causality can also be broken down into small ‘pieces’: each knowledge atom carries a small amount of information and a small amount of causality. This is the key to AGI.
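To make the pixel analogy concrete, here is a minimal Python sketch. Everything in it (the atom structure, the example atoms, the activation loop) is invented for illustration and is not Integral Mind’s actual representation; it only shows the general idea that unchanging atoms, each carrying a little information and a little causality, can recombine to handle situations never seen before.

```python
from dataclasses import dataclass

# Hypothetical illustration only: these names and structures are a sketch
# of the "knowledge atom" idea, not Integral Mind's actual representation.

@dataclass(frozen=True)
class KnowledgeAtom:
    concept: str     # the small piece of information this atom carries
    effect: str      # the small piece of causality it contributes
    strength: float  # how strongly the cause produces the effect

# A tiny atom store; a real system would hold vastly more atoms.
ATOMS = [
    KnowledgeAtom("rain", "ground_wet", 0.9),
    KnowledgeAtom("ground_wet", "slippery", 0.7),
    KnowledgeAtom("umbrella", "stay_dry", 0.8),
]

def activate(context: set[str]) -> dict[str, float]:
    """Recombine atoms against a never-before-seen context.

    Like pixels on a display, the atoms themselves never change;
    only the way they combine does.
    """
    inferred: dict[str, float] = {}
    frontier = set(context)
    while frontier:
        cause = frontier.pop()
        for atom in ATOMS:
            if atom.concept == cause and atom.effect not in inferred:
                inferred[atom.effect] = atom.strength
                frontier.add(atom.effect)  # chain cause into further effects
    return inferred

# The same atoms answer different situations without any retraining.
print(activate({"rain"}))              # {'ground_wet': 0.9, 'slippery': 0.7}
print(activate({"rain", "umbrella"}))  # same atoms, plus 'stay_dry'
```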
None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?
We can’t say enough about our early customers and supporters; they saw what we had and even put their careers on the line to ensure that the government could benefit from this. One delayed his retirement in order to keep working with us because, in his words, he saw this technology as perhaps the only thing that could help humanity escape many of the otherwise intractable problems we currently face.
Our customers often had personal connections to the projects we undertook. In one case, an intelligence analyst was concerned about a specific instance where things had not gone as expected; to show that he was valued, management had previously agreed to let him undertake an innovation project of his choice, and he chose our AGI for that project. He knew it could help the Intelligence Community avoid such situations in the future by considering large amounts of information, simulating that information’s impact, and pointing analysts toward the elements that really mattered.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
It seems that empathy is the core quality which defines all other outcomes — it defines what and whether we ‘see’, the degree to which we can understand and be there for other people, and the ways in which we are able to understand the world and adapt to change. It determines how and whether or not we will show up positively in others’ lives. Leadership requires all of these capabilities.
I’ve also found curiosity especially important — if there’s anything that has always surprised me about conventional wisdom it’s that it is so often wrong. Rethinking well-accepted truths based on first principles in order to verify if they still hold has always brought me enormous dividends.
And persistence — as is well known, nothing worthwhile is ever easy. But if you have a vision, and you’re understanding things clearly, the world likely desperately needs the unique answers you bring.
Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
The advent of our AGI has really changed the game; many serious issues that have been raised with respect to traditional AI are no longer problematic.
This AGI has been expressly designed to make provable ethics and morality possible, practical, and natural. Morality is about consequences, and the AGI is able to autonomously discover these and determine what to do about them. And all aspects of the system are amenable to inspection.
For the first time, it is therefore possible to trust an AI — because it understands morality and knows the consequences of its actions, it can exercise autonomous self-control. In addition, it is based on provable cause and effect, not statistical correlation, so it can adapt to changing circumstances in provably correct ways. It is also fully transparent and can explain what it is doing, so it is never necessary to take it on faith.
AGI holds immense transformative potential because it is able to solve the full range of intelligence subproblems, including knowledge, thinking, simulation, understanding, and explanation. Simulation allows us to see the future and discover what to do about it, which, to date, has required large teams of people and significant resources, placing it out of the reach of most. But this AGI offers these capabilities to everyone, thereby increasing freedom, flexibility, and choice.
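As a toy illustration of simulation-driven decision support, here is a hedged Python sketch. The causal model, the candidate actions, and the harm set are all assumptions invented for this example, not the AGI’s actual mechanics; the point is only the shape of the loop: simulate each candidate future, then choose the one you actually want.

```python
# Illustrative sketch only: a toy "simulate the future, then decide" loop.
# The transition model and action names are invented for this example.

CAUSAL_MODEL = {
    # (state_flag, action) -> resulting state flags
    ("supply_shortage", "do_nothing"): {"supply_shortage", "price_spike"},
    ("supply_shortage", "open_second_supplier"): {"stable_supply"},
    ("supply_shortage", "raise_prices"): {"price_spike", "customer_loss"},
}

def simulate(state: frozenset[str], action: str) -> frozenset[str]:
    """Roll the causal model forward one step for a candidate action."""
    result: set[str] = set()
    for flag in state:
        # Flags without a matching rule simply persist unchanged.
        result |= CAUSAL_MODEL.get((flag, action), {flag})
    return frozenset(result)

def best_action(state: frozenset[str], actions: list[str]) -> str:
    """Try each action in simulation and keep the least harmful future."""
    harms = {"price_spike", "customer_loss", "supply_shortage"}
    return min(actions, key=lambda a: len(simulate(state, a) & harms))

state = frozenset({"supply_shortage"})
print(best_action(state, ["do_nothing", "open_second_supplier", "raise_prices"]))
# -> 'open_second_supplier'
```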
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
Now that AGI is here, it is essential to shift thinking, expectations, and resource allocation decisions. Our work has shown that traditional AI paradigms cannot reach AGI, and those paradigms consume large amounts of resources (power, chips, etc.) that this AGI does not require. Traditional AI often creates negative externalities (such as high power demands, biases, and the use of others’ creative work) where this AGI does not. If money currently being spent on infrastructure and model training were immediately repurposed in support of the broad adoption of AGI, significant benefits would result.
In addition, in many cases this AGI removes the tradeoff sometimes seen in traditional technologies between morality and effectiveness; it is now possible to have both, and to compete with otherwise immoral systems while keeping one’s morality intact.
Lastly, it is essential to recalibrate policy decisions related to AI — every key question surrounding the field brings radically different answers in an AGI world.
As a CEO leading an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
Ethical principles rely on people to apply them as guided by norms. But norms have a path dependency: those who adopt ethical norms first tend to set the direction for those who follow, so it is essential to drive responsible deployment early on. We have taken this on as a core responsibility of the company, as well as making clear that there is no tension between doing well and doing good; the AGI does both together. We always seek to teach people what is possible and how to maximize benefits for all.
Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?
AGI systems will inevitably take on important defense-related roles in future. As noted above, we view it as essential, and as part of our core responsibility, to point out that this AGI removes the perceived need for a tradeoff between morality and effectiveness and that the ways in which AGI is deployed in the early days will greatly affect the future path of the technology. With AGI, each individual action can be autonomously weighed against its human costs in the real world and actions can be strictly tailored to the exact circumstances faced.
In addition, we always strictly consider the moral and practical implications of our projects and ensure that their designs, and capabilities, take all potential positive and negative effects into account. The AGI itself is capable of showing us the future it is creating and making changes to ensure that that future is always the one that we want.
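For a rough picture of what an always-on moral gate could look like, here is a hypothetical Python sketch. The cost scale, the ceiling, and every name in it are assumptions made for illustration, not the deployed system’s mechanism; the point is that each proposed action is weighed against its predicted human cost and every decision comes with an inspectable reason.

```python
from dataclasses import dataclass, field

# Hedged sketch: thresholds, costs, and names are invented for illustration.

@dataclass
class ProposedAction:
    name: str
    predicted_human_cost: float  # assumed output of consequence simulation, 0.0-1.0
    rationale: list[str] = field(default_factory=list)

MORAL_CEILING = 0.2  # assumed policy: refuse anything above this cost

def moral_gate(action: ProposedAction) -> tuple[bool, str]:
    """Approve or veto an action, always returning an inspectable reason."""
    if action.predicted_human_cost > MORAL_CEILING:
        return False, (f"VETO {action.name}: predicted human cost "
                       f"{action.predicted_human_cost:.2f} exceeds ceiling "
                       f"{MORAL_CEILING:.2f}")
    return True, (f"APPROVE {action.name}: cost within ceiling; "
                  + "; ".join(action.rationale))

ok, reason = moral_gate(ProposedAction("broad_jamming", 0.6))
print(ok, reason)   # False, with the veto explanation
ok, reason = moral_gate(ProposedAction("targeted_alert", 0.05, ["narrowly scoped"]))
print(ok, reason)   # True, with the approval explanation
```

The design choice worth noting in the sketch is that the gate never returns a bare yes or no: every outcome carries a human-readable trace, mirroring the requirement that nothing be taken on faith.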
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
Our AGI offers the properties and capabilities needed in order to keep AI safe, but it is up to humans to make sure that they are used. In our view the most important thing is that people demand perfect AI and decide not to settle for anything less, including accurate causal decisions grounded in genuine understanding of the world, full accountability, provable safety and correctness, and the use of moral self-control mechanisms at all times.
Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
Critically, this AGI does not use training and/or correlation in any way, so it is not susceptible to hallucination. The system is transparent in every respect, and the ability to prove that a system will only generate correct answers is a key property of AGI.
We can prove that the AGI’s knowledge, and the reasoning that results from that knowledge, is always correct, accurate, unbiased, and complete.

Here is the primary question of our discussion. Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”? Please share a story or an example for each.
1. Actually trustable technology — no faith can ever be required — too much is at stake. All aspects of systems must be provable and verifiable, and it must be possible to trust systems in the real world. AIs must dynamically generate responses based on what’s happening at the moment instead of assuming the future will be like the past. They must exercise context-aware self-control at all times. Systems must not be biased in any way and must be able to prove that this is the case. If humans can’t trust a system to do the right thing in changing circumstances, it cannot be used autonomously and the full potential of such systems can never be realized.
2. Public commitment — it is important that those who care about morality in AI/AGI deployments make this publicly known. It is also essential that companies make public commitments to morality, as this not only helps set proper norms, but also helps keep people fully honest and on track while enabling people to observe the stances of important figures (and changes thereto).
3. Adapting Public Policy Debates to New Realities — public debates must begin to take these new realities into account; the longer they stay within the old paradigm, the more suboptimal decisions will be made and locked into force, the more genuine dangers will go unaddressed, and the more of AGI’s benefits will be lost. Yesterday’s assumptions all too often become ‘baked in’ to today’s thinking, widening the gap between what is believed and what is real.
4. Public Awareness (including the removal of the tradeoff between safe and strong AI) — it is no longer necessary to give up AI power in pursuit of safety. We can now have both, but most people don’t realize this is the case. The answers to all important AI questions are different with this AGI (that is, the paradigm has shifted), and it is essential to build awareness on this point.
5. Ethical Early Adoption — early AGI uptake by those who genuinely care about safe deployment will make an immense difference by placing us on the right path early on (a path that will persist), providing powerful positive examples that this is indeed possible, and driving positive and useful answers to the questions that people will invariably have. All of this will help shift the paradigm, which is the most important task we have.
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
It is essential that beliefs and expectations, especially in the AI policy and futurist communities, begin the long process of adaptation as soon as possible. AGI massively changes what is possible, what the potential threats are, and the options we have to deal with those threats. We need to move on from the last paradigm if we really want to get where we need to go.
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
The biggest challenges will stem from whether or not people are able to truly adapt to the reality that AGI exists. Decisions are made every day based on what people think AI will bring, and, until adjustments are made, all of those decisions will be wrong. Resources will be allocated in the wrong places. Opportunities will be lost and potential will be wasted. In addition, delaying the popularization of Safe AGI norms and practices greatly increases the risk that we will not find ourselves on the right path.
Critically, this AGI removes the ‘profit vs. people’ dichotomy that has so often held true of traditional AI. It is now possible to serve people while maintaining their dignity, and to do business while also doing good. But people need to know that this is the case.
In this same vein, it is critical that people no longer assume that the many limitations and problems of traditional AI are also true of this AGI; the properties of the AGI are such that these problems no longer hold. This is also true of the AGI’s capabilities in the superintelligence arena — while it is indeed faster and smarter than existing technology, it is also moral in all respects and can exercise full self-control. It understands the consequences of its actions and can adjust in real-time so as to change them. It will be essential to begin to take this into account.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂
Our dream has always been that this AGI be used to empower people in ways that have not otherwise been possible; we’d like to inspire a movement of people applying this for good in their own lives.
We hope that our technology inspires people to want, ask for, and demand more; they deserve it. Setting an example, and showing new possibilities, is the best way to inspire positive change. In particular, we hope that people will demand perfect AI: systems that respect human dignity across the board, that are provably safe and correct, and that do not demand faith.
Key applications include using the AGI to start businesses and to solve core problems at home, at work, and in society. Problems once thought impossible need not remain so; all that is needed is the desire, and the will, to improve.
How can our readers follow your work online?
For our most up-to-date content, please see intmind.com.
Thank you so much for joining us. This was very inspirational.