Guardians of AI: Phelim Bradley Of Prolific On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True
As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Phelim Bradley.
Phelim Bradley is the CEO and co-founder of Prolific, a leading platform that revolutionizes data collection for academic and market research. Phelim holds a DPhil in Genomic Medicine and Statistics from Oxford University, along with a BSc in Physics and an MPhil in Computational Biology. He played a pivotal role in building the first iteration of Prolific’s platform in 2014, serving as CTO and leading the company’s product strategy. His innovative vision and leadership led him to assume the role of CEO in 2021, where he continues to drive Prolific’s mission to empower researchers with high-quality data.
Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?
Funnily enough, my PhD was in genetics — I wasn’t aiming for a career in survey research or data quality at all. But during that time, my co-founder was running into serious issues trying to collect reliable data for his own research. He showed me what he was dealing with: survey responses full of garbage, bots, and participants clearly just rushing for the reward. It was obvious the infrastructure for online research was fundamentally broken.
That lit a spark. We started tinkering with a side project to fix the problem, initially just to help ourselves and a few friends. But pretty quickly, it started spreading through academic networks. It turned out a lot of researchers were frustrated by the same thing.
What began as a niche academic tool turned into Prolific. We got into Y Combinator in 2019, and things snowballed from there. Today I’m the CEO of a company that sits at the intersection of data integrity and AI — a long way from where we started, but a much more dynamic and impactful space than I could have predicted.
None of us can achieve success without some help along the way. Is there a particular person who you are grateful for, who helped get you to where you are? Can you share a story?
Honestly, there are too many to name. It’s taken an incredible village — from our early team who took a leap of faith, to investors who backed us well beyond what was required, to advisors, board members, and colleagues who stood by us through difficult stretches.
But if I had to single someone out, it would be Enrico D’Angelo, who sadly passed away this year. During one of the most precarious periods in our company’s journey, he stepped in with clarity, generosity, and zero ego. He helped us navigate a make-or-break moment — not because he had to, not for any personal gain, but simply because he believed in what we were building and wanted to support it. That kind of selflessness is rare. I’ll always be grateful for it.
You are a successful business leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
First, being customer-obsessed — this is actually one of our core company principles now. When we started, everyone just accepted that half of research responses would be low quality. Instead of accepting that status quo, we kept believing there had to be a better way. Even now, when we hit obstacles, I genuinely believe they’re showing us the next problems to solve for our customers.
Second, maintaining unwavering integrity about our core mission. Whether it’s a PhD student or a frontier AI company, we treat every customer with the same commitment to data quality. This integrity has guided every major decision and built the trust that lets us work across such diverse use cases.
Third, strategic adaptability while staying true to our values. We’ve navigated major market shifts — from academic tool to COVID-driven online research to being at the center of AI development. Each transition was a risk, but we’ve learned that adapting while maintaining our standards and core values is what keeps us relevant and growing.
Thank you for all that. Let’s now turn to the main focus of our discussion about how AI leaders are keeping AI safe and responsible. To begin, can you list three things that most excite you about the current state of the AI industry?
First, we’re witnessing the “Experience Era” of AI evaluation. The industry is moving beyond benchmarks toward understanding how models perform when real humans use them in real-world scenarios. This requires the kind of authentic human feedback and diverse perspectives that we’ve been perfecting for over a decade.
Second, the explosion in demand for domain expertise in AI development. We’re working with customers who need physicists, medical professionals, climate scientists — the breadth of specialized knowledge required for safe AI development is unprecedented. It’s exciting to see how human expertise is becoming more valuable, not less, in the age of AI.
Third, the shift toward continuous evaluation rather than one-time training. Companies are realizing they need ongoing human feedback loops to ensure their models remain reliable and aligned. This represents a fundamental change in how AI systems are developed and maintained.
Conversely, can you tell us three things that most concern you about the industry? What must be done to alleviate those concerns?
First, there’s pressure to prioritize speed, which I understand, but it risks compromising quality and integrity. Everyone wants to move fast, and there’s enormous commercial pressure, but cutting corners on data quality to hit growth targets jeopardizes the safety of systems that will impact millions of people.
Second, I worry about the increasing sophistication of fraud and synthetic responses masquerading as human data. The professionalization of fraud operations means verification systems must constantly evolve. If we’re training AI on data that isn’t actually human, we’re building systems on false foundations.
Third, I worry about the long tail of AI applications — especially in consumer-facing or behavioral contexts. As these systems are deployed into messier, real-world environments, we’re going to see all sorts of unexpected edge cases in how people use, exploit, or react to them. These knock-on effects will be hard to predict and even harder to monitor. Unlike more deterministic systems, AI doesn’t break cleanly — it drifts, adapts, and fails in strange ways. That makes trust, safety, and auditability much more complex.
As a CEO leading an AI-driven organization, how do you embed ethical principles into your company’s overall vision and long-term strategy? What specific executive-level decisions have you made to ensure your company stays ahead in developing safe, transparent, and responsible AI technologies?
We’ve made significant investments in what I call our authenticity infrastructure. Over the past couple of months, we’ve rolled out platform updates and product features focused on data quality preservation and fraud prevention. We’re in our best position yet against sophisticated account manipulation, AI-generated responses — the whole spectrum of threats to data integrity.
The key decision for me was doubling down on our position as leaders in quality human data, even when it would be easier to lower our standards. We’ve built improved verification systems, multi-layered quality protocols, and real-time human escalation paths for reviewing potentially problematic AI outputs.
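To give a flavor of what those multi-layered quality protocols and escalation paths can look like, here is a deliberately simplified sketch in Python. Every check, name, and threshold is invented for illustration; this is not our production code.

```python
# Illustrative sketch of a multi-layered quality pipeline with a human
# escalation path. All checks, names, and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Submission:
    participant_id: str
    text: str
    response_time_secs: float

def attention_check(sub: Submission) -> bool:
    # Layer 1: did the participant spend a plausible amount of time?
    return sub.response_time_secs >= 10.0

def substance_check(sub: Submission) -> bool:
    # Layer 2: reject empty or throwaway answers.
    return len(sub.text.strip()) > 20

LAYERS: list[Callable[[Submission], bool]] = [attention_check, substance_check]

def review(sub: Submission) -> str:
    """Run each quality layer in order; any failure escalates to a human reviewer."""
    for layer in LAYERS:
        if not layer(sub):
            return f"escalated to human review ({layer.__name__} failed)"
    return "accepted"

print(review(Submission("p1", "A thoughtful, detailed answer about the task at hand...", 45.0)))
print(review(Submission("p2", "ok", 2.0)))
```

The design choice that matters here is that no single automated layer is trusted on its own: any failure routes the response to a person rather than silently discarding it.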
Have you ever faced a challenging ethical dilemma related to AI development or deployment? How did you navigate the situation while balancing business goals and ethical responsibility?
Our approach is built on three core principles. First, we always start with the question: “What serves human understanding?” This isn’t just about our business — it’s about the broader impact of the data we provide on scientific progress and AI safety.
Second, we maintain what I call “stakeholder integrity” — we consider the interests of our participants, customers, and society as a whole. When we make decisions about data quality standards or platform policies, we think about how it affects the millions of people who will ultimately interact with AI systems trained on this data.
Third, we prioritize long-term impact over short-term gains. This means sometimes making decisions that might slow our growth or cost us revenue in the near term, but that preserve the trust and quality standards that make our work valuable.
This framework has guided us through rapid growth while maintaining the integrity that our mission requires. It’s not always easy, but it’s what allows us to sleep well at night knowing our work is making AI development safer and more human-centered.
Many people are worried about the potential for AI to harm humans. What must be done to ensure that AI stays safe?
AI safety is underpinned by authentic human feedback throughout the entire AI lifecycle — not just during initial training, but through continuous evaluation and real-world performance monitoring. This shift toward measuring real-world impact is what I call the “Experience Era” of AI development.
But that human feedback must be genuinely human and representative. We need sophisticated verification systems that can distinguish real human input from synthetic or fraudulent responses, and we need diverse global perspectives — you can’t build safe AI for everyone using feedback from a narrow demographic slice.
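As a toy example of how a verification system might combine several weak signals into one decision, consider the sketch below. The signals, weights, and threshold are all hypothetical; real verification infrastructure draws on far richer evidence than this.

```python
# Hypothetical sketch: combining weak signals into a single "likely human"
# score. Signals, weights, and the threshold are invented for illustration.

def verification_score(signals: dict[str, float]) -> float:
    """Weighted sum of signals normalized to [0, 1]; higher means more likely human."""
    weights = {
        "typing_cadence_naturalness": 0.4,  # bots often paste or type uniformly
        "response_time_plausibility": 0.3,  # implausibly fast suggests automation
        "cross_session_consistency": 0.3,   # identity stable across sessions
    }
    return sum(weights[name] * signals.get(name, 0.0) for name in weights)

signals = {
    "typing_cadence_naturalness": 0.9,
    "response_time_plausibility": 0.8,
    "cross_session_consistency": 0.7,
}
score = verification_score(signals)
print("likely human" if score >= 0.6 else "flag for manual verification", round(score, 2))
```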
Despite huge advances, AIs still confidently hallucinate, giving incorrect answers. In addition, AIs will produce incorrect results if they are trained on untrue or biased information. What can be done to ensure that AI produces accurate and transparent results?
This is exactly why human-in-the-loop evaluation is so critical, and why Prolific is uniquely positioned for this challenge. AI models can struggle with errors, biases, hallucinations, and simply being optimized for the wrong objective. The solution is getting real people — representative groups of people — involved during model training and ongoing evaluation.
But we’re entering a world where you can’t just assume your human data is actually human. We’ve spent 10 years building quality assurance processes that can detect and prevent fraudulent responses. We also provide access to domain experts who can evaluate AI outputs in specialized fields — from medical professionals reviewing health-related responses to climate scientists evaluating environmental claims.
The key is having platforms that can intelligently route between different types of human intelligence based on the specific evaluation requirements, while maintaining absolute verification standards.
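To make that routing idea concrete, here is a minimal sketch of how a task might be matched to the right pool of human reviewers. The pools and rules are purely illustrative, not how any real platform is configured.

```python
# Illustrative sketch: route an evaluation task to a reviewer pool based on
# its requirements. Pools and rules are hypothetical.

ROUTING_RULES = [
    # (predicate over the task, reviewer pool)
    (lambda t: t.get("domain") == "medical", "verified medical professionals"),
    (lambda t: t.get("domain") == "climate", "verified climate scientists"),
    (lambda t: t.get("needs_local_context", False), "in-country general panel"),
]

def route(task: dict) -> str:
    """Return the first matching reviewer pool, defaulting to the general panel."""
    for predicate, pool in ROUTING_RULES:
        if predicate(task):
            return pool
    return "verified general panel"

print(route({"domain": "medical"}))          # verified medical professionals
print(route({"needs_local_context": True}))  # in-country general panel
print(route({"domain": "finance"}))          # verified general panel
```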
Here is the primary question of our discussion. Based on your experience and success, what are your “Five Things Needed to Keep AI Safe, Ethical, Responsible, and True”? Please share a story or an example for each.
First, authentic human feedback throughout the AI lifecycle — from initial training through continuous monitoring of deployed systems in real-world use cases.
Second, sophisticated verification infrastructure that can distinguish genuine human input from synthetic or fraudulent responses, because non-human data is growing more sophisticated all the time.
Third, diverse global perspectives that reflect the breadth of humanity. AI systems built on narrow demographic perspectives will fail when deployed broadly, which is why we’re expanding our international reach and cultural localization capabilities.
Fourth, independent evaluation platforms. If data providers are too closely tied to specific customers or investors, that closeness risks compromising the objectivity needed for safety assessment.
Fifth, continuous adaptation and vigilance. The fraud landscape evolves and AI capabilities evolve, so our verification and quality assurance processes must evolve accordingly. Standing still in this space means falling behind.
Looking ahead, what changes do you hope to see in industry-wide AI governance over the next decade?
AI is moving incredibly fast and creating tremendous opportunities — we need governance that enables progress rather than hindering it.
That said, there are a few targeted areas where standards would actually help the industry move faster. First, basic frameworks for data authenticity — not heavy regulation, but industry best practices that help everyone distinguish quality human feedback from synthetic or fraudulent data. This would save companies time and resources by establishing common verification approaches.
Second, I’d like to see voluntary transparency around conflicts of interest in evaluation services. Not mandated disclosure, but industry norms that help customers make informed decisions about their data providers. This market-driven approach would reward quality without creating regulatory burden.
The key is letting market forces drive quality while providing just enough infrastructure to prevent bad actors from undermining trust in human data. We want governance that supports innovation, not governance that gets in the way of the incredible breakthroughs happening in AI right now.
What do you think will be the biggest challenge for AI over the next decade, and how should the industry prepare?
The biggest challenge will be maintaining authentic human signal as AI systems become more sophisticated and globally deployed. This isn’t about avoiding synthetic data — that would be the wrong outcome — it’s about ensuring we have the right human perspectives for the incredibly diverse ways these models are being used.
We’re seeing demand for everyone from nuclear physicists to cultural experts as AI applications expand. The challenge is building evaluation infrastructure that can provide access to this breadth of human expertise while maintaining quality and authenticity standards.
The industry needs to invest in sophisticated human data orchestration platforms now. We can’t just choose between human and synthetic data — we need systems that can intelligently route between different types of human intelligence, AI capabilities, and verified synthetic data based on specific quality requirements and use cases.
Most importantly, we need to ensure this infrastructure remains unbiased, rather than being controlled by the same companies building the AI systems being evaluated.
You are a person of great influence. If you could inspire a movement that would bring the most good to the most people, what would that be? You never know what your idea can trigger. 🙂
I’d want to accelerate what we call the data revolution — making authentic human insight as accessible as cloud computing infrastructure. Our vision is data that enriches humanity, empowering an understanding of ourselves and our world that leads to real, positive impact.
Imagine if every major decision — whether developing medical treatments, building AI systems, or creating public policy — was grounded in authentic, diverse human insight. Right now, so many breakthroughs are limited by access to quality human feedback.
We’re building toward making high-quality human insights as easy to access as spinning up a server on AWS. This infrastructure for human intelligence could accelerate breakthroughs across science, technology, and society by ensuring human wisdom guides our most important innovations.
That’s what gets me excited every day — we’re not just building a platform, we’re building the foundations for a future where authentic human data drives solutions to the world’s biggest challenges.
How can our readers follow your work online?
If you’re a developer or researcher of any kind, I’d encourage you to try our platform directly — there’s nothing quite like experiencing how quickly you can access high-quality human insights when you need them.
The best way to follow our work is through Prolific’s website at prolific.com, where we regularly share insights about human data collection, AI development, and research innovation. You can also find us on LinkedIn.
Thank you so much for joining us. This was very inspirational.