AWS’s Ishneet Dua on Responsible AI, Sustainable Cloud Infrastructure and the Ethics Behind Generative Tech
…Generative AI’s rapid adoption has dramatically increased demand for compute, storage, and bandwidth, leading to higher energy consumption and a larger carbon footprint. This makes it more important than ever to optimize AI pipelines — using techniques like model distillation, quantization, and thoughtful model selection to balance performance, cost, and sustainability. Companies should carefully evaluate their business needs before adopting the latest models or tools, as unnecessary complexity and tool sprawl can waste both resources and energy…
I had the pleasure of talking with Ishneet Dua. Ishneet is a Senior Solutions Architect at Amazon Web Services (AWS), where she plays a central role in guiding enterprises through the adoption and implementation of artificial intelligence (AI), machine learning (ML), and sustainable cloud technologies. With over a decade of experience in digital transformation and enterprise architecture, Dua has built a career at the intersection of advanced computing and environmental consciousness, advising organizations across industries on how to optimize their technological infrastructures while remaining attentive to long-term sustainability and ethical AI use.
Dua joined AWS in 2019 and is currently based in San Francisco, a city she describes as both technically dynamic and culturally vibrant. Her position involves working closely with clients, ranging from startups to Fortune 100 corporations, to help them design and deploy cloud-native applications and generative AI models. A key focus of her work includes optimizing cloud architectures for performance, cost-efficiency, and environmental impact. In a sector where the demand for compute power continues to grow, particularly for GPU-intensive AI workloads, she is actively engaged in developing cost-aware, scalable, and sustainable solutions for model providers and enterprise users alike.
Prior to her role at AWS, Dua’s technical career began in DevOps engineering at a time when container orchestration technologies like Docker and Kubernetes were just gaining prominence. Her early exposure to these foundational tools allowed her to gain fluency in the types of infrastructural systems that now underpin much of the modern cloud ecosystem. In 2018, she briefly returned to India to help build out a development team, further expanding her global leadership credentials before fulfilling her long-standing aspiration of joining Amazon.
Dua holds a master’s degree in computer science from Oregon State University. She originally hails from Delhi, India, where she completed her undergraduate studies. Her early interests were shaped by conversations with her father, a civil engineer with a specialization in environmental engineering, who envisioned a technical career for his daughter in a related domain. Although environmental engineering programs were less accessible in India at the time, Dua embraced computer science while continuing to draw inspiration from her upbringing, consistently emphasizing technology’s potential to support environmental stewardship.
Her professional portfolio includes authorship of two technical books and over 20 articles and white papers, reaching more than half a million readers worldwide. Her published work explores topics such as generative AI implementation, sustainability best practices, and cloud-native deployment strategies. She is a frequent contributor to high-profile industry events including CloudX, DevNetwork, AWS re:Invent and AWS re:MARS, where she speaks on subjects ranging from MLOps and low-code machine learning to AI ethics and legal compliance.
In addition to her technical and thought leadership, Dua is deeply involved in mentoring and advocacy for women in AI/ML and cloud computing. She has led several training initiatives and mentorship programs aimed at improving diversity and inclusion in technology fields. Her courses and workshops, such as those focused on career guidance and low-code ML, seek to make advanced technologies more accessible to wider audiences, particularly those traditionally underrepresented in STEM.
A hallmark of Dua’s current work is its practical and strategic orientation. In recent years, she has helped organizations navigate the complexities of building, scaling, and maintaining AI/ML pipelines. This includes advising on infrastructure orchestration, model/data drift monitoring, inference optimization, and the application of techniques like model quantization to reduce computational overhead. With AI models consuming growing amounts of energy, Dua has been instrumental in developing AWS frameworks that include sustainability as a core pillar alongside security and performance.
One of her most notable projects involved partnering with a U.S.-based telecommunications company to redesign their cell tower placement strategy using geospatial data and ML algorithms. The objective was to integrate environmental variables, such as flood risk and deforestation trends, into infrastructure planning. The initiative is still active and informs long-term network development in regions affected by climate vulnerability.
Her experiences have also illuminated the legal and ethical complexities of generative AI. The growing use of LLMs and generative AI highlights the critical need to carefully address risks such as hallucinations (where AI produces inaccurate or fabricated information) and intellectual property concerns like copyright protection. For example, image generation comes with challenges of safeguarding creators’ rights due to the absence of embedded watermarking. These issues underscore the importance of implementing robust measures to ensure the reliability, authenticity, and legal compliance of AI-generated content across both text and images.
The demands of Dua’s role, requiring both technical depth and broad consultative engagement, have made it among the most challenging in her career. She frequently navigates rapid context switching, adapting to different industry domains, technological maturity levels, and customer needs. While the learning curve has been steep, she notes that the dynamic environment has helped her grow personally as well as professionally. Over time, she has become an accomplished public speaker, overcoming initial fears to deliver keynotes and presentations at major industry gatherings.
Alongside her day-to-day responsibilities, Dua remains focused on the broader implications of AI’s growth. She advocates for greater transparency, governance, and ethical rigor in AI systems, especially in light of issues such as hallucination, model bias, and data privacy. She believes that the future of AI hinges not just on technical advancement, but on principled development, balancing innovation with societal impact.
Looking ahead, Dua continues to work on creating go-to-market strategies for sustainable IT and scalable AI infrastructure. Her mission remains clear: to ensure that the tools shaping tomorrow’s industries are not only powerful and effective, but also responsibly built and equitably deployed.
Yitzi: Ishneet Dua, it’s so nice to meet you. Before we dive in deep, our readers would love to learn about your personal origin story. Can you share the story of your childhood and how you grew up?
Ishneet: Sure. Thank you for your time again, Yitzi. I grew up in India and spent my formative years in Delhi, which is also where I completed my undergraduate studies. I was always driven by a desire to expand my educational horizons, so I made the decision to pursue my master’s degree in the United States right after finishing my undergrad. I went on to study at Oregon State University.
My father was a civil engineer and held an advanced degree in environmental engineering. He had envisioned a similar career path for me, especially when we would talk about what I wanted to be when I grew up. But life happened, and at the time, there were limited environmental engineering programs available in India. That led me to pivot toward computer science, which I saw as another field where I could contribute to sustainability and environmental solutions through technology.
My professional journey really began in the U.S., and it was an exciting time in tech. I started off as a DevOps engineer after completing my master’s at OSU, around 2016 to 2018. That was a period when containerization and orchestration were just starting to gain traction, people were talking about Docker and Kubernetes. I had the opportunity to work on those technologies when they were still in their early stages, and it was a valuable experience because they were beginning to reshape the industry.
In 2018, I had to temporarily move back to India to help support and build a team there for my previous company, which was a meaningful leadership experience. But throughout my time working in the U.S. and being involved with these technologies, my dream company was always Amazon. I wanted to work at AWS, Amazon Web Services, because even back then, and still today, they represent the pinnacle of cloud computing and innovation.
That dream became a reality in early 2019. I interviewed with Amazon and moved back to the U.S. from India to take on a role as a solutions architect in Chicago. I absolutely love that city. It captured my heart with its vibrant culture and strong tech scene. It’s now been six years at AWS, and it’s been an incredible journey of continuous growth and learning. The company has a strong culture of innovation. I’m sure you’ve heard of the leadership principles and the emphasis on customer obsession. That culture has given me countless opportunities to explore various domains and technologies.
What stands out the most to me is the consistent emphasis on continuous learning, innovation and bias for action, whether it’s diving deep into new services, understanding different industry verticals, or exploring emerging technologies. In my role as a solutions architect, there’s a lot of context switching and breadth rather than depth, which is exactly what I wanted. I thrive in environments where I’m constantly learning and trying new things.
Looking back, I see a clear connection from my early interest in my father’s work in environmental engineering to my current role in tech. It’s been an unconventional path, but a rewarding one. It’s allowed me to merge my passion for sustainability with the cutting-edge technologies shaping our world today, like generative AI and machine learning, while working at a company that takes its responsibility to the planet seriously.
Yitzi: You probably have some amazing stories from your career. Can you share with our readers one or two stories that most stand out in your mind from your professional life?
Ishneet: Sure. Just to give you a little additional background before the story makes sense, in my current role as a solutions architect in a pre-sales capacity, my work is pretty multifaceted. I engage with a wide range of customers across different industries, from small startups to large enterprises. Right now, I work a lot with generative AI model providers, companies building the next generation of foundational models, like Anthropic’s Claude, GPT models, or models for text-to-image generation like Stable Diffusion.
One of the key parts of my role is being a trusted technology partner for my customers. I make sure they’re architected correctly for the cloud, which includes principles like resiliency, security, performance, and operational excellence. In recent years, we’ve seen a shift in global priorities. Climate change has become an increasingly urgent issue, with deforestation accelerating and glaciers melting. That shift prompted us to evolve our architectural approach by adding sustainability as a new pillar to our framework.
One of the most fascinating projects I’ve worked on was with a major telecommunications provider. They wanted to optimize their cell tower placement strategy using advanced geospatial analysis. The idea was to combine satellite imagery and perform risk assessments to determine the most sustainable and least risky locations for future cell tower sites. They looked at a wide range of data, historical flooding patterns, deforestation rates, terrain elevation, population density, and network coverage. This project required a deep application of machine learning and geospatial analytics. I thought that was a really exciting project to be a part of, not just because of the technical challenge but also because of the broader impact it could have on sustainability and infrastructure resilience.
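The project’s actual data, models, and weights are not public, but the multi-factor site assessment Dua describes can be illustrated with a toy weighted-scoring sketch. Everything below (the factor values, the weights, and the three candidate sites) is invented purely for illustration:

```python
# Hypothetical sketch of multi-factor site scoring. Each risk factor is
# normalized to [0, 1] and combined with assumed weights; a lower score
# means a better candidate site. All numbers are illustrative.
import numpy as np

# Candidate sites x factors: flood risk, deforestation rate,
# terrain difficulty (all normalized to [0, 1]).
factors = np.array([
    [0.9, 0.2, 0.3],   # site A: high flood risk
    [0.1, 0.1, 0.4],   # site B: low overall risk
    [0.3, 0.8, 0.2],   # site C: heavy deforestation
])
weights = np.array([0.5, 0.3, 0.2])   # assumed relative importance

scores = factors @ weights            # weighted risk per site
best = int(np.argmin(scores))         # index of the lowest-risk site
print(f"risk scores: {scores.round(2)}, best site index: {best}")
```

A real system would replace the hand-set weights with a model trained on historical outcomes, but the shape of the decision (normalize factors, combine, rank candidates) stays the same.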
Yitzi: It’s been said that sometimes our mistakes can be our greatest teachers. Do you have a story about a funny mistake that you made when you were first starting in AI, and the lesson that you learned from it?
Ishneet: One that comes to mind from more recent times is related to the proof of concepts I often build for customers. I frequently need to put together small applications or demos to support their specific use cases, and some of the recent ones have involved text-to-text, text-to-image and text-to-video generation using foundational models.
We experimented with generating this creative content using AI models. While the technology performed well, we soon realized there were important considerations we had overlooked, particularly around content ownership and usage rights. Without clear indicators of origin or embedded credentials, there was a real risk of copyright or intellectual property issues, which could lead to significant legal complications.
For instance, if someone asks an AI to create content in the style of a well-known artist, it raises questions about the legal and ethical boundaries of such outputs. It’s easy to focus on the technical success of these models, but it’s equally important to consider whether users fully understand the implications of using AI-generated content, especially in relation to copyright and responsible AI practices.
Now, we take a much more cautious approach. We ensure that every project and piece of AI-generated content is evaluated for potential legal and security risks before delivery. This is particularly crucial in sectors like media and entertainment, where contractual obligations and intellectual property rights are complex. Ultimately, it’s vital to recognize that current AI models may not fully account for these nuances, so proactive risk management and responsible AI practices are essential.
This was a really eye-opening experience for me, and it happened just last year, when the generative AI buzz was peaking and everyone wanted to explore these tools. But we didn’t, and still don’t, have a complete legal framework to govern this space.
Yitzi: What has been the most challenging project or role that you’ve taken on, and why?
Ishneet: Let me think about that. I’d say my current role has been the most dynamic and rewarding so far. Each day brings new opportunities, with a diverse range of customers and technologies to explore. The constant variety has encouraged me to become more adaptable and comfortable with ambiguity, even though I used to prefer having everything planned out. This experience has really helped me grow, teaching me how to thrive in ever-changing situations.
Working with multiple customers, often back-to-back, means I have to stay sharp and flexible, quickly switching contexts and learning from each unique interaction. It’s also been a fantastic motivator to keep up with the latest advancements, especially as technology evolves so rapidly. While it can sometimes feel like there’s always something new to learn, I’ve developed routines to stay up-to-date and maintain a healthy balance.
Transitioning from a purely technical and development background to a more consultative and strategic pre-sales role has broadened my perspective. I’ve also had the chance to develop skills beyond technology, like public speaking, writing, and thought leadership. These experiences, whether presenting at conferences or publishing articles, have been both challenging and fulfilling. Looking back, I’m proud of how much I’ve grown and excited for the opportunities that lie ahead.
Looking back, I think this role has fundamentally changed me. It’s shaped my personality, helped me grow, and made me better, not just as a professional, but as a person. It’s definitely been a learning experience.
Yitzi: You have so much impressive work, Ishneet. Can you share with our readers the exciting projects you’re working on now? Whatever you’re allowed to share.
Ishneet: One of the recent projects I’ve been working on centers around generative AI and helping model providers build their solutions on the cloud in the most cost-effective way possible. This is especially important because, as you know, GPUs are both expensive and in short supply. These models require significant compute power, and meeting that demand at a lower cost is crucial for achieving a return on investment, particularly since many of these companies are startups. My recent work has focused on optimizing the entire model-building pipeline, including data processing and analysis, model training, hyperparameter tuning, and deploying models for inference. We evaluate how to maximize GPU utilization during training, which orchestrators to use (whether Slurm-based or Kubernetes), how to architect inference pipelines, and how to prevent throttling when multiple prompt requests come in simultaneously.
One of the major initiatives has been developing best practices for these pipelines. This includes providing guidance on the appropriate storage infrastructure, selecting the best vector databases or embedding models, and recommending the most effective monitoring and observability tools. We also emphasize keeping models up to date and continuously improving them to prevent staleness as real-world conditions evolve. This involves monitoring for data drift, model drift, and output quality drift.
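The drift monitoring Dua mentions can be sketched in a few lines. This is a minimal illustration, not AWS tooling: it compares a live feature distribution against its training baseline with a two-sample Kolmogorov-Smirnov test, using synthetic data throughout:

```python
# Minimal sketch of data-drift detection: flag a feature whose live
# distribution has diverged from its training-time baseline, using a
# two-sample Kolmogorov-Smirnov test from scipy.
import numpy as np
from scipy import stats

def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live distribution has drifted from the
    baseline at significance level alpha."""
    statistic, p_value = stats.ks_2samp(baseline, live)
    return bool(p_value < alpha)

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature values
stable   = rng.normal(loc=0.0, scale=1.0, size=5_000)  # live data, same distribution
shifted  = rng.normal(loc=0.8, scale=1.0, size=5_000)  # live data, mean has shifted

print(detect_drift(baseline, stable))   # same distribution: drift should rarely be flagged
print(detect_drift(baseline, shifted))  # shifted mean: drift is flagged
```

Production monitoring would run such checks per feature on a schedule and alert or trigger retraining when drift persists; model drift and output-quality drift need their own signals on top of this.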
All of this is carried out with responsible AI practices in mind. We ensure transparency and control throughout the entire lifecycle, particularly to support customers who may need this information for compliance or regulatory purposes in the future. A recent project has focused on designing this pipeline with best practices so it can be scaled and adapted for other model providers as well.
Another initiative I’ve been working on is creating a go-to-market package for sustainable IT. This is a broad area, and Amazon has a strong commitment to environmental stewardship. We launched the Climate Pledge in 2019, are powering data centers with renewable energy, improving our packaging, and investing in companies like Rivian to promote electric vehicles in our delivery fleet. Our sustainability efforts also include introducing more energy-efficient instances, like our Graviton instances, which are up to 60% more energy efficient.
Yitzi: In terms of sustainability, can you share a few things that concern you about the AI industry as a whole?
Ishneet: Over the past year, generative AI has surged in popularity, especially in 2024, bringing with it a complex intersection of technology and sustainability. As the field grows, environmental, social, and governance (ESG) considerations have become increasingly important. My work has expanded to address not only environmental impacts but also responsible AI development, focusing on data provenance, ethical use, bias, and the need for transparent governance frameworks. These themes, including legal and copyright challenges, are explored in my recent books.
Generative AI’s rapid adoption has dramatically increased demand for compute, storage, and bandwidth, leading to higher energy consumption and a larger carbon footprint. This makes it more important than ever to optimize AI pipelines, using techniques like model distillation, quantization, and thoughtful model selection to balance performance, cost, and sustainability. Companies should carefully evaluate their business needs before adopting the latest models or tools, as unnecessary complexity and tool sprawl can waste both resources and energy.
Efficient AI development requires strong architecture, best practices, and end-to-end monitoring. Tracking resource usage allows for smarter scaling and reduced waste, ensuring models are deployed and operated efficiently. Ultimately, a balanced, use-case-driven approach, supported by machine learning for optimization, can help organizations build powerful generative AI solutions while managing costs and minimizing environmental impact. These are some topics I talk about in my book as well.
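To make the quantization idea concrete, here is a framework-free sketch of symmetric int8 weight quantization. It illustrates the general technique (store 8-bit integers plus a scale instead of 32-bit floats), not any specific AWS or production recipe:

```python
# Minimal sketch of symmetric per-tensor int8 quantization: approximate
# float32 weights as w ~= scale * q with q in int8, cutting storage 4x
# at the cost of a small, bounded rounding error.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights onto int8 with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
w = rng.normal(size=(1024, 1024)).astype(np.float32)  # stand-in weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"storage: {w.nbytes} -> {q.nbytes} bytes")      # exactly 4x smaller
print(f"max abs error: {np.abs(w - w_hat).max():.4f}") # bounded by scale / 2
```

Real deployments typically quantize per channel and calibrate activations as well, but the storage and bandwidth savings, and the resulting energy savings, come from this same basic substitution.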
Yitzi: I saw a recent tweet from Sam Altman where he said that when people say “please” and “thank you” to AI, being polite when it’s not necessary, it actually costs millions of dollars. But I also think about the environmental impact, every time you’re using all these servers and all this compute just to generate a polite response. Should people be considering that? Like, if I use AI today, am I essentially using 10 gallons of water?
Ishneet: I wouldn’t frame it that way. I believe AI should be embraced as a powerful tool to make our lives easier and more efficient. As I mentioned earlier, relying solely on manual experiments and iterations can actually waste valuable compute resources, whereas leveraging AI can help you reach solutions faster and more effectively.
That said, it’s important to use AI both responsibly and efficiently. For example, if you want to learn about machine learning, a clear and concise prompt like “Teach me the important concepts in machine learning” is far more effective than a lengthy, overly detailed request. The more concise your prompt, the fewer compute resources are required, since the model processes every token you send. We often see users include unnecessary details or repeat themselves, which only increases resource usage without adding value. Crafting clear, focused prompts is a simple yet impactful way to optimize your AI interactions.
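The point about concise prompts can be illustrated with a rough token estimate. The four-characters-per-token heuristic below is only an approximation; real counts come from each model’s own tokenizer:

```python
# Rough illustration that model compute scales with input length: a
# padded, over-polite prompt costs more tokens than a concise one.
# The ~4 characters per token figure is a common English-text rule of
# thumb, not an exact tokenizer.
def approx_tokens(prompt: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, round(len(prompt) / 4))

concise = "Teach me the important concepts in machine learning."
verbose = (
    "Hello! I hope you are doing well today. I was wondering, if it is "
    "not too much trouble, whether you could possibly take some time to "
    "teach me, in as much detail as you see fit, the important concepts "
    "in machine learning. Thank you so much in advance!"
)

for name, prompt in [("concise", concise), ("verbose", verbose)]:
    print(f"{name}: ~{approx_tokens(prompt)} tokens")
```

Since every input token is processed by the model, the verbose version costs several times the compute of the concise one for essentially the same answer.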
Overall, I firmly believe that using AI is a positive step: it boosts productivity, streamlines everyday tasks, and opens up new possibilities. The key is to approach it thoughtfully, ensuring we don’t generate or prompt toxic or biased content. While modern AI models have built-in safeguards, we as users also have a responsibility to use these tools ethically and contribute to a more responsible AI ecosystem.
Yitzi: This is our signature question. You’ve been blessed with a lot of success. Looking back to when you first started, can you share five things you wish you knew when you began working in AI? Five things you’ve learned now that you wish you’d known at the start?
Ishneet: That’s a great question. Let me think about it: five things I wish I could have told my younger self.
- First, I’d say to give yourself some grace. You’re not going to know everything about AI and technology right away. Learning and processing all that information takes time. One of my biggest challenges was dealing with imposter syndrome; I was constantly comparing myself to others and their career progress. So, be patient with yourself. Take the time to learn, stay curious about developments in the field, and keep up as much as you can alongside your daily responsibilities.
- Second, being hands-on has made a huge difference for me. Reading is helpful, but actually working with the technology (building proofs of concept, experimenting, and trying new things) is where real learning happens. If I could go back, I’d do more pet projects at home, just for my own growth. It doesn’t have to be perfect or production-ready; the experience itself is invaluable.
- Third, I wish I had pursued more formal education in AI. Looking back, I would have liked to spend more time on advanced courses, perhaps even a master’s in computer science with a focus on AI and machine learning.
- Fourth, I’d attend more meetups and community events. These gatherings are fantastic for learning and networking. You get exposed to new ideas and meet people at similar stages in their journey. Peer programming and conversations with others can accelerate your learning and boost your confidence. It’s reassuring to realize you’re not alone in your struggles.
- Finally, I wish I had made better use of online resources: YouTube talks, courses on platforms like Udemy, Coursera, Data Nuggets, and Cloud Guru. Many of these are free or low-cost and offer incredible value. There are excellent instructors and speakers out there, and I wish I had taken more advantage of those opportunities.
Overall, if I could sum it up, I would have diversified my learning approach: less isolation, more hands-on projects, more community interaction, and more listening.
Yitzi: What are your thoughts about model collapse? I’ve read that the training data is running out. What are your thoughts on that, and on possible solutions to address it?
Ishneet: I’ve observed model collapse in certain types of models, particularly those designed for code generation and code completion. In the case of code generation, the challenge arises because there simply isn’t enough diverse code available to train these models at scale. This scarcity can lead to models being trained on synthetic or AI-generated code, which compounds the risk of model collapse as the diversity and originality of outputs diminish over time.
For code generation, the limited variety of training data is a real challenge. Some startups are addressing this by generating synthetic code from sources like GitHub and Stack Overflow, then retraining their models using this synthetic data and reinforcement learning techniques. While this can help supplement the limited original code, it also introduces the risk of reinforcing patterns and reducing output diversity if not managed carefully.
From what I’ve seen, model collapse is a more prominent issue in code generation than in language or image models, where the abundance of training data helps mitigate the risk. Still, it’s an important challenge to monitor, especially as synthetic data becomes more prevalent in AI training pipelines.
Yitzi: So, here’s our final question. Ishneet, because of your great work and the platform you’ve built, you’re a person of enormous influence. If you could put out an idea or inspire a movement that would bring the most good to the most people, what would that be?
Ishneet: I believe the most important priority is responsible AI. Moving forward, we need to be thoughtful in how we build and use AI and machine learning. As I mentioned, it’s crucial to ensure we’re not promoting toxicity or bias in the environment. We should develop models with robust guardrails to prevent hallucinations. When these models first emerged, hallucinations, where the AI generated verifiably false content, were common and led to significant issues, such as the spread of fake news, which became especially problematic during elections.
Given the current global climate, with unrest in many regions and the risks associated with misinformation, it’s more important than ever to prevent the spread of false or misleading content. AI must ensure fair representation of all demographic groups. For example, in consumer lending, a model should not disproportionately reject loans for one demographic over another; such biases must be eliminated.
Strong privacy and security measures are also essential, as many people input personal information into these models. We need to safeguard that data through regular impact assessments, transparency, and fail-safe mechanisms to prevent misuse. From the perspective of human oversight and ethical use, this is an area where I want to contribute. There’s still a lack of comprehensive laws and regulations around AI, and I’m eager to be part of the early efforts to shape responsible AI practices as my career progresses.
Yitzi: Ishneet, thank you so much for these amazing insights. How can our readers continue to follow your work?
Ishneet: The best way to connect with me is through LinkedIn, as that’s where I share most of my work and updates. I’ve already written two books and plan to continue authoring more content on these topics, so LinkedIn is the ideal place to stay informed about new projects or upcoming speaking engagements.
I also speak regularly at local conferences and always enjoy meeting people in person. If you attend, I’m happy to chat, exchange ideas, and discuss topics of mutual interest. Overall, LinkedIn is the primary platform I use; I don’t maintain other social media profiles, so that’s the best way to stay in touch.
Yitzi: Ishneet, thank you so much for your gracious time and for this amazing conversation. I wish you continued success, blessings, and good health. I hope we can do this again next year.
Ishneet: Thank you so much, Yitzi, and thank you for listening to me so patiently.
AWS’s Ishneet Dua on Responsible AI, Sustainable Cloud Infrastructure and the Ethics Behind… was originally published in Authority Magazine on Medium.