Arrested DevOps is the podcast that helps you achieve understanding, develop good practices, and operate your team and organization for maximum DevOps awesomeness.

AI, Ethics, and Empathy With Kat Morgan

June 03, 2025 · 40:13

We’ve all been there: burning out on volatile tech jobs, tangled in impossible systems, and wondering what our work actually means. On this episode of Arrested DevOps, Matty Stratton sits down with Kat Morgan for a heartfelt, funny, and sharply observant conversation about AI: where it helps, where it hurts, and how we navigate all of that as humans in tech.

They dive deep into how large language models (LLMs) both assist and frustrate us, the ethics of working with machines trained on the labor of others, and why staying kind—to the robots and to ourselves—might be one of the most important practices we have.

“We actually have to respect our own presence enough to appreciate that what we put out in the world will also change ourselves.” – Kat Morgan

Topics

  • Why strong opinions about AI often miss the nuance
  • Using LLMs to support neurodivergent workflows (executive function as a service!)
  • Treating agents like colleagues and the surprising benefits of that mindset
  • Code hygiene, documentation, and collaborating with AI in GitHub issues
  • Building private, local dev environments to reduce risk and improve trust
  • Ethical tensions: intellectual property, environmental impact, and the AI value chain
  • Why we should be polite to our agents—and what that says about how we treat people

Key Takeaways

  • AI isn’t magic, but it can be a helpful colleague. Kat shares how she uses LLMs to stay on task, avoid executive dysfunction, and manage complex projects with greater ease.
  • Good context design matters. When working with AI, things like encapsulated code, clean interfaces, and checklists aren’t just best practices. They’re vital for productive collaboration.
  • Skepticism is healthy. Kat reminds us that while AI can be useful, it also messes up. A lot. And without guardrails and critical thinking, it can become more of a liability than a partner.
  • Build humane systems. From privacy risks to climate concerns, this episode underscores that responsible AI use requires ethical intent, which starts with practitioners.