Making artificial intelligence practical, productive, and accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, etc). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
Similar Podcasts
Ship It! DevOps, Infra, Cloud Native
A show about getting your best ideas into the world and seeing what happens. We talk about code, ops, infrastructure, and the people that make it happen. Gerhard Lazu and friends explore all things DevOps, infra, and running apps in production. Whether you’re cloud native, Kubernetes curious, a pro SRE, or just operating a VPS… you’ll love coming along for the ride. Ship It honors the makers, the shippers, and the visionaries that see it through.
Changelog Master Feed
Your one-stop shop for all Changelog podcasts. Weekly shows about software development, developer culture, open source, building startups, artificial intelligence, shipping code to production, and the people involved. Yes, we focus on the people. Everything else is an implementation detail.
The Changelog: Software Development, Open Source
Conversations with the hackers, leaders, and innovators of the software world. Hosts Adam Stacoviak and Jerod Santo face their imposter syndrome so you don’t have to. Expect in-depth interviews with the best and brightest in software engineering, open source, and leadership. This is a polyglot podcast. All programming languages, platforms, and communities are welcome. Open source moves fast. Keep up.
scikit-learn & data science you own
We are at GenAI saturation, so let's talk about scikit-learn, a long-time favorite for data scientists building classifiers, time series analyzers, dimensionality reducers, and more! Scikit-learn is deployed across industry and driving a significant portion of the "AI" that is actually in production. :probabl is a new kind of company that is stewarding this project along with a variety of other open source projects. Yann Lechelle and Guillaume Lemaitre share some of the vision behind the company and talk about the future of scikit-learn!
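To give a flavor of the kind of "data science that you own" workflow discussed in the episode, here is a minimal classifier sketch using scikit-learn's standard API (assuming scikit-learn is installed; the dataset and model choice are illustrative, not from the episode):

```python
# Train and evaluate a simple classifier with scikit-learn's
# fit/predict/score API on a built-in toy dataset.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small labeled dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every scikit-learn estimator follows the same fit/score pattern
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = clf.score(X_test, y_test)  # mean accuracy on held-out data
```

The same estimator interface applies across the classifiers, time series tools, and dimensionality reducers mentioned above, which is much of why the library remains a production staple.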
Creating tested, reliable AI applications
It can be frustrating to get an AI application working amazingly well 80% of the time and failing miserably the other 20%. How can you close the gap and create something you can rely on? Chris and Daniel talk through this process, behavior testing, and the flow from prototype to production in this episode. They also talk a bit about the apparent slowdown in the release of frontier models.
AI is changing the cybersecurity threat landscape
This week, Chris is joined by Gregory Richardson, Vice President and Global Advisory CISO at BlackBerry, and Ismael Valenzuela, Vice President of Threat Research & Intelligence at BlackBerry. They address how AI is changing the threat landscape, why human defenders remain a key part of our cyber defenses, and explain the AI standoff between cyber threat actors and cyber defenders.
The path towards trustworthy AI
Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST's 'AI Risk Management Framework' (AI RMF) within the context of the White House's 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence'.
Big data is dead, analytics is alive
We are on the other side of "big data" hype, but what is the future of analytics and how does AI fit in? Till and Adithya from MotherDuck join us to discuss why DuckDB is taking the analytics and AI world by storm. We dive into what makes DuckDB, a free, in-process SQL OLAP database management system, unique, including its ability to execute lightning-fast analytics queries against a variety of data sources, even on your laptop! Along the way we dig into the intersections with AI, such as text-to-SQL, vector search, and AI-driven SQL query correction.
Practical workflow orchestration
Workflow orchestration has always been a pain for data scientists, but this is exacerbated in these AI hype days by agentic workflows executing arbitrary (not pre-defined) workflows with a variety of failure modes. Adam from Prefect joins us to talk through their open source Python library for orchestration and visibility into Python-based pipelines. Along the way, he introduces us to things like Marvin, their AI engineering framework, and ControlFlow, their agent workflow system.
Towards high-quality (maybe synthetic) datasets
As Argilla puts it: "Data quality is what makes or breaks AI." However, what exactly does this mean, and how can AI teams properly collaborate with domain experts towards improved data quality? David Berenstein & Ben Burtenshaw, who are building Argilla & Distilabel at Hugging Face, join us to dig into these topics along with synthetic data generation & AI-generated labeling / feedback.
Understanding what's possible, doable & scalable
We are constantly hearing about disillusionment as it relates to AI. Some of that is probably valid, but Mike Lewis, an AI architect from Cincinnati, has proven that he can consistently get LLM and GenAI apps to the point of real enterprise value (even with the Big Cos of the world). In this episode, Mike joins us to share some stories from the AI trenches & highlight what it takes (practically) to show what is possible, doable & scalable with AI.
GraphRAG (beyond the hype)
Seems like we are hearing a lot about GraphRAG these days, but there are lots of questions: what is it, is it hype, what is practical? One of our all time favorite podcast friends, Prashanth Rao, joins us to dig into this topic beyond the hype. Prashanth gives us a bit of background and practical use cases for GraphRAG and graph data.
Pausing to think about scikit-learn & OpenAI o1
Recently the company stewarding the open source library scikit-learn announced their seed funding. Also, OpenAI released "o1" with new behavior in which it pauses to "think" about complex tasks. Chris and Daniel take some time to do their own thinking about o1 and the contrast to the scikit-learn ecosystem, which has the goal to promote "data science that you own."
Cybersecurity in the GenAI age
Dinis Cruz drops by to chat about cybersecurity for generative AI and large language models. In addition to discussing The Cyber Boardroom, Dinis also delves into cybersecurity efforts at OWASP and that organization's Top 10 for LLMs and Generative AI Apps.
AI is more than GenAI
GenAI is often what people think of when someone mentions AI. However, AI is much more. In this episode, Daniel breaks down a history of developments in data science, machine learning, AI, and GenAI to give listeners a better mental model. Don't miss this one if you want to understand the AI ecosystem holistically and how models, embeddings, data, prompts, etc. all fit together.
Metrics Driven Development
How do you systematically measure, optimize, and improve the performance of LLM applications (like those powered by RAG or tool use)? Ragas is an open source effort that has been trying to answer this question comprehensively, and they are promoting a "Metrics Driven Development" approach. Shahul from Ragas joins us to discuss Ragas in this episode, and we dig into specific metrics, the difference between benchmarking models and evaluating LLM apps, generating synthetic test data and more.
Threat modeling LLM apps
If you have questions at the intersection of Cybersecurity and AI, you need to know Donato at WithSecure! Donato has been threat modeling AI applications and seriously applying those models in his day-to-day work. He joins us in this episode to discuss his LLM application security canvas, prompt injections, alignment, and more.
Only as good as the data
You might have heard that "AI is only as good as the data." What does that mean, and what data are we talking about? Chris and Daniel dig into that topic in this episode, exploring the categories of data that you might encounter working in AI (for training, testing, fine-tuning, benchmarks, etc.). They also discuss the latest developments in AI regulation with the EU's AI Act coming into force.