Making artificial intelligence practical, productive, and accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, etc.). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
Similar Podcasts
Ship It! DevOps, Infra, Cloud Native
A show about getting your best ideas into the world and seeing what happens. We talk about code, ops, infrastructure, and the people that make it happen. Gerhard Lazu and friends explore all things DevOps, infra, and running apps in production. Whether you’re cloud native, Kubernetes curious, a pro SRE, or just operating a VPS… you’ll love coming along for the ride. Ship It honors the makers, the shippers, and the visionaries that see it through. Some people search for ShipIt or ShipItFM and can’t find the show, so now the strings ShipIt and ShipItFM are in our description too.
Changelog Master Feed
Your one-stop shop for all Changelog podcasts. Weekly shows about software development, developer culture, open source, building startups, artificial intelligence, shipping code to production, and the people involved. Yes, we focus on the people. Everything else is an implementation detail.
The Changelog: Software Development, Open Source
Conversations with the hackers, leaders, and innovators of the software world. Hosts Adam Stacoviak and Jerod Santo face their imposter syndrome so you don’t have to. Expect in-depth interviews with the best and brightest in software engineering, open source, and leadership. This is a polyglot podcast. All programming languages, platforms, and communities are welcome. Open source moves fast. Keep up.
Large models on CPUs
Model sizes are crazy these days with billions and billions of parameters. As Mark Kurtz explains in this episode, this makes inference slow and expensive, despite the fact that 90% or more of the parameters often don’t influence the outputs at all. Mark helps us understand all of the practicalities and progress that is being made in model optimization and CPU inference, including the increasing opportunities to run LLMs and other generative AI models on commodity hardware.
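For a concrete flavor of the sparsity Mark describes, here is a minimal sketch of unstructured magnitude pruning using PyTorch’s built-in pruning utilities. The layer size and 90% sparsity level are illustrative, not figures from the episode:

```python
import torch
from torch import nn
from torch.nn.utils import prune

# A stand-in for one weight matrix inside a large model.
layer = nn.Linear(4096, 4096)

# Zero out the 90% of weights with the smallest absolute value (L1 magnitude).
prune.l1_unstructured(layer, name="weight", amount=0.9)

# Bake the pruning mask into the weights and verify the resulting sparsity.
prune.remove(layer, "weight")
sparsity = (layer.weight == 0).float().mean().item()
print(f"Weight sparsity: {sparsity:.1%}")  # ~90.0%
```

Sparsity-aware runtimes can skip the zeroed weights entirely, which is what makes fast CPU inference plausible.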
Causal inference
With all the LLM hype, it’s worth remembering that enterprise stakeholders want answers to “why” questions. Enter causal inference. Paul Hünermund has been researching and writing on this topic for some time and joins us to introduce it. He also shares some relevant trends and tips for getting started with methods including double machine learning, experimentation, difference-in-differences, and more.
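As a toy illustration of one method mentioned above, here is a difference-in-differences estimate in plain Python. The group means are invented for the example; the point is the arithmetic:

```python
# Mean outcomes for a treated and a control group, before and after an
# intervention (made-up numbers).
treated_before, treated_after = 10.0, 16.0
control_before, control_after = 9.0, 11.0

# DiD subtracts the control group's trend from the treated group's change,
# isolating the treatment effect under the parallel-trends assumption.
effect = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated treatment effect: {effect}")  # 4.0
```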
Capabilities of LLMs 🤯
Large Language Model (LLM) capabilities have reached new heights and are nothing short of mind-blowing! However, with so many advancements happening at once, it can be overwhelming to keep up with all the latest developments. To help us navigate this complex terrain, we’ve invited Raj, one of the most adept at explaining state-of-the-art (SOTA) AI in practical terms, to join us on the podcast. Raj discusses several intriguing topics such as in-context learning, reasoning, LLM options, and related tooling. But that’s not all! We also hear from Raj about the rapidly growing data science and AI community on TikTok.
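To make “in-context learning” concrete, here is a sketch of a few-shot prompt: instead of fine-tuning, you show the model labeled examples directly in the prompt and let it infer the pattern. The task and examples are invented for illustration:

```python
# A few-shot classification prompt; an instruction-following LLM completes
# the last line by generalizing from the two in-prompt examples.
prompt = """Classify the sentiment of each review as positive or negative.

Review: "The battery lasts all day." Sentiment: positive
Review: "It broke after a week." Sentiment: negative
Review: "Setup was painless and fast." Sentiment:"""

print(prompt)
```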
Computer scientists as rogue art historians
What can art historians and computer scientists learn from one another? Actually, a lot! Amanda Wasielewski joins us to talk about how she discovered that computer scientists working on computer vision were actually acting like rogue art historians, and how art historians have found machine learning to be a valuable tool for research, fraud detection, and cataloguing. We also discuss the rise of generative AI and how this technology might cause us to ask new questions like: “What makes a photograph a photograph?”
Accelerated data science with a Kaggle grandmaster
Daniel and Chris explore the intersection of Kaggle and real-world data science in this illuminating conversation with Christof Henkel, Senior Deep Learning Data Scientist at NVIDIA and Kaggle Grandmaster. Christof offers a very lucid explanation of how participation in Kaggle can positively impact a data scientist’s skills and career aspirations. He also shares his insights and approach to maximizing AI productivity using GPU-accelerated tools like RAPIDS and DALI.
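For a taste of the GPU acceleration Christof mentions, RAPIDS cuDF exposes a pandas-like API that runs on the GPU. This sketch assumes a machine with a supported NVIDIA GPU and RAPIDS installed; the file and column names are hypothetical:

```python
import cudf  # pandas-like DataFrame library from RAPIDS

df = cudf.read_csv("train.csv")               # loaded straight into GPU memory
agg = df.groupby("category")["value"].mean()  # aggregation computed on the GPU
print(agg.head())
```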
Explainable AI that is accessible for all humans
We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks come with this approach to AI integration, and can explainability and accountability be achieved in chat-based assistants? Beth Rudden of Bast.ai has been thinking about this topic for some time and has developed an ontological approach to creating conversational AI. We hear more about that approach and related work in this episode.
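For context on just how thin that UI layer can be, many such apps reduce to a single SDK call like the sketch below (using the OpenAI Python SDK as of this episode’s era; the model name and prompt are illustrative):

```python
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize our Q3 sales report."}],
)
print(response.choices[0].message.content)
```

Everything else is UI around that one call, which is why explainability and accountability are hard to retrofit after the fact.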
AI search at You.com
Neural search and chat-based search are all the rage right now. However, You.com has been innovating in these topics long before ChatGPT. In this episode, Bryan McCann from You.com shares insights related to our mental model of Large Language Model (LLM) interactions and practical tips related to integrating LLMs into production systems.
End-to-end cloud compute for AI/ML
We’ve all experienced pain moving from local development, to testing, and then on to production. This cycle can be long and tedious, especially as AI models and datasets are integrated. Modal is trying to make this loop of development as seamless as possible for AI practitioners, and their platform is pretty incredible! Erik from Modal joins us in this episode to help us understand how we can run or deploy machine learning models, massively parallel compute jobs, task queues, web apps, and much more, without our own infrastructure.
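As a rough sketch of the workflow (based on our reading of Modal’s Python SDK at the time; names may have changed, so check modal.com/docs), a function defined locally can be dispatched to Modal’s remote infrastructure:

```python
import modal

stub = modal.Stub("example-app")

@stub.function()
def square(x: int) -> int:
    # Executes on Modal's cloud infrastructure, not your machine.
    return x * x

@stub.local_entrypoint()
def main():
    # .remote() ships the call to Modal; run with `modal run this_file.py`.
    print(square.remote(42))
```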
Success (and failure) in prompting
With the recent proliferation of generative AI models (from OpenAI, co:here, Anthropic, etc.), practitioners are racing to come up with best practices around prompting, grounding, and control of outputs. Chris and Daniel take a deep dive into the kinds of behavior we are seeing with this latest wave of models (both good and bad) and what leads to that behavior. They also dig into some prompting and integration tips.
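One of the practices discussed is “grounding”: constraining the model to answer only from supplied context rather than from its parametric memory. A minimal sketch, with an invented context and question:

```python
context = (
    "Acme's return policy allows returns within 30 days of purchase "
    "with a receipt. Opened software is not returnable."
)
question = "Can I return opened software after two weeks?"

# Instruct the model to stay inside the provided context.
prompt = f"""Answer using ONLY the context below. If the answer is not in the
context, say "I don't know."

Context: {context}

Question: {question}
Answer:"""
print(prompt)
```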
Applied NLP solutions & AI education
We’re super excited to welcome Jay Alammar to the show. Jay is a well-known AI educator, applied NLP practitioner at co:here, and author of the popular blog post “The Illustrated Transformer.” In this episode, he shares his ideas on creating applied NLP solutions, working with large language models, and creating educational resources for state-of-the-art AI.
Serverless GPUs
We’ve been hearing about “serverless” CPUs for some time, but it’s taken a while to get to serverless GPUs. In this episode, Erik from Banana explains why it’s taken so long, and he helps us understand how these new workflows are unlocking state-of-the-art AI for application developers. Forget about servers, but don’t forget to listen to this one!
MLOps is alive and well
Worlds are colliding! This week we join forces with the hosts of the MLOps.Community podcast to discuss all things machine learning operations. We talk about how the recent explosion of foundation models and generative models is influencing the world of MLOps, and we discuss related tooling, workflows, perceptions, etc.
3D assets & simulation at NVIDIA
What’s the current reality and practical implications of using 3D environments for simulation and synthetic data creation? In this episode, we cut right through the hype of the Metaverse, Multiverse, Omniverse, and all the “verses” to understand how 3D assets and tooling are actually helping AI developers develop industrial robots, autonomous vehicles, and more. Beau Perschall is at the center of these innovations in his work with NVIDIA, and there is no one better to help us explore the topic!
GPU dev environments that just work
Creating and sharing reproducible development environments for AI experiments and production systems is a huge pain. You have all sorts of weird dependencies, and then you have to deal with GPUs and NVIDIA drivers on top of all that! brev.dev is attempting to mitigate this pain and create delightful GPU dev environments. Now that sounds practical!
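If you have ever hit the dependency pain described above, this tiny sanity check shows where GPU environments usually break: a mismatch between the framework build and the installed driver/CUDA stack (assuming PyTorch as the framework):

```python
import torch

print(torch.__version__)          # which framework build you actually have
print(torch.version.cuda)         # the CUDA version that build targets
print(torch.cuda.is_available())  # False often means a driver/CUDA mismatch
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```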
Machine learning at small organizations
Why is ML so poorly adopted in small organizations (hint: it’s not because they don’t have enough data)? In this episode, Kirsten Lum from Storytellers shares the patterns she has seen in small orgs that lead to a successful ML practice. We discuss how the job of an ML Engineer/Data Scientist differs in that environment and how end-to-end project management is key to adoption.