Making artificial intelligence practical, productive, and accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, etc). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
Similar Podcasts
Ship It! DevOps, Infra, Cloud Native
A show about getting your best ideas into the world and seeing what happens. We talk about code, ops, infrastructure, and the people that make it happen. Gerhard Lazu and friends explore all things DevOps, infra, and running apps in production. Whether you’re cloud native, Kubernetes curious, a pro SRE, or just operating a VPS… you’ll love coming along for the ride. Ship It honors the makers, the shippers, and the visionaries that see it through. Some people search for ShipIt or ShipItFM and can’t find the show, so now the strings ShipIt and ShipItFM are in our description too.
Founders Talk: Startups, CEOs, Leadership
In-depth, one-on-one conversations with founders, CEOs, and makers. The journey, lessons learned, and the struggles. Let’s do this! Host Adam Stacoviak dives deep into the trials, tribulations, successes, and failures of industry leading entrepreneurs, leaders, innovators, and visionaries.
JS Party: JavaScript, CSS, Web Development
Your weekly celebration of JavaScript and the web. This show records LIVE on Thursdays at 1pm US/Eastern time. Panelists include Jerod Santo, Feross Aboukhadijeh, Kevin Ball, Amelia Wattenberger, Nick Nisi, Divya Sasidharan, Mikeal Rogers, Chris Hiller, and Amal Hussein. Topics discussed include the web platform (Chrome, Safari, Edge, Firefox, Brave, etc), front-end frameworks (React, Ember, Angular, Vue, etc), Node.js, web animation, SVG, robotics, IoT, and much more. If JavaScript and/or the web touch your life, this show’s for you. Some people search for JSParty and can’t find the show, so now the string JSParty is in our description too.
Should kids still learn to code?
In this fully connected episode, Daniel & Chris discuss NVIDIA GTC keynote comments from CEO Jensen Huang about teaching kids to code. Then they dive into the notion of “community” in the AI world, before discussing challenges in the adoption of generative AI by non-technical people. They finish by addressing the evolving balance between generative AI interfaces and search engines.
AI vs software devs
Daniel and Chris are out this week, so we’re bringing you conversations all about AI’s complicated relationship with software developers from other Changelog pods: JS Party, Go Time & The Changelog.
Prompting the future
Daniel & Chris explore the state of the art in prompt engineering with Jared Zoneraich, the founder of PromptLayer. PromptLayer is the first platform built specifically for prompt engineering. It can visually manage prompts, evaluate models, log LLM requests, search usage history, and help your organization collaborate as a team. Jared provides expert guidance in how to be implement prompt engineering, but also illustrates how we got here, and where we’re likely to go next.
Generating the future of art & entertainment
Runway is an applied AI research company shaping the next era of art, entertainment & human creativity. Chris sat down with Runway co-founder and CTO Anastasis Germanidis to discuss the company’s rise and how it’s defining the future of the creative landscape with its text- and image-to-video models. We hope you find Anastasis’s founder story as inspiring as Chris did.
YOLOv9: Computer vision is alive and well
While everyone is super hyped about generative AI, computer vision researchers have been working in the background on significant advancements in deep learning architectures. YOLOv9 was just released with some noteworthy advancements relevant to parameter-efficient models. In this episode, Chris and Daniel dig into the details and also discuss advancements in parameter-efficient LLMs, such as Microsoft’s 1-bit LLMs and Qualcomm’s new AI Hub.
Representation Engineering (Activation Hacking)
Recently, we briefly mentioned the concept of “Activation Hacking” in the episode with Karan from Nous Research. In this fully connected episode, Chris and Daniel dive into the details of this model control mechanism, also called “representation engineering”. Of course, they also take time to discuss the new Sora model from OpenAI.
Leading the charge on AI in National Security
Chris & Daniel explore AI in national security with Lt. General Jack Shanahan (USAF, Ret.). The conversation reflects Jack’s unique background as the only senior U.S. military officer responsible for standing up and leading two organizations in the United States Department of Defense (DoD) dedicated to fielding artificial intelligence capabilities: Project Maven and the DoD Joint AI Center (JAIC). Together, Jack, Daniel & Chris dive into the fascinating details of Jack’s recent written testimony to the U.S. Senate’s AI Insight Forum on National Security, in which he provides the U.S. government with thoughtful guidance on how to achieve the best path forward with artificial intelligence.
Gemini vs OpenAI
Google has been releasing a ton of new GenAI functionality under the name “Gemini”, and they’ve officially rebranded Bard as Gemini. We take some time to talk through Gemini compared with offerings from OpenAI, Anthropic, Cohere, etc. We also discuss the recent FCC decision to ban the use of AI voices in robocalls and what the decision might mean for government involvement in AI in 2024.
Data synthesis for SOTA LLMs
Nous Research has been pumping out some of the best open access LLMs using SOTA data synthesis techniques. Their Hermes family of models is incredibly popular! In this episode, Karan from Nous talks about the origins of Nous as a distributed collective of LLM researchers. We also get into fine-tuning strategies and why data synthesis works so well.
Large Action Models (LAMs) & Rabbits 🐇
Recently the release of the rabbit r1 device resulted in huge interest in both the device and “Large Action Models” (or LAMs). What is an LAM? Is this something new? Did these models come out of nowhere, or are they related to other things we are already using? Chris and Daniel dig into LAMs in this episode and discuss neuro-symbolic AI, AI tool usage, multimodal models, and more.
Collaboration & evaluation for LLM apps
Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the uncertainty around properly evaluating LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.
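To give a rough sense of what “evaluating an LLM application” can look like in practice, here is a minimal, hypothetical sketch of comparing two prompt variants against the same test cases. It is not Humanloop’s API; the `evaluate` helper, the `fake_llm` stand-in, and the test cases are all invented for illustration.

```python
# Hypothetical sketch: score prompt variants against shared test cases.
# This is NOT Humanloop's API -- just an illustration of the evaluation idea.
from typing import Callable


def evaluate(prompt_template: str, llm: Callable[[str], str],
             cases: list[dict]) -> float:
    """Return the fraction of cases whose output contains the expected text."""
    passed = 0
    for case in cases:
        output = llm(prompt_template.format(**case["inputs"]))
        if case["expected"].lower() in output.lower():
            passed += 1
    return passed / len(cases)


# Stand-in for a real model call (e.g. a client for a hosted LLM).
def fake_llm(prompt: str) -> str:
    return "Paris" if "capital" in prompt.lower() else "unsure"


cases = [
    {"inputs": {"question": "What is the capital of France?"},
     "expected": "Paris"},
]

for template in ("Answer briefly: {question}",
                 "You are a geography tutor. Answer: {question}"):
    print(f"{template!r} -> pass rate {evaluate(template, fake_llm, cases):.0%}")
```

The point of the sketch is only that prompt changes are treated like code changes: each variant is run against the same cases and scored, so regressions show up as numbers rather than vibes.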
Advent of GenAI Hackathon recap
Recently, Intel’s Liftoff program for startups and Prediction Guard hosted the first-ever “Advent of GenAI” hackathon. 2,000 people from all around the world participated in generative AI-related challenges over 7 days. In this episode, we discuss the hackathon, some of the creative solutions, the idea behind it, and more.
AI predictions for 2024
We scoured the internet to find all the AI-related predictions for 2024 (at least from people who might know what they are talking about), and, in this episode, we talk about some of the common themes. We also take a moment to look back at 2023, commenting with some distance on a crazy AI year.
Open source, on-disk vector search with LanceDB
Prashanth Rao mentioned LanceDB as a standout among the many vector DB options in episode #234. Now, Chang She (co-founder and CEO of LanceDB) joins us to talk through the specifics of their open source, on-disk, embedded vector search offering. We talk about how their unique columnar database structure enables serverless deployments and drastic savings (without performance hits) at scale. This one is super practical, so don’t miss it!
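For readers unfamiliar with what “embedded, on-disk” means here, the sketch below shows the usage pattern: the database runs inside your process and persists to a local directory, with no separate server. It assumes the `lancedb` Python package; exact method names and options may differ between versions, so treat it as an illustration rather than a reference.

```python
# A minimal sketch of embedded, on-disk vector search (assuming the `lancedb`
# Python package; API details may vary by version).
import lancedb

# Data lives in a local directory -- no database server to run or manage.
db = lancedb.connect("./my_vectors")

table = db.create_table(
    "docs",
    data=[
        {"vector": [0.10, 0.20, 0.30], "text": "LanceDB runs embedded in-process"},
        {"vector": [0.90, 0.10, 0.05], "text": "Columnar storage on local disk"},
    ],
)

# Nearest-neighbor search against the stored vectors.
hits = table.search([0.10, 0.20, 0.25]).limit(1).to_list()
print(hits[0]["text"])
```

Because the index and data are just files on disk, the same pattern works in a serverless function or a laptop script, which is the deployment story discussed in the episode.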
The state of open source AI
The new open source AI book from PremAI starts with “As a data scientist/ML engineer/developer with a 9 to 5 job, it’s difficult to keep track of all the innovations.” We couldn’t agree more, and we are so happy that this week’s guest Casper (among other contributors) has created this resource for practitioners. During the episode, we cover the key categories to think about as you try to navigate the open source AI ecosystem, and Casper gives his thoughts on fine-tuning, vector DBs & more.