The podcast about Python and the people who make it great

Accelerate The Development And Delivery Of Your Machine Learning Applications Using Ray And Deploy It At Anyscale

March 06, 2022 00:45:58 35.26 MB

Summary

Building a machine learning application is inherently complex. Once it becomes necessary to scale the operation or training of the model, or to introduce online re-training, the process becomes even more challenging. To reduce the operational burden on AI developers, Robert Nishihara helped create the Ray framework, which handles the distributed computing aspects of machine learning operations. To support the ongoing development of Ray and simplify its adoption, he co-founded Anyscale. In this episode he rejoins the show to share how the project, its community, and the ecosystem around it have grown and evolved over the intervening two years. He also explains how the techniques and adoption of machine learning have influenced the direction of the project.
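
As a rough illustration of the framework discussed in this episode (not an excerpt from the conversation), Ray's core primitive turns ordinary Python functions into distributed tasks. A minimal sketch looks like this:

```python
# Minimal Ray example: run Python functions as parallel tasks.
import ray

ray.init()  # start (or connect to) a local Ray runtime

@ray.remote
def square(x):
    return x * x

# .remote() schedules each call as a task; ray.get() collects the results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]
```

The same decorator-based API scales from a laptop to a cluster, which is the property the episode explores in the context of training and serving machine learning models.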

Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python’s role in data and science.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Your host as usual is Tobias Macey and today I’m interviewing Robert Nishihara about his work at Anyscale and the Ray distributed execution framework

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you describe what Anyscale is and the story behind it?
  • How have the Ray project and its ecosystem evolved since we last spoke (two years ago)?
    • How has the landscape of AI/ML technologies and techniques shifted in that time?
  • What are the main areas where organizations are trying to apply ML/AI?
  • What are some of the issues that teams encounter when trying to move from prototype to production with ML/AI applications?
    • What are the features of Ray that help to mitigate those challenges?
  • With the introduction of more widely available streaming/real-time technologies, the viability of reinforcement learning has increased. What new challenges does that approach introduce?
  • What are some of the operational complexities associated with managing a deployment of Ray?
    • What are some of the specialized utilities that you have had to develop to maintain a large and multi-tenant platform for your customers?
  • What is the governance model around the Ray project and how does the work at Anyscale influence the roadmap?
  • What are the most interesting, innovative, or unexpected ways that you have seen Anyscale/Ray used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Ray and Anyscale?
  • When is Anyscale/Ray the wrong choice?
  • What do you have planned for the future of Anyscale/Ray?

Keep In Touch

Picks

Closing Announcements

  • Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast, for the latest on modern data management.
  • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
  • If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
  • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA