Workshop 2: Bring Your Models to Production with Ray Serve

apply(conf) - May '22 - 60 minutes

In this workshop, we will walk through a step-by-step guide to deploying an ML application with Ray Serve. Compared to building your own model server with Flask or FastAPI, Ray Serve makes it seamless to build, scale, and serve multiple models across the nodes of a Ray cluster.
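
To set the stage, here is a minimal sketch of a single Ray Serve deployment. The `SentimentModel` class and its toy word-list scoring are illustrative placeholders, and the `@serve.deployment` / `serve.run` API shown is the Ray Serve 2.x style, which may differ slightly across versions:

```python
import ray
from ray import serve
from starlette.requests import Request

ray.init()

@serve.deployment(num_replicas=2)  # two replicas, load-balanced by Serve
class SentimentModel:
    def __init__(self):
        # In a real application you would load model weights here, once per
        # replica; a toy word list stands in for the model.
        self.positive_words = {"good", "great", "excellent"}

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        text = payload.get("text", "")
        hits = sum(word in self.positive_words for word in text.lower().split())
        return {"sentiment": "positive" if hits else "negative"}

# Deploy the application; by default it is reachable over HTTP
# at http://localhost:8000/.
serve.run(SentimentModel.bind())
```

A client can then POST JSON to the endpoint, e.g. `requests.post("http://localhost:8000/", json={"text": "great product"})`. Note that when run as a plain script the process must be kept alive; in practice the `serve run` CLI is commonly used to serve an application defined in a module.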

Ray Serve supports inference on CPUs, GPUs (even fractional GPUs!), and other accelerators, using just Python code. Beyond single-node serving, Serve enables seamless multi-model inference pipelines (also known as model composition); autoscaling on Kubernetes, whether locally or in the cloud; and clean integration of business logic with machine learning model code.
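
To illustrate those features together, the sketch below composes two deployments into a pipeline, requests a fractional GPU for one of them, and attaches an autoscaling config. The `Preprocessor`, `Classifier`, and `Pipeline` names are hypothetical; the handle-await pattern assumes a recent Ray Serve (2.7+ handle API), and the fractional GPU request assumes a GPU is available in the cluster:

```python
from ray import serve
from starlette.requests import Request

@serve.deployment(autoscaling_config={"min_replicas": 1, "max_replicas": 4})
class Preprocessor:
    def process(self, text: str) -> str:
        return text.strip().lower()

@serve.deployment(ray_actor_options={"num_gpus": 0.25})  # fractional GPU
class Classifier:
    def predict(self, text: str) -> str:
        # Placeholder for a real GPU-backed model.
        return "positive" if "good" in text else "negative"

@serve.deployment
class Pipeline:
    def __init__(self, preprocessor, classifier):
        # Handles to the upstream deployments, injected via .bind() below.
        self.preprocessor = preprocessor
        self.classifier = classifier

    async def __call__(self, request: Request) -> dict:
        text = (await request.json())["text"]
        cleaned = await self.preprocessor.process.remote(text)
        label = await self.classifier.predict.remote(cleaned)
        return {"prediction": label}

app = Pipeline.bind(Preprocessor.bind(), Classifier.bind())
serve.run(app)
```

Because each stage is its own deployment, the CPU-bound preprocessor and the GPU-bound classifier can scale independently, which is the core idea behind model composition in Serve.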

We will also share how to integrate your model serving system with feature stores and operationalize your end-to-end ML application on Ray.
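
As a rough sketch of the feature-store pattern (using Feast purely as an example online store; the repo path, feature names, and entity key below are hypothetical), a deployment can look up online features per request before running the model:

```python
from ray import serve
from starlette.requests import Request
from feast import FeatureStore  # example feature store; any online store works

@serve.deployment
class RankingModel:
    def __init__(self):
        # Each replica opens its own handle to the feature store.
        # "feature_repo" and the feature references below are placeholders.
        self.store = FeatureStore(repo_path="feature_repo")

    async def __call__(self, request: Request) -> dict:
        user_id = (await request.json())["user_id"]
        feature_vector = self.store.get_online_features(
            features=["user_stats:purchase_count", "user_stats:avg_basket_value"],
            entity_rows=[{"user_id": user_id}],
        ).to_dict()
        # Placeholder: a real model would consume feature_vector here.
        return {"user_id": user_id, "features": feature_vector}

serve.run(RankingModel.bind())
```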


Shreyas Krishnaswamy

Software Engineer

Anyscale

Shreyas Krishnaswamy is a software engineer focusing on Ray Serve and Ray infrastructure at Anyscale.

Phi Nguyen

GTM Technical Lead

Anyscale

Phi has been working with Fortune 500 customers in retail, CPG, HCLS, and financial services, as well as with startups, to accelerate their machine learning practices. His engagements range from helping teams organize and build an ML center of excellence, establish MLOps processes and automation, and develop and assess the feasibility of ML use cases, to providing cloud best practices that combine Ray with public clouds such as AWS and GCP or with open-source projects running on Kubernetes.