Apache Kafka, Tiered Storage and TensorFlow for Streaming Machine Learning Without a Data Lake

April 21, 2:34 pm - 2:44 pm (10 Minutes)


Machine Learning (ML) is split into model training and model inference. ML frameworks typically use a data lake such as HDFS or S3 to store historical data and train analytic models. But it is possible to avoid such a data store completely by using a modern streaming architecture.

This talk compares a modern streaming architecture to traditional batch and big data alternatives and explains its benefits: a simplified architecture, the ability to reprocess events in the same order when training different models, and the possibility of building a scalable, mission-critical ML architecture for real-time predictions with far fewer headaches and problems.

The talk explains how this can be achieved by leveraging Apache Kafka, Tiered Storage, and TensorFlow.
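As a rough illustration of the idea (not taken from the talk material), the sketch below reads a Kafka topic from the beginning and trains a TensorFlow model directly on those events, with no intermediate data lake. The topic name, broker address, and message schema ("features" plus "label" in JSON) are assumptions made purely for the example.

```python
# Minimal sketch: train a TensorFlow model straight from a Kafka topic.
# Assumes a hypothetical "sensor-events" topic with JSON messages of the
# form {"features": [...], "label": 0 or 1}.
import json
import numpy as np
import tensorflow as tf
from kafka import KafkaConsumer  # pip install kafka-python

# Read the topic from the earliest offset so every training run replays the
# events in the same order; Tiered Storage keeps the full history affordable.
consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    enable_auto_commit=False,
    consumer_timeout_ms=5000,  # stop once the backlog is drained
)

features, labels = [], []
for record in consumer:
    event = json.loads(record.value)
    features.append(event["features"])
    labels.append(event["label"])

# Turn the replayed events into a tf.data pipeline and train a small model.
dataset = tf.data.Dataset.from_tensor_slices(
    (np.array(features, dtype=np.float32), np.array(labels, dtype=np.float32))
).shuffle(10_000).batch(32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=3)
```

Because the topic is consumed from offset zero with auto-commit disabled, the same ordered event stream can be replayed to train and compare different models, which is the reprocessing benefit described above.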

Kai Waehner

Field CTO

Confluent

Kai Waehner is Field CTO and Global Technology Advisor at Confluent. He works with customers across the globe and with internal teams like engineering and marketing. Kai’s main area of expertise lies within the fields of Big Data Analytics, Machine Learning, Hybrid Cloud Architectures, Event Stream Processing and Internet of Things. He is a regular speaker at international conferences such as Devoxx, ApacheCon and Kafka Summit, writes articles for professional journals, and shares his experiences with new technologies on his blog: www.kai-waehner.de.