Search Relevance Architect
Search Relevance Architect at Salesforce, working on search, applied ML, and distributed systems. Previously Chief Data Engineer at Lucidworks; before that, built Twitter's user/account search and personalization systems and teams; earlier still, worked on search infrastructure and recommender systems at LinkedIn. Apache Mahout committer and PMC member (and former PMC Chair).
Jake Mannix is speaking at the following sessions
ml4ir - An Open Source Deep Learning Library for Search Relevance
ml4ir is an open source library for unified training and serving of deep learning models for search relevance. Built on top of TensorFlow 2.0+, ml4ir is designed for scale, using TFRecord data pipelines. ml4ir is structured as a network of loosely coupled deep learning subcomponents: users can define custom sub-models and combine them through a simple pluggable interface to build complex models for a wide variety of applications. Alternatively, ml4ir can be used with little to no code, interacting only via configuration files. This architecture makes ml4ir a natural home for search data science collaboration, with users able to share neural network layers with one another. ml4ir models are in production at Salesforce today: they are compatible with TensorFlow Serving and also come packaged with the code needed to deploy to JVM-based environments.
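The pluggable, configuration-driven design described above can be illustrated with a short sketch. This is not the actual ml4ir API; it is a minimal, hypothetical example of the general pattern: sub-components are registered by name, and a configuration object selects and chains them, so new layers can be shared and swapped without code changes.

```python
# Illustrative sketch of a config-driven, pluggable pipeline.
# All names here (COMPONENTS, build_pipeline, the "scale"/"bias"
# components) are hypothetical, NOT the ml4ir API.

def scale(cfg):
    """Component that multiplies each feature by a configured factor."""
    factor = cfg["factor"]
    return lambda features: [x * factor for x in features]

def bias(cfg):
    """Component that adds a configured offset to each feature."""
    offset = cfg["offset"]
    return lambda features: [x + offset for x in features]

# Registry mapping component names to constructors; sharing a new
# layer amounts to adding one entry here.
COMPONENTS = {"scale": scale, "bias": bias}

def build_pipeline(config):
    """Assemble a model by chaining the sub-components a config lists."""
    steps = [COMPONENTS[c["type"]](c) for c in config["layers"]]
    def model(features):
        for step in steps:
            features = step(features)
        return features
    return model

model = build_pipeline({"layers": [
    {"type": "scale", "factor": 2.0},
    {"type": "bias", "offset": 1.0},
]})
print(model([1.0, 2.0]))  # -> [3.0, 5.0]
```

In ml4ir itself the building blocks are TensorFlow/Keras layers rather than plain functions, but the shape of the idea is the same: the configuration file, not custom code, decides which components run and in what order.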
Data Scientists and ML Engineers in industry who want to learn about a component-based deep learning abstraction for building enterprise-ready models for search. The session will also interest the Learning to Rank community and anyone interested in building deep learning models, even with limited Python experience.
The audience will learn about a new open source deep learning library for search. They will learn how to configure and customize ml4ir to train and serve a Learning to Rank model, enabling them to onboard their own applications and training data to ml4ir and build production-ready models.
Table Stakes ML for Smart Search
Maximizing business value in a digital environment where search queries are the primary view onto your users' needs dictates that Search is a Machine Learning problem. But how much "ML" is really needed these days, and what kind of infrastructure is required to support it?
In this talk, Jake discusses a short list of what is required (in terms of product features, architectural components, and engineering techniques) for our search engines to get a "seat at the table" of our users' highly divided attention.
For some, these will be reminders that yes: you need a Feature Store, personalization, session history, and the like. For those new to ML-in-Search, it'll be a short list of places to start (though it might be 3-5 years before you've added it all!)
Engineers, Data Scientists, and Product Managers on Search teams, as well as Executives and Managers making search engine technology purchasing decisions