17 February, 2016, USA: Google has launched TensorFlow Serving, an open-source project that helps developers take machine learning models into production. Unsurprisingly, TensorFlow Serving is optimized for Google's own TensorFlow machine learning library, but the company says it can also be extended to support other types of models and data.
On one hand, TensorFlow makes it easier to build machine learning algorithms and train them on certain types of input. On the other hand, TensorFlow Serving makes those trained models usable in a production environment. Developers can now train their models with TensorFlow and then use TensorFlow Serving's APIs to respond to input from clients.
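The train-then-serve split described above can be sketched in plain Python. Note that the toy linear model and the `train`/`serve` helpers below are illustrative assumptions for this article, not TensorFlow Serving's actual interface, which is C++- and gRPC-based:

```python
# A minimal, framework-agnostic sketch of the pattern TensorFlow Serving
# formalizes: train a model once, then keep it loaded in a long-running
# server that answers client requests. The tiny "model" (a slope fit by
# least squares) is a stand-in, not TensorFlow Serving's API.

def train(xs, ys):
    """Training step: fit y ~= w * x by ordinary least squares."""
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return {"weight": w}

def serve(model):
    """Serving step: return a closure that answers prediction requests,
    mimicking a deployed model behind a stable API."""
    def predict(x):
        return model["weight"] * x
    return predict

# Train once...
model = train(xs=[1.0, 2.0, 3.0], ys=[2.0, 4.0, 6.0])
# ...then answer many client requests against the same loaded model.
predict = serve(model)
print(predict(10.0))  # -> 20.0
```

The key design point this illustrates is the one Fiedel's team emphasizes: because serving sits behind a stable interface, a newly trained model can replace the old one without changing the server architecture or the client-facing API.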
Noah Fiedel, a software engineer at Google, wrote in a blog post: “TensorFlow Serving makes the process of taking a model into production easier and faster. It allows you to safely deploy new models and run experiments while keeping the same server architecture and APIs.”
Written primarily in C++, the technology should make it easier for developers to get off the ground when serving machine learning models built with open-source tools such as TensorFlow. And while TensorFlow Serving is designed to be flexible, its native support for TensorFlow could help boost adoption of Google's framework.