Deploy Models with TensorFlow Serving and Flask


TensorFlow Serving makes the process of taking a model into production easier and faster. It allows you to safely deploy new models and run experiments while keeping the same server architecture and APIs.

In this Guided Project, you will:

Serve a TensorFlow model with TensorFlow Serving and Docker.

Create a web application with Flask to work as an interface to a served model.
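The first objective above, serving a model with TensorFlow Serving and Docker, typically comes down to running the official `tensorflow/serving` image with a SavedModel directory mounted in. A minimal sketch (the `MODEL_DIR` path and the model name `my_model` are placeholder assumptions, not values from the course):

```shell
# Pull the official TensorFlow Serving image.
docker pull tensorflow/serving

# Serve the SavedModel in $MODEL_DIR over the REST API on port 8501.
# MODEL_DIR and "my_model" are assumed names for illustration.
docker run -p 8501:8501 \
  --mount type=bind,source="$MODEL_DIR",target=/models/my_model \
  -e MODEL_NAME=my_model \
  tensorflow/serving
```

Once the container is up, the model answers REST predict requests at `http://localhost:8501/v1/models/my_model:predict`.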

In this 2-hour project-based course, you will learn how to deploy TensorFlow models using TensorFlow Serving and Docker, and you will create a simple web application with Flask that serves as an interface for getting predictions from the served TensorFlow model.

This course runs on Coursera's hands-on project platform, Rhyme. On Rhyme, you work on projects directly in your browser: you get instant access to a pre-configured cloud desktop containing all of the software and data you need, with Python, Jupyter, and TensorFlow pre-installed, so you can focus on learning.

Prerequisites: to be successful in this project, you should be familiar with Python, TensorFlow, Flask, and HTML.
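Getting predictions from a served TensorFlow model means posting JSON with an `"instances"` key to the model's REST predict endpoint and reading the `"predictions"` key out of the response. A small sketch of that request/response shape (the model name `my_model` and port 8501 are assumptions for illustration):

```python
import json

def build_predict_request(instances, model_name="my_model",
                          host="localhost", port=8501):
    """Return the URL and JSON body for a TF Serving REST predict call.

    `instances` is a list of input examples, e.g. [[1.0, 2.0, 3.0]].
    """
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

def parse_predict_response(raw):
    """Extract the predictions list from a TF Serving JSON response."""
    return json.loads(raw)["predictions"]

url, body = build_predict_request([[1.0, 2.0, 3.0]])
# url -> "http://localhost:8501/v1/models/my_model:predict"
```

The body returned here can be sent with any HTTP client (e.g. `urllib.request` or `requests`) once the model server is running.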


Notes:

– You will be able to access the cloud desktop 5 times. However, you can watch the instructional videos as many times as you want.

– This course works best for learners based in North America. We’re currently working on providing the same experience in other regions.

Learn step-by-step
In a video that plays in a split-screen with your work area, your instructor will walk you through these steps:

Introduction

Getting Started with the Flask App

Index Template

TensorFlow Serving

Getting Predictions

Connecting to Model Server

Displaying the Results
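The steps above (Flask app, index template, connecting to the model server, displaying results) can be sketched as one small Flask application. This is a hypothetical outline, not the course's actual code: the model name `my_model`, the serving URL, the inline template, and the comma-separated input format are all assumptions.

```python
import json
import urllib.request

from flask import Flask, render_template_string, request

app = Flask(__name__)

# Inline template used instead of a separate index.html so the sketch
# is self-contained; the course builds a real template file instead.
INDEX_TEMPLATE = """
<!doctype html>
<title>Model Demo</title>
<h1>Get a Prediction</h1>
<form method="post" action="/predict">
  <input name="value" placeholder="comma-separated features">
  <button type="submit">Predict</button>
</form>
{% if prediction is not none %}<p>Prediction: {{ prediction }}</p>{% endif %}
"""

# Assumed TF Serving REST endpoint; adjust model name/host/port as needed.
SERVING_URL = "http://localhost:8501/v1/models/my_model:predict"

def query_model_server(instances):
    """POST instances to TensorFlow Serving and return its predictions."""
    body = json.dumps({"instances": instances}).encode()
    req = urllib.request.Request(
        SERVING_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]

@app.route("/")
def index():
    return render_template_string(INDEX_TEMPLATE, prediction=None)

@app.route("/predict", methods=["POST"])
def predict():
    features = [float(x) for x in request.form["value"].split(",")]
    preds = query_model_server([features])
    return render_template_string(INDEX_TEMPLATE, prediction=preds[0])

if __name__ == "__main__":
    app.run(debug=True)
```

The `/predict` route only works with a model server running at `SERVING_URL`; the index page renders on its own.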
