Optimize your network inference time as a data scientist, not as a web developer
You’ve already trained your great neural network. It reaches 99.9% accuracy and is going to save the world. You’d like to deploy it, but you don’t have a server with expensive discrete GPUs, and you don’t want to build an API. After all, you are a Data Scientist, not a Web Developer… So, is it possible to automatically optimize the network and run it on the CPU and iGPU you already have? Let’s check! In this talk, I’ll present the OpenVINO™ Toolkit. You’ll learn how to automatically convert a model using Model Optimizer and how to run inference with OpenVINO Runtime. All this magic takes only a few lines of code. Afterwards, you’ll get a step-by-step Jupyter notebook, so you can try it at home.