
Does anybody have experience deploying multiple machine learning models in the cloud? I would like to know if any of you have similar experiences to share.

Written by Jasmine Sunga
Updated over 5 years ago

If you are a SaaS provider, you have multiple options. If you are running on AWS, you could simply launch a Spot Instance with a hefty configuration on a weekly basis, run your training job there, and, if the job succeeds, replace the existing model with the new one. There are multiple approaches, but for now stick with what works for you and automate gradually as you go; otherwise the automation itself will become a project of its own. A rough sketch of such a spot request follows.
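Here is a minimal sketch of that weekly pattern using boto3, assuming you have an AMI that already contains your training environment; the AMI ID, key pair, Spot price, and the training script in the user data are all placeholders you would swap for your own.

```python
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder boot script: train, then upload the new model artifact to S3.
# Your own script would decide whether the new model replaces the old one.
user_data = base64.b64encode(
    b"#!/bin/bash\npython /opt/train.py && aws s3 cp /opt/model.pkl s3://my-bucket/models/latest/\n"
).decode()

response = ec2.request_spot_instances(
    SpotPrice="0.50",                  # maximum price you are willing to pay
    InstanceCount=1,
    Type="one-time",                   # the instance goes away when the job is done
    LaunchSpecification={
        "ImageId": "ami-xxxxxxxx",     # placeholder AMI with your training environment
        "InstanceType": "p3.2xlarge",  # hefty GPU configuration for the weekly job
        "KeyName": "my-key-pair",      # placeholder key pair
        "UserData": user_data,
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```

A weekly CloudWatch Events (EventBridge) rule or a plain cron job can trigger a script like this, which keeps the automation small until you are ready to invest in something more elaborate.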

Lambda is more constrained: functions are limited by execution timeouts (minutes at most) and deployment package size, so it is not suited to heavy training jobs, though it can work for serving lightweight predictions (see https://medium.com/tooso/serving-tensorflow-predictions-with-python-and-aws-lambda-facb4ab87ddd). A sketch of such a handler follows.
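A minimal handler sketch for that serving pattern, loosely following the approach in the linked article: the model file name, the "features" field in the request body, and the use of a pickled scikit-learn-style model are all assumptions for illustration.

```python
import json
import pickle

# Load the model once, outside the handler, so warm invocations reuse it.
# "model.pkl" is a placeholder bundled in the deployment package (or fetched from S3).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

def handler(event, context):
    # Assumes an API Gateway proxy event with a JSON body like {"features": [1.0, 2.0, 3.0]}
    features = json.loads(event["body"])["features"]
    prediction = model.predict([features])[0]
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": float(prediction)}),
    }
```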

On the PaaS side of AWS, SageMaker (a fully managed service for training and serving models) may also be an option; a deployment sketch is shown below.
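As an illustration, here is a hedged sketch using the SageMaker Python SDK to host an already-trained TensorFlow model behind an endpoint; the S3 artifact path, IAM role ARN, framework version, and the request payload shape are placeholders specific to your account and model.

```python
from sagemaker.tensorflow import TensorFlowModel

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder IAM role

model = TensorFlowModel(
    model_data="s3://my-bucket/models/model.tar.gz",    # placeholder trained artifact
    role=role,
    framework_version="2.8",
)

# SageMaker provisions the instance and serves HTTPS predictions for you.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")

# Example request; the exact input shape depends on your model's signature.
print(predictor.predict({"instances": [[1.0, 2.0, 3.0]]}))
```

Since each model gets its own endpoint (or can share a multi-model endpoint), this scales naturally to serving several models side by side without managing servers yourself.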
