From the course: Rust for Data Engineering

EFS ONNX Rust inference with AWS Lambda

- [Instructor] Let's take a look at this project, which is MLOps inference using the ONNX model format, mounted via EFS and invoked via AWS Lambda. The reason for doing this is so that you can use serverless technology, in this case AWS Lambda, to serve out inference. This is an emerging standard, and the advantage of serverless is that you don't have to manage it. It's easy to deploy, especially if you're using a high-performance language that supports binary deployment, like Rust. All you need to do is deploy a binary that has the ability to mount the model via EFS. So let's walk through exactly how this would work. First step, you're going to need to have access to EFS. You'll need to create an EFS mount point, which we'll cover in a second. And your development environment, at least in terms of putting the files there, probably should be something like…
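As a rough illustration of that flow, here is a minimal sketch of a Rust Lambda handler that loads an ONNX model from an EFS mount on invocation. It assumes the lambda_runtime, tokio, serde_json, and tract-onnx crates and a hypothetical mount path of /mnt/efs/model.onnx configured in the Lambda function's file system settings; the course's actual project may use different crates, paths, and inference logic.

```rust
// Assumed Cargo.toml dependencies (not from the course):
// lambda_runtime = "0.8"
// tokio = { version = "1", features = ["macros"] }
// serde_json = "1"
// tract-onnx = "0.21"

use lambda_runtime::{service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

// Hypothetical path: the EFS access point is mounted here via the
// Lambda function's file system configuration.
const MODEL_PATH: &str = "/mnt/efs/model.onnx";

async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    // Load the ONNX model from the EFS mount. A production handler
    // would cache this across invocations instead of reloading each time.
    let model = tract_onnx::onnx()
        .model_for_path(MODEL_PATH)?
        .into_optimized()?
        .into_runnable()?;

    // Return basic metadata to confirm the model was found on EFS
    // and parsed successfully; real inference would run the model here.
    Ok(json!({
        "model_loaded": true,
        "model_inputs": model.model().inputs.len(),
        "request_id": event.context.request_id,
    }))
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Hand the handler to the Lambda runtime; the compiled binary is
    // what gets deployed to the function.
    lambda_runtime::run(service_fn(handler)).await
}
```

The key design point is that the model file never ships inside the deployment package: the Rust binary stays small, and the (potentially large) ONNX model lives on the EFS file system that the function mounts at startup.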
