
Context

I have a lambda with the following handler function:

import json
from langchain_community.llms import ollama

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from LLM Lambda!')
    }

I have a .venv active and ran pip install langchain; running pip list locally I get:

aiohttp             3.9.1
aiosignal           1.3.1
annotated-types     0.6.0
anyio               4.2.0
async-timeout       4.0.3
attrs               23.2.0
certifi             2023.11.17
charset-normalizer  3.3.2
dataclasses-json    0.6.3
exceptiongroup      1.2.0
frozenlist          1.4.1
greenlet            3.0.3
idna                3.6
jsonpatch           1.33
jsonpointer         2.4
langchain           0.1.3
langchain-community 0.0.15
langchain-core      0.1.15
langsmith           0.0.83
marshmallow         3.20.2
multidict           6.0.4
mypy-extensions     1.0.0
numpy               1.26.3
packaging           23.2
pip                 23.3.2
pydantic            2.5.3
pydantic_core       2.14.6
PyYAML              6.0.1
requests            2.31.0
setuptools          58.0.4
sniffio             1.3.0
SQLAlchemy          2.0.25
tenacity            8.2.3
typing_extensions   4.9.0
typing-inspect      0.9.0
urllib3             2.1.0
yarl                1.9.4

As you can see the package pydantic_core is present.

From .venv/lib/python3.9/site-packages I'm copying, renaming, and zipping site-packages into python.zip, then uploading it to an S3 bucket.
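
For reference, a minimal sketch of that packaging step, assuming the paths and bucket key from the question (a Python layer's packages must sit under a top-level python/ directory inside the zip):

cp -r .venv/lib/python3.9/site-packages python   # copy and rename the folder to "python"
zip -r python.zip python                         # the zip now contains python/<packages>
aws s3 cp python.zip s3://bucketname/lib/python.zip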

I then create a Lambda layer with the following configuration, referencing the S3 object URL of the zip file mentioned above, like this: https://bucketname.s3.us-east-2.amazonaws.com/lib/python.zip

(screenshot of the layer configuration)
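
As a rough equivalent of that console step, the layer could also be published with the AWS CLI (the layer name below is a made-up example):

aws lambda publish-layer-version \
  --layer-name langchain-deps \
  --content S3Bucket=bucketname,S3Key=lib/python.zip \
  --compatible-runtimes python3.9 \
  --compatible-architectures x86_64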

THE PROBLEM

The Lambda function is currently crashing with the following error message:

{
  "errorMessage": "Unable to import module 'lambda_function': No module named 'pydantic_core._pydantic_core'",
  "errorType": "Runtime.ImportModuleError",
  "requestId": "7b2bba93-151f-4168-86fd-9ddad2d787fe",
  "stackTrace": []
}

Notes on what I have tried:

  • If I comment out the from langchain_community.llms import ollama line in the lambda source code it runs without problems.
  • I also tried uploading the zip file directly to the layer with the same result.
  • I followed the exact same process but uploaded and imported the requests package instead of langchain, and the Lambda ran successfully.
  • I'm using Python 3.9 and installing packages for the x86_64 architecture.
1 Comment

I suggest you use Docker images to build Lambda function containers to avoid these setup problems. (Commented Jan 24, 2024 at 18:15)
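
Along the lines of that suggestion, one hedged way to get Linux-compatible wheels without switching to a container deployment is to run the pip install inside an AWS SAM build image (the image tag and paths below are assumptions, not taken from the question):

# Install into python/ inside a container matching the Lambda Python 3.9 runtime,
# so compiled wheels such as pydantic_core are built for Linux x86_64.
docker run --rm -v "$PWD":/var/task -w /var/task \
  public.ecr.aws/sam/build-python3.9 \
  /bin/sh -c "pip install langchain -t python/"
zip -r python.zip python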

3 Answers


I found a solution to my own problem.

Previously I was doing the pip install in my local environment, zipping the site-packages folder and uploading it to S3.

My local environment is a Mac.

I tested running the pip install inside an EC2 instance, zipping and uploading it straight to S3 and then the Lambda had no issues with any package.

I'm guessing the difference in OS between my Mac and Lambda's Linux runtime meant pip installed a macOS build of pydantic_core, which Lambda cannot load.
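
If you would rather keep building on a Mac (or Windows), a hedged alternative to the EC2 route is to ask pip for Linux-compatible wheels explicitly; the flags below are a suggestion based on pip's cross-platform options, not something this answer used:

# Fetch prebuilt manylinux (Linux x86_64) wheels even when running on macOS or Windows.
pip install langchain \
  --platform manylinux2014_x86_64 \
  --implementation cp \
  --python-version 3.9 \
  --only-binary=:all: \
  --target python/
zip -r python.zip python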


1 Comment

A more helpful reference is in this answer: stackoverflow.com/a/77721364/7001697

As I had Windows, the module was installed without the Linux-compatible build of that dependency.

The solution that I found was downloading this OpenAI-AWS-Lambda-Layer from GitHub and uploading it as the zip file when creating a layer.



Another option I found is using AWS SAM to carry out the deployment.

I recommend this one because if we add the OpenAI module to our requirements.txt file without pinning a version, we always get the latest OpenAI version.

SAM creates the layer automatically when we deploy the Lambda: if we list the required libraries in requirements.txt, it builds the layer for us as part of the deployment.

In simple terms (the real process is a bit more involved), this is the workflow; a command sketch follows the list:

  • Write your code in VS Code.
  • Add the OpenAI library to the requirements.txt file.
  • Create a template.yaml file specifying the Lambda and layer details.
  • Use SAM to create the Lambda function and deploy it.
  • Bonus: we can test our Lambda locally using SAM + Docker.
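
A rough sketch of the SAM commands involved (the exact flags, and using --guided for the first deploy, are assumptions rather than part of this answer):

sam build --use-container   # builds the function and layer from requirements.txt in a Lambda-like container
sam deploy --guided         # packages and deploys the stack described in template.yaml
sam local invoke            # bonus: run the Lambda locally via Docker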

Below I leave some useful resources to understand this:

