25

AWS Lambda deployment of FastAPI gives the following error:

[ERROR] Runtime.ImportModuleError: Unable to import module 'users_crud': No module named 'pydantic_core._pydantic_core'
Traceback (most recent call last):

The pydantic lib is already installed, though. I am using Python 3.10, which is now supported by AWS.

3
  • No known issues running pydantic V2 (which is why pydantic_core is being imported). Maybe you need to explicitly include it in requirements.txt? Commented Jul 10, 2023 at 7:13
  • I am facing same issue on AWS Lambda. Note, I am not even using FastAPI. Commented Jul 10, 2023 at 7:55
  • No you don't need to explicitly include pydantic or pydantic_core if using it via FastAPI. pip install fastapi lets import pydantic and import pydantic_core._pydantic_core run just fine. In fact the error might be caused by explicitly including pydantic (with a v1 version?) in the dependencies, but can't really say without further info. @krish___na please can you make a new question with more info, this one gives no details on how the packages are installed, so it's hard to reproduce your problem, and comments aren't appropriate for describing separate cases. Commented Jul 10, 2023 at 7:57

16 Answers

37

Lambda requires packages built for a specific architecture. Many packages have distributions for multiple architectures (see the available distributions for pydantic-core). By default, pip installs the distribution suitable for the machine you are running it on, which is not necessarily the same architecture as your Lambda.

But you can force pip to install packages for the architecture you want. If your Lambda uses x86_64, then you should select platform manylinux2014_x86_64:

pip install pydantic-core --platform manylinux2014_x86_64 -t . --only-binary=:all:

-t . means install into the current directory.

However, the best way is to install all your dependencies with the required platform, rather than doing it for each package.

pip install -r requirements.txt --platform manylinux2014_x86_64 --target ./python --only-binary=:all:

Credits to this answer.
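If you are packaging the resulting ./python folder as a Lambda layer (an assumption here; skip this if you are building a plain deployment package), a minimal sketch of the remaining steps could look like this:

# zip the folder so the archive keeps the python/ prefix Lambda expects for layers
zip -r layer.zip python

# publish the zip as a new layer version (the layer name is a placeholder)
aws lambda publish-layer-version \
  --layer-name my-deps-layer \
  --zip-file fileb://layer.zip \
  --compatible-runtimes python3.10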


3 Comments

This was very helpful to me. But I needed this answer and user6595600's answer as well to get over the hump, because the Python version needs to match the Lambda runtime too.
I wish there was a way to vote to combine two answers, because this one + user6595600 would be the most complete
I've tried to be a "vibe coder" and wanted to crack this bug with ChatGPT and WindSurf. I've spent the whole day, and only good old Stack Overflow helped me for real. Thank you so much, guys!
8

Please see pydantic/pydantic#6557 - basically you've probably installed pydantic-core for the wrong architecture.
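One quick way to confirm that (a hedged sketch, not taken from the linked issue) is to inspect the compiled extension pip actually installed:

# list the compiled extension in your deployment package and what it was built for;
# a darwin or aarch64 suffix means it will not import on an x86_64 Lambda
find . -name "_pydantic_core*.so" -exec file {} \;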

Comments

8

What helped me was to:

  1. Create a Lambda layer with all the dependencies. I installed the dependencies with the following commands:

pip install --platform manylinux2014_x86_64 --target=<layer-folder> --implementation cp --python-version 3.11 --only-binary=:all: --upgrade langchain==0.0.349

pip install --platform manylinux2014_x86_64 --target=<layer-folder> --implementation cp --python-version 3.11 --only-binary=:all: --upgrade openai==1.6.1

(You can optionally use a requirements.txt - in my case, I needed only these two dependencies, and openai is also required for langchain.)

  2. Be sure that the dependencies match the Lambda architecture (you have two options: x86_64 or arm64). You will have noticed that in the pip command I used this:

--platform manylinux2014_x86_64 --target=python2 --implementation cp --python-version 3.11 --only-binary=:all:

That is necessary for the architecture of my choice. Adjust it according to yours.

  3. Make sure that the dependency versions contain all necessary classes: the newest versions of langchain (1.6.2) lack some modules that are apparently necessary (tracers.langchain_v1).

It is worth mentioning that the dependencies are so big that it is useful to place all of them in a layer and attach the layer. For Python, the layer zip should contain the dependencies under the path python/lib/python3.x/site-packages (or simply python/). Consult the AWS docs for that.
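As a hedged sketch of that layout (folder names and versions here just mirror the commands above), the layer zip for Python 3.11 would be built roughly like this:

# the layer zip must have the dependencies under a top-level python/ prefix
mkdir -p layer/python
pip install --platform manylinux2014_x86_64 --target=layer/python \
    --implementation cp --python-version 3.11 --only-binary=:all: \
    --upgrade langchain==0.0.349 openai==1.6.1

# zip from inside the layer folder so python/ sits at the root of the archive
cd layer && zip -r ../layer.zip python && cd ..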

Comments

5

I had the same error this morning. I checked the release notes of FastAPI: the new release 0.100.0 has some changes with respect to Pydantic. I don't understand all of them, but a quick and temporary workaround to my problem is to version-pin FastAPI==0.99.0. Hope that helps you as well.
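As a sketch, the pin is just a line in requirements.txt (the comments are my assumption about why it helps, based on the 0.100.0 release notes mentioned above):

# requirements.txt
fastapi==0.99.0   # predates the Pydantic v2 changes introduced in 0.100.0
# pip should then resolve a Pydantic 1.x release, which does not ship pydantic_core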

1 Comment

Not working, because pydantic depends on the newest version
4

I had the same error using the Serverless Framework, with these dependencies in requirements.txt:

boto3==1.28.1
boto3-stubs==1.28.68
pydantic==2.4.0
pydantic[email]
pydantic-settings==2.1.0
pytest==7.3.1
pytest-mock==3.11.1
mock==5.0.2
pipreqs==0.4.13
black==23.7.0
backoff==2.2.1
mypy==1.5.1

But there aren't any dependency issues; the error occurs when you build the dependency package (in the pip install step) for your Lambda on a different architecture. In my case, I use a devcontainer in VS Code with a Docker image that I build:

  1. My first mistake was this:
FROM ubuntu:latest

Basically, I didn't specify the OS architecture in my Dockerfile. So, I replaced that line:

FROM arm64v8/ubuntu:latest
  2. Second mistake, in serverless.yml: if you don't specify the type of architecture that you want to use in your Lambdas, AWS sets the default to x86_64 (64-bit x86 architecture for x86-based processors). So I added the default architecture type for all my Lambdas:
provider:
  name: aws
  runtime: python3.10
  architecture: arm64

I also specified the architecture type in the default layer definition:

 layer:
      name: ${self:custom.resources.layers.commons.name}
      description: Python requirements lambda layer 
      compatibleRuntimes:
        - python3.10
      compatibleArchitectures: # optional, a list of architectures this layer is compatible with
        - arm64

Finally, I deployed my project again and solved my issue.

Conclusion: You must build your dependency package on the same architecture you use to deploy your Lambdas.

PS: About arm64 architecture vs x86 architecture: https://docs.aws.amazon.com/lambda/latest/dg/foundation-arch.html?icmpid=docs_lambda_help#foundation-arch-adv

Comments

4

I was facing the same issue while deploying Pydantic Core on AWS Lambda. Here's how I fixed it:

Check 1: Ensure Python Versions Match

The Python version used during build and AWS Lambda runtime must be the same.

  • In my case, I was using Python 3.12 both locally and on AWS Lambda.
  • To check your versions:
    • Locally: Run
      python --version
      
    • AWS Lambda: Go to Lambda Console → Your Function → Runtime Settings.

If there's a mismatch, either change your Lambda runtime or rebuild dependencies using the correct version:

pip install --target . --python-version 3.12 --only-binary=:all: pydantic pydantic_core

Check 2: Verify OS Architecture Compatibility

AWS Lambda supports two architectures:

  • x86_64 (Intel/AMD)
  • arm64 (AWS Graviton)

Your local environment and AWS Lambda must use the same architecture.

  • Check locally:
    uname -m
    
    If it returns x86_64, your system is 64-bit Intel/AMD.
  • Check AWS Lambda:
    aws lambda get-function --function-name YOUR_FUNCTION_NAME | grep Architecture
    
    If they don’t match, recompile dependencies using the correct architecture.

Check 3: Ensure .so File is Present in pydantic_core

Pydantic Core relies on a compiled shared object (.so) file.

  • Check if it exists in your package:

    ls -R pydantic_core
    

    Expected output:

    pydantic_core/
    ├── __init__.py
    ├── _pydantic_core.cpython-312-x86_64-linux-gnu.so
    ├── core_schema.py
    
  • If missing, reinstall Pydantic Core in your deployment package:

    pip uninstall pydantic pydantic_core -y
    pip install --no-cache-dir pydantic pydantic_core --target .
    

2 Comments

Thanks! This was the most complete answer, and in my case the problem was that I was working with Python 3.11 but not specifying this version while installing dependencies. Once I indicated the version, it magically started working. Of course architecture is not a minor thing to consider, but I was doing that right.
Yes, the problem was I was installing the dependencies (together with pydantic_core) under WSL, and I had Python 3.12 there while the Lambda required 3.13. Using Python 3.13 on WSL solved it.
2

I had the same problem; what fixed the issue for me was to go back to Python 3.9 for the Lambda runtime. I hope it helps.
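If you prefer to switch the runtime from the CLI rather than the console, a minimal sketch (the function name is a placeholder, and you still need to rebuild your dependencies for 3.9) would be:

aws lambda update-function-configuration \
  --function-name YOUR_FUNCTION_NAME \
  --runtime python3.9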

Comments

2

The error is related to the architecture of your local machine vs the Lambda architecture selected. You can choose x86_64 or arm64; in my case, my Mac is arm64 and I chose x86_64 for the Lambda, so I had to install the packages for x86_64, like this:

pip install --target ./lib -r requirements.txt --platform manylinux2014_x86_64 --only-binary=:all:
(cd lib; zip ../aws_lambda_artifact.zip -r .)
zip aws_lambda_artifact.zip -u main.py  

Comments

0

In my case, the solution to this problem was running pip install on a Linux system (it then installs the correct build automatically). I used pydantic without FastAPI and created the libraries as a Lambda layer for the py39 x86 architecture. Additionally, if you are unable to access a Linux environment, directly unzipping the .whl file might also be helpful.
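A hedged sketch of that wheel-unzipping route (directory names are just examples) could look like this:

# fetch the Linux x86_64 wheel without building anything locally
pip download pydantic-core --platform manylinux2014_x86_64 \
    --python-version 3.9 --only-binary=:all: --no-deps -d wheels/

# a wheel is just a zip archive, so unpack it straight into the layer/package folder
unzip wheels/pydantic_core-*.whl -d python/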

1 Comment

Could you please give more detail on exactly how you created these?
0

I was using openai==0.28.0 and installed async-timeout==4.0.3 in the same folder. When creating the venv, be sure that your local machine's Python version is the same as your Lambda's Python version.

Comments

0

Got this error working on a Mac. The problem was that my Mac has an arm architecture (Apple silicon M1, M2, M3 chips), so pip installed the openai dependency built for arm. But the Lambda architecture is different, even though Lambda offers an arm option. My problem was resolved by explicitly passing the architecture manylinux2014_x86_64 to my pip install command:

pip3 install openai --platform manylinux2014_x86_64 -t . --only-binary=:all:

When I updated the layer with this new installation and connected it to the Lambda function, it worked fine.

Comments

0

Similar to the answers above, using the x86_64 arch to both build and run the Lambda solved this issue for me:

frameworkVersion: "3"

plugins:
  - serverless-python-requirements

provider:
  name: aws
  runtime: python3.11
  architecture: x86_64
custom:
  stage: ${sls:stage}
  pythonRequirements:
    dockerizePip: true
    dockerRunCmdExtraArgs: [ '--platform', 'linux' ]
    dockerImage: public.ecr.aws/sam/build-python3.11:latest
    useStaticCache: true
    useDownloadCache: true
    slim: true
    slimPatternsAppendDefaults: false
    slimPatterns:
      - "**/*.py[c|o]"
      - "**/__pycache__*"
    layer:
      name: ${self:service}-layer-deployment-${self:custom.stage}
      description: Python requirements lambda layer
      compatibleRuntimes:
        - python3.11

Comments

0

I'm using Docker to create my Lambda layer. Because I'm running my code on a Mac (which uses arm64 architecture), I solved this issue by specifying the x86_64 architecture in the build command:

docker build --platform linux/amd64 -t python-lambda-layer-maker .
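If you don't want to maintain your own image, a hedged alternative sketch is to run pip inside one of the AWS SAM build images (the same kind of image referenced in another answer on this page), again forcing the amd64 platform:

# install the requirements into ./python using an x86_64 build image
docker run --rm --platform linux/amd64 \
  -v "$PWD":/var/task \
  public.ecr.aws/sam/build-python3.11:latest \
  /bin/sh -c "pip install -r requirements.txt -t python/"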

Comments

0

I faced this issue, and after thorough research the following steps worked for me:

  1. Create an x86_64 EC2 instance with Amazon Linux
  2. Set up the AWS CLI (use sudo yum) and the needed env vars - https://docs.aws.amazon.com/cli/latest/reference/ec2/
  3. Create a python folder
  4. Run the following command to install OpenAI: pip install async-timeout==4.0.3 -t ./python --platform manylinux2014_x86_64 --only-binary=:all:
  5. Finally, zip the folder: zip -r python.zip .
  6. Use the aws s3 cp command to copy it to S3, and set it in the Lambda

Comments

0

Summary: The main issue was that I was not building for the lambda environment as a target.

In my case, I was creating the package on an M1 Mac (I also tried on an Intel one and got the same issue).

I just updated the way I was installing the dependencies into the package folder before zipping it, by specifying the target platform:

pip3 install --platform manylinux2014_x86_64 --only-binary=:all: -r requirements.txt -t package/  || { echo "Dependency installation failed!"; exit 1; }

Comments

0

I had the same issue and solved it with a similar approach to the previous answers (x86_64):

--platform linux_x86_64 --only-binary=:all:

If you want to build the package locally and then deploy it to Lambda:

pip install -r requirements.txt -t ./package --platform linux_x86_64 --only-binary=:all:

If you want to create a Lambda layer, you can use this example (for instance inside a Linux Docker container):

virtualenv --python=/usr/bin/python3.13 python
source python/bin/activate
pip install -r requirements.txt -t python/lib/python3.13/site-packages --platform linux_x86_64 --only-binary=:all:

zip -r9 python.zip python

and then use the generated zip to upload as a lambda layer

Upload the zip to an S3 bucket:

aws s3 cp <local_folder>/ <s3_bucket_dependency_folder> --recursive

Publish the layer:

aws lambda publish-layer-version --layer-name <lambda_layer_name> \
--description <lambda_layer_description> \
--content S3Bucket=<s3_bucket>,S3Key=<path_to_file> \
--compatible-runtimes python3.13

Add permission for other AWS accounts to use this layer (if needed):

aws lambda add-layer-version-permission \
--layer-name <lambda_layer_name> \
--version-number <lambda_layer_version> \
--statement-id shareWithTEST \
--principal <aws_account_id> \
--action lambda:GetLayerVersion

Comments
