When you `docker run` the container, you can supply a command at the end of the command line. This replaces the Dockerfile's CMD, so you need to repeat the script name, but any arguments after it are passed through as-is.
If the script begins with `#!/usr/bin/env python` as its very first line, and it's executable (`chmod +x app.py`), then you don't need to repeat the `python` interpreter; these examples assume you've done that.
docker run --rm your-image \
./app.py first-argument
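For concreteness, a minimal `app.py` that fits these examples might look like this (the argument handling is just a sketch):

```python
#!/usr/bin/env python
# Minimal sketch of app.py: echo back whatever arguments were
# passed through from `docker run` (or from the Dockerfile's CMD).
import sys


def describe_args(argv):
    # argv[0] is the script name; everything after it came from CMD
    # or from the `docker run` command line.
    return "arguments: " + " ".join(argv[1:])


if __name__ == "__main__":
    print(describe_args(sys.argv))
```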
It's also possible to use a Dockerfile ENTRYPOINT here. This isn't my preferred use of ENTRYPOINT -- it makes some kinds of debugging a little harder, and I prefer a different pattern for it -- but it is common enough to appear in the Docker documentation. The basic setup is that the CMD is passed as arguments to the ENTRYPOINT, so you can set the ENTRYPOINT to the actual command and the CMD to its arguments.
FROM python:3.10
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
# must be JSON-array syntax
ENTRYPOINT ["./app.py"]
# must be JSON-array syntax; could include default arguments
CMD []
In this setup both the ENTRYPOINT and CMD must use JSON-array "exec" syntax: a shell-form ENTRYPOINT ignores CMD arguments entirely, and a shell-form CMD makes `sh -c` visible as arguments to your program. The trade-off is that you can't use shell features like environment-variable expansion as part of the command (though the program can still use `os.environ` as normal).
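So rather than writing `$SOME_VAR` into the CMD and expecting it to be expanded, the script reads the environment itself. A minimal sketch (the `APP_MODE` variable name is just an example):

```python
# Exec-form ENTRYPOINT/CMD never invoke a shell, so there is no
# variable expansion -- but the process still inherits its
# environment, and os.environ works as usual.
import os


def get_mode(default="production"):
    # APP_MODE is a hypothetical setting; supply it at runtime
    # with `docker run -e APP_MODE=... your-image`
    return os.environ.get("APP_MODE", default)


if __name__ == "__main__":
    print(get_mode())
```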
The `docker run` command-line override only replaces the CMD, and so only supplies the program's arguments:
docker run --rm your-image \
first-argument
But some kinds of routine debugging need the awkward `docker run --entrypoint` option to avoid running the script:
# run an interactive shell instead of the program
docker run --rm -it --entrypoint bash your-image
# without ENTRYPOINT: `docker run --rm -it your-image bash`
# double-check the file listing and permissions
docker run --rm --entrypoint ls your-image -l /app
# without ENTRYPOINT: `docker run --rm your-image ls -l /app`