uv python pin <version> will create a .python-version file in the current directory.
uv virtualenv will download the version of Python specified in your .python-version file (like pyenv install) and create a virtualenv called .venv in the current directory using that version of Python (like pyenv exec python -m venv .venv).
uv pip install -r requirements.txt will behave the same as .venv/bin/pip install -r requirements.txt.
uv run <command> will run the command in the virtualenv and will also expose any env vars specified in a .env file (although be careful of precedence issues: https://github.com/astral-sh/uv/issues/9465)
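Putting those together, a typical bootstrap might look like this (the Python version and script name are just placeholders):

uv python pin 3.12            # writes .python-version
uv virtualenv                 # downloads 3.12 if needed, creates .venv
uv pip install -r requirements.txt
uv run python main.py         # runs inside .venv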
# Ensure we always have an up-to-date lock file.
if ! test -f uv.lock || ! uv lock --check 2>/dev/null; then
  uv lock
fi
Doesn't this defeat the purpose of having a lock file? If it doesn't exist or is invalid, something catastrophic happened to it and it should be handled by someone familiar with the project. Otherwise, why have a lock file at all? The CI will silently replace the lock file and cause potential confusion.

I think two languages are enough; we don't need a third one that nobody asked for.
I have nothing against Rust. If you want a new tool, go for it. If you want a rewrite of an existing tool, go for it. I'm against it creeping into an existing ecosystem for no reason.
A popular Python package called Pendulum went over seven months without support for Python 3.13. I have to imagine this is because nobody in the Python community knew enough Rust to fix it. Had the native portion of Pendulum been written in C, I would have fixed it myself.
https://github.com/python-pendulum/pendulum/issues/844
In my ideal world, if someone wanted fast datetimes written in Rust (or any language other than C), they'd write a proper library suitable for any language to consume over FFI.
So far this Rust stuff has left a bad taste in my mouth and I don't blame the Linux community for being resistant.
By default `uv` won't generate `.pyc` files, which might make your service much slower to start.
See https://docs.astral.sh/uv/reference/settings/#pip_compile-by...
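If startup time matters, bytecode compilation can be turned back on at install time; either of these should work (flag and environment variable are uv's own):

UV_COMPILE_BYTECODE=1 uv sync                              # via environment variable
uv pip install --compile-bytecode -r requirements.txt     # via flag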
>In docker you can just raw COPY pyproject.toml uv.lock* . then run uv sync --frozen --no-install-project. this skips your own app so your install layer stays cacheable. real ones know how painful it is to rebuild entire layers just cuz one package changed.
>UV_PROJECT_ENVIRONMENT=/home/python/.local bypasses venv. which means base images can be pre-warmed or shared across builds. saves infra cost silently. just flip UV_COMPILE_BYTECODE=1 and get .pyc at build.
> It kills off mutable environments. forces you to respect reproducibility. if your build is broken, it's your lockfile's fault now. accountability becomes visible
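Roughly, the pattern being described looks like this (the base image and paths are assumptions here, not something the quoted comment specified):

# Sketch of the quoted layer-caching pattern; adjust image tags and paths to your project.
FROM python:3.12-slim
COPY --from=ghcr.io/astral-sh/uv:0.7.13 /uv /uvx /usr/local/bin/
ENV UV_PROJECT_ENVIRONMENT=/home/python/.local \
    UV_COMPILE_BYTECODE=1
WORKDIR /app
# Dependency layer: only invalidated when pyproject.toml / uv.lock change.
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-install-project
# Application layer: changes here don't force a dependency reinstall.
COPY . .
RUN uv sync --frozen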
Speed is okay, but the security of a package manager is far more important.
The current pip Dockerfile is as simple as:

COPY --chown=python:python requirements.txt .
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir --compile -r requirements.txt
COPY --chown=python:python . .
RUN python -m compileall -f .
I usually used platform Pythons, despite the advice against that; uv is what finally got me to stop doing so.
- Removing requirements.txt makes it harder to track the high-level deps your code requires (and their install options/flags). Typically requirements.txt should hold the high-level requirements, and you should pass it to another process that produces pinned versions. You regenerate the pinned versions/deps from requirements.txt, so you have a way to reset all dependencies as your core ones gain or lose nested dependencies (see the compile sketch after this list).
- +COPY --from=ghcr.io/astral-sh/uv:0.7.13 /uv /uvx /usr/local/bin/ seems useful, but the upstream docker tag could be re-pinned to a different hash, causing conflicts. Pin by hash, or use a different way to stage your dependencies and copy them into the image (see the digest-pinning sketch after this list). Whenever possible, confirm your artifacts match known hashes.
- Installing into the container's /home/project/.local may preserve the uv pattern, but it's going to make the container harder to debug. Production containers (if not all containers) should install files into normal global paths so that it's easy to find them, reason about them, and use standard tools to troubleshoot. This lets non-uv users diagnose the running application, and removes extra abstraction layers that create unneeded complexity.
- +RUN chmod 0755 bin/ && bin/uv-install* - using scripts makes things easier to edit, but it makes it harder to understand what's going on in a container, because you have to run around the file tree reading files and building a mental map of execution. Whenever possible, just shove all the commands into RUN lines in the Dockerfile. This allows a user to just view the Dockerfile and know the entire execution without extra effort. It also removes some complexity in terms of checking out files, building Docker context, etc.
- Try to avoid docker compose and other platform-constrained tools for running your tests, freezing versions, etc. Your SDLC should first be composed of your build tools/steps using just native tools/environments. Then on top of that should go the CI tools. This separation of "dev/test environment" from CI allows you to take your "dev/test environment" and run it on any CI platform - Docker Compose, GitHub Actions, CircleCI, GitLab CI, Jenkins, etc - without modifying the "dev/test environment" tools or workflow. Personally I have a dev.sh that sets up the dev environment, build.sh to run any build steps, test.sh to run all the test stuff, ci.sh to run ci/cd specific stuff (it just calls the CI/CD system's API and waits for status), and release.sh to cut new releases.
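A minimal sketch of that two-file flow, using uv's pip-compatible interface (pip-tools' pip-compile works the same way; the requirements.lock name is just a convention I'm assuming here):

# requirements.txt holds only the high-level deps you actually ask for;
# requirements.lock holds the fully pinned tree derived from it.
uv pip compile requirements.txt -o requirements.lock
# Regenerate the pins from scratch whenever the high-level deps change.
uv pip compile --upgrade requirements.txt -o requirements.lock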
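And for the image-pinning point, referencing the uv image by digest instead of by tag might look like this (the digest below is a placeholder, not a real value):

# Resolve the tag to a digest once (e.g. with `docker buildx imagetools inspect`),
# then reference the immutable digest in the Dockerfile:
COPY --from=ghcr.io/astral-sh/uv@sha256:<digest-you-verified> /uv /uvx /usr/local/bin/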