Local Index for Python Packages With Docker

The longer I code arbitrary stuff, the more distinct code blobs take the shape of reusable packages or modules. These packages are of no interest to a broader audience but are rather valuable to me. I classify them as my precious utils.

With a modern Python toolchain, building packages is a piece of cake, so the next question that naturally arises is how to supply these packages to new projects and, if these projects are meant to run inside a container, how to build these containers better.

Build a Package

Let’s swiftly recap two things. What is the difference between packages and modules in Python? And, how to build them in 2026?

Coarsely, a module is represented by a single Python file. Remember the famous condition if __name__ == '__main__':? It is used in modules to let them bootstrap something when they are run directly, and to skip that bootstrapping when they are merely imported.
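
For illustration, a minimal module with such a guard (the file name greet.py is just a placeholder):

# greet.py -- a module is a single Python file
def greet(name: str) -> str:
    return f"Hello, {name}!"

if __name__ == '__main__':
    # executed only when the file is run directly: python greet.py;
    # a plain "import greet" skips this block
    print(greet('world'))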

On the other hand, a package is a namespace whose contents are provided by one or more directories, and those directories contain modules (and possibly subpackages). A package may contain a __main__.py file, which allows running it via the -m flag. For more information, refer to PEP 420.
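
A rough sketch of such a layout (mypackage is a placeholder name):

mypackage/
    __init__.py      # marks the directory as a regular package
    __main__.py      # makes "python -m mypackage" work
    core.py          # an ordinary module inside the package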

We will discuss the building of packages. Perhaps the simplest way to do so is to use the famous uv tool. Just run uv build in the directory with your project, and you'll have your package built inside the dist subfolder. Note how the version from pyproject.toml (I assume the project was created with uv init) becomes part of the package's name.
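
For a project that already has a [build-system] section in its pyproject.toml, the whole ceremony is roughly this (the package name and version are placeholders):

uv build          # produces an sdist and a wheel in dist/
ls dist/
# mypackage-0.1.0.tar.gz  mypackage-0.1.0-py3-none-any.whl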

The Index

So far, so good. There is a fresh and really useful package, and the next big thing wants it, so they can change this world together. Time to marry them.

There is no secret: Python imports modules and packages via the import ... statement. The interpreter uses sys.path to look for places where an import target may live. One of the simplest and coarsest ways to make your package available is to place it where sys.path can reach it. Obviously, not the best way.
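
You can peek at this search path yourself:

import sys
print(sys.path)   # the directories searched for imports; dropping a package
                  # into any of them makes it importable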

The better option is to ask a tool to fetch the package when we need it and place it accordingly, with the option to remove it later. This looks like a job for pip, and everybody just uses it. In particular, it supports virtual environments, without which no modern Python development is imaginable.

Where does pip fetch packages from? The answer is an index (or registry). There is at least one official and public index, PyPI, which we use implicitly when we run the pip install ... command (or uv add ...). Anyone can sign up there and upload a package for the common good.

Publishing handmade stuff to public indexes is not always desirable. For local development, folks have created several solutions. I prefer the pypiserver project.

PyPI Server

A local index is good for many reasons. One can use it as a caching proxy for packages from any remote index (including the official one), for instance, when there are connectivity issues (working from a camp in the woods on vacation). The other use case is the one I've just described above. Anyway, having such a tool in a Pythonista's arsenal is pretty useful.

pypiserver comes with an official Docker image, which is especially important in my case, as I use it as a sidecar container alongside my infrastructure.

Here is a part of my template for a Docker-compose file:

pypi:
  container_name: pypi
  image: pypiserver/pypiserver:latest
  ports:
    - "8080:8080"
  volumes:
    - pypi:/data/packages
  command: run -a . -P . --overwrite /data/packages

volumes:
  pypi:

The command knob specifies exactly how to start the index daemon. Authentication is disabled with the -a . -P . pair of keys.

Security: The --overwrite key allows overwriting existing packages in the index. To me, this is a pivotal aspect, as I often modify packages without changing the version and expect the index to serve the fresh build.
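
With that in place, the index is brought up like any other compose service (assuming the snippet above lives in the usual docker-compose.yml):

sudo docker compose up -d pypi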

Twine

There is a remarkable Python project, a package built from it, and, finally, a running local index, yet we still cannot install the package anywhere. The final step is to upload the package to the index.

For this goal, I use the twine tool (there are also some sound alternatives). To install it, let’s use another tool from the uv toolkit: uv tool install twine.

twine is straightforward. All we need is to call it from the project directory: twine upload --repository-url http://127.0.0.1:8080/ dist/*. In this case, I'm uploading all packages (note the asterisk) from the dist subfolder. If the index already contains any of these packages, they will be silently overwritten (as I noted before).

In most cases, I upload packages from the host machine to the index inside a container, which is why the container listens on port 8080 of the host.

twine will warn you about an empty username and password, and propose to specify them:

Uploading distributions to http://127.0.0.1:8080/
Enter your username:
WARNING  Your username is empty. Did you enter it correctly?
WARNING  See https://twine.readthedocs.io/#entering-credentials for more information.
Enter your password:
WARNING  Your password is empty. Did you enter it correctly?
WARNING  See https://twine.readthedocs.io/#entering-credentials for more information.

It allows us to leave these fields empty anyway. To make things even simpler, modify the command above: twine upload --repository-url http://127.0.0.1:8080/ -u . -p . dist/*. I love the smell of security in the morning!

Now, the package is finally home. pypiserver has a simple web interface where you can browse the index contents.
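
As a quick sanity check, the package can also be installed straight from the host by pointing pip at the local index (the package name is a placeholder):

pip install --index-url http://127.0.0.1:8080/simple/ mypackage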

Docker

Imagine we expect our Python package to run inside a container as a service, maybe a third-party API, or something else. The package has a __main__.py file in its root and can be invoked with the -m flag. This scenario is the most common in my case, so here is the way I do things.

In ancient days, I copied the package into a container via the COPY command and then suffered while firing it up. Relative paths, dependency management, you know. The container knew nothing about my package, which forced me to perform some witchcraft every time. I hated it with a passion. Then I finally comprehended: I need to install the package, not just copy it.

My initial intention was to call pip install locally for the copied package. This is OK for a single package, even one with some dependencies (from the public index), but it fails loudly when there are several handmade dependencies. Manually copying them all is the real evil.

That's why I have a local index. For every project that I expect to run inside a container, there is a pip.conf file:

[global]
index-url = http://host.docker.internal:8080/simple

host.docker.internal is a special hostname that the Docker build system resolves to an address of the host system; on Linux this requires an extra key, which we'll add in a moment. The address itself is unimportant for now.

Here is a Dockerfile example:

FROM python:3.14-slim
COPY pip.conf /etc/pip.conf
RUN pip install --no-cache-dir --trusted-host host.docker.internal mypackage
CMD [ "python", "-m", "mypackage"]

This file instructs the Docker build system to configure pip to use our local index. If a package is missing from the local index, the request falls through to the default index, pypi.org (pypiserver redirects such requests there by default). pip installs my package and resolves all of its dependencies. Finally, the CMD line just runs it.

To build an image, I use the following command: sudo docker build --no-cache --add-host host.docker.internal:host-gateway -t myimage:latest .

Linux requires specifying the --add-host host.docker.internal:host-gateway key. It tells the build system that the hostname host.docker.internal must be resolved to the special value host-gateway (yes, Docker treats it as an address). This is the address of the docker0 network interface, whatever it happens to be. Interestingly, I do not need to provide this key on macOS and Windows.

This pattern is the same for most of my projects. I change the name of the package and bish, bash, bosh.

The overall process is something like:

git pull
uv build
twine upload ...
sudo docker build ...

It may seem redundant, but I love it for the fully automatic dependency management. Another nice thing: you can decouple the image build from the project files altogether.

uv

Additionally, packages are often required during the development of other things. We call such packages libraries. These libraries often must be installed inside a virtual environment.

Let's modify pyproject.toml, adding the following lines to make the uv tool aware of the local index:

[[tool.uv.index]]
name = "dev"
url = "http://127.0.0.1:8080"

uv also tries this index (the dev one) first, with the default one as a last resort.
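
After that, adding a handmade library to a project works the same way as with any public package (the name is, again, a placeholder):

uv add mypackage    # resolved via the "dev" index first, PyPI as the fallback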

Security

You didn’t think I left it unattended, did you? Buckle up.

When we specify:

pypi:
  ...
  ports:
    - "8080:8080"

the service listens on this port on all of the system's interfaces, because the socket is bound to 0.0.0.0:8080 on the host.
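
This is easy to verify on the host (a sketch; the exact output depends on the system):

ss -tln | grep 8080
# LISTEN 0  4096  0.0.0.0:8080  0.0.0.0:*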

If the host is reachable by random people, this deserves attention. One possible approach is to configure a firewall to filter external traffic destined for port 8080. Even though I know iptables, I found that restricting access to Docker-managed ports is not as simple as I initially thought. I must confess, I failed.

The next option I tried was binding a socket to localhost:

pypi:
  ...
  ports:
    - "127.0.0.1:8080:8080"

pypi is hidden from the outside. Check. A package can be uploaded there via twine. Check. An image can be built via the Docker build system. Fail.

The build system is not something running directly on the host system, as I had thought. It has no access to the lo interface. And even if we remap the hostname with --add-host host.docker.internal:127.0.0.1, it won't work. I do not know the details and, frankly speaking, I do not want to right now (maybe one day). As I understand it, the build system creates and runs a temporary container in which it executes the instructions from the Dockerfile. Actually, it makes sense. If this hypothesis is correct, the remap above makes the build system access the localhost inside that temporary container. Maybe I'm wrong.

Then I finally paid enough attention to host-gateway. If any container can use the docker0 interface and its address to talk to the host system, then binding the socket to this exact address should hide pypi from the outside and, at the same time, keep it accessible from containers.
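
In compose terms, that means binding the published port to the docker0 address explicitly (a minimal sketch; 172.17.0.1 is the usual default address of docker0, yours may differ):

pypi:
  ...
  ports:
    - "172.17.0.1:8080:8080"

After restarting the stack, the container reports the expected binding: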

pypi   /entrypoint.sh run -a . -P ...   Up   172.17.0.1:8080->8080/tcp

pypi is hidden from the outside. Check. A package can be uploaded there via twine. Check. An image can be built via the Docker build system. Check. No iptables was hurt. Check.

I'm not sure this way is 100% safe in terms of Docker itself. What if it does not expect a random port opened on its address? I do not know. I have never encountered a single problem, but that does not prove anything. Also, such a solution is questionable when you move to Kubernetes territory. I stress that the solution is only for simple cases, like local development or services on a single server.