In my previous report, I talked about how my initial plan to set up Docker containers to help build the Linux-based version of Toy Factory Fixer was a bit unclear. I didn’t know what workflow I was trying to support, which made it difficult to envision how to set up Docker to help me make it happen.
This week, I continued the work with a clearer vision.
Sprint 58: Ports
Planned and Incomplete:
- Create Linux port
Despite putting in more hours than expected, I only made a little progress.
My work was split up into the following tasks:
- T: Create 32-bit docker container using Debian
- T: Use bind mount to load current project and relevant Projects directories as read-only
- T: Build custom libraries for 32-bit
- T: Build custom libraries for 64-bit
- T: Build game using libraries for 32-bit
- T: Build game using libraries for 64-bit
- T: Update CMake files to package up assets correctly
- T: Write script to coordinate all of the tasks to put together a finished binary with 64-bit and 32-bit support
I only finished the first task, as I spent the week struggling to figure out why Docker’s support for multiple architectures (multi-arch) was not working as advertised.
My host system is using amd64 as the architecture. According to the documentation, I should be able to use a Dockerfile or a docker-compose.yml file, and either way, I should be able to specify the platform as pretty much anything supported by the specific Docker image I’m referencing.
In my case, I am using debian/eol:etch, which has an i386 version available.
So my docker-compose file looks like:
version: "2.4"

networks:
  linuxportbuilder:
    external: false

volumes:
  linuxportbuilder:
    driver: local

services:
  linuxportbuilder64:
    image: gbinfra/linuxportbuilder64
    build:
      context: .
    restart: "no"
    networks:
      - linuxportbuilder
    volumes:
      - linuxportbuilder:/data
    ports:
      - 22

  linuxportbuilder32:
    platform: linux/386
    image: gbinfra/linuxportbuilder32
    build:
      context: .
    restart: "no"
    networks:
      - linuxportbuilder
    volumes:
      - linuxportbuilder:/data
    ports:
      - 22
And my Dockerfile was:
FROM debian/eol:etch

RUN apt-get update && apt-get install -y mawk gcc autoconf git build-essential libgl1-mesa-dev chrpath cmake pkg-config zlib1g-dev libasound2-dev libpulse-dev

CMD ["/bin/bash", "-c", "echo The architecture of this container is: `dpkg --print-architecture`"]
As you can see, the service linuxportbuilder32 specifies a platform of linux/386.
But when I run docker-compose up, I see output indicating that it is running amd64.
I tried modifying my Dockerfile’s FROM line to say:
FROM --platform=linux/386 debian/eol:etch
And I still see amd64.
I eventually learned about the buildx plugin for Docker, which uses BuildKit (something I haven’t looked into yet, but apparently it is a Thing). The docs say that buildx comes with Docker Desktop out of the box. Well, I don’t use Docker Desktop, as I just have the docker package that comes with Ubuntu 20.04 LTS’s apt repos. I would expect that things should work, right?
Well, I found I could download buildx and place it in ~/.docker/cli-plugins. Then I needed to use it to create a new builder and tell Docker to use that builder instead of the default one.
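For anyone else stuck on older tooling, the manual setup looked roughly like this. It is a sketch from memory, and the release version here is just an example, so grab the current binary from the buildx releases page on GitHub:

mkdir -p ~/.docker/cli-plugins
# Download a buildx release binary (pick the current version for your platform)
curl -L https://github.com/docker/buildx/releases/download/v0.8.2/buildx-v0.8.2.linux-amd64 -o ~/.docker/cli-plugins/docker-buildx
chmod +x ~/.docker/cli-plugins/docker-buildx
# Create a new builder and tell Docker to use it instead of the default
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap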
And I think I needed to turn on experimental features by creating a ~/.docker/config.json file and adding an entry for it.
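If memory serves, the entry looks something like this, though you should double-check against the Docker CLI docs for your version:

{
  "experimental": "enabled"
}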
And now if I use buildx directly, I can spin up an image and a container that thinks it is using a 32-bit architecture.
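Building and running directly with buildx looked something along these lines; the tag matches my compose file, and the --load flag is needed, as I understand it, to make the image visible to the regular docker commands:

docker buildx build --platform linux/386 --load -t gbinfra/linuxportbuilder32 .
docker run --rm --platform linux/386 gbinfra/linuxportbuilder32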
But when I used docker-compose, it still seemed to use the old builder. Frustrating. And the docs make it sound like it should just work.
I eventually got it to build a 32-bit image, which I could confirm using docker manifest, but I couldn’t run a 32-bit container because I would get: “WARNING: The requested image’s platform (linux/386) does not match the detected host platform (linux/amd64) and no specific platform was requested”. And I got this warning DESPITE THE FACT THAT I WAS SPECIFYING THE PLATFORM EVERYWHERE!
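As an aside, docker manifest generally expects the image to be in a registry. For a local image, something like this should also report the architecture:

docker image inspect --format '{{.Os}}/{{.Architecture}}' gbinfra/linuxportbuilder32
# expected output for the 32-bit image: linux/386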
But I found that, even though one of these values is supposedly the default, I needed to specify two environment variables explicitly:
DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose up
And then I was able to spin up containers in both 64-bit and 32-bit! Success!
That is, until a colleague informed me that docker compose (note the lack of a hyphen) was a thing.
WHAT?!
So I discovered that docker-compose is v1 and old, while docker compose is v2 and the new hotness.
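If you are not sure which one you have, each version reports itself:

docker-compose --version   # Compose v1: a standalone, hyphenated tool
docker compose version     # Compose v2: a plugin for the docker CLI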
And while I didn’t want to install Docker Desktop for Linux, I found that docker compose didn’t exist in the default apt repos. So I uninstalled my existing Docker and Docker-related tooling, added Docker’s own repos for my Ubuntu system, and installed from there. Now I have the official Docker tooling installed.
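The switch looked roughly like the steps from Docker’s install documentation, condensed here from memory, so treat the canonical docs as the source of truth:

# Remove the distro-provided packages
sudo apt-get remove docker docker.io docker-compose containerd runc
# Add Docker's GPG key and apt repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install the official tooling, including the compose plugin
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin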
And suddenly I no longer needed to set up a new builder or use environment variables. Everything worked as advertised.
And I’m a little annoyed that I spent a good chunk of the week figuring out what was essentially a workaround for using older Docker tools, but I suppose that means now you know in case you need it.
Annoyingly, if instead of dpkg --print-architecture I had used uname -m, I would still see amd64, even though the docs supposedly claim that it should show the i386 architecture. I found posts from a few years ago reporting this behavior, and it seems to still be the case today. I am not sure if there is something special happening with Alpine images or if the docs are just wrong, but the behavior also makes sense, as Docker containers share the host system’s kernel.
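For example, running both commands in the 32-bit container shows the mismatch; this is a sketch of what I would expect based on the behavior I saw:

docker run --rm --platform linux/386 gbinfra/linuxportbuilder32 dpkg --print-architecture   # i386
docker run --rm --platform linux/386 gbinfra/linuxportbuilder32 uname -m                    # x86_64, reported by the host kernel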
Anyway, once I could consistently spin up 32-bit and 64-bit containers, the next step was to get my toolchains and source libraries for SDL2 and such mounted into the containers. It was both easy and hard, because while the mechanism to mount a directory as read-only into the container is straightforward, some of my build scripts assume the directory structure on my host system is in place.
Basically, I needed to change some things to make it more generic so that it works locally on my host system as well as in the containers.
For example, my third-party libraries and my toolchains are located at ~/Projects/Tools, but I didn’t want the container to depend on my own user account’s name, and it makes no sense to replicate my home directory inside the container either.
Currently, my CMake toolchain files hardcode that path. Instead, I can specify it as an environment variable when I run cmake:
THIRD_PARTY_LIB_DIR=/home/[my_username]/Projects/Tools cmake ../..
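The toolchain-file change itself might look like this minimal sketch; my actual files may handle the variable slightly differently:

# Read the tools path from the environment instead of hardcoding it
if(DEFINED ENV{THIRD_PARTY_LIB_DIR})
    set(THIRD_PARTY_LIB_DIR "$ENV{THIRD_PARTY_LIB_DIR}")
else()
    message(FATAL_ERROR "THIRD_PARTY_LIB_DIR environment variable is not set")
endif()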
And when I create the docker containers, my new docker-compose.yml file has these entries added to volumes:
volumes:
  - linuxportbuilder:/data
  - ${THIRD_PARTY_LIB_DIR:?err}:/home/gbgames/Projects/Tools:ro
  - ${THIRD_PARTY_LIB_DIR:?err}/CustomLibs:/home/gbgames/Projects/Tools/CustomLibs:rw
And my Dockerfile adds the gbgames user and ensures that the home directory for that user exists.
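A minimal sketch of that Dockerfile addition, with flags that are my assumption rather than a copy of my actual file:

# Create the gbgames user with a home directory for the mounts to land in
RUN useradd -m -s /bin/bash gbgames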
So now I need to do:
THIRD_PARTY_LIB_DIR=/home/[my_username]/Projects/Tools docker compose up
And the resulting containers will have my Tools directory mounted read-only. If I don’t specify the environment variable, the command fails instead of silently spinning things up without it. I am also pleased that I can make a separate subdirectory read-write, as my current scripts build the custom libraries and place them in that directory, which means I won’t need to change my scripts too much.
Next up is mounting my current project’s git repo. I think I will similarly want to use an environment variable here to specify the source directory to make things more generic across projects.
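Following the same pattern, the compose entry might look something like this; PROJECT_SRC_DIR and the mount point are hypothetical names, since I haven’t implemented this yet:

volumes:
  - ${PROJECT_SRC_DIR:?err}:/home/gbgames/Projects/CurrentProject:ro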
But now that I don’t have to fight against tools that aren’t matching the docs, I should have an easier time of it.
Thanks for reading!
—
Want to learn when I release updates to Toytles: Leaf Raking, Toy Factory Fixer, or about future Freshly Squeezed games I am creating? Sign up for the GBGames Curiosities newsletter, and get the 19-page, full color PDF of the Toy Factory Fixer Player’s Guide for free!