Creating a docker container from a Raspberry Pi Zero image... and the other way around
29 Dec 2020

I’m a big fan of the Raspberry Pi Foundation and a user of their single-board computers as well. In the past two years, I worked on developing tiny, under-250g, collision-resilient quadcopters that used a Raspberry Pi Zero W (RPI Zero W) as their main computer. The reasons I chose the RPI Zero W were size/weight, power consumption, price and the huge community of users. I even considered using the Banana Pi Zero because it had a faster CPU with more cores, but I gave up in favor of the RPI after talking to a friend who was struggling to set it up. Nowadays, I’m starting a new project on smart IoT sensors that, I hope, will help businesses in the tourism sector recover faster by understanding the flow of tourists while respecting people’s privacy. For that, I need hardware that is low power, small, reasonably priced and well supported… the RPI Zero W was the first thing that came to mind, but it is not powerful enough for some of the image processing I’m planning to do on the edge. One way to speed things up is to compile the code directly for (on) the RPI Zero W. Currently, it’s possible to use cross-compilers, but I was having trouble cross-compiling the TensorFlow Lite runtime library, and that’s why I’m writing this post.
My solution to the cross-compilation failure was to compile it directly on the RPI Zero W. However, the RPI Zero has only one core (armv6), 512MB of memory and an SD card as its hard drive, so it is definitely not the right tool for the job. So, I solved my problem with the help of Docker and QEMU, but I did it in a different way than people usually do. Instead of using an RPI Zero compatible image from Docker Hub, I decided to create an image from my current RPI Zero W SD card.
Everything I’m explaining here assumes you are using Linux (Ubuntu 20.04, Docker version 19.03.13). If you don’t have it, just use a virtual machine like VirtualBox. Once you connect your micro SD card to your computer (e.g. using an adapter), Ubuntu will mount it automatically under /media/<your username>/rootfs. In my situation, Ubuntu mounted the Linux partition of my RPI SD card under /media/ricardodeazambuja/rootfs, so please keep this in mind when copying the instructions from here ;)
Before you begin blindly copying and pasting commands: they may have typos that could cause the destruction of planet Earth depending on where you are logged in… just saying.
The first step is to copy all the stuff needed to create a docker image into a tar file. I did this using this command:
$ sudo tar -cvf raspbian_aiy.tar --exclude="dev/*" --exclude="proc/*" --exclude="sys/*" --exclude="tmp/*" --exclude="run/*" --exclude="mnt/*" --exclude="media/*" --exclude="lost+found" -C /media/ricardodeazambuja/rootfs/ .
The exclude arguments are there to avoid copying directories that are specific to the running machine (devices, kernel interfaces, mount points), so we leave them out. Docker will be able to create and run an ARM-based container thanks to QEMU. To enable that, all you really need to do is execute the line below:
$ docker run --privileged linuxkit/binfmt:v0.8
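To double-check that the QEMU handlers were registered (the exact set of entries may vary with the image version, but you should see a qemu-arm entry among them):
$ ls /proc/sys/fs/binfmt_misc/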
Now docker will be capable of importing the files into a new image:
$ docker import raspbian_aiy.tar ricardodeazambuja/pizero:aiy
Finally, you can create a new container (the command mounts your current directory at /data inside the container, so be careful with what you do to the files in there… after all, you are root by default inside docker!):
$ docker run --name rpi_img -v $(pwd):/data -it -e QEMU_CPU=arm1176 ricardodeazambuja/pizero:aiy bash
And confirm it’s really armv6:
# uname -a
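If QEMU is doing its job, the machine field near the end of the output should read armv6l. Just as an illustration (the hostname is the container ID and the kernel version comes from your host, so yours will differ):
Linux <container id> <host kernel version> armv6l GNU/Linux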
If you get an error like this one:
ERROR: ld.so: object '/usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so' from /etc/ld.so.preload cannot be preloaded (cannot open shared object file): ignored.
The solution is to edit /etc/ld.so.preload:
# nano /etc/ld.so.preload
And replace /usr/lib/arm-linux-gnueabihf/libarmmem-${PLATFORM}.so with /usr/lib/arm-linux-gnueabihf/libarmmem-v6l.so.
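If you’d rather not open an editor, a one-liner should do the same thing (a sketch, assuming the file contains the ${PLATFORM} placeholder exactly as shown in the error above):
# sed -i 's/\${PLATFORM}/v6l/' /etc/ld.so.preload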
Using this docker container, you can install and compile lots of things using all the memory and CPU cores available on the host computer! When you are just compiling something, at the end you can copy the resulting files into the /data directory to share them between the docker container and the host.
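Just to make the idea concrete, here is a purely hypothetical session (the project name and paths are made up; the pattern is the point: build with all the host’s cores, then drop the results into /data):
# apt-get update && apt-get install -y build-essential git   # inside the container, as root
# git clone https://github.com/example/some_project.git /root/some_project   # hypothetical project
# cd /root/some_project
# make -j$(nproc)       # $(nproc) reports the host's cores, not the Pi's single one
# cp some_binary /data/ # /data is the host directory mounted with -v, so this survives the container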
However, you can also go the opposite way and generate a new RPI image from docker!
Going from docker image to SD card is not as easy, though… but it’s not impossible either ;)
Considering you are done with the docker container, just use ctrl+d to exit it. I used --name rpi_img, so your hard work should not be lost (I can’t guarantee anything…). A container is NOT the same as an image, though, so we need to commit it to update the image:
$ docker commit rpi_img ricardodeazambuja/pizero:aiy
After this, if you fancy, you can remove the container for good (docker rm rpi_img).
It’s necessary to know the size of the final docker image, and the command docker images | grep pizero comes in handy for that. If the size is smaller than the original rootfs partition (basically, the size of your SD card minus a few megabytes), things are much easier. It would be possible to do the next steps directly on the SD card, but it would be slow… for that reason I will make a copy (use lsblk to confirm the device name of your SD card - plug and unplug it to check what disappears/appears; in my situation it was /dev/sdb). The command below (sudo dd ...) is capable of causing havoc, so think thrice before hitting enter.
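Before running it, it’s worth glancing at lsblk one more time. The output looks something like this (names and sizes here are made up for illustration; the SD card is the removable disk whose partitions match boot and rootfs):
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
└─sda1   8:1    0 465.8G  0 part /
sdb      8:16   1   7.4G  0 disk
├─sdb1   8:17   1   256M  0 part /media/ricardodeazambuja/boot
└─sdb2   8:18   1   5.4G  0 part /media/ricardodeazambuja/rootfs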
$ sudo dd if=/dev/sdb of=rpi_image.img bs=1M status=progress oflag=dsync
As I mentioned, it’s quite annoying to increase the size of a partition. Why would you need that? Usually, when you download an image file for your RPI from the internet, it will expand automatically the first time you boot it (or you can use sudo raspi-config for that). In this case, the rootfs partition will end up almost the same size as your SD card (the boot partition is, in general, much smaller than the rootfs).
So, I will repeat: the easiest way to increase the size of a partition is to boot it on the RPI and use raspi-config (if it doesn’t expand automatically). Here is the HARD way; feel free to skip to the line starting with Done!. Let’s check the size of rootfs (it’s the Linux one in the output below):
$ fdisk -l rpi_image.img
And here is my output:
Device Boot Start End Sectors Size Id Type
rpi_image.img1 8192 532479 524288 256M c W95 FAT32 (LBA)
rpi_image.img2 532480 11790208 11257729 5.4G 83 Linux
Copy the Start value of the Linux partition (532480 here; it may be a different value for you!!!).
Imagine the docker image is 1G bigger than what you read under the Size column. Let’s add 1G to our RPI image file.
$ truncate -s +1G rpi_image.img
The next command will edit the partition table:
$ fdisk rpi_image.img
Delete the Linux partition (d); remember, it was the second one (2). Create a new partition (n). Select primary (p). Partition number is 2. Enter the number you copied earlier as the first sector (532480). Accept the default value for the last sector. Don’t remove the signature, answer N! Print to confirm (p). Save it (w).
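For reference, the fdisk dialogue looks roughly like this (prompts vary a bit between fdisk versions, and your sector numbers will differ):
$ fdisk rpi_image.img

Command (m for help): d
Partition number (1,2, default 2): 2

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2): 2
First sector: 532480
Last sector, +sectors or +size{K,M,G,T,P}: <press enter for the default>
Do you want to remove the signature? [Y]es/[N]o: N

Command (m for help): p
Command (m for help): w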
All the stuff above only resized the partition; the filesystem hasn’t changed yet. Check for the next available loop device:
$ losetup -f
The output depends on the computer, so I will pretend it was /dev/loop46. Remember the number we copied a few lines ago (532480)? It will be multiplied by 512 (the sector size), resulting in 532480*512:
$ sudo losetup -o $((532480*512)) /dev/loop46 rpi_image.img
$ sudo e2fsck -f /dev/loop46
$ sudo resize2fs /dev/loop46
$ sudo losetup -d /dev/loop46
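If you want to make sure the resize took, re-running fdisk should now show the Linux partition roughly 1G bigger than before:
$ fdisk -l rpi_image.img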
Done! Finally our RPI image has a rootfs partition that is big enough to receive the extra payload generated while using docker. One thing about mounting image files is that they are read-only by default unless you copy them to an external device like an SD card. So, remember the 532480*512? It will be used again to mount the rootfs partition as read-write.
$ sudo mount -o loop,rw,sync,offset=$((532480*512)) rpi_image.img /mnt/
$ cd /mnt
$ docker run --rm -v $(pwd):/data -it -e QEMU_CPU=arm1176 ricardodeazambuja/pizero:aiy bash
Inside the docker container:
# sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found","/data/*"} /data
If you want to test the command above first (I had to delete a space when using it on raspios-buster…):
# sudo rsync --dry-run -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found","/data/*"} /data
The command above will sync the changes back to the original RPI image (I learned about all those options here). It worked for everything I tested, but it may not work for something else… Close the container (ctrl+d) and:
$ cd ~
$ sudo umount /mnt
Write the RPI image back to the SD card (remember to check with lsblk that of= is pointing to the SD card, or it will destroy something else):
$ sudo dd if=rpi_image.img of=/dev/sdb bs=1M status=progress oflag=dsync
It is probably better to use the Raspberry Pi Imager instead of dd, just to be safe.
And that’s it! I hope everything worked as expected and now you have your super-duper-new Raspberry Pi image :)
UPDATE (08/04/2021):
If you think your disk image is too big, it may be worth giving PiShrink a try.
UPDATE (28/05/2021):
I’m always searching for the steps to reset a user password WITHOUT having to boot into the system. This solution allows you to reset the password directly on the SD card (you need to do it from another system running Linux; it could be a VM). It’s quite handy when you are working with headless Raspberry Pis.
UPDATE (17/07/2023):
The solution mentioned in the paragraph above will use the default password encryption of the host machine. That may not match the one used by the system whose password you want to change (e.g. old Raspbian uses SHA512 and the latest Debian/Ubuntu use yescrypt). When you check the password for the user pi inside the SD card root partition’s /etc/shadow, you will see that it starts with $6$, where the 6 (SHA512) could be another character (e.g. 1, 5 or y, more details here) and it indicates the method used to hash the password. Considering you found a 6 there, the easiest way to generate the hashed password, salt included, is to use openssl passwd -6 "your_new_password_here". I have no idea why, but some encryption methods are not easily available. The best tool is mkpasswd, but you usually need to install it (sudo apt install whois - why is it in the whois package? No idea…). Using mkpasswd you can easily check the available methods with mkpasswd -m help, and for yescrypt you just type mkpasswd -m yescrypt your_new_password_here, or it will prompt you for a password if you omit it.
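Putting it all together, here is a minimal sketch (assuming an SHA512 hash, the user pi, and the rootfs mounted under /media/ricardodeazambuja/rootfs as before; adjust names and paths to your case):
$ NEWHASH=$(openssl passwd -6 "your_new_password_here")   # produces something like $6$<salt>$<hash>
$ sudo sed -i "s|^pi:[^:]*:|pi:${NEWHASH}:|" /media/ricardodeazambuja/rootfs/etc/shadow   # swaps only the hash field of the pi user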
UPDATE (04/02/2023):
To allow us to specify the platform (at least, my tests with import failed), we need to create a file called Dockerfile with the content below:
FROM scratch
ADD raspbian_aiy.tar /
CMD ["bash"]
After that, we enable the QEMU emulation in a different way:
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
And let’s create the image using buildx:
docker buildx build --platform linux/arm/v6 -t ricardodeazambuja/pizero:aiy .
Since we are using armv6, it’s still necessary to specify -e QEMU_CPU=arm1176:
docker run --rm -it --platform linux/arm/v6 -e QEMU_CPU=arm1176 ricardodeazambuja/pizero:aiy uname -m
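If everything is wired up correctly, the only output should be the machine string (assuming, as above, that the emulation reports the armv6 core):
armv6l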
For other platforms (e.g. --platform linux/arm64 or --platform linux/arm/v7) it’s not necessary to set the environment variable QEMU_CPU.
One extra note here: while compiling software using docker you may get errors related to the automatic detection of the host CPU (see this discussion for more details). In the specific case of OpenCV, I had to modify some of the tests inside cmake/checks/ (e.g. cpu_neon.cpp and cpu_fp16.cpp).