Software Engineering, Architecture, Technology.

My Desktop Setup (Linux)

When I am on my Linux machine, I do absolutely everything inside Docker containers. People think I am crazy, but my thinking is: why the hell would you install all that crap directly on your computer?

Perhaps I am just living too far in the future, in a world where you can simply run a process on your machine without it leaving shit stains everywhere. Moreover, I do not care what dependencies your process needs: as long as it is in a container, it is not infecting my machine. I would rather spend half a day figuring out what dodgy driver calls an application is trying to make, and fix the container image, than install it directly on my machine. The result is spotless!

It was a great talk at DockerCon 2015 by Jessie Frazelle that made me realise this was actually possible, and it inspired me to pursue this goal.

Understanding containers

It's only once you fully understand what a container is that you realise it's just a process with a bunch of duct tape around it, preventing it from seeing things on the host. Unlike virtual machine hypervisors, the Linux features behind containers, cgroups and namespaces, are accessible to any Linux user and easy to learn about. Words like "syscalls", "cgroups" and "namespaces" scare people off, so they stick to the idea that a container is some kind of VM.
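You can actually see this duct tape with standard Linux tools, no Docker required. Here is a minimal sketch; the unshare invocation in the comment needs root, so only the harmless inspection command runs:

```shell
# Every process on Linux already runs inside namespaces; a container
# simply gets fresh ones. List the namespaces of the current shell:
ls /proc/self/ns

# With root, unshare(1) hands a process its own PID and mount
# namespaces -- the heart of what "docker run" does:
#   sudo unshare --pid --fork --mount-proc ps aux
# (inside that namespace, ps sees only a couple of processes
# instead of everything on the host)
```

If you run the commented unshare line as root, the isolated ps output makes the "container = process + duct tape" idea very concrete.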

In the video below, I attempt to simplify Docker container concepts for folks who are new to the topic. Most people have used a Windows machine and can relate to how it operates, so I use well-known Windows concepts to translate "cgroups" and "namespaces" into relatable things. Hope this helps.

My Desktop Repository

Inspired by all of the above, I created a GitHub repository and started accumulating Dockerfiles.

First of all, shells like Bash have a feature called aliasing: the ability to map a short name to a command or shell script to execute. This gave me an idea. Instead of starting "chrome" from the command line directly, I could alias "chrome" to docker run chrome ... Here's an example:
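For readers who haven't used aliases before, here is a tiny self-contained Bash sketch of the mechanism (the alias name "greet" is made up for illustration):

```shell
#!/usr/bin/env bash
# An alias is pure text substitution done by the shell before the
# command runs. Interactive shells expand aliases by default;
# non-interactive scripts need this shopt enabled first.
shopt -s expand_aliases

alias greet='echo hello from an alias'
greet   # the shell rewrites this line to: echo hello from an alias
```

The docker run alias below works exactly the same way, just with a much longer right-hand side.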

alias chrome='xhost local:root && docker run -d --rm --net host \
  -e DISPLAY=unix$DISPLAY \
  -v /etc/localtime:/etc/localtime:ro \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /home/marcel/apps/chrome/profile/:/data \
  -v /home/marcel/Downloads:/home/chrome/Downloads \
  --device /dev/snd:/dev/snd \
  --device /dev/dri \
  -v /dev/shm:/dev/shm \
  --name chrome aimvector/chrome'

Breaking down that alias:

- xhost local:root grants the root user access to the running X server, which is what gives the containerised app a graphical interface.
- Mounting /etc/localtime (read-only) keeps Chrome on the same time as the host operating system, which matters for email and calendars.
- Mounting /tmp/.X11-unix gives the container access to the X11 socket so Chrome starts in GUI mode. We also need to set the display variable with DISPLAY=unix$DISPLAY so Chrome knows which display to bind to.
- Mounting our "Downloads" folder lets us gather downloaded files on the host, a pretty straightforward one.
- Lastly, the --device flags mount in the sound (/dev/snd) and rendering (/dev/dri) devices, so that audio and video work correctly.

Dockerizing applications

Most importantly, we have to treat the process like any other native process. Natively installed applications will look for a graphical interface like X11 to bind to. Processes will also try to maintain state on disk. Chrome, for example, has your user profile: when you close and reopen the browser, it is able to maintain session state for the websites you've logged into. So we have to mount folders in from the host if we'd like to maintain this state. These are usually easy to find under your ~/ home folder as "dot" folders. (Example: a "~/.chrome" folder)
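Locating those state folders is a one-liner with find. The sketch below runs against a throwaway temp directory with a made-up ".someapp" name so it is safe to execute anywhere; against your real home you would point find at $HOME instead:

```shell
# Demo of locating "dot" state folders, using a throwaway directory
# and a hypothetical app name (".someapp"):
demo="$(mktemp -d)"
mkdir -p "$demo/.someapp/profile"

# For real apps, the equivalent would be:
#   find "$HOME" -maxdepth 1 -type d -name '.*'
find "$demo" -maxdepth 1 -type d -name '.*'

# Any folder you find can then be mounted into the container, e.g.
# (image and paths are illustrative, not from my repo):
#   docker run --rm -v "$HOME/.someapp:/home/user/.someapp" someimage
rm -rf "$demo"
```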

The easy part is mounting those in, along with all the devices: you either know what the app needs and add the mount points, or you just add all the mount points you know of :) The hardest part is getting libraries and dependencies right when creating the Dockerfile. Most apps will assume you have libraries installed, because a desktop Linux OS normally comes packed with them. Some apps document their dependencies well and list exactly what you need; others are not that helpful. Inside a container, not all libraries are installed, since the container has its own file system. Most C applications are friendly enough to explode with an error message stating which libraries are missing, so you can simply apt-get them in.
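Before the app even explodes, ldd can tell you what it links against. A quick sketch (using /bin/ls as a stand-in binary; the package name in the comment is just an example, not a real dependency of ls):

```shell
# ldd prints the shared libraries a binary links against. Run inside
# a minimal container image, any "not found" entries tell you exactly
# which libraries still need to be installed.
ldd /bin/ls

# In a Debian-based image you would then map each missing .so to a
# package and install it, e.g. (illustrative package name):
#   apt-get update && apt-get install -y libglib2.0-0
```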

The most difficult cases I have found so far are applications that just do weird things, like accessing folders that are not documented, or expecting certain graphics drivers and libraries, such as NVIDIA or GL. That's where the strace tool comes in. You can trace the syscalls of the containerised process and find out what calls/requests it is trying to make; if this is the case, you should see I/O errors as it looks for missing files. Video calling apps like Zoom and Skype are quite simple, as they don't appear to do anything weird beyond using the devices you mount in.
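A sketch of the technique: the strace invocation in the comment needs root and a real PID (get one from docker top), so only a harmless illustration of the resulting error actually runs here:

```shell
# Attach strace to the containerised process and filter for
# file-related syscalls; undocumented paths the app tries to open
# show up as failing openat() calls:
#   sudo strace -f -e trace=%file -p <pid>
#
# Missing files appear as ENOENT errors. At the shell level, the
# same failure looks like this (deliberately bogus path):
cat /no/such/config/file 2>&1 || true
```

Once you know which path the app wants, you can either mount it in from the host or bake it into the image.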

One problem I still have not solved is screen sharing. Zoom and OBS simply produce black screens when propagating screen content through streams or calls. I'm sure I'll figure it out one day :)

So far the "Container on the Desktop" dream's been real and impressive. As a former Windows user, it's helped me dramatically in learning how containers and Linux work. This is something I am still learning every day. As I learn more things I will create videos and share. Head over to my desktop repo and give these containers a spin if you are on Linux. Come say hi on YouTube too! :)