SQL Server 2019 + Docker + WSL 2

For a long time, I had been thinking about the best way to experiment with SQL Server 2019 on my local laptop. I wanted to keep my host machine as clean as possible, reserved for running client applications such as Visual Studio Code, SQL Server Management Studio, etc.

After a few days of experimenting, I concluded that the best way (for me) is to install Ubuntu on WSL 2, install Docker inside Ubuntu, and run the SQL Server 2019 container from there, though there were a few bumps along the way to set it all up.

In this post, I will share the different options I have tried and the installation and configuration steps to set up such an environment.

What I Have Tried

Before I landed on my final solution, I tried a few different options. My key requirements were:

  • Running SQL Server from a VM or Docker, not from the host machine.
  • Database files need to be persisted on the host or outside of Docker containers.

First Try: Linux VM on Hyper-V

My first attempt was to run a Linux VM on Hyper-V and install SQL Server on the VM. However, this used a lot of local disk space (the VM itself and the VHD file). Plus, I am not familiar with Linux, and running a lot of bash commands is not something I am used to.

Second Try: Docker Desktop with Linux Backend

Then I turned to Docker Desktop, another technology I knew I should have started looking into ages ago; this was a great opportunity to explore it. There are a lot of benefits to running SQL Server with Docker, such as fast installation and resource isolation.

To start with, I used the Linux VM backend and ran a SQL Server 2019 container from there. It worked fantastically without many challenges. I liked the graphical interface in which I can configure Docker CPU/memory resources, start/stop containers, etc. I could open SSMS from the host and connect to the container just as I would connect to any server on the network.

However, I then decided to attach a Docker volume to the SQL Server 2019 container so that the database files would be saved in the volume; that way, when the container is gone, the database files are still persisted with Docker and can be used by other containers in the future. With the volume mapped to /var/opt/mssql/data, the container failed to start and exited with an error:

SQL Server 2019 will run as non-root by default.
This container is running as user mssql.
To learn more visit https://go.microsoft.com/fwlink/?linkid=2099216.
2020-11-13 03:01:34.09 Server      Setup step is copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf'.
2020-11-13 03:01:34.30 Server      ERROR: Setup FAILED copying system data file 'C:\templatedata\master.mdf' to '/var/opt/mssql/data/master.mdf': 2(The system cannot find the file specified.)
ERROR: BootstrapSystemDataDirectories() failure (HRESULT 0x80070002)

After some research online, it appears that the container runs as the non-root user "mssql", and this user does not have permission to write to the Docker volume. If the container ran as root, we would not see this issue; but running containers as a non-root user is the best practice for security.
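As a quick sanity check (not a fix I would keep), Docker's --user flag can force the container to run as root, which sidesteps the permission error at the cost of that best practice. A minimal sketch, reusing the image and volume names that appear later in this post:

# Run the container as root purely to confirm the failure is a volume
# permission issue; not recommended beyond a quick test.
$ docker run -e "ACCEPT_EULA=Y" -e 'SA_PASSWORD=yourStrong(!)Password' \
    -p 1433:1433 -d --user root --name "SQLServer2019" \
    -v SQLServer2019Volume:/var/opt/mssql/data \
    mcr.microsoft.com/mssql/server:2019-latest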

To get around this permission issue, I would need to change the owner of the volume folder to "mssql". However, the Docker VM itself is not directly accessible. To change the owner of the Docker volume folder, I would need to start another bootstrap container (of any image) with the volume mounted to it, and change the volume folder ownership from within that bootstrap container, as sketched below. This, to me, is just too much effort, so I decided to explore other options.
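For reference, a sketch of what that bootstrap approach could look like: mounting just the volume into a throwaway container (alpine here) and changing the ownership from inside it, assuming the volume name and the mssql uid (10001) that come up later in this post:

# chown the volume contents to the mssql uid from a disposable container;
# the volume keeps the new ownership after the container exits.
$ docker run --rm -v SQLServer2019Volume:/data alpine chown -R 10001:0 /data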

Third Try: Docker Desktop with WSL 2

After enabling WSL 2 on Windows and configuring Docker Desktop to use the WSL 2 backend, there are two distros installed by Docker Desktop (more from Introducing the Docker Desktop WSL 2 Backend):

PS C:\Users\Jay Fu> wsl -l -v

   NAME                   STATE           VERSION
 docker-desktop         Running         2
 docker-desktop-data    Running         2 

Now I can see the volume folders from Windows File Explorer from the address below:

\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes

I still faced the same issue of not being able to change the owner of my volume folder, because the docker-desktop-data distro is not directly accessible. I would still need a bootstrap container to change the volume folder ownership.

At this point, I figured I should give native Docker on Linux a go.

Final Solution

I already have WSL 2 enabled on Windows, so why not install Docker in a WSL 2 distro? Finally, with this configuration, I am able to change the ownership of volume folders from within the WSL 2 distro.

1. Install Ubuntu on WSL 2

Before we install Ubuntu, we will need to enable WSL 2 on Windows 10 (more from Windows Subsystem for Linux Installation Guide for Windows 10).
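For reference, these are the commands documented in that guide; run them from an elevated PowerShell window, reboot when prompted, and note that the WSL 2 Linux kernel update package may also need to be installed:

# Enable the WSL and Virtual Machine Platform optional features.
PS> dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
PS> dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

# After rebooting, make WSL 2 the default version for new distros.
PS> wsl --set-default-version 2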

Once WSL 2 is enabled, open the Microsoft Store from the Start Menu on the host and search for "Ubuntu".

Once Ubuntu is installed, search for Ubuntu from the Start Menu to open a shell. When it opens for the first time, it will ask you to create a new user name and password.

From the shell, type uname -a to view the version.

I have successfully installed Ubuntu on WSL 2 and can see the new distro with the WSL command on the host:

PS C:\Users\Jay Fu> wsl -l -v

   NAME                   STATE           VERSION
 Ubuntu                 Running         2
 docker-desktop         Running         2
 docker-desktop-data    Running         2 

2. Install Docker in Ubuntu

The Docker docs have detailed steps on how to install it (Install Docker Engine on Ubuntu and Post-installation steps for Linux). In bash, run the following commands:

# Update apt-get
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# Add Docker’s official GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Verify that you now have the key with the fingerprint
$ sudo apt-key fingerprint 0EBFCD88

pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]

# Add the Docker repo
$ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

# Install Docker
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

By default, the docker command needs sudo. Use the commands below to create a docker group and add the current user to it; after logging out and back in, we can use docker without sudo.

$ sudo groupadd docker
$ sudo usermod -aG docker $USER
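To pick up the new group membership in the current shell without logging out, the standard newgrp command can be used:

# Start a subshell with the docker group active.
$ newgrp docker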

Once Docker is installed, running the test hello-world image gives an error:

# Start the Docker daemon; the daemon must run as root.
$ sudo service docker start

# Create and run hello-world container.
$ docker run hello-world
docker: Error response from daemon: cgroups: cannot found cgroup mount destination: unknown.

This is because the Docker hello-world image expects the systemd sysfs entries that a normal Linux system would have; however, the Ubuntu distro from Microsoft does not seem to have these.

A workaround from WillUdstrand is to create a mock cgroup mount, and it worked for me. (Note that the SQL Server image itself does not need this cgroup workaround.)

$ sudo mkdir /sys/fs/cgroup/systemd
$ sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd

Now hello-world runs successfully:

$ docker run hello-world

 Hello from Docker!
 This message shows that your installation appears to be working correctly.

 To generate this message, Docker took the following steps:
  1. The Docker client contacted the Docker daemon.
  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
     (amd64)
  3. The Docker daemon created a new container from that image which runs the
     executable that produces the output you are currently reading.
  4. The Docker daemon streamed that output to the Docker client, which sent it
     to your terminal.

 To try something more ambitious, you can run an Ubuntu container with:
  $ docker run -it ubuntu bash

 Share images, automate workflows, and more with a free Docker ID:
  https://hub.docker.com/

 For more examples and ideas, visit:
  https://docs.docker.com/get-started/

3. Run SQL Server 2019 Docker Image

First of all, retrieve the image from Microsoft (more from Microsoft SQL Server on Docker Hub):

$ sudo docker pull mcr.microsoft.com/mssql/server:2019-latest

Have a look at the local images:

$ docker image ls

 REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
 mcr.microsoft.com/mssql/server   2019-latest         5ced205176bc        5 weeks ago         1.43GB
 hello-world                      latest              bf756fb1ae65        10 months ago       13.3kB

Start a SQL Server container with some default parameters:

$ docker run -e "ACCEPT_EULA=Y" -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d --name "SQLServer2019" 5ced205176bc

 1fe65acd075c6f1b9e576b513d8ba5f6bdec25555d07dfc3979c9ea426bde594

$ docker container ls

 CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
 1fe65acd075c        5ced205176bc        "/opt/mssql/bin/perm…"   37 seconds ago      Up 36 seconds       0.0.0.0:1433->1433/tcp   SQLServer2019
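Before moving on, we can check that SQL Server inside the container is accepting connections by running a quick query with sqlcmd (a sketch; /opt/mssql-tools/bin/sqlcmd is where the official image ships the command-line tools):

# Query the server version from inside the container.
$ docker exec -it SQLServer2019 /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U SA -P 'yourStrong(!)Password' \
    -Q 'SELECT @@VERSION'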

Then create a Docker volume which will be attached to the SQL Server container.

$ docker volume create SQLServer2019Volume

Take a look at where the volume folder is:

$ docker volume inspect SQLServer2019Volume

 [
     {
         "CreatedAt": "2020-11-13T02:33:58+13:00",
         "Driver": "local",
         "Labels": {},
         "Mountpoint": "/var/lib/docker/volumes/SQLServer2019Volume/_data",
         "Name": "SQLServer2019Volume",
         "Options": {},
         "Scope": "local"
     }
 ]

We need to change the owner of the directory “/var/lib/docker/volumes/SQLServer2019Volume/_data” to the user “mssql”. For this, we need to know the user id of “mssql”. Bash into the SQLServer2019 container:

$ docker exec -it SQLServer2019 bash

Once in the container, we need to find the user:

$ id
 uid=10001(mssql) gid=0(root) groups=0(root)

We can see that the uid for “mssql” is 10001, and this uid needs to become the owner of the directory “/var/lib/docker/volumes/SQLServer2019Volume/_data” on Ubuntu (it is OK even if a user with this uid does not exist there).

Let’s exit the SQLServer2019 container bash and get back to Ubuntu.
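As a side note, the uid could also be fetched without an interactive shell; a one-liner sketch using the same container name:

# Print the uid of the mssql user defined inside the container.
$ docker exec SQLServer2019 id -u mssql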

To access the docker directory and its subdirectories, we need to run bash as root:

# Run a root shell, as we can't "sudo cd" into a restricted folder.
$ sudo -i
$ cd /var/lib/docker/volumes/SQLServer2019Volume
$ ls -l

total 4
 drwxr-xr-x 2 root root 4096 Nov 14 12:52 _data

As we can see from the permissions, users other than root have only read and execute permissions on this folder.

Let’s change the owner to uid 10001 (mssql, even though that user does not exist in Ubuntu):

$ chown 10001 _data
$ ls -l

drwxr-xr-x 2 10001 root 4096 Nov 14 12:52 _data

Start another SQL Server 2019 container with a volume mounted:

# Delete the container created before.
$ docker container remove -f SQLServer2019

# Create new container with volume.
$ docker run -e "ACCEPT_EULA=Y" -e 'SA_PASSWORD=yourStrong(!)Password' \
    -p 1433:1433 -d --name "SQLServer2019" \
    -v SQLServer2019Volume:/var/opt/mssql/data 5ced205176bc

We can verify that the database files have been written into the volume directory.

# We should be at /var/lib/docker/volumes/SQLServer2019Volume
$ cd _data
$ ls
Entropy.bin  mastlog.ldf  model_msdbdata.mdf  model_replicatedmaster.ldf  modellog.ldf  msdblog.ldf  tempdb2.ndf  tempdb4.ndf
 master.mdf   model.mdf    model_msdblog.ldf   model_replicatedmaster.mdf  msdbdata.mdf  tempdb.mdf   tempdb3.ndf  templog.ldf

Switch back to the host machine and access SQL Server from SSMS using localhost.

If localhost doesn’t work, try using [::1] as the server name. In WSL 2, port 1433 is forwarded over TCP/IPv6, and SSMS is sometimes unable to resolve localhost to the IPv6 loopback address [::1].
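If neither name works, another option (an assumption on my part, not something I needed in this setup) is to connect to the Ubuntu distro's own IP address, which can be found from within the distro:

# Print the distro's IP address; it can be used as the SSMS server name,
# e.g. 172.x.x.x,1433.
$ hostname -I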

Create a new test database:
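One way to do this without leaving the terminal is sqlcmd inside the container (a sketch; the database name mytest matches the files listed below):

$ docker exec -it SQLServer2019 /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U SA -P 'yourStrong(!)Password' \
    -Q 'CREATE DATABASE mytest'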

In Ubuntu, I can see that the new database files (mytest.mdf and mytest_log.ldf) are saved in the volume directory:

# We should be at /var/lib/docker/volumes/SQLServer2019Volume
$ cd _data
$ ls
Entropy.bin  mastlog.ldf  model_msdbdata.mdf  model_replicatedmaster.ldf  modellog.ldf  msdblog.ldf  mytest_log.ldf  tempdb2.ndf  tempdb4.ndf
 master.mdf   model.mdf    model_msdblog.ldf   model_replicatedmaster.mdf  msdbdata.mdf  mytest.mdf   tempdb.mdf      tempdb3.ndf  templog.ldf

After I removed and recreated the SQL Server container, the mytest database still shows up in SSMS. This is because the new container loads the master database from SQLServer2019Volume, and the master database was updated with the mytest metadata by the previous container. I finally got what I needed!
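For completeness, that remove-and-recreate cycle is just a repeat of the earlier commands:

# Remove the container; the named volume and its database files are untouched.
$ docker container remove -f SQLServer2019

# Recreate it with the same volume; mytest reappears because master.mdf and
# the mytest files are loaded from SQLServer2019Volume.
$ docker run -e "ACCEPT_EULA=Y" -e 'SA_PASSWORD=yourStrong(!)Password' \
    -p 1433:1433 -d --name "SQLServer2019" \
    -v SQLServer2019Volume:/var/opt/mssql/data 5ced205176bc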

Conclusion

I have set up a local SQL Server 2019 instance (WSL 2 (Ubuntu) -> Docker -> SQL Server 2019 container) that I can access from SSMS on my host.

Now I can start experimenting with the new features of SQL Server 2019. And once I am done with my experiments, I can simply remove the Docker container, or even remove the Ubuntu WSL distro if I want to (though I think I will keep playing with Ubuntu).

Even with the challenges along the way, the whole experiment felt rewarding: I have learnt some bash commands and understand Docker much better than before.

On a side note, the experience of setting up a SQL Server instance with Docker is far better than a traditional Windows install. Once the SQL Server image is downloaded (which only takes a few minutes), starting up a SQL Server container takes just seconds! I am not so sure whether a Docker production deployment would feel the same; hopefully one day I will have a good reason to try it out in a production setup.