Adding a Salt master
After I managed to provision an Nginx server using a masterless Salt setup, I felt it was time to introduce a master. This proved to be a frustrating challenge, but in the end I succeeded. This post describes how I did it; it might save you some time.
Given the current single-server setup, I want to reach the exact same result using a Salt master. The master should run on the Nginx server itself.
How hard can it be?
PKI: The problem
Salt relies on a public-key infrastructure to let the minions communicate with the master. This means that every minion has its own public/private key pair, and that the master knows the public key of every minion. Generating key pairs is not very complex, but distributing keys is.
The private key of the minion
- Must only reside on the minion
- Must be securely transferred to the master
- Must be explicitly accepted by the master
- Must be accepted before the provisioning starts
- Cannot reside in my Git repo
The Salt documentation describes the basic concept of preseeding, but it’s severely lacking when you try to find a solution for all five requirements above. The Salty Vagrant documentation provides some help, but more on that later.
PKI: The short answer
I have devised the following solution to the problem of preseeding the Salt master with minion keys:
- Use the openssl tools to create all public/private key pairs in a separate directory and exclude this directory using .gitignore
- Use the Salty Vagrant plugin to distribute the correct keys to the correct minions and to preseed the master with the public keys
This violates the first requirement, since the private keys now reside both on the computer building the infrastructure and on the minion being built, but for now that is acceptable. The second requirement is probably fulfilled: I hope and assume that Vagrant syncs its files over SSH. The third and fourth requirements are taken care of by the Salty Vagrant plugin. The last requirement is fulfilled by providing a means to generate the keys, instead of storing the keys themselves in my Git repository.
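The key-generation step can be sketched like this, using Ruby's OpenSSL bindings rather than the openssl command line (the keys directory and the minion name are examples, not my actual layout):

```ruby
require 'openssl'
require 'fileutils'

# Generate one RSA key pair per minion. The keys/ directory is excluded
# from the Git repo via .gitignore; only the generator is committed.
FileUtils.mkdir_p('keys')
%w(nginx01).each do |minion|
  key = OpenSSL::PKey::RSA.new(2048)
  File.write("keys/#{minion}.pem", key.to_pem)             # private key, stays local
  File.write("keys/#{minion}.pub", key.public_key.to_pem)  # public key, preseeds the master
end
```

Losing the keys is then a non-event: rerun the generator and rebuild.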
“But what if you lose the keys?”, you might ask. This is actually a funny thing. If I lose the keys, for example by running make clean, I just have to regenerate them and rebuild my infrastructure. And since my infrastructure is designed to be rebuilt from scratch over and over again, this is not a problem at all. At least for now. I might revisit this opinion when my infrastructure grows beyond twenty servers.
PKI: The hard part
Of course, this all fell apart when I tried to implement this. As it happens, the Salty Vagrant documentation refers to the Git HEAD, not to the official v0.4.0 you ordinarily install. Furthermore, there is a problem with the current HEAD where preseeding is done incorrectly. There is a pull request for that, but it has not been merged yet.
So, save yourself a lot of hair-pulling: fork the plugin, apply the pull request, and install the plugin from source. After this, we’re getting close…
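For reference, the provisioner configuration then looks roughly like this; the option names follow the Salty Vagrant README at HEAD, and the paths and minion id are assumptions:

```ruby
Vagrant.configure("2") do |config|
  config.vm.provision :salt do |salt|
    # Pre-generated keys for this minion (example paths).
    salt.minion_key = "keys/nginx01.pem"
    salt.minion_pub = "keys/nginx01.pub"

    # This box doubles as the master, with its own key pair.
    salt.install_master = true
    salt.master_key = "keys/master.pem"
    salt.master_pub = "keys/master.pub"

    # Preseed: minion id => public key, so the master
    # accepts the minion before provisioning starts.
    salt.seed_master = { "nginx01" => "keys/nginx01.pub" }

    salt.run_highstate = true
  end
end
```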
Where is my master?
By default, Salt minions look for a host called salt, which of course cannot be found in the setup described above, because the master is called nginx01.intranet. The easy solution would be to provide a custom minion configuration, but I decided on a more future-proof solution: the Hostmanager plugin.

This cute plugin updates the /etc/hosts file on the guests and supports host aliases. It can also be used as a provisioner, ensuring that all hosts are known to each other before the Salt provisioning starts. And it can update the /etc/hosts file on the host machine as well, which is a nice feature to have.
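The relevant Vagrantfile bits look roughly like this; the option names come from the vagrant-hostmanager README, while the node definition and IP are illustrative:

```ruby
Vagrant.configure("2") do |config|
  config.hostmanager.enabled     = true   # keep /etc/hosts on the guests up to date
  config.hostmanager.manage_host = true   # ...and on the host machine as well
  # Use the private network IPs, not the useless 127.0.0.1 that
  # VirtualBox reports for the machines.
  config.hostmanager.ignore_private_ip = false

  config.vm.define "nginx01" do |node|
    node.vm.hostname = "nginx01.intranet"
    node.vm.network :private_network, ip: "10.1.14.100"
    node.hostmanager.aliases = %w(salt)   # minions look for a host called "salt"
  end

  # Run hostmanager as a provisioner, before Salt,
  # so all hosts know each other in time.
  config.vm.provision :hostmanager
end
```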
Not so fast…
More fun ahead. For some idiotic reason, starting the Salt minion takes longer than starting the Salt master, which means that during a vagrant up provisioning run, the minion seems to be down when the master tries to call its highstate.
I tried all kinds of config tweaking to fix this, but in the end I gave up. I wanted to see some results.
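The workaround is a plain inline shell provisioner; the exact form below is a reconstruction:

```ruby
# Give the slow-starting minion time to come up and connect
# before the master fires off the highstate.
config.vm.provision :shell, :inline => "sleep 60"
```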
Did you notice the subtle sleep 60 call? This scientifically determined delay ensures that the Salt minion is up, running, and connected when we finally do what we’ve always wanted to do: run the highstate.
Time for a VPS
It’s fun to have a virtual infrastructure on your own computer, but my original goal has always been to deploy this to Digital Ocean as well. And guess what: the setup described above does not work on a VPS.
Remember this line?
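That would be the hostmanager setting; the exact snippet below is a reconstruction from the discussion that follows:

```ruby
# Seed the hosts files with the private network IPs.
config.hostmanager.ignore_private_ip = false
```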
This line is necessary when working with the virtualbox provider, to ensure that the host files are seeded with private IPs instead of the well-known but useless 127.0.0.1. But this setting is global, which means that my VPS will also search for the salt host on 10.1.14.100. Which is not going to work.
In theory, this can be overridden by using Vagrant’s provider override functionality:
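A sketch of that override, assuming the digital_ocean provider name:

```ruby
config.vm.provider :digital_ocean do |provider, override|
  # On the VPS there is no VirtualBox private network,
  # so let hostmanager use the machine's real address.
  override.hostmanager.ignore_private_ip = true
end
```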
But due to Vagrant’s arcane configuration inheritance settings, this nullifies other parts of my configuration.
I tried to solve this by making the value of ignore_private_ip depend on Vagrant’s currently active provider. But that concept does not exist. And it is not going to exist either.
As it turns out, the recommended approach is to define your own functions. The Vagrantfile is just a piece of Ruby code, so as long as you can program Ruby, you can do everything you want. It’s just that I was never interested in learning Ruby…
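For completeness, here is a sketch of such a helper; the names are entirely hypothetical, and since Vagrant has no config-time API for the active provider, it scrapes ARGV instead:

```ruby
# Hypothetical helper for the Vagrantfile: guess the active provider
# from the command line, falling back to Vagrant's default.
def active_provider(argv = ARGV)
  # "vagrant up --provider digital_ocean" form
  argv.each_cons(2) { |flag, value| return value if flag == '--provider' }
  # "vagrant up --provider=digital_ocean" form
  argv.each do |arg|
    return arg.split('=', 2).last if arg.start_with?('--provider=')
  end
  ENV.fetch('VAGRANT_DEFAULT_PROVIDER', 'virtualbox')
end

# In the Vagrantfile this drives the hostmanager setting:
#   config.hostmanager.ignore_private_ip = (active_provider != 'virtualbox')
```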
If anyone has a better solution, please let me know! But despite my misgivings about this hack, it works.
Mission Accomplished! (for real now)
Don’t forget: my infra is a repo, so go ahead and fork it!