Terraform: how to provision a droplet with salt-minion (masterless)

saltstack, terraform, vagrant

I've managed to get a local development server running with Vagrant, provisioning it with Salt in a masterless configuration. Now I'm trying to reuse the existing Salt states to provision a production server, but so far I haven't figured out how to do it.

I've managed to create a DigitalOcean droplet with Terraform, and now I would like to provision it with Salt, ideally using the same .sls files used to provision the development machine. In Vagrant this is rather trivial, as one just needs to declare it in the Vagrantfile, like this:

config.vm.provision :salt do |salt|
  salt.minion_config = "salt/minion"
  salt.run_highstate = true
end

Afterwards, it's just a question of creating a state tree pointing to our state files, where we declare the packages we want installed, the files we want synced, and so on.
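
For reference, masterless Salt only needs a minion config that points at a local state tree, plus a top file, along these lines (paths are illustrative):

# salt/minion -- run without a master, read states from the local file system
file_client: local
file_roots:
  base:
    - /srv/salt

# /srv/salt/top.sls -- map every minion to a 'common' state
base:
  '*':
    - common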

With Terraform, however, there seems to be no similar way to do it. Terraform's documentation on provisioners is rather scarce. Other than invoking a Chef Client, declaring the files we want to copy directly in a provisioner, or invoking a script, I couldn't find any reference on how to invoke Salt to provision an instance. I wonder if it's even possible?

Here's my servers.tf:

module "hosting" {
    source                  = "./modules/server"
    droplet_count           = 1
    droplet_image           = "ubuntu-14-04-x64"
    droplet_region          = "nyc2"
    droplet_size            = "512mb"       
    dns_record_domain       = "site.org"
    droplet_backups         = true
    droplet_ipv6            = true
    droplet_privatenet      = true
}

The following is my droplet.tf file:

provider "digitalocean" {
    token           = "${var.do_token}"
}

resource "digitalocean_droplet" "droplet" {
    image           = "${var.droplet_image}"
    name            = "${var.droplet_type}-${format("%02d", count.index+1)}"
    region          = "${var.droplet_region}"
    size            = "${var.droplet_size}"
    ssh_keys        = ["${var.ssh_fingerprint}"]
    backups         = "${var.droplet_backups}"
    ipv6            = "${var.droplet_ipv6}"
    private_networking  = "${var.droplet_privatenet}"
}

resource "digitalocean_ssh_key" "default" {
    name            = "rsa-key-nopass"
    public_key      = "${file("./.ssh/rsa-key-nopass")}"
}

And finally, my dns_records.tf file:

provider "cloudflare" {
    email   = "${var.cf_email}"
    token   = "${var.cf_token}" 
}

resource "cloudflare_record" "ipv4" {
    count   = "${var.droplet_count}"
    domain  = "${var.dns_record_domain}"
    name    = "${element(digitalocean_droplet.droplet.*.name, count.index)}"
    value   = "${element(digitalocean_droplet.droplet.*.ipv4_address, count.index)}"
    type    = "A"
    ttl     = 3600
}

resource "cloudflare_record" "ipv6" {
    count   = "${var.droplet_count}"
    domain  = "${var.dns_record_domain}"
    name    = "${element(digitalocean_droplet.droplet.*.name, count.index)}"
    value   = "${element(digitalocean_droplet.droplet.*.ipv6_address, count.index)}"
    type    = "AAAA"
    ttl     = 3600
}

Thanks in advance for any help!


UPDATE

I've added the following two provisioner blocks:

provisioner "file" {
    source = "../salt"
    destination = "/etc/salt"
}

provisioner "remote-exec" {
    inline = [
      # install and configure salt-minion
      "curl -L https://bootstrap.saltstack.com -o install_salt.sh",
      "sudo sh install_salt.sh",
      "salt '*' state.apply"
    ]
}

The /salt directory is being copied successfully into /etc/salt, and Salt is being installed as well, but I'm getting a Script exited with non-zero exit status: 127 message before any state is applied. Why exactly, I don't know yet.
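
(Exit status 127 normally means "command not found". The default bootstrap installs only salt-minion, so the master-side salt command isn't on the box; a masterless apply presumably needs to go through salt-call instead. A sketch, assuming the minion config copied into /etc/salt sets file_client: local:)

provisioner "remote-exec" {
    inline = [
      # install salt-minion as before
      "curl -L https://bootstrap.saltstack.com -o install_salt.sh",
      "sudo sh install_salt.sh",
      # masterless: apply states from the local tree instead of asking a master
      "sudo salt-call --local state.apply"
    ]
}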

Best Answer

I've published an example that sets up a Salt master with Terraform on DigitalOcean, and from there starts some minions. Beware that some parts of the example assume that Terraform is started on a Windows machine (but not the ones listed below).

The essential bits from saltmaster.tf:

  1. copy your SLS files to the new server using terraform

    provisioner "file" {
        source = "master/srv"
        destination = "/"
    }
    
  2. copy a startup file to the new server

    provisioner "file" {
        source = "complete-bootstrap.sh"
        destination = "/tmp/complete-bootstrap.sh"
    }
    

    The contents of the complete-bootstrap.sh:

    #!/bin/bash -x
    # create the minion's key pair and pre-accept it on the master;
    # the accepted key's filename must match the minion id (here "master")
    mkdir -p /etc/salt/pki/master/minions
    mkdir -p /etc/salt/pki/minion
    salt-key --gen-keys=minion --gen-keys-dir=/etc/salt/pki/minion
    cp /etc/salt/pki/minion/minion.pub /etc/salt/pki/master/minions/master
    service salt-master start
    salt-call -l debug state.highstate
    service salt-minion start
    
  3. install salt minion and master

    provisioner "remote-exec" {
        inline = [
        # install salt-minion and salt-master, but don't start the services:
        # -M also installs the master, -X skips starting the daemons,
        # -A sets the master address the minion connects to
        "curl -L https://bootstrap.saltstack.com | sh -s -- -M -X -A localhost",
        # work around a possibly missing executable flag on the uploaded script
        "cat /tmp/complete-bootstrap.sh | sh -s"
        ]
    }
    

This should give you a running Salt master on DO, started from Terraform.
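
Not shown above: provisioners only run once Terraform can SSH into the droplet, so in the full example they live inside the droplet resource next to a connection block, and each minion can reuse the same bootstrap with -A pointing at the master's address instead of localhost. A rough sketch (the resource name digitalocean_droplet.master, the root user and the key path are assumptions, not taken from the published example):

    resource "digitalocean_droplet" "minion" {
        image    = "ubuntu-14-04-x64"
        name     = "minion-01"
        region   = "nyc2"
        size     = "512mb"
        ssh_keys = ["${var.ssh_fingerprint}"]

        # how Terraform reaches the droplet to run the provisioners
        connection {
            type        = "ssh"
            user        = "root"
            private_key = "${file("./.ssh/rsa-key-nopass")}"
        }

        provisioner "remote-exec" {
            inline = [
              # reuse the bootstrap, but point -A at the master instead of localhost
              "curl -L https://bootstrap.saltstack.com | sh -s -- -A ${digitalocean_droplet.master.ipv4_address}"
            ]
        }
    }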