Thoughts on using GitLab as a single source of truth for SSH public keys

We have a whole bunch of Linux servers, but we don’t currently have a good central authentication system for them. In practice, this isn’t normally such an issue, except when managing SSH keys for allowed logins. Any time anyone with SSH access gets a new private key (a new development computer, a new developer, etc.) or decommissions an old one, someone has to log in to every server and add (or remove) the corresponding public key from that user’s account.

What I am thinking of doing is setting up a cron job that synchronizes everyone’s SSH public keys with those found in our GitLab instance. In effect, here is what that looks like (I have not yet tested this script):

# members of the sshlogin group, as a comma-separated list
USERS=$(awk -F: '$1 == "sshlogin" {print $4}' /etc/group)

IFS=,
for user in $USERS
do
    mkdir -p "/home/$user/.ssh"
    # overwrite the user's authorized_keys with whatever GitLab serves for that username
    curl "https://git.example.com/$user.keys" > "/home/$user/.ssh/authorized_keys"
    chown -R "$user:$user" "/home/$user/.ssh"
    chmod 0700 "/home/$user/.ssh"
    chmod 0600 "/home/$user/.ssh/authorized_keys"
done

This will go through the users who are members of the group sshlogin and replace each user’s authorized_keys file with the keys that GitLab knows about.

(Obviously, I will not be implementing this on the machine that hosts GitLab itself so that GitLab going down can’t lock us out.)

Before I actually implement this, I have the following questions:

  1. Is this a good idea?
  2. Am I missing anything?
  3. Is there any way to syntax-check the response to make sure it is valid (and not an error page, such as a 502 while GitLab is restarting during an update)?
  4. Is there any way (maybe via the API) to get the public keys together with their original descriptions, instead of having every description replaced by user (git.example.com)?
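
For 3 and 4, I am wondering whether the GitLab API would be a better fit than the plain .keys endpoint. A rough, untested sketch of what the body of the loop might become (I'm assuming GET /users?username= and GET /users/:id/keys behave the way I think, and that a token, if needed, is available in $GITLAB_TOKEN):

uid=$(curl -fsS --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
    "https://git.example.com/api/v4/users?username=$user" | jq -r '.[0].id')
curl -fsS --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
    "https://git.example.com/api/v4/users/$uid/keys" \
  | jq -er '.[] | "\(.key) \(.title)"' > "/home/$user/.ssh/authorized_keys.new" \
  && mv "/home/$user/.ssh/authorized_keys.new" "/home/$user/.ssh/authorized_keys"

Since the API response is JSON, jq failing to parse it (e.g. an HTML 502 page) would leave the old authorized_keys untouched, and .title should carry the key descriptions.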
1 Like

Hi,

that’s something we did with Git before we had Puppet, Ansible, etc. to manage this in a more usable and comfortable way. Do you have these tools available in your environment, or are you planning to go this route? If so, I’d recommend them instead of the manual approach.

Cheers,
Michael

1 Like

Thanks. I’m trying to work on moving us to Chef and to centralized authentication, but it’s a long and drawn-out process to convince the relevant stakeholders and I’m looking for a simple interim solution.

I would strongly recommend Ansible for this.

Store the keys in a folder, make a simple Playbook and inventory file, then run the Playbook.

You can then store the Ansible files in a GitLab repository.
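
Roughly, that setup could look like this (untested; all file names are made up, and I'm assuming the playbook uses the ansible.posix.authorized_key module):

# inventory.ini        - the list of servers
# keys/alice.pub ...   - one public-key file per user
# deploy-keys.yml      - playbook looping over keys/ with ansible.posix.authorized_key
ansible-playbook -i inventory.ini deploy-keys.yml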

2 Likes

Thanks for chiming in, I was about to say that Ansible would likely be a better fit than Chef. Also because the open source commitment of Chef is somewhat unclear – they announced that the OSS version of Chef won’t get any binary packages (rpm/deb) after April 2020; you will only get those for the paid enterprise versions. From a business-model view, that’s totally fine … but it will hinder adoption in the open source community, likely with missing contributions as a drawback. Here’s an interesting thread on that topic: https://www.reddit.com/r/devops/comments/b8l22o/chef_is_going_to_stop_open_source_releases/

I’m not saying that Ansible solves all the problems; I come from the Cfengine and now Puppet world myself. Ansible, for example, does not have the same well-tested and proven modules that Puppet has; on the other hand, it is agentless and executes tasks with no delay. Puppet had mcollective for this and has now invented Bolt. Another player on the market is Salt/SaltStack with its minions, also worth a try.

I personally would pick the tool where I can see the most community engagement, and a backing vendor with a vision that cares about its community.

For a short-term solution, keeping the keys in a Git repository and using cron jobs for syncing is totally legit. Just consider that people in your company may want to extend this and add more configuration templates and files there as well. Especially with permissions and user handling, this can get cumbersome, also with the different distributions in mind. You’ll likely say you are a RHEL shop, but who knows which container or VM is actually being run and requires maintenance and adaptation.

Therefore I’d suggest starting with a simple Ansible playbook which does the task of syncing the SSH keys. Since Ansible is agentless, you can test that from your local desktop as well; only SSH access to the remote host is required.
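
If you want to keep the cron idea, ansible-pull combines both approaches: each host checks a playbook out of a Git repository and applies it locally. A rough sketch (repository URL and playbook name are made up):

# run from cron on every server; -U is the repo, -d the local checkout directory
ansible-pull -U https://git.example.com/ops/ssh-keys.git -d /var/lib/ssh-keys sync-keys.yml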

Cheers,
Michael

1 Like

If I pull the public keys from GitLab, keys can be self-managed by each individual. If I put them in the repository, then only users who are allowed to modify the Playbook will be able to add keys.

What about having Ansible (or Chef, Puppet, etc) itself pull the public keys from GitLab?
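
Something like this, maybe (untested; I’m assuming the ansible.posix.authorized_key module accepts a URL for key=, and that sshlogin is an inventory group):

ansible -i inventory.ini sshlogin -b -m ansible.posix.authorized_key \
  -a "user=alice exclusive=true key=https://git.example.com/alice.keys"

That would keep the self-management aspect, since the keys still come from each user’s GitLab account.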

The last time I convinced my boss to let me try to set up Chef was over two years ago, so I’m not surprised that I didn’t see this.

Ansible looks nice, but I think we’ll probably lean toward Puppet since I have some friends who use it extensively and they can help me out.

1 Like

Puppet has a great community and a good variety of modules. Some of them are maintained by Voxpupuli as community modules. Like, you don’t need to reinvent the wheel for installing MySQL or Nginx.

https://github.com/voxpupuli & https://forge.puppet.com/

Getting into it might be overwhelming, so it is good to have someone you can ask 🙂

To add my 2¢, we use Puppet and Ansible. Agentless Ansible allows you to run Ansible as a GitLab pipeline job, which makes it easy to use for an SSH-key distribution repository. The Ansible playbook itself can also be tracked inside a repository, for example.

2 Likes

No, that’s not a good idea. At least not unless all of those users have root access, because they will once you run that (consider what happens if ~/.ssh/authorized_keys is a symlink). You can mitigate this by running each update as the user you’re updating, roughly as in the sketch below. But there are better ways to do this!
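
A minimal illustration of that mitigation (untested, assuming sudo is available; the point is that the write happens with the target user’s own privileges, so a hostile symlink can only point at files that user could already overwrite):

sudo -u "$user" mkdir -p "/home/$user/.ssh"
curl -fsS "https://git.example.com/$user.keys" \
  | sudo -u "$user" tee "/home/$user/.ssh/authorized_keys" > /dev/null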

Some of the better ways:

  • OpenSSH has an AuthorizedKeysCommand option which you can set in the system sshd config to fetch the authorized keys for a user when that user attempts to log in. This is relatively easy; you just need a (simpler!) script (see the sketch after this list).
  • OpenSSH actually has CA support for SSH keys. So you can tell your machines to trust the CA key, and then you just need to sign each new key and give the user the resulting certificate. This is, admittedly, pretty complicated (like all CA stuff), but it scales really well and doesn’t require, e.g., the CA machine to be up for authentication to work. Revocation lists (KRLs, in OpenSSH terms) are supported, too.
  • LDAP (or similar) isn’t actually that bad to run, and there are even free things you can quickly get up and running, like FreeIPA, in addition to less-friendly solutions like rolling your own with OpenLDAP. You get support for all account information, not just SSH keys. You get a way to set up different roles and which systems they have access to. You get a way to centralize even, e.g., sudoers config. Etc. Seriously consider this.
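
For the first option, a minimal sketch of the helper script, assuming GitLab’s <username>.keys endpoint (the script path and timeout are my own choices; untested):

#!/bin/sh
# /usr/local/bin/gitlab-authorized-keys
# sshd passes the username as the first argument when configured with %u
exec curl -fsS --max-time 5 "https://git.example.com/$1.keys"

# and in /etc/ssh/sshd_config:
#   AuthorizedKeysCommand /usr/local/bin/gitlab-authorized-keys %u
#   AuthorizedKeysCommandUser nobody

If the command fails (GitLab down), sshd simply gets no keys from it and only the other configured sources (e.g. AuthorizedKeysFile) apply.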

One thing you probably haven’t thought about much (because hopefully it hasn’t come up) is how quickly you can de-authorize a key. If a developer realizes they accidentally disclosed their private key, how quickly can you stop that key from being accepted? All three of the options above can do that very quickly: LDAP and AuthorizedKeysCommand do it instantly, the CA approach as quickly as you can get a revocation-list (KRL) update out.
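
For the CA route, revocation is roughly this (untested; paths are my own choices):

# add the leaked public key to a key revocation list (omit -u when creating the KRL for the first time)
ssh-keygen -k -u -f /etc/ssh/revoked_keys leaked_key.pub
# point sshd at it in /etc/ssh/sshd_config and distribute the file to all servers:
#   RevokedKeys /etc/ssh/revoked_keys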

2 Likes

As @m4r10k wrote, we are using GitLab/Puppet/Ansible.
For your question in particular, we are using Puppet (and of course with Git…). I think it would be possible (with some automation) to let the users manage their SSH keys in their own Git repositories and let Puppet use those to deploy them (or let the agents gather them) during their Puppet run. (Maybe, with some more thought put into it, it would also be possible to use the SSH keys from the users’ GitLab accounts if you use GitLab…)
For us, we manage them centrally, so changing/removing a user’s SSH keys requires us Puppet admins to change/remove it in Puppet and wait until all Puppet agents have run. Works like a charm in our case, no problems there. 🙂
As for more technical details: we use the “ssh_authorized_key” resource type, which comes with the Puppet agent itself, so no need for a special Puppet module there.
https://puppet.com/docs/puppet/latest/types/index.html & https://forge.puppet.com/puppetlabs/sshkeys_core
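
If you want to get a feel for that type without writing a full manifest, you can drive it ad hoc with puppet resource (untested; names and paths are made up):

# add a single key for user alice; the resource title becomes the key's comment
puppet resource ssh_authorized_key 'alice@laptop' ensure=present user=alice \
  type=ssh-ed25519 key="$(cut -d' ' -f2 /path/to/alice.pub)"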

2 Likes

Seconded. The AuthorizedKeysCommand is the way to go here. Here’s a simple Python script I wrote that can be used to fetch keys from an LDAP server. It can be easily modified to fetch the keys from any other store.

Even if you’re not ready to switch to a centralized auth system, setting up an LDAP server just to let users manage their SSH keys would be better than cobbling a non-standard solution together. It also gives you a foundation to start building your centralized auth system from. GitLab can sync SSH keys from an LDAP server so your users only have one place to manage their keys.

I also recommend you turn off ~/.ssh/authorized_keys. There is too much risk of an attacker who gets access to one machine leveraging that to move laterally inside your network. For users that need keys installed locally (instead of using AuthorizedKeysCommand, like system maintenance accounts that you want to ensure always work), put the keys someplace only root can write to, like /etc/ssh/authorized_keys/{username}, and configure SSH to look there instead.
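
A sketch of that layout (untested; %u is sshd’s token for the username, and the paths are just the ones from the post):

install -d -m 0755 /etc/ssh/authorized_keys
install -o root -g root -m 0644 alice.pub /etc/ssh/authorized_keys/alice
# in /etc/ssh/sshd_config:
#   AuthorizedKeysFile /etc/ssh/authorized_keys/%u
# (or "AuthorizedKeysFile none" if every login should go through AuthorizedKeysCommand)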