As the next subject of my exploration of “what can DevOps do for me”, I went back to play with my favourite CMS. I prefer Ansible, as it is more controlled – push rather than pull – and it is agent-less. I do not have to set up services, scheduled tasks and agents on the controlled nodes, then sit around and wait for the CMS to do its job – I know, Chef and Puppet can do push too, but still… More importantly, my company’s environment is not that huge – we are talking about 80–100 VMs – so evaluating yet another configuration management system is hard to justify. I am also just exploring, only running Ansible playbooks against servers in our staging environment for now (although if all goes well, there is an RDS deployment coming up soon, where we finally retire the aged 2008 R2 server farm and replace it with a 2019 one… which will be a good time to test out my newly gained Terraform and Ansible skills).

Setting up the Ansible controller

Okay, so let’s jump right in. What have I done so far? I will not go into too much detail on how to set up Ansible – there are plenty of guides online from people more experienced in that matter. I have set up a Hyper-V VM on my work laptop – we mostly use VMware in the company, but I like to keep this fully separate, and unique 🙂 – running Ubuntu 18.04. I mostly connect to it via SSH (it is Linux, after all) and use my trusty VSCode.
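For reference, a minimal sketch of one way to install Ansible on Ubuntu 18.04 – the post does not spell out the exact method, so this is an assumption; `pywinrm` (with its Kerberos extra) is what Ansible needs to talk to Windows hosts over WinRM:

```shell
# Install pip, then Ansible and the WinRM/Kerberos client libraries
# (one possible approach; your distro packages may differ)
sudo apt update
sudo apt install -y python3-pip krb5-user
pip3 install ansible "pywinrm[kerberos]"
```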

I have set up all five of our company’s current domains in the Kerberos config (again, no point going into huge detail; there are guides online on that).
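As an illustration, a `/etc/krb5.conf` fragment for one such domain might look like this – the realm and KDC names here are placeholders, not our actual domains:

```ini
[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_kdc = true

[realms]
    EXAMPLE.COM = {
        kdc = dc1.example.com
        admin_server = dc1.example.com
    }
```

Each of the five domains would get its own `[realms]` entry along the same lines.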

Ansible lives in the /etc/ansible/ directory, and my setup looks like this currently:

  • the ansible.cfg – one can probably guess – is the generic config file for the program
  • the “group_vars” folder currently contains an “all” file, which holds my generic settings (telling Ansible to use Kerberos, WinRM, etc. – I would probably not recommend this if Linux VMs are expected, but in my case that is unlikely). The rest of the files are specific to each domain. They are not yet encrypted – I will play with Vault soon, and then I will likely upload the folder to my GitHub too.
  • “hosts” is where I keep my host list. I will review it soon, but currently the default .ini format will serve. The other “host*” files are just backups or an unfinished .yml conversion I am still working on.
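For context, the “all” file in group_vars would contain something along these lines – a sketch using the standard Ansible WinRM connection variables (the exact values here are assumptions, not copied from my real file):

```yaml
# group_vars/all – tell Ansible to reach every host via WinRM with Kerberos
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore
```

The per-domain files would then override things like the Kerberos user for that domain.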

Now, I am storing – for the moment – my playbooks in my home folder.

This folder is currently also available on my GitHub. As there are no variables here, I think it is OK to show it to the world.

In action

Okay, so let’s see what we can do here. First, let’s just check whether all my hosts are reachable:

It is probably self-explanatory, but this tells Ansible to run, against “all” nodes, a “-m” (module) – in our case “win_ping”, a module that does not just do an ICMP echo like the normal ping, but actually verifies whether the Windows host is connectable via the defined method (in our case, Kerberos over WinRM).
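The ad-hoc command described above looks like this (assuming the default inventory is in place):

```shell
# Connectivity check against every host in the inventory
ansible all -m win_ping
```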

Okay, so then I cooked up a playbook with a role from Ansible Galaxy. Well, actually, two roles.

It is quite basic – I am just getting the hang of it, allow me – and I would not recommend it in production environments. It basically performs Windows patching (applying any pending updates), then reboots the system if the patching requires it.

First, we gather facts, provided all hosts are available.
Then we check for outstanding Windows updates – there were none this time.
Then we check for pending reboots, and would reboot if there were any. Finally, a summary tells us there were no changes this time.
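The post uses two Galaxy roles for this, but the steps above can be sketched as a minimal playbook using only the built-in win_updates and win_reboot modules – a rough equivalent, not the actual roles:

```yaml
# patching.yml – install pending Windows updates, reboot only if required
- hosts: all
  gather_facts: yes
  tasks:
    - name: Install outstanding Windows updates
      win_updates:
        category_names:
          - SecurityUpdates
          - CriticalUpdates
        state: installed
      register: update_result

    - name: Reboot if the updates require it
      win_reboot:
      when: update_result.reboot_required
```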

More action

Now let’s see something more interesting.

More colours are good, yes? Okay, so what transpires here?

  1. So this is another playbook. This one is meant to deploy Notepad++, a trusty text editor, using Chocolatey, the awesome package manager for Windows. More specifically, it is meant to download the x86 version.
  2. I deliberately let this happen (of course!). Since I am on VPN, that server’s DNS was somehow not available to me – nor to the Ansible VM. I have fixed this, and it will work, I promise!
  3. You can see that the servers which are available but do not have the package are getting it installed.
  4. Finally, the recap tells you that one node was unreachable, and some had our package installed.
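The core of such a playbook can be sketched with the win_chocolatey module – a minimal version, assuming the “notepadplusplus” package name and using the module’s architecture option to force the 32-bit build:

```yaml
# notepadplusplus.yml – deploy the x86 Notepad++ via Chocolatey
- hosts: all
  tasks:
    - name: Install Notepad++ (32-bit) with Chocolatey
      win_chocolatey:
        name: notepadplusplus
        state: present
        architecture: x86
```

Because `state: present` is idempotent, re-running the play leaves already-provisioned nodes untouched.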

Let’s re-run this now.

After fixing the DNS problem, the re-run confirms that no changes were made or needed – we now have the package on all of them.

This concludes our business for today!