Tag Archives: kvm

Docker vs LXC/Ansible?

Containers

Why this question?

During the last DevOPS meetup @GrzegorzNosek asked a very good question – why should one use Docker instead of pure LXC/Ansible?

Honestly I’ve been trying to answer this question myself for a while. I did in some part (and included it in the talk I gave during that meetup: http://www.slideshare.net/d0cent/docker-rhel); when it’s about developers running development envs, Docker is just so much easier to use.

But how should I explain using Docker to myself? I’m a sysadmin and I love low-level – so LXC for me is just the natural way of doing things :)

Your face, your ass – what’s the difference?

(If you feel embarrassed / disgusted somehow by this header, please rewind 18 years and remember this: https://en.wikiquote.org/wiki/Duke_Nukem)

One thing you should know about me – I’m contributing to FedoraProject; lately I’ve been poking around the Fedora-Dockerfiles project (https://git.fedorahosted.org/cgit/dockerfiles.git/) – I’m doing it for fun and also because I wanted to learn more about Docker, as I’m running some Open-Source projects with friends and had to find an easy way for them to roll up their own development envs. Docker is the answer in this case.

So – currently I’m using Docker to prepare dev-envs for guys who know nothing about DevOPS / SysOPping; writing Dockerfiles is so much fun (and sometimes such a big hell :) ). And LXC? Together with Ansible I’m managing some servers’ resources (like VPN, DNS, some webservices etc). It’s also fun, it’s fast, rather reliable and it makes things so much easier to live with.

So any winners here?

But still – for me, as a guy who would rather use fdisk than gparted (or virsh than virt-manager ;) ), Docker is not the tool for managing services. And honestly I’m still looking for an answer to the question in the subject of this blogpost. For now, after a couple of weeks of poking around Docker (and months with LXC), I can tell one obvious thing: when you know LXC, Docker is just so easy (e.g. running some daemons inside spartan-like Docker images can be a tough fight when some libs or dependencies are missing). Also creating and running Dockerfiles is very easy – just like creating Ansible playbooks.

I think I’m gonna do the same thing I did a couple of years ago, when Xen and KVM were running shoulder to shoulder in the FOSS full-virt race. I’m just gonna use them both – Docker and LXC – and see how things develop. Docker is great and easy for managing apps only (so Continuous Development with Docker is the killer feature), and I’ll use LXC/Ansible for some basic services (GitLab, DNS, VPN etc). But for more fun – I’m gonna keep both tracks, so e.g. when deploying GitLab within LXC I’ll also create a Dockerfile for it.

This way I think I will have a really good answer in just a couple of weeks – and that should be a nice subject for some conference talk?

Follow my GitHub account (or even better – Twitter) – I’ll post updates there about new playbooks and Dockerfiles.

KVM L2 filtering / virsh nwfilter

KVM

A few days ago, while deploying another KVM host (this time in the Hetzner.de datacenter), I had to dig deep into networking internals. Hetzner has port security enabled on their switches’ ports, so there’s no way to use classical L2 bridging. But I’ll write another post about resolving that one (yup, I did it – it might also be useful for OVH users) ;)

This time I wanted to write a short post about network security on a KVM host, especially about ARP/IP spoofing. The problem? By default VMs can easily attack each other by spoofing each other’s MAC / IP addrs. Normally these types of attacks are mitigated on L2 – so we use e.g. port security, storm control, secure ARP tables and so on (sorry Juniper, I’m pure Cisco). We know that an L2 switch can easily be simulated in software with netfilter / bridging. It’s easy to create a network bridge, but it’s harder to create a security policy for L2. All that has to be done is to turn on ebtables and create some rules.
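
Just to show the idea – a raw sketch of what such ebtables rules could look like when written by hand (this is not what libvirt generates; “vnet0”, the MAC and the IP below are just made-up placeholders):

    # drop frames entering the bridge from the guest's tap device with a spoofed source MAC
    ebtables -A FORWARD -i vnet0 -s ! 52:54:00:aa:bb:cc -j DROP

    # drop ARP packets from the guest claiming an IP addr that isn't its own
    ebtables -A FORWARD -i vnet0 -p ARP --arp-ip-src ! 192.0.2.10 -j DROP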

And here KVM / libvirt turns out to be very helpful. Writing ebtables rules is not rocket science, but when managing multiple VMs it’s really easy to handle them with some higher-level tool. I ended up adding some rules to the VMs’ XML definitions:
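
Roughly something like this – a minimal sketch of the filter reference inside the guest’s <interface> section (the bridge name, MAC and IP here are just placeholders for the example):

    <interface type='bridge'>
      <source bridge='br0'/>
      <mac address='52:54:00:aa:bb:cc'/>
      <!-- apply the predefined clean-traffic filter and pin the guest's IP addr -->
      <filterref filter='clean-traffic'>
        <parameter name='IP' value='192.0.2.10'/>
      </filterref>
    </interface>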

So above you can see the “clean-traffic” filter. What is that? Here’s a little explanation:
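
You can dump the filter’s definition on any libvirt host yourself – abbreviated output below (the exact rule set may differ a bit between libvirt versions):

    # virsh nwfilter-dumpxml clean-traffic
    <filter name='clean-traffic' chain='root'>
      <filterref filter='no-mac-spoofing'/>
      <filterref filter='no-ip-spoofing'/>
      <filterref filter='no-arp-spoofing'/>
      <filterref filter='no-other-l2-traffic'/>
      <filterref filter='qemu-announce-self'/>
      ...
    </filter>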

So basically “clean-traffic” is a group of predefined filter references. Please read the libvirt documentation for details. A brief explanation would be: if clean-traffic is applied to a VM, then such a VM will not be able to spoof its MAC or IP addr (plus some more rules, as you can see above).

One could ask – why the heck didn’t I configure DHCP, and instead put a static IP addr into the VM’s XML config file? So – DHCP is great, but when you want to enable migration for VMs, then before the new host learns the VM’s IP addr / MAC, this VM can easily spoof it. So – it’s better to place the IP in the XML file.

Reference: https://libvirt.org/formatnwfilter.html

Ganglia, multicast && KVM on CentOS


This is just a short note – I have to post it, as this problem was really annoying and I couldn’t find any solutions on Google, so I had to resolve it by myself.

Don’t know what Ganglia is? Check here: http://ganglia.sourceforge.net/ – it just kicks ass :)

The problem? I installed gmond on all our hosts / guests (CentOS 5/6, KVM virt, latest Ganglia daemons), also properly configured the gmetad daemons and started the whole stack using multicast. And it was working – for a while. After about 10-20 minutes it just stopped working on the KVM guests. I saw no charts for those machines – but the gmonds (even in debug mode) didn’t reveal any problems. And the KVM hosts’ charts were fine (mostly..).

One more thing – in KVM guests I always set “deaf = yes” (I just don’t want too much multicast traffic – I set it to “no” only on some bare-metal hosts).
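
In gmond.conf that’s a single knob in the globals section – a fragment for illustration (the rest left at package defaults):

    /* /etc/ganglia/gmond.conf (fragment) */
    globals {
      daemonize = yes
      deaf = yes   /* only send metrics, don't listen to / aggregate the multicast channel */
      mute = no    /* but still report this guest's own metrics */
    }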

Ok, so the problem.. I spent some time with tcpdump / strace and got to the root of it – somehow there was no multicast traffic on the KVM guests (I turned off iptables on the KVM guests while resolving this whole issue). After some time I found 2 possible root causes:

  1. On KVM hosts, by default, there is a multicast filter set: no-ip-multicast. You can check whether it’s turned on – and if it is, turn it off (see the sketch after this list).

    And that should do this part of the trick.
  2. And also – on CentOS KVM guests we have to turn off rp_filter in /etc/sysctl.conf and reload the settings (again, see the sketch after this list).

    (You can try setting it to “loose mode” – so value 2 – instead of 0; it can work for you and it’s always safer.)
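
A rough sketch of both fixes (I’m writing this from memory, so the guest name “vm1” is just an example and your filter / config names may differ; the rp_filter part is plain sysctl):

    # 1) on the KVM host: check whether a multicast-dropping filter (like no-ip-multicast)
    #    is referenced in the guest's definition...
    virsh dumpxml vm1 | grep -i filterref
    # ...and if so, remove that <filterref/> line:
    virsh edit vm1

    # 2) on the CentOS KVM guest: relax rp_filter and reload sysctl settings
    cat >> /etc/sysctl.conf <<'EOF'
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.default.rp_filter = 0
    EOF
    sysctl -p

    # verify: Ganglia multicast traffic (default group 239.2.11.71, port 8649)
    # should now reach the guest
    tcpdump -n -i eth0 host 239.2.11.71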

That’s all for now. My sources for this one?