KVM.
I don’t like that it’s Linux-only.
However, I like that it lets me emulate several computer architectures with relatively few resources, unlike containers (which can’t emulate at all) or other VM hypervisors (Xen).
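A rough sketch of what that cross-architecture emulation looks like in practice: booting an aarch64 guest on an x86_64 host with QEMU, which falls back to software emulation (TCG) when KVM can’t accelerate the foreign architecture. The image path and machine options here are illustrative, not a complete working invocation (a `virt` machine typically also needs firmware or a kernel passed in):

```shell
# Hypothetical example: run an aarch64 guest from a non-aarch64 host.
qemu-system-aarch64 \
  -M virt -cpu cortex-a57 -m 2048 \
  -drive file=debian-arm64.qcow2,format=qcow2 \
  -nographic
```

On a matching-architecture host you’d add `-enable-kvm` to get hardware acceleration instead of emulation.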
Depends on your needs. If you need to emulate several computer architectures, absolutely go with KVM; but if you just need to run a bunch of services, Docker/Kubernetes may be the better option.
If I’m using libvirt anyway, I could just deploy applications in LXD.
I like Kubernetes.
- It encourages immutable infrastructure for apps by default. You update the pod to a new image rather than slowly mutating a VM with new versions.
- It has a basic rollout system which will be sufficient for quite a while.
- Its HTTP load balancing and routing is sufficient for most services, especially if you stick a CDN in front of it.
- Its TCP+UDP load balancing is enough to get started with, and the APIs are there for bypassing it when you need to.
- It makes it very easy to support failover between multiple VMs and cloud availability zones so that you don’t have (significant) downtime for machine failures or node updates.
- Lots of tooling built around it.
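The immutable-image model in the first two bullets can be sketched as a minimal Deployment manifest (all names and the image tag are illustrative). Instead of mutating a running VM, you bump the `image:` tag and re-apply, and Kubernetes performs a rolling update for you:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: app
        image: registry.example.com/my-service:v2   # bump this tag to roll out
        ports:
        - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` and watching `kubectl rollout status deployment/my-service` is the whole upgrade story for many services.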
I think my main tip is don’t get too caught up in the various tooling. If you’re trying to be productive, just pay GCP or another cloud provider and run with it. You can always migrate to another solution later, once the costs are significant relative to the opportunity cost of your development time. The migration to things like self-hosted NGINX ingresses or self-hosted Kubernetes is relatively small, so focusing on your product at the beginning is the most important thing.
Cool, thanks.
Honestly for what I do in my work and daily life, the container technology I end up using most is a tarball and systemd-nspawn/machinectl. It does most of the stuff I need (configuring the network, binding paths in, setting limits, whatever) with less fuss than the more ‘image’ oriented ones.
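A minimal sketch of that tarball workflow. The filesystem contents and machine name below are placeholders (a real root filesystem would come from `debootstrap` or similar); `machinectl import-tar` and the `systemd-nspawn` flags are real, but the commented commands require root, so they’re shown for illustration only:

```shell
set -e

# Sketch: package a (placeholder) root filesystem as a tarball.
mkdir -p rootfs/etc rootfs/usr/bin
echo "mymachine" > rootfs/etc/hostname
tar -czf rootfs.tar.gz -C rootfs .

# Running it then needs only (requires root; illustrative):
#   machinectl import-tar rootfs.tar.gz mymachine
#   systemd-nspawn -M mymachine --bind=/srv/data:/data

tar -tzf rootfs.tar.gz   # list what went into the image
```

No registry, no image format, no daemon: the “image” is just a tarball you can inspect with standard tools.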