It's that time of the year again: spring means KubeCon! Before the main conference started, I attended the AWS Container Days event, live-streamed on Twitch (hip!). Being AWS, it was of course highly EKS-centric, and very interesting. AWS is working on its own container-centric Linux distribution, called Bottlerocket. It sounds like a spiritual successor to CoreOS (my words, not theirs), with a focus on security and transactional, in-place updates.
I recently set up the kube-prometheus-stack Helm chart (formerly known as prometheus-operator) on our Kubernetes clusters at $dayjob. This chart sets up a full monitoring and alerting stack, with Prometheus for metrics collection & retention, Grafana for visualisation, and AlertManager for, well, managing alerts (!). The out-of-the-box monitoring is awesome: extremely detailed, with a wealth of built-in metrics & alerts. On the other hand, some of the alerts are very twitchy, and may fire under normal operation while everything is absolutely fine.
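For reference, a typical installation of that chart looks something like the following (the release and namespace names are hypothetical; the chart repository and chart name are the real ones):

```shell
# Add the community chart repository and install kube-prometheus-stack
# into its own namespace. "monitoring" is an arbitrary name here.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```

From there, the twitchier built-in alerts can be tuned or silenced with chart values.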
Last week saw KubeCon + CloudNativeCon Europe 2020 taking place fully remotely, rather than in-person in sunny Amsterdam. Here are my notes from the conference, and links to talks that I thought were worth mentioning! Of course, I didn't attend all the talks, so this isn't an exhaustive list – here is the full schedule. I've linked to the presentations on sched, and they should all be posted to YouTube shortly.
Developers are spoiled. Every package management system these days lets a developer define their project's dependencies in a simple format, whether that's a Cargo.toml in Rust, a Gemfile for Ruby apps, a pom.xml for Maven-based projects, a package.json for NodeJS, a composer.json in PHP… Declaring your dependencies and their desired versions in a standard, easily-parseable language allows you to track outdated dependencies, keep up to date with security updates, ensure other developers working on the same project use the same versions of dependencies, and set up reproducible builds… The benefits are immense, and it's universally acknowledged as a best practice in modern software development.
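That "easily-parseable" part is the whole point: tooling can enumerate your dependencies mechanically. A minimal sketch in Python, using a made-up package.json-style manifest (the package names and versions are purely illustrative):

```python
import json

# A hypothetical package.json-style manifest.
manifest = json.loads("""
{
  "name": "example-app",
  "dependencies": {
    "left-pad": "^1.3.0",
    "express": "^4.17.1"
  }
}
""")

# Because the format is machine-readable, a tool can list every declared
# dependency and compare the version constraints against a registry.
for name, constraint in sorted(manifest["dependencies"].items()):
    print(f"{name} {constraint}")
```

This is exactly what tools like outdated-dependency checkers do under the hood, just against a real registry instead of a hard-coded string.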
HTTP compression is ubiquitous on the modern web as a way to trade a small amount of computing power for vastly reduced bandwidth. It is usually achieved with the gzip algorithm, so I'll refer to HTTP compression and gzip compression interchangeably in this post. YNAP uses compression across the board to load pages faster, which makes users happier, and to reduce bandwidth costs, which makes the finance department happier.
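To get a feel for the trade-off, here's a small Python sketch compressing a hypothetical, repetitive HTML payload (markup compresses very well, which is why gzip pays off so much for web responses):

```python
import gzip

# A made-up HTML fragment repeated many times, standing in for a
# real product-listing page. Repetitive markup compresses very well.
html = ("<li class='product'>Example product listing</li>\n" * 200).encode("utf-8")

compressed = gzip.compress(html)

print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(compressed) / len(html):.1%}")
```

On payloads like this the compressed size is a tiny fraction of the original, at the cost of a little CPU on both ends.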
A core tenet of infrastructure as code is automation, which we took to heart when setting up the Kubernetes infrastructure for the frontend applications at Net-a-Porter. We split our infrastructure-as-code into three main repositories:

Terraform: The Terraform repository sets up the AWS infrastructure, including bringing up an EKS cluster and its related resources: autoscaling groups, S3 buckets, security groups, etc.

Helm: The Helm repository bootstraps a Tiller server in the kube-system namespace and installs a slew of infrastructure-level Helm charts that we rely on to deploy, monitor and maintain applications running in the cluster.
This is a follow-up to Caleb Doxsey's great article, Kubernetes: The Surprisingly Affordable Platform for Personal Projects. I think Caleb is absolutely right in his description of Kubernetes as a great platform even for small projects that would usually end up in “a small VPS somewhere”, especially if you already have experience with k8s. I've been using Kubernetes on AWS EC2 instances at work, and I was keen on trying Google's fully-managed experience in GKE, so I followed Caleb's steps and created my own cluster.
We recently got bitten by an innocent and standards-compliant improvement in Azure AD that effectively broke our OIDC-based authentication system for Kubernetes 1.9.x clusters. OIDC, short for OpenID Connect, is a convenient way of providing authentication in Kubernetes. The flow roughly goes as follows:

1. The user gets a JWT token from their OIDC provider (Azure AD)
2. The user sends this token to Kubernetes alongside their request
3. Kubernetes validates the token by verifying the JWT signature against the provider's public key
4. Kubernetes lets the authenticated request through

This theoretically ensures Kubernetes doesn't need to "phone home" by calling the authN provider for every request, as happens for example under the Webhook Token authentication mode.
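The token at the heart of this flow is just three base64url-encoded segments. As a sketch, here's how the header and payload can be unpacked with the Python standard library, using a made-up token built in place (crucially, this does no signature verification; in the real flow, Kubernetes checks the signature against the provider's public key before trusting any of these claims):

```python
import base64
import json

def decode_jwt_unverified(token):
    """Split a JWT into its header and payload dicts.

    No signature check is performed here -- this only shows the
    structure of the token, not the validation step.
    """
    header_b64, payload_b64, _signature = token.split(".")

    def b64url_decode(segment):
        # JWT segments use unpadded base64url encoding; re-add padding.
        return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

    header = json.loads(b64url_decode(header_b64))
    payload = json.loads(b64url_decode(payload_b64))
    return header, payload

def b64url(data):
    # Encode a dict as an unpadded base64url JSON segment.
    raw = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    return raw.rstrip("=")

# A made-up token for illustration; the signature is a dummy placeholder
# and the issuer/subject values are hypothetical.
token = ".".join([
    b64url({"alg": "RS256", "kid": "example-key"}),
    b64url({"iss": "https://example-idp.invalid/", "sub": "user@example.com"}),
    "dummy-signature",
])

header, payload = decode_jwt_unverified(token)
print(header["alg"])   # RS256
print(payload["sub"])  # user@example.com
```

The breakage we hit lived in those claims: when the provider changes what it puts in the token, anything that validates claims strictly can stop accepting otherwise-valid tokens.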
I recently read Caleb Doxsey's article on how surprisingly affordable Kubernetes is for personal projects and got inspired to spin it up for myself. I'm familiar with Kubernetes at work, but we run our clusters on top of EC2 instances in AWS, and I've always been curious about how a fully hosted Kubernetes offering like GKE would fare. Setting up Kubernetes on GKE itself following Caleb's directions was pretty straightforward (well… for the most part, but that's another subject for another post), and I ended up with an empty "hello" page from nginx.
I've been using Radicale as a contacts / calendar server (CardDAV / CalDAV) for some time now, and it worked flawlessly across macOS and Windows Phone for contacts and calendars. However, I recently got an iPhone, and synchronising calendars from Radicale just crashed the iPhone calendar app. It worked fine some of the time, but most times it just crashed, which is not great. So I went in search of a better self-hosted calendaring server.