Tag: kubernetes

  • On Active Deployment

    I’ve watched the Kuby talk at RailsConf, and I have some opinions on Active Deployment. The talk presents an attempt to provide a standard way of deploying Ruby on Rails applications on a cloud platform (Kubernetes). It also mentioned Capistrano and its still-relevant role in the community.

    This left me with a question: if you plan to deploy a Rails application to production in 2022, are you now forced to use Kubernetes?

    Over the years, I’ve been deploying Ruby on Rails applications on different types of infrastructure: virtual servers, managed platforms-as-a-service, and lately, Kubernetes clusters. Each type has its own strengths and weaknesses.

    I think one way to decide is to look at how many resources, in terms of skill, time, and money, you can afford to spend not just on bringing up the infrastructure but also on sustaining its operation over time.

    The talk mentioned several disadvantages of deploying with Capistrano; for one, you still need to set up the software dependencies before a Rails application can be deployed successfully. I think this upfront cost is a valid trade-off in exchange for possibly cheaper operating costs (running the whole stack on a single server versus provisioning multiple servers and running Kubernetes on top of them).

    I’ve had two reasons to use Kubernetes in deploying a Rails application: (1) multitenancy, where multiple instances of the same application will be deployed, and (2) adopting a container-based delivery pipeline, which eliminates a whole class of problems (e.g., dependencies) in exchange for a different set of problems.

    I hope we keep our opinions on Active Deployment open. Not everyone needs a Kubernetes cluster to deploy, and there’s still value in keeping Capistrano around. There’s always a hosted platform, such as Heroku (and similar offerings), to ease the deployment burden for a price.

  • The Battle of Helm’s Deep

    I’m currently migrating a production Kubernetes cluster from Helm v2 to v3.

    Helm v2 has long been deprecated. We’ve been using Helm to install our services for almost four years, but v2 was deprecated last year and everyone seems to have moved on to Helm v3.

    Helm v3 no longer depends on Tiller, the server-side component that, in v2, coordinated the installation of Kubernetes resources rendered from a chart’s templates.
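
    Before migrating, it helps to confirm that the cluster is in fact still running Tiller. Assuming it was installed into the default kube-system namespace (which is what helm init does unless told otherwise), a quick check looks like this:

    $ kubectl -n kube-system get deployment tiller-deploy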

    This is a problem not unique to me.

    Props to the Helm team for creating a helpful migration video. It eased a lot of my worry about breaking not just one but multiple services running in our production cluster. I was able to go through the tutorial and migrate one Redis release. I could still use Helm v2 for our deployments during the migration, which is highly appreciated.
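
    For reference, the migration itself goes through the official helm-2to3 plugin. The sketch below shows the rough shape of the commands, assuming Helm v3 is installed side by side as a helm3 binary and with <release-name> as a placeholder:

    $ helm3 plugin install https://github.com/helm/helm-2to3.git
    $ helm3 2to3 move config                       # copy Helm v2 configuration and data over to v3
    $ helm3 2to3 convert <release-name> --dry-run  # preview the conversion of a single release
    $ helm3 2to3 convert <release-name>
    $ helm3 2to3 cleanup                           # only after every release has been verified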

  • Upgrading cert-manager from v0.10 to v1.2.0

    I found out recently that I could no longer request SSL certificates through cert-manager’s deprecated APIs. This article describes the steps I took to upgrade cert-manager and some of the error messages I ran into along the way. The total upgrade time was 1 hour and 15 minutes.

    Prerequisites

    • Kubernetes 1.16+ (I used 1.18)
    • kubectl 1.16+ (I used 1.18)
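
    To confirm the versions before starting, a quick check like this works (kubectl version reports both the client and the cluster version; the --short flag is available on these releases):

    $ kubectl version --short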

    Backup secrets

    $ kubectl get -o yaml -n cert-manager secrets > cert-manager-secrets.yaml

    Backup relevant objects

    $ kubectl get -o yaml \
        --all-namespaces \
        issuer,clusterissuer,certificates > cert-manager-backup.yaml

    Uninstall the old cert-manager

    The old cert-manager was installed using a Helm chart:

    $ helm delete <helm-release-name>

    Delete the cert-manager namespace

    $ kubectl delete namespace cert-manager

    Remove the old CRDs

    $ kubectl delete crd clusterissuers.certmanager.k8s.io
    $ kubectl delete crd issuers.certmanager.k8s.io
    $ kubectl delete crd challenges.certmanager.k8s.io
    $ kubectl delete crd certificates.certmanager.k8s.io

    Check for stuck CRDs

    If a CRD cannot be deleted, check its manifest for finalizers. Remove the finalizers, then try deleting the CRD again.
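
    As a sketch, the finalizers can be cleared with a patch like the one below (shown here for the certificates CRD; the same idea applies to whichever CRD is stuck):

    $ kubectl patch crd certificates.certmanager.k8s.io \
        --type merge -p '{"metadata":{"finalizers":[]}}'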

    Install cert-manager

    This time, I installed cert-manager using Jetstack’s manifests instead of Helm:

    $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml

    Verify pods are running

    $ kubectl get pods -n cert-manager

    Example output:

    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-789fdcb77f-7qcgg              1/1     Running   0          3m6s
    cert-manager-cainjector-6f6d6cb496-hzhzt   1/1     Running   0          3m7s
    cert-manager-webhook-5c79844f4f-kwskp      1/1     Running   0          3m5s

    Update API endpoints from backup

    I recommend using a text editor to find-and-replace certmanager.k8s.io/v1alpha1 with cert-manager.io/v1 in the backup manifests.
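
    If you prefer the command line, a sed one-liner over the backup file does the same thing (a sketch for GNU sed; adjust the filename if yours differs):

    $ sed -i 's|certmanager.k8s.io/v1alpha1|cert-manager.io/v1|g' cert-manager-backup.yaml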

    Remove outdated syntax, such as the old http01 field (see Issuer/ClusterIssuer issues); an example of the updated form follows.
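
    For illustration, this is roughly what the change looks like on an ACME issuer under the new API, with the old top-level http01 block replaced by a solvers list (a sketch only; the issuer name, email, and ingress class are placeholders, not values from my manifests):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        server: https://acme-v02.api.letsencrypt.org/directory
        email: admin@example.com
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
        - http01:
            ingress:
              class: nginx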

    Apply manifests to restore from backup

    $ kubectl apply -f cert-manager-secrets.yaml
    $ kubectl apply -f cert-manager-backup.yaml
