Tuan Anh

container nerd. k8s || GTFO

kubectl run generators removed

Here is the related merged pull request.

In short, previously if you needed to create a deployment, you only had to run

kubectl run nginx --image=nginx:alpine --port=80 --restart=Always

This feature was used a lot because even a minimal deployment YAML is quite long. Here is an example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80

Previously, creating a deployment and exposing it took just 2 simple commands:

kubectl run nginx --image=nginx:alpine --port=80 --restart=Always
kubectl expose deployment nginx --port=80 --type=LoadBalancer

Now, you have to remember the deployment YAML format yourself and expose it with the kubectl expose command.

Usually, people don't remember the deployment format and just use kubectl run with the -o yaml --dry-run flags to get the output and edit from there.

This command is extremely popular and is used heavily in the CKA (Certified Kubernetes Administrator) and CKAD (Certified Kubernetes Application Developer) exams.

kubectl create deployment nginx --image=nginx:alpine -o yaml --dry-run

So if you're planning to take the CKA/CKAD, try to memorize the format of the basic resource types :)
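For reference, the full workflow then looks roughly like this (the file name is just an example):

kubectl create deployment nginx --image=nginx:alpine -o yaml --dry-run > nginx-deployment.yaml
# edit nginx-deployment.yaml as needed, then:
kubectl apply -f nginx-deployment.yaml
kubectl expose deployment nginx --port=80 --type=LoadBalancer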


Using Synology NFS as external storage with Kubernetes

For home usage, I highly recommend microk8s. It can be installed easily with snap. I'm not sure what the deal is with snap for Ubuntu desktop users, but my only experience with it has been installing microk8s, and so far it works well for the purpose.

Initially, I went with Docker Swarm because it's so easy to set up, but Docker Swarm feels like a hack. It also seems Swarm is already dead in the water. And since I've been using Kubernetes at work for over 4 years, I finally settled on microk8s. The other alternative, k3s, didn't work quite as expected either, but that's a story for another post.

Set up a simple Kubernetes cluster

Setting up Kubernetes is as simple as installing microk8s on each host, plus one more command to join them together. The process is very similar to Docker Swarm. Follow the installation and multi-node setup guides on the microk8s official website and you should be good to go.
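A minimal sketch of that flow (check the microk8s docs for your version, as the exact commands may differ):

# on every host
sudo snap install microk8s --classic
# on the first node, generate a join token
microk8s add-node
# on each additional node, run the join command that add-node prints, e.g.:
# microk8s join 10.0.0.10:25000/<token>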

Now, onto storage. I would like to have external storage so that it is easy to back up my data. I already have my Synology set up and it comes with NFS, so to keep my setup simple, I'm going to use the Synology for that. I know it's not the most secure thing, but for a homelab, this will do.

Please note that most Kubernetes tutorials become outdated quickly. In this setup, I will be using Kubernetes v1.18.

Step 0: Enable Synology NFS

Enable NFS from Control Panel -> File Services

Enable access for every node in the cluster in Shared Folder -> Edit -> NFS Permissions settings.

There are a few things to note here:

  • Every node needs to be able to mount the shared folder as root, so select No mapping in the Squash dropdown of the NFS Permissions settings.
  • Also check Allow connections from non-privileged ports.

With Helm

The nfs-client external storage provisioner is available as a chart over at the Kubernetes incubator. With Helm, installing it is as easy as

helm install stable/nfs-client-provisioner --set nfs.server=<SYNOLOGY_IP> --set nfs.path=/example/path
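If you're on Helm 3, the release also needs a name; something like this should work (the release name nfs-client is arbitrary):

helm install nfs-client stable/nfs-client-provisioner --set nfs.server=<SYNOLOGY_IP> --set nfs.path=/example/path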

Without Helm

Step 1: Setup NFS client

You need to install nfs-common on every node.

sudo apt install nfs-common -y

Step 2: Deploy NFS provisioner

Replace SYNOLOGY_IP with your Synology IP address and VOLUME_PATH with NFS mount point on your Synology.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: <SYNOLOGY_IP>
            - name: NFS_PATH
              value: <VOLUME_PATH>
      volumes:
        - name: nfs-client-root
          nfs:
            server: <SYNOLOGY_IP>
            path: <VOLUME_PATH>

Set up RBAC and the storage class

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"
allowVolumeExpansion: true
reclaimPolicy: Delete
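Assuming you saved the two manifests above as deployment.yaml and rbac.yaml (the file names are up to you), apply them with:

kubectl apply -f deployment.yaml -f rbac.yaml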

Step 3: Set NFS as the new default storage class

Set managed-nfs-storage as the default storage class instead of the existing default (rook-ceph-block in my case).

kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
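To confirm which class is now the default, list the storage classes; the default one is marked with (default) next to its name:

kubectl get storageclass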

Testing

We will create a simple pod and PVC to test. In a test folder, create test-pod.yaml and test-claim.yaml that look like this:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim

and test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client" # nfs-client is default value of helm chart, change accordingly
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Then run kubectl create -f test/. You should see the PVC bound and the pod completed after a while. Browse the NFS share, and if you see a folder created with a SUCCESS file inside, everything is working as expected.
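You can also check from the Kubernetes side (assuming everything went into the default namespace):

kubectl get pvc test-claim   # STATUS should show Bound
kubectl get pod test-pod     # STATUS should show Completed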


Debugging Kubernetes: Unable to connect to the server: EOF

We received an EC2 instance retirement notice email from AWS. The instance was our Kubernetes master node. I thought to myself: we can simply terminate it and launch a new instance. I've done it many times. It's no big deal.

However, this time, when our infra engineer did that, we were greeted with this error when trying to access our cluster.

Unable to connect to the server: EOF

All the apps were still fine, thanks to Kubernetes's design, so we had all the time we needed to fix this.

So kubectl was unable to connect to the Kubernetes API. The API endpoint is a CNAME pointing to the API load balancer in Route53, so that's where we looked first.

Route53 records are wrong

OK, so there are many problems that can cause this error. One of the first things I noticed was that the Route53 DNS record for etcd was wrong: it still held the old master's IP address. Could the init script somehow have failed to update it?

Our first attempt at a fix was to manually update the etcd DNS record to the new instance's IP address. Nope, the error stayed the same.

ELB marks master node as OutOfService

We looked a little more closely at the ELB for the API server. The instance was marked OutOfService. I thought this was it; it made sense. But what could cause the API server to be down this time? We've done this process many times before.

We SSHed into the master instance and issued docker ps -a. There was nothing. Zero containers whatsoever.

We checked systemctl and there it was: cloud-final.service had failed. We checked the logs with journalctl -u cloud-final.service.
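Concretely, it was something along these lines (systemctl --failed is just one way to list failed units):

systemctl --failed                   # cloud-final.service showed up here as failed
journalctl -u cloud-final.service    # inspect the unit's logs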

From the logs, we noticed that many required packages, like ebtables, etc., were missing when the nodeup script ran.

Manual apt update

So if we could fix that issue, it should be OK, right? We ran apt update manually and saw this:

E: Release file for http://cloudfront.debian.net/debian/dists/jessie-backports/InRelease is expired (invalid since ...). Updates for this repository will not be applied.

OK, this still makes sense. Our cluster is old and the release file has expired. If we update it manually, it should work again, right? We ran apt update with the check-valid-until option disabled.

apt-get -o Acquire::Check-Valid-Until=false update

Restart cloud-final service

Restart cloud-final.service or manually run the nodeup script again with

/var/cache/kubernetes-install/nodeup --conf=/var/cache/kubernetes-install/kube_env.yaml --v=8
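Or, if you'd rather restart the service than run nodeup by hand:

sudo systemctl restart cloud-final.service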

docker ps -a at this point should show all the containers running again. Wait a while (~30 seconds) and kubectl should be able to communicate with the API server again.

Final

While your problem may not be exactly the same as this, I thought I would share my debugging experience in case it helps someone out there.

In our case, the problem was fixed with just 2 commands, but the actual debugging process took more than an hour.


Tips for first-time rack buyers

A few weeks ago, I knew nothing about server racks. I frequented /r/homelab a lot to learn how to build one for myself at home. These are the lessons I learned while building my very first homelab rack.

Choose the right size

You need to care about 2 things when it comes to rack size: height & depth. The width is usually the standard 19 inches.

  • Rack height is measured in U (1.75 inches or 44.45mm): the smallest height of a rack-mountable unit.
  • Rack depth is very important too. Racks usually come in 600, 800, or 1000mm depths. Don't buy anything shallower than 800mm unless you plan to use the rack mostly for network devices; otherwise, your rackmount server options are very limited. If you must go with a 600mm-deep rack, you can choose half-depth servers like the ProLiant DL20, Dell R220ii, some Supermicro servers, or build one yourself with a desktop rackmount case.

1u rack unit

Carefully plan what kind of equipment you want to use so you get the correct size. A typical rack usually has these devices:

  • 1 or more patch/brush panels for cable management (1U each)
  • 1 router (1U)
  • 1 or 2 switches (1U each)
  • servers: this depends on how much computing power you need. Servers also come in various sizes (1U/2U/3U/4U).
  • NAS maybe (1-2U)
  • PSU: usually placed at the bottom (1U or 2U)
  • PDU: some people put it at the front, some put it at the back (1U)

Things to look for when selecting a rack

  • Rack type: open frame / enclosures or wall-mounted rack.
  • Wheels or no wheels, that is the question. I recommend going with wheels for home usage.
  • If you choose wheels, get a rack that has wheel blockers.
  • Can the rack's side panels be taken off? If they can, it will make equipment installation a lot easier.

Cable management

The top U is a patch panel. The third one is a brush panel. The purpose of these panels is pretty easy to understand; I just didn't know the terms to search for at first when I wanted to buy them.

Here are some accessories that help with cable management:

  • Zip tie
  • Velcro
  • Cable combs
  • Patch panel
  • Brush panel
  • Multi-colored cables: eg green for switch-to-patch-panel links, orange for guest VLAN, etc…

Some notes on patch panels: there is a punch-down type that looks like this, and there is a pass-through type that looks like this. You probably want the keystone (pass-through) one as it's easier to maintain.

If you cannot find cable combs, I've seen people use zip ties to make DIY cable combs. It's pretty cool.

diy cable comb using zip tie

Other tips

Numbering the units on the rack, if it doesn't already have numbers, will help a lot when installing equipment. Like this:

label on rack

Most racks I saw on /r/homelab have this, but the cheap rack I got doesn't. I just had to get creative: I ran label maker tape along the rack's height and hand-wrote the numbers there.

If you know something that isn't on this list, please tweet me at @tuananh_org. I would love to learn about your homelab hacks.


How to set up a reverse proxy for your homelab with Caddy server

The end goal is to be able to publicly expose apps deployed locally on my homelab. I don't want to expose multiple ports to the Internet for several reasons:

  • I have to create multiple port-forwarding rules.
  • The addresses are not memorable because I need to remember the ports. Eg: homeip.example.com:32400 for Plex, homeip.example.com:1194 for VPN, and so on…

The alternative is to use a reverse proxy:

  • set up the reverse proxy
  • set up port forwarding (80 & 443) for the reverse proxy
  • configure the reverse proxy to proxy the local apps

Reverse proxy

I would have gone with nginx, but I wanted to tinker with Caddy. I have never used Caddy in production and this seems like a good excuse to learn about it (that's what a homelab is for, right?). Caddy comes with HTTPS by default via Let's Encrypt. It's perfect for home usage.

I was wondering if it's possible to proxy upstream to the Docker host. Turns out it is: you just have to use host.docker.internal as the upstream address. (ref)
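For example, a hypothetical Caddyfile entry proxying to an app that runs directly on the Docker host (the hostname and port are placeholders):

plex.example.com {
    reverse_proxy host.docker.internal:32400
}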

docker run -d -p 1880:80 -p 18443:443  --network home-net \
    -v $(pwd)/Caddyfile:/etc/caddy/Caddyfile \
    -v $(pwd)/site:/usr/share/caddy \
    -v $(pwd)/data:/data \
    -v $(pwd)/config:/config \
    caddy/caddy caddy run -config /etc/caddy/Caddyfile --watch

Notice that I run Caddy on the home-net network there, so that I can easily proxy other containers.
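home-net is just a user-defined Docker bridge network; if you don't have it yet, create it first:

docker network create home-net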

Dynamic DNS

You need to set up:

  • an A record for your home IP (eg: homeip.example.com)
  • a CNAME record for each of your apps (eg: plex.example.com CNAME to homeip.example.com).

I covered this topic in a previous post of mine here.

Port forwarding

You need to set up port forwarding (80 & 443) for your reverse proxy. The settings differ, depending largely on your lab equipment and your ISP.

I was stuck for a day debugging why port forwarding didn't work, and it turned out my ISP puts me behind a NATed public IP address.

port forwarding

Verify

To test this, I created an nginx container with

docker run --name nginx --network home-net -d nginx

And edited the Caddyfile to this:

example.com {
    reverse_proxy / nginx:80
}

And it should show the nginx default page.

Also, you should see the page served over HTTPS.
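A quick way to verify from outside your network (replace example.com with your own domain):

curl -I https://example.com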


How to set up a home VPN with a Synology NAS

Currently, I'm working on building my homelab. It's still very much a work in progress, but everything is coming along nicely.

homelab

I plan to host lots of stuff in my homelab and want to be able to access it while I'm not at home. I don't feel comfortable exposing it all to the Internet, so VPN to the rescue.

The setup is straightforward. The details differ depending on your lab equipment, but the steps are always the same.

  1. Setup VPN server in your homelab.
  2. Setup port forwarding in your router.
  3. [Optional] If your IP address is dynamic, set up dynamic DNS so that you can access the VPN server by domain name.

The first step is rather easy. I already have a Synology NAS, and they have a built-in VPN Server app ready to install from their package store. It's just 1 click away. You install it, enable the OpenVPN protocol, and it's done. Click export configuration afterward.

The UniFi Security Gateway also has a built-in VPN server, but since the NAS is more powerful, I figured I should offload the work to the NAS.

synology vpn server

The second step can be done via your router. In my case, I use UniFi hardware so I’m gonna do it via UniFi Controller in Settings -> Routing & Firewall -> Port forwarding.

unifi controller port forwarding

Optionally, if your IP address is dynamic, you may want to set up dynamic DNS (eg: myvpn.example.com). I already covered it in a previous post using Docker and CloudFlare.

Now, edit the exported configuration and replace the server IP address with your static IP address or the dynamic DNS hostname above.
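For example, the relevant line in the exported .ovpn file looks roughly like this (the values shown are placeholders):

# before
remote 203.0.113.10 1194
# after
remote myvpn.example.com 1194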

Try connecting with an OpenVPN client, and if all is good, you should be connected.

openvpn

Troubleshooting

I found that Ubiquiti has an excellent troubleshooting guide available on their website.

Some common problems are:

  • Double NAT (local): You have 2 routers on your local network. In that case, you either have to remove 1 router (change it to AP mode?) or set up port forwarding on both.

  • NATed public IP address: if the public IP address shown by your router and the one reported by, say, Google don't match, that's probably it. See the picture below; if the two IPs are not the same, you have a NATed public IP address.

If you go through all that and it's still not working, it probably has something to do with the ISP.


How to adopt a UniFi Security Gateway into an existing network

I'm by no means a network expert. This is just my personal experience from adding a USG to my existing network.


In my case, I was using an Orbi RBK as my router and access point. With the USG in place, I will use the Orbi in access point mode; the USG will replace the Orbi as my router.

My current network uses the 10.0.0.0/8 IP range. By default, the USG uses the 192.168.1.1 IP, which means I won't be able to adopt it just by plugging it into my current network. So I need to change its IP address first.

You're gonna need a PC/laptop with an Ethernet port in order to connect to the USG and change its IP address. Luckily, I have a desktop PC with me.

So I installed the UniFi Controller and connected the USG to the PC's Ethernet port. I set the desktop's Ethernet IP to something in the 192.168.1.x range, like 192.168.1.6, with subnet mask 255.255.255.0.

Once that's done, open up the UniFi Controller and you will be able to see and adopt the USG. The default username and password are both ubnt, by the way.

After adopting the device, if you want to keep using the old IP and subnet, you will have to go to Settings -> Networks and edit the LAN network to use your desired IP and subnet.

Now, in order to replace the old router, you will have to configure the PPPoE info as well. From Settings -> Networks, edit the WAN network and enter your Internet username & password there.

Now, I'm not sure what you need to do to replace the router where you're from, but here in Vietnam, I had to call the ISP and ask them to clear the old router's MAC address from their cache as well.

After that, I just had to connect the Internet cable to the Internet port of the USG, connect a LAN port of the USG to the Internet port of the Orbi, and it's done. The Internet is back online.

adopt usg to an existing network


Dynamic DNS with CloudFlare

I use the oznu/docker-cloudflare-ddns project, where the author implements everything in bash, curl, and jq. There are a bunch of projects that do DDNS with CloudFlare, but I chose this one because of that uniqueness.

To use this, you just have to create an API token with Cloudflare that has these permissions:

  • Zone - Zone Settings - Read
  • Zone - Zone - Read
  • Zone - DNS - Edit

Also, set Zone Resources to All zones. Then run the Docker container:

docker run -d \
    -e API_KEY=<cloudflare-token> \
    -e ZONE=<example.com> \
    -e SUBDOMAIN=<subdomain> \
    --restart=unless-stopped oznu/cloudflare-ddns
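Once the container is running, you can verify that the record gets updated with something like:

dig +short <subdomain>.example.com   # should print your current public IP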

ddns cloudflare docker


Traffic from #1 post on Hacker News

My last post recently hit #1 on Hacker News. This surprised me a little because when I first posted the link, there was little interest in it (~15 points). I shook it off and went on with my day.

A few days later, when I was browsing Hacker News, my blog post was featured right there on the home page through someone else's submission. The post stayed at #1 for ~12 hours before drifting off to page 2.

It also drew some interest on Twitter, the peak of which was Jeff Barr writing a tweet about it.

Here are some numbers for the stats junkies among you, just so you know what kind of traffic you can expect from Hacker News.

The post attracted 15k unique users on the day of submission, 3k users on the second day, and traffic was eventually back to normal by the 4th day.

Traffic stats on Cloudflare seem a bit inflated, with unique visitors at 36k. Maybe they count bots as well there.

Since my site is static and hosted with Cloudflare Workers Sites, the load speed stayed pretty much the same throughout the day. Big fan of Cloudflare and their products.


The story behind my talk: Cloud Cost Optimization at Scale: How we use Kubernetes and spot instances to reduce EC2 billing up to 80%

me at vietnam web summit 2018

This is the story behind my talk: “Cloud Cost Optimization at Scale: How we use Kubernetes and spot instances to reduce EC2 billing up to 80%”.

Now, before I tell this story, I will admit up front that the actual number is lower than 80%.

2015

The story began in mid-2015 at one of my ex-employers. It was a .NET Framework shop that struggled to scale in both performance and cost at the time. I was hired as a developer to work on API integration, but I couldn't help noticing that too much money was being sunk into the AWS EC2 bill. Bear in mind I'm not an ops guy by any means, but you know how startups are: one usually has to wear many hats.

At first, when the AWS credits were still plentiful, we didn't have to worry much about it. But when they ran low, it clearly became one of the biggest pain points for our startup.

The situation at the time was like this:

  • There were 2 teams: the core team using .NET Framework and the API team using Node.js.
  • The core team mostly used Windows-based instances and the API team used Linux-based ones.
  • The core team used a lot more instances than the API team.
  • Most EC2 instances were Windows-based. All were on-demand instances. No reserved instances whatsoever 😨.
  • A few were Linux-based instances where we installed other Linux-based applications, but there weren't many of them.
  • On-demand Windows-based instance pricing is about 30% higher than Linux-based.
  • We used RDS for the database.
  • We didn't have a real ops guy in the sense you'd think of these days. Whenever we needed something set up, we had to page someone from the India team to create instances for us and then set them up ourselves.

Now, the biggest costs were obviously RDS and EC2. If I had been assigned to optimize this, I would definitely have looked at those 2 first. But I wasn't working on it at that time; I was hired to do other things.

At that time, I was using Deis, a container management solution (later acquired by Microsoft), for my projects. I experimented briefly with Flynn but ended up not using it.

2016

Spotinst

In 2016, I heard of this startup called Spotinst. I found several useful posts on their blog regarding EC2 cost optimization and found their whole startup idea very fascinating. For those of you who don't work with infrastructure, the whole idea of Spotinst is to use spot instances to reduce your infrastructure cost, and they take a cut of the savings.

Spotinst automates cloud infrastructure to improve performance, reduce complexity and optimize costs.

Spot instances are a very cheap EC2 offering from AWS (think 70-90% cheaper vs on-demand), but they come with a small problem: they can go away at any time with just 2 minutes' notice.

I thought that if we could design our workloads to be fault tolerant and shut down gracefully, spot instances would make perfect sense. Anything like a queue-and-worker workload would fit as well. Web apps, on the other hand, would be a little more difficult but totally doable.

Kubernetes

During 2016, I also learnt about this super duper cool project called Kubernetes. I believe they were at version 1.2 at the time.

Kubernetes comes with the promise of many awesome features, but what caught my eye was the “self-healing” feature. A perfect complement to spot instances, I thought.

And so I dug a little deeper to see if I could set one up with spot instances, and it turns out they do support it. Awesome!! 🥰

Now, the only problem left was that our core team still needed Windows, and Kubernetes didn't support Windows at the time. So my whole infrastructure revamp idea was useless, or so I thought.

.NET Core

In mid-2016, I learned about the .NET Core project. They were around the 1.0 release at the time. One of its headline features is being cross-platform. I thought to myself: I can still salvage this.

Now, please note that I'm a Node.js guy and I don't know much about .NET aside from my university thesis. So I asked the lead guy from the core team to look into it, and while there were many quirks, it was actually not very difficult to migrate our core to .NET Core. It would be time consuming, but it was very much doable. I knew that .NET Core was going to be the future, so eventually we would need to migrate to it anyway.

Tests + Migration

While the core team did that, I set up a test cluster with spot instances and learned Kubernetes. I optimized the cluster setup a little and migrated all my projects over to it by the end of 2016. The whole process was quite fast because all my apps (Node.js) were already Dockerized and had graceful shutdown implemented. I just needed to learn the ins and outs of Kubernetes.

I started with managed GKE at first, using their free $300 credit, to learn the basics of Kubernetes; later on, I used kops to set up a production cluster on AWS.

Some of the changes I made for the production cluster were:

  • Set up an instance termination daemon to notify all the containers, plus graceful shutdown for all the apps.
  • Set up multiple instance groups of various sizes and availability zones, mixing spot instances with reserved instances. This prevents a price spike in one spot pool from taking everything out, and minimizes the chance of all spot instances going down at the same time (see the sketch after this list).
  • Calculated and provisioned a slightly bigger fleet than what we actually needed, so that when instances were shut off, there would be no service degradation. Because spot instances are so cheap, we could do this without worrying much about the cost.
  • Watched for scheduling failures as the signal to scale up the reserved groups.
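For illustration, a rough sketch of what one of those spot instance groups can look like in kops (all names and numbers here are made up):

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: cluster.example.com   # your cluster name
  name: spot-nodes-1a
spec:
  role: Node
  machineType: m4.large
  maxPrice: "0.05"   # setting a max bid price makes this group use spot instances
  minSize: 3
  maxSize: 6
  subnets:
  - us-east-1a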

2017

At this point, our API apps' EC2 cost was already very manageable. We were waiting for the core team to migrate over, and we did that in 2017. The overall cost saving for EC2 was around 60-70% because we needed to mix in reserved instances and provision a little more than what we actually needed. We were very happy with the result.

What we did back then is essentially what Spotinst does, but at a much smaller scale. And it's doable even for smaller startups with only 1 ops guy.

And that is my story behind the talk: “Cloud Cost Optimization at Scale: How we use Kubernetes and spot instances to reduce EC2 billing up to 80%”.

Update: #1 on HackerNews. Yay!