Build Price-Aware Applications
Check the Price History: In general, picking older generations of instances will result in lower net prices and fewer interruptions.
Use Multiple Capacity Pools: By having the ability to run across multiple pools, you reduce your application’s sensitivity to price spikes that affect a pool or two (in general, there is very little correlation between prices in different capacity pools). For example, if you run in five different pools your price swings and interruptions can be cut by 80%.
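The 80% figure is just the even-split arithmetic: with capacity spread across n independent pools, a spike in any single pool touches only 1/n of your fleet. A quick sketch (the one-liner is mine, not from the original):

```shell
# With capacity split evenly across N independent pools, a price spike in a
# single pool affects only 1/N of the fleet, so exposure drops by 1 - 1/N.
pools=5
awk -v n="$pools" 'BEGIN { printf "%.0f%%\n", (1 - 1/n) * 100 }'   # prints 80%
```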
Debugging why k8s autoscaler wouldn't scale down
11 Jan 2017
Symptom: the autoscaler works (it can scale up) but for some reason it doesn’t scale down after the load goes away.
I spent some time debugging, and it turns out it’s not really a bug per se, more a case of unlucky pod placement on my Kubernetes cluster.
I first added --v=4 to get more verbose logging in cluster-autoscaler and watched kubectl logs -f cluster-autoscaler-xxx. I noticed this line in the logs:
<node-name> cannot be removed: non-deamons set, non-mirrored, kube-system pod present: tiller-deploy-aydsfy
This node is in fact under-utilized, but there is a non-daemonset, non-mirrored kube-system pod present, which is why it can’t be removed.
tiller-deploy is a deployment that comes with Helm package manager.
So it seems I just have to migrate the pod to another node and the autoscaler will be able to remove this one.
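One way to do that migration with plain kubectl — a sketch, not from the original post; the pod name is the one from the log above, and its suffix will change once the Deployment recreates it:

```shell
# Delete the tiller pod; its Deployment recreates it and the scheduler may
# place the replacement on a busier node, leaving this one free to scale down.
kubectl delete pod --namespace=kube-system tiller-deploy-aydsfy
```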
You can also read more on how cluster-autoscaler works here on GitHub
Fluentd Docker image to send Kubernetes logs to CloudWatch
Very easy to set up. A good option for centralized logging if all of your infrastructure is already in AWS.
echo -n "accesskeyhere" > aws_access_key
echo -n "secretkeyhere" > aws_secret_key
kubectl create secret --namespace=kube-system generic fluentd-secrets --from-file=aws_access_key --from-file=aws_secret_key
kubectl apply -f fluentd-cloudwatch-daemonset.yaml
On a side note, I think I will need to move the fluentd configuration file into a secret as well, since I only want to collect logs from certain namespaces/filters.
A DaemonSet runs on each node instance and keeps polling
http://169.254.169.254/latest/meta-data/spot/termination-time for a termination notice.
It polls every 5 seconds, which gives you roughly 2 minutes to drain the spot node and migrate pods to another node.
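A minimal sketch of one poll iteration (the metadata URL is the real EC2 endpoint; NODE_NAME and the drain flags are my assumptions, not taken from the project — the real DaemonSet repeats this check every 5 seconds):

```shell
#!/bin/sh
# EC2 spot termination-notice endpoint (instance metadata, link-local address).
NOTICE_URL="http://169.254.169.254/latest/meta-data/spot/termination-time"

spot_terminating() {
  # The endpoint returns 200 with a timestamp only after a termination
  # notice is issued; -f makes curl fail on the usual 404 before that.
  curl -s -f --max-time 2 "$1" > /dev/null
}

if spot_terminating "$NOTICE_URL"; then
  # ~2 minutes remain: drain so pods reschedule onto other nodes.
  kubectl drain "$NODE_NAME" --ignore-daemonsets --force
fi
```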
How to build a custom Kubernetes scheduler by Mr. Kubernetes
Fix: Terminal no longer uses the keychain in macOS Sierra
23 Dec 2016
Since Sierra, I get prompted for my SSH key passphrase every time. After digging a bit, it seems Apple changed this behavior recently. From the ssh_config man page:
On macOS, specifies whether the system should search for passphrases in the user’s keychain when attempting to use a particular key. When the passphrase is provided by the user, this option also specifies whether the passphrase should be stored into the keychain once it has been verified to be correct. The argument must be ‘yes’ or ‘no’. The default is ‘no’.
To fix this, you just have to enable UseKeychain for every host by adding these lines to your ~/.ssh/config:

Host *
  AddKeysToAgent yes
  UseKeychain yes
  IdentityFile ~/.ssh/id_rsa
Alternatively, you can add ssh-add -A to your shell startup file (e.g. ~/.bash_profile).
Time and time again, the young startup promotes their longest-tenured young engineer to become CTO of their 20-something startup. And it makes sense on the surface, because it’s their “best” engineer. And why not? They’ve been there for so long that they know the system they’ve built more than anyone else.
But now they have two problems: they lose their “best” engineer, and on top of that, they gain what’s probably a shit manager.