Tuan Anh

container nerd. k8s || GTFO

Building containers in pure Bash and C

From 2016. Still a very good talk by Jessie Frazelle.

Link to original article

A better way to go through terminal command history

I used to use Ctrl+R to search my terminal command history, but it's unreliable; I couldn't wrap my head around how it searches sometimes.

Thankfully, I was introduced to fzf and it has been a wonderful little gem. The power of fzf goes far beyond just searching through command history; it all depends on how creative you are.

The one I show here is just one example of how powerful fzf is. Basically, you can pipe just about anything to it and fuzzy-search that.

fh () {
    # fuzzy-search shell history and load the selection onto the prompt
    # (print -z is a zsh builtin, so this function is zsh-specific)
    print -z $( ([ -n "$ZSH_NAME" ] && fc -l 1 || history) | fzf +s --tac | sed 's/ *[0-9]* *//')
}

Now go brew install fzf, add the above function, and thank me later 😁
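
For instance, here is a hypothetical little helper (not from the fzf docs, just an illustration) that fuzzy-picks a subdirectory and cds into it:

fcd () {
    # list directories under the current one, fuzzy-pick one, and cd into it
    local dir
    dir=$(find . -type d 2>/dev/null | fzf) && cd "$dir"
}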


The silence of the Lambda

A lot has changed since I last looked at it. Still a bit awkward, but the tooling around Lambda has improved significantly.


Link to original article

Traits of a good leader

Found this on HackerNews. Very much on point, though I don’t quite agree with the last item.

  1. You have to have your people’s back, this is the most important thing… be there for them, insulate them from problems and management stupidity and always fight for them.

  2. Lead by example, never ask them to do something you won’t do yourself.

  3. Communicate, I have booked one afternoon a week from 14:00 till 16:00 and more to just talk with my team and discuss everything from work, to weather, sports, to bitching and moaning about the company, etc…

  4. Get together as much as you can on a real “team building exercise” - the whole team in another city for at least 2 days with a great party and lots of eating and drinking on company dime…


node-prune

Easily pruning unneeded files from node_modules

An npm package that cleans up the node_modules folder, removing files that are unnecessary in production.

Use cases:

  • optimizing size for AWS Lambda deployments
  • optimizing size for Docker images built with multi-stage builds
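
A hedged sketch of where it fits in a build, assuming the node-prune binary ends up on your PATH after installation:

# install production dependencies, then strip files node-prune considers
# unnecessary (docs, tests, etc.) before packaging for Lambda or a Docker image
npm install --production
node-prune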

Featured on npmaddict.

Link to original article

kompression - koa compression middleware with support for brotli

This is my fork of koa-compress that adds support for brotli compression.

Available on npm.
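
Usage should mirror koa-compress; a minimal sketch, assuming the fork keeps the same API (the options shown are illustrative):

const Koa = require('koa')
const compress = require('kompression')

const app = new Koa()

// same options shape as koa-compress; brotli support is what the fork adds
app.use(compress({ threshold: 2048 }))

app.use(ctx => {
    ctx.body = 'hello world'
})

app.listen(3000)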

Link to original article

Docker Containers on the Desktop

Great idea. I never thought of Docker containers this way because I totally forgot that I can always mount the config into the container.

This totally changes my dev environment setup.

# mounts the local irssi config into the container; --read-only (new in Docker 1.5)
# makes the container's root filesystem read-only
$ docker run -it \
    -v /etc/localtime:/etc/localtime \
    -v $HOME/.irssi:/home/user/.irssi \
    --read-only \
    --name irssi \
    jess/irssi
Link to original article

If Your Boss Could Do Your Job, You’re More Likely to Be Happy at Work

The benefit of having a highly competent boss is easily the largest positive influence on a typical worker’s level of job satisfaction

Related: This is why we have working managers at Basecamp

Link to original article

Kubernetes-hosted application checklist (part 2)

This part is about how to define constraints for the scheduler on where/how you want your app containers to be deployed in the k8s cluster.

Node selector

The simplest form of constraint for pod placement: you attach labels to nodes and specify nodeSelector in your pod configuration.

When to use

  • you want to deploy a redis instance to a memory-optimized (R3, R4) instance group, for example (see the sketch below).
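
A minimal sketch of what this looks like, assuming the target nodes have been given an illustrative nodegroup=memory-optimized label (kubectl label nodes <node-name> nodegroup=memory-optimized):

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  # only schedule this pod onto nodes carrying the label below
  nodeSelector:
    nodegroup: memory-optimized
  containers:
    - name: redis
      image: redis:4-alpine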

Affinity and anti-affinity

Affinity and anti-affinity are like nodeSelector but much more advanced, with more types of constraints you can apply to the default scheduler:

  • the language is more expressive (not just “AND of exact match”)
  • you can indicate that the rule is “soft”/”preference” rather than a hard requirement, so if the scheduler can’t satisfy it, the pod will still be scheduled
  • you can constrain against labels on other pods running on the node (or other topological domain), rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located

requiredDuringSchedulingIgnoredDuringExecution is the hard type and preferredDuringSchedulingIgnoredDuringExecution is the soft/preference type.

In short, affinity rules define rules/preferences for where a pod should be deployed, and anti-affinity is the opposite.

When to use

  • affinity and anti-affinity should be used only when necessary. They have the side effect of slowing down pod scheduling.

  • affinity use case example: web server and redis server should be in the same node

  • anti-affinity example: 3 redis slaves should not be deployed on the same node (see the sketch below).
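
A minimal sketch of the anti-affinity case, assuming the replicas are labelled app=redis-slave (the names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis-slave
  template:
    metadata:
      labels:
        app: redis-slave
    spec:
      affinity:
        podAntiAffinity:
          # hard rule: never co-locate two pods labelled app=redis-slave on the
          # same node; use preferredDuringScheduling... for a soft preference
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - redis-slave
              topologyKey: kubernetes.io/hostname
      containers:
        - name: redis
          image: redis:4-alpine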


Kubernetes-hosted application checklist (part 1)

At work, we’ve been running Kubernetes (k8s) in production for almost a year. During this time, I’ve learnt a few best practices for designing and deploying applications hosted on k8s. I thought I’d share them today; hopefully they will be useful to newbies like me.

Liveness and readiness probes

  • Liveness probe: checks whether your app is running
  • Readiness probe: checks whether your app is ready to accept incoming requests

Note that the two probes run independently of each other; the liveness probe is not gated on the readiness probe passing first.

If your app does not have a liveness probe, k8s won't be able to know when to restart your app container; if your process hangs, it will stay that way while k8s keeps directing traffic to it.

If your app takes some time to bootstrap, you need to define a readiness probe as well. Otherwise, requests will be directed to your app container even though it is not yet ready to serve them.

Usually, I just make a single API endpoint for both liveness and readiness probes. E.g., if my app requires a database and Redis to work, then in my health check API I simply check whether the database connection and the Redis service are ready.

try {
    const status = await Promise.all([redis.ping(), knex.select(1)])
    ctx.body = 'ok'
} catch (err) {
    ctx.throw(500, 'not ok')
}
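
On the k8s side, both probes can then point at that endpoint. A minimal sketch of the container spec (the image, path, port, and timings are illustrative):

containers:
  - name: app
    image: example/app:latest
    ports:
      - containerPort: 3000
    livenessProbe:
      httpGet:
        path: /healthz
        port: 3000
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /healthz
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 10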

Graceful termination

When a pod gets terminated, k8s first sends the app a SIGTERM, followed by a SIGKILL if it is still running after the termination grace period. The app must be able to handle the SIGTERM and terminate itself gracefully.

The flow looks like this:

  • the container process receives the SIGTERM signal.
  • if you don't handle the signal and your app is still running after the grace period, SIGKILL is sent.
  • the container gets deleted.

Your app should handle SIGTERM and should not get to the SIGKILL step.

An example would look something like this:

process.on('SIGTERM', () => {
    state.isShutdown = true
    initiateGracefulShutdown()
})

function initiateGracefulShutdown() {
    knex.destroy(err => {
        process.exit(err ? 1 : 0)
    })
}

Also, once shutdown has started, the app should begin returning an error on its health check so the readiness probe fails and k8s stops routing new traffic to the pod. For example:
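
A hypothetical sketch of wiring the shutdown flag into the same health check handler (router, redis, and knex are assumed from the earlier example):

// once SIGTERM has been received, state.isShutdown makes the probe fail,
// so k8s stops sending new requests while in-flight ones finish
router.get('/healthz', async ctx => {
    if (state.isShutdown) {
        return ctx.throw(500, 'shutting down')
    }

    try {
        await Promise.all([redis.ping(), knex.select(1)])
        ctx.body = 'ok'
    } catch (err) {
        ctx.throw(500, 'not ok')
    }
})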