Tuan Anh

Developer gone wild

Traits of a good leader

Found this on Hacker News. Very much on point, though I don’t quite agree with the last item.

  1. You have to have your people’s back, this is the most important thing… be there for them, insulate them from problems and management stupidity and always fight for them.

  2. Lead by example, never ask them to do something you won’t do yourself.

  3. Communicate. I have booked one afternoon a week, from 14:00 till 16:00 or longer, just to talk with my team and discuss everything from work, to the weather, to sports, to bitching and moaning about the company, etc…

  4. Get together as much as you can on a real “team building exercise” - the whole team in another city for at least 2 days with a great party and lots of eating and drinking on company dime…

node-prune

Easily pruning unneeded files from node_modules

An npm package that cleans up the node_modules folder, removing files that are unnecessary in production.

Use cases:

  • optimize sizes for AWS Lambda deployments
  • optimize sizes for Docker images built with multi-stage Docker builds (sketched below)
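
A rough sketch of the multi-stage Docker use case. The prune invocation and image tags here are my assumptions, not taken from the package; check its README for the exact command.

# builder stage: install production deps, then prune unneeded files
FROM node:8 as builder
WORKDIR /app
COPY package.json .
RUN npm install --production && npx node-prune   # prune command is an assumption

# final stage: copy only the slimmed-down node_modules and the app code
FROM node:8-slim
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["node", "index.js"]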

Featured on npmaddict.

link to the original article

kompression - koa compression middleware with support for brotli

This is my fork of koa-compress that adds support for Brotli compression.

Available on npm.
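
Assuming the fork keeps the same middleware API as koa-compress (the option below is koa-compress’s, not verified against the fork), usage would look something like this:

const Koa = require('koa')
const compress = require('kompression') // assuming the published package name matches the fork's name

const app = new Koa()

// negotiates br/gzip/deflate with the client via the Accept-Encoding header;
// responses smaller than the threshold are left uncompressed
app.use(compress({ threshold: 2048 }))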

link to the original article

Docker Containers on the Desktop

Great idea. I never thought of Docker containers this way because I totally forgot that I can always mount the config into the container.

This totally changes my dev environment setup.

# mounts the irssi config into the container; --read-only is a cool new feature in Docker 1.5
$ docker run -it \
    -v /etc/localtime:/etc/localtime \
    -v $HOME/.irssi:/home/user/.irssi \
    --read-only \
    --name irssi \
    jess/irssi
link to the original article

If Your Boss Could Do Your Job, You’re More Likely to Be Happy at Work

The benefit of having a highly competent boss is easily the largest positive influence on a typical worker’s level of job satisfaction

Related: This is why we have working managers at Basecamp

link to the original article

Kubernetes-hosted application checklist (part 2)

This part is about how to define constraints for the scheduler on where/how you want your app containers to be deployed in the k8s cluster.

Node selector

Simplest form of constraint for pod placement. You attach labels to nodes and specify nodeSelector in your pod configuration.

When to use

  • you want to deploy a Redis instance to a memory-optimized (R3, R4) instance group, for example (see the sketch below).
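
A minimal sketch of that Redis example. The nodegroup label name/value are made up; label the nodes first with something like kubectl label nodes <node-name> nodegroup=memory-optimized.

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  # only schedule onto nodes that carry the matching label
  nodeSelector:
    nodegroup: memory-optimized
  containers:
    - name: redis
      image: redis:4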

Affinity and anti-affinity

Affinity and anti-affinity are like nodeSelector but much more advanced, with more types of constraints you can apply to the default scheduler:

  • the language is more expressive (not just “AND of exact match”)
  • you can indicate that the rule is “soft”/”preference” rather than a hard requirement, so if the scheduler can’t satisfy it, the pod will still be scheduled
  • you can constrain against labels on other pods running on the node (or other topological domain), rather than against labels on the node itself, which allows rules about which pods can and cannot be co-located

requiredDuringSchedulingIgnoredDuringExecution is the hard type and preferredDuringSchedulingIgnoredDuringExecution is the soft/preference type.

In short, affinity rules define requirements/preferences for where a pod should be scheduled, and anti-affinity rules do the opposite, keeping pods away from certain nodes or from each other.
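
For example, a soft rule expressing “prefer memory-optimized nodes, but schedule elsewhere if none are available” might look like this (the nodegroup label is again an assumption):

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 80                    # relative preference, 1-100
          preference:
            matchExpressions:
              - key: nodegroup
                operator: In
                values: ["memory-optimized"]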

When to use

  • affinity and anti-affinity should be used only when necessary; they have the side effect of slowing down scheduling and deployment.

  • affinity use case example: a web server and a Redis server should be placed on the same node.

  • anti-affinity example: 3 Redis slaves should not be deployed on the same node (see the sketch after this list).
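
A sketch of that last anti-affinity example, assuming the slave pods are labelled app=redis-slave (the label name/value are assumptions):

spec:
  affinity:
    podAntiAffinity:
      # hard rule: never co-locate two redis-slave pods on the same node
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: ["redis-slave"]
          topologyKey: kubernetes.io/hostname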

Kubernetes-hosted application checklist (part 1)

At work, we’ve been running Kubernetes (k8s) in production for almost a year. During this time, I’ve learnt a few best practices for designing and deploying applications hosted on k8s. I thought I’d share them today; hopefully they will be useful to newbies like me.

Liveness and readiness probes

  • Liveness probe: checks whether your app is still running
  • Readiness probe: checks whether your app is ready to accept incoming requests

Note that the liveness and readiness probes are checked independently of each other; the liveness probe does not wait for the readiness probe to pass, so use initialDelaySeconds if your app needs time before the first check.

If your app does not expose a liveness probe, k8s won’t know when to restart your app container; if your process hangs or deadlocks, it will stay like that while k8s keeps directing traffic to it.

If your app takes some time to bootstrap, you need to define a readiness probe as well. Otherwise, requests will be directed to your app container even though it isn’t ready to serve them yet.

Usually, I just use a single API endpoint for both the liveness and readiness probes. E.g. if my app needs a database and Redis to work, then in my health-check API I simply check whether the database connection and Redis are reachable.

try {
    // the pod is healthy only when both Redis and the database respond
    await Promise.all([redis.ping(), knex.select(1)])
    ctx.body = 'ok'
} catch (err) {
    ctx.throw(500, 'not ok')
}
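
Wiring that endpoint into the container spec might look like the following; the /healthz path, port 3000 and timing values here are just placeholders.

livenessProbe:
  httpGet:
    path: /healthz          # hypothetical route serving the check above
    port: 3000
  initialDelaySeconds: 15   # give the app time to bootstrap
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 3000
  periodSeconds: 5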

Graceful termination

When an app is terminated, it receives SIGTERM from k8s and, if it is still running after the termination grace period, SIGKILL. The app must be able to handle the SIGTERM signal and terminate itself gracefully.

The flow is like this:

  • the container process receives the SIGTERM signal.
  • if you don’t handle the signal and your app is still running after the grace period, SIGKILL is sent.
  • the container gets deleted.

Your app should handle SIGTERM and should never reach the SIGKILL step.

An example of this would look something like the code below:

process.on('SIGTERM', () => {
    // flag shutdown so the health check starts failing, then clean up
    state.isShutdown = true
    initiateGracefulShutdown()
})

function initiateGracefulShutdown() {
    // close database connections, then exit with the appropriate exit code
    knex.destroy(err => {
        process.exit(err ? 1 : 0)
    })
}

Also, once shutdown has started, the app should start returning an error on its liveness/readiness probe endpoint so that k8s stops routing traffic to it.
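
A minimal sketch of that, reusing redis, knex and the state.isShutdown flag from the snippets above (the /healthz route wiring is a placeholder):

// health-check handler used by both probes; it starts failing once SIGTERM
// has been received, so k8s stops sending traffic to this pod
router.get('/healthz', async ctx => {   // router is e.g. koa-router (placeholder)
    if (state.isShutdown) {
        ctx.throw(503, 'shutting down')
    }
    try {
        await Promise.all([redis.ping(), knex.select(1)])
        ctx.body = 'ok'
    } catch (err) {
        ctx.throw(500, 'not ok')
    }
})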

Minimal Node.js docker container

Bitnami recently released a prod version of their bitnami-docker-node image with a much smaller size, thanks to stripping out a bunch of stuff that is unnecessary at runtime.

If your app does not require compiling native modules, you can use it as is. No changes required.

However, if you do need to compile native modules, you can still use their development image as a builder and copy the artifacts over to the prod image afterwards.

I tried it with one of my apps and the final image size dropped from 333 MB down to just 56 MB 💪!! All this without the sacrifice of switching to an Alpine-based image.

Please note that these are the sizes reported by Amazon’s container registry (ECR), so they are probably compressed sizes. I don’t build images locally often.

Update: the uncompressed size of my app’s image is 707 MB before and 192 MB after.

# stage 1: full development image, which includes the toolchain needed to build native modules
FROM bitnami/node:8.6.0-r1 as builder

RUN mkdir -p /usr/src/app/my-app
WORKDIR /usr/src/app/my-app

COPY package.json /usr/src/app/my-app
RUN npm install --production --unsafe

COPY . /usr/src/app/my-app

# stage 2: slim prod image; copy the built app (with node_modules) from the builder
FROM bitnami/node:8.6.0-r1-prod
RUN mkdir -p /app/my-app
WORKDIR /app/my-app
COPY --from=builder /usr/src/app/my-app .
EXPOSE 3000

CMD ["npm", "start"]

Non-privileged containers FTW

# stage 1: use a full distro just to generate an /etc/passwd entry for a non-root user
FROM ubuntu:latest
RUN useradd -u 10001 scratchuser

# stage 2: scratch-based final image; copy the passwd file so USER can resolve the name
FROM scratch
COPY dosomething /dosomething
COPY --from=0 /etc/passwd /etc/passwd
USER scratchuser

ENTRYPOINT ["/dosomething"]

Quite an innovative use of multi-stage Docker builds. Of course, you can create the passwd file yourself, but this approach seems rather more elegant.

link to the original article

Recent Node.js TSC fuss