Tuan Anh

container nerd. k8s || GTFO

Using k8s kind "rootlessly" without Docker

So you've probably already heard the news: Docker Desktop is no longer free. While this mostly affects macOS and Windows users, and I use Pop!_OS, I still wanted to see if I could get by without Docker at all.

I’ve been using nerdctl for quite a while now, and while nerdctl mostly fills my needs as a docker CLI replacement, I “kinda” need the kind CLI to create throwaway clusters for testing. However, kind still needs docker.

What if I aliased nerdctl to docker? I did just that (via a symlink) and tried again.

ln -s nerdctl docker
kind create cluster --name test

Now I’m getting a different error.

ERROR: failed to create cluster: running kind with rootless provider requires cgroup v2, see https://kind.sigs.k8s.io/docs/user/rootless/

Well, this is good, right? I just have to enable cgroup v2 and then I should be good to go? I usually do have cgroup v2 enabled, but I’m trying Pop!_OS at the moment and its kernel is kinda old. So I upgraded the kernel to the latest stable (5.13), using a custom kernel by Xanmod.

uname -a
Linux x300 5.13.14-xanmod1 #0~git20210903.d548864 SMP PREEMPT Fri Sep 3 13:21:07 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

The steps are a bit different from when I was using Manjaro, but it basically boils down to:

  • Adding a new kernel parameter, systemd.unified_cgroup_hierarchy=1. The instructions on the kind SIG page don’t work for me; on Pop!_OS, I need to use kernelstub.
  • Delegating a few more controllers, namely cpu, cpuset and io.
sudo kernelstub -a "systemd.unified_cgroup_hierarchy=1"
sudo reboot
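
After rebooting, it's worth confirming that the unified (v2) hierarchy is actually mounted before retrying kind; a quick sanity check:

```shell
# "cgroup2fs" here means the unified cgroup v2 hierarchy is mounted at /sys/fs/cgroup
stat -fc %T /sys/fs/cgroup/
```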

After that, I’m getting a slightly different error.

ERROR: failed to create cluster: running kind with rootless provider requires setting systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/

This happens because only the memory and pids controllers are delegated to non-root users by default, but we need more, specifically the cpu, cpuset and io controllers.

We can verify this with the following command. You will see that only memory and pids are delegated.

cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
memory pids

You can delegate more by doing the following, then verify with the command above.

# mkdir -p /etc/systemd/system/user@.service.d
# cat > /etc/systemd/system/user@.service.d/delegate.conf << EOF
Delegate=cpu cpuset io memory pids
EOF
# systemctl daemon-reload

If all is good, this is what you'll see:

cat /sys/fs/cgroup/user.slice/user-$(id -u).slice/user@$(id -u).service/cgroup.controllers
cpuset cpu io memory pids

I thought it would be OK now, but no, I still got the same error.

ERROR: failed to create cluster: running kind with rootless provider requires setting systemd property "Delegate=yes", see https://kind.sigs.k8s.io/docs/user/rootless/

At this point, I decided to jump into the kind codebase to see the condition that triggers that error. It turns out kind uses the docker info command to check whether cgroup v2 is active and which controllers are delegated. And nerdctl doesn’t emit that info yet.

nerdctl info looks like the output below, while the docker one has a lot more information, e.g. whether CPU shares are supported, whether the pids limit is supported, etc…

{
  "ID": "86232191-2d46-475b-be0c-1472c5174763",
  "Driver": "overlayfs",
  "Plugins": {
    "Log": [...],
    "Storage": [...]
  },
  "LoggingDriver": "json-file",
  "CgroupDriver": "systemd",
  "CgroupVersion": "2",
  "KernelVersion": "5.13.14-xanmod1",
  "OperatingSystem": "Pop!_OS 21.04",
  "OSType": "linux",
  "Architecture": "x86_64",
  "Name": "x300",
  "ServerVersion": "v1.5.5",
  "SecurityOptions": [...]
}

So at this point, all I can do is log an issue on the nerdctl repo and see whether that's really the only problem, or whether something else prevents kind from working with nerdctl.

Update: I tried to fix the nerdctl info command, and once I did, I got another error regarding nerdctl ps, where the --filter flag is not yet implemented. So this is where I stopped for now. I will revisit this later.

ERROR: failed to create cluster: failed to list clusters: command "docker ps -a --filter label=io.x-k8s.kind.cluster=test --format ''" failed with error: exit status 1
Command Output: Incorrect Usage: flag provided but not defined: -filter

Getting started with Pod Security Admission (PSA)

K8s 1.22 introduces the alpha version of Pod Security Admission (PSA from here on), the replacement for Pod Security Policy (PSP).

This post walks through how to set up PSA and use it in the most basic way.

Enable PSA

To keep the lab simple, I will use kind to create a local cluster. I will create the cluster with PSA enabled using the following config.

Note that I enable PodSecurity under featureGates by setting it to true.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  PodSecurity: true
nodes:
- role: control-plane
- role: worker

Create the cluster with the command below.

kind create cluster \
  --image=kindest/node:v1.22.0@sha256:b8bda84bb3a190e6e028b1760d277454a72267a5454b57db34437c34a588d047 \
  --config kind-config.yaml

On success, you will see logs like this.

Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.22.0) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋

Now we have a PSA-enabled cluster ready to play with.


There are 3 basic policy levels:

  • Privileged: wide open, and not recommended
  • Baseline: the basics
  • Restricted: the strictest, and current best practice.

These 3 policy levels are applied in 1 of 3 modes (fairly self-explanatory, so I won't go deeper):

  • enforce
  • audit
  • warn

These policies restrict the use of certain fields in the Pod spec, such as container ports, hostPath volumes and securityContext.
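
For instance, a Pod like this sketch (the name and image are mine, not from any policy docs) would get flagged by baseline on at least two of those fields:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: baseline-violations   # hypothetical example pod
spec:
  hostPID: true               # host namespace: flagged by baseline
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:                 # hostPath volume: flagged by baseline
      path: /
```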

Applying a policy to a namespace

To preview, I use the dry-run flag first: apply baseline in warn mode, with warn version 1.22, on the default namespace.

kubectl label --dry-run=server --overwrite ns default \
  pod-security.kubernetes.io/warn=baseline \
  pod-security.kubernetes.io/warn-version=v1.22

This namespace is freshly created, so normally there's nothing in it.

For fun, let's dry-run again against all namespaces, this time in enforce mode, and see what happens.

kubectl label --dry-run=server --overwrite ns --all \
  pod-security.kubernetes.io/enforce=baseline

Now you will see some warnings like the ones below. This is expected behavior, since the apiserver, etcd, the rest of the control plane and kube-proxy need those capabilities. Nothing to be alarmed about here.

namespace/default labeled
namespace/kube-node-lease labeled
namespace/kube-public labeled
Warning: kindnet-s4b45: non-default capabilities, host namespaces, hostPath volumes
Warning: kube-apiserver-kind-control-plane: host namespaces, hostPath volumes
Warning: etcd-kind-control-plane: host namespaces, hostPath volumes
Warning: kube-controller-manager-kind-control-plane: host namespaces, hostPath volumes
Warning: kube-scheduler-kind-control-plane: host namespaces, hostPath volumes
Warning: kube-proxy-m69hp: host namespaces, hostPath volumes, privileged
Warning: kube-proxy-gcm2c: host namespaces, hostPath volumes, privileged
Warning: kindnet-pvg4x: non-default capabilities, host namespaces, hostPath volumes
namespace/kube-system labeled
namespace/local-path-storage labeled

What about applying it cluster-wide?

You need some extra configuration on the API server: pass the path to a configuration like the one below via the --admission-control-config-file flag.

Here is an example that enables baseline in enforce mode on all namespaces except kube-system.

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: DefaultPodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1alpha1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
    exemptions:
      usernames: []
      runtimeClassNames: []
      namespaces: [kube-system]

To get going, you can start with baseline, then build additional policies on top depending on your organization's needs :)

Happy hacking k8s :)

My blogging setup these days

Previous setup

I used a small VPS instance on RamNode to host my blog previously. No particular reason. I just happened to have lots of unused credits there.

I had a local git repo on my Macbook and set up a git hook to trigger a jekyll build on the VPS. Nothing fancy. No CI/CD whatsoever.

The new setup

I recently migrated my blog from self-hosted on RamNode to Cloudflare Pages. There are still some quirks but it’s alright for personal use.

I host the source code of my blog on GitHub, which already has a nice Markdown editor with preview that I can use from anywhere via a web browser.

I can just create a new file in _posts, preview with GitHub editor and when I’m done with it, commit and push. Cloudflare Pages will take it from there.

This is important because these days, I’m mostly using my iPad for all of my computational needs. And the web browser on iPad is desktop grade now :).

Prior to this, I tried another setup where I ran code-server on a local machine at home and used vscode via the browser. To access code-server from the Internet while away from home, I installed Tailscale on that machine and on my iPad. Btw, Tailscale is amazing. You ought to try it out.

While the code-server setup works and is quite extensible, it still feels too heavy for blogging needs. I keep it around for when I need to tinker with Golang or Rust.

How to write Node modules with Rust

TLDR: there’s a sample repo here if you’re too lazy to read this post. The sample repo includes a GitHub Actions sample for CI as well.

The Rust bit

It’s very simple. You write your function in Rust:

#[js_function(1)]
fn say_hello(ctx: CallContext) -> Result<JsString> {
  let name = ctx.get::<JsString>(0)?.into_utf8()?;
  let name = name.as_str()?;
  let s = ctx.env.create_string_from_std(format!("Hello, {}!", name))?;
  Ok(s)
}

And then you expose it to the Node.js runtime with:

#[module_exports]
fn init(mut exports: JsObject) -> Result<()> {
  exports.create_named_method("say_hello", say_hello)?;
  Ok(())
}

You then build it with napi using this command: napi build --platform

And if you want to build for multiple platforms, you will end up with multiple binaries, so you need to load the right one in Node.js. We use the detect-libc library for this.

let parts = [process.platform, process.arch];
if (process.platform === 'linux') {
  const { MUSL, family } = require('detect-libc');
  // suffixes below follow napi's platform-triple naming convention
  if (family === MUSL) {
    parts.push('musl');
  } else if (process.arch === 'arm') {
    parts.push('gnueabihf');
  } else {
    parts.push('gnu');
  }
} else if (process.platform === 'win32') {
  parts.push('msvc');
}

module.exports = require(`./node-module-rust-example.${parts.join('-')}.node`);

And that’s it. You can replace test.js with your favorite test runner. The test code there is just dummy code.

What the fuck is even GitOps

Origins

In 2017, the term GitOps was promoted by WeaveWorks with the post “Operations by Pull Request”. Roughly:

  • the k8s system state is stored in a git repo
  • changes are made via pull requests, then run through a CI/CD pipeline
  • tooling exists to detect configuration drift and reconcile it.

At a glance, there's nothing particularly special here: since 2016, when my team rolled out Kubernetes, we already did CI/CD and stored system state in a git repo. All we were missing was the drift detector and reconciler, and we'd have been doing GitOps ;))

If you're interested in what GitOps really looks like, check out the GitOps working group's repo.

So what's actually new about GitOps?

Honestly, for DevOps folks, there's nothing new in GitOps at all.

If you've worked with infrastructure as code, all of the definitions above are already covered. Combined with CI/CD, the two are basically the same idea with slightly different scopes: one covers infrastructure config + state, the other application workload state.

To me, GitOps is just a trend, nothing more, nothing less, promoted by WeaveWorks. Even the GitOps working group is heavily influenced by WeaveWorks.

Add in the rapid rise of k8s popularity, and riding the GitOps trend has become hotter than ever.

Do I need GitOps?

The answer is: it depends :D. Not every part of your organization is familiar with Git. If your organization isn't engineering-driven, it's probably best not to use it.

Understand the mechanics, pros and cons of each approach in depth, then apply what fits your organization.

Distributed tracing is the new structured logging

Structured logging

A best practice that is still recommended today is structured logging.

Structured logging is logging in key=val form, which makes it easy to parse logs and ship them into a log store for convenient querying and analysis.

{
    request_time: 1000,
    payload_size: 2000
}

A structured log line is then generated that, besides the fields we log, carries additional metadata such as timestamp, hostname, deployment name, pod name, etc., depending on what we choose to annotate.


At a glance, structured logging looks a lot like defining a relational table or a schema.

But even though structured logging resembles a relational table, don't shove your logs into a relational database. That would be a disaster if your app logs a lot.

Up to this point, we can filter, query and aggregate log data much like SQL, except for JOINing to pull in data from the query context.

So far so good, right?

Enter microservices!!

Everything was fine until we switched to a microservices architecture. With microservices, each service only holds the part of the picture (the context) that is available to that service.

What if we want to log more? For example, say you need to log the experiment flags in the context of a given request.

Simple, right? We just pass that information along to the microservice that needs it, and handle the JOIN at the app-code layer by logging the extra information we just passed over.

{
    request_time: 1000,
    payload_size: 2000,
    experiment_flags: ['ABTEST_1', 'ABTEST_2']
}

But doing this is extremely costly to maintain, error-prone and doesn't scale.

Distributed tracing to the rescue

Distributed tracing rose to popularity around the time microservices architecture had become fairly mature, when people understood microservices better, along with their limitations (namely observability).

And this is why I say "distributed tracing is the new structured logging". It will be an indispensable part of any microservices architecture, and it can replace (perhaps not entirely) structured logging.

The state of Linux on desktop (2020)

I got fed up with macOS. While the new hardware (Apple Silicon) received amazing feedback, the OS itself lags far behind.

I have a Windows 10 desktop at home and heck, it is even much more pleasant to use than macOS.

  • As a typical user (web browsing, mail and office stuff), Windows 10 is very good.
  • As a developer, it's getting a lot better with WSL, Microsoft Terminal, etc…

I decided to give Linux another evaluation. This time, after hearing all kinds of praise from its users, I picked Manjaro, an Arch-based distro, over an Ubuntu-based one. But I also don't want to configure everything by hand, hence Manjaro rather than Arch itself.

Manjaro is a user-friendly Linux distribution based on the independently developed Arch operating system. Within the Linux community, Arch itself is renowned for being an exceptionally fast, powerful, and lightweight distribution that provides access to the very latest cutting edge - and bleeding edge - software. However, Arch is also aimed at more experienced or technically-minded users. As such, it is generally considered to be beyond the reach of those who lack the technical expertise (or persistence) required to use it.

via wiki.manjaro.org

So it looks like it gets the best of both worlds, right?

The test setup

I built a new mini PC recently: the Asrock Deskmini X300W, which uses an AMD processor. If you prefer Intel, you can choose the Intel version of the box.

I went with AMD because I like their Zen offering and I would love to support them.

I just threw in a 6-core AMD 4650G processor, 32GB of 3200MHz Crucial memory, a 512GB Samsung NVMe drive for the OS and other stuff, plus another 1TB 2.5" SSD for storage.

For the OS, I went with the Manjaro KDE variant because I like the look of it.

The experience

Almost everything works out of the box.

  • Graphics work right out of the box. I don't have an Intel GPU, which makes things much easier for me, though I hear terrifying stories from the other side of the world.

  • WiFi works. Zero complaints here.

  • Bluetooth is almost OK. Most stuff I throw at it works, except an old Xbox One controller of mine. The one that came with the Xbox One S works with 1 minor additional step (disabling ERTM). I tested with 4 Bluetooth mice, 2 keyboards, 1 speaker and 2 Xbox controllers.

  • Since I picked KDE, it's a bit troublesome to set up the i3 window manager. After reading several tutorials, I decided not to bother with one. Instead, I settled on the krohnkite plugin for KWin. It works really well for my needs, given that my needs are pretty basic.

  • I game once in a while, and Manjaro even comes bundled with Steam (LOL). One might say that's bloated, but I'm OK with it. Storage is cheap these days.

  • The developer experience is awesome. Linux is usually a first-class platform for open source projects. Everything just works. Docker is so fast because no VM is required. It's the best platform for developers, hands down.


So far, I'm loving it. It does everything I need and works with all the peripherals I have, with the exception of the Xbox One controller (a wired connection still works though). I'm gonna stick with Manjaro for now. I don't see myself moving to Arch since my love for tweaking the system is long gone. I just want something that works, and Manjaro works very well for me.

Using Cloudflare Warp on Linux

Cloudflare Warp does not currently support Linux. However, since it's just WireGuard underneath, we can still use it unofficially.

Install wgcf and wireguard-tools

  • Get wgcf from its repo.
  • Install wireguard-tools. I use Manjaro, so I will use pacman for this: pacman -S wireguard-tools.

Generate Wireguard config

You can now use wgcf to register, and then generate Wireguard config.

wgcf register
wgcf generate
  • The register command will create a file named wgcf-account.toml.
  • The generate command will generate a WireGuard config file named wgcf-profile.conf.


Now, copy the generated profile over to /etc/wireguard and use the wg-quick utility to simplify setting up the WireGuard interface.

sudo cp wgcf-profile.conf /etc/wireguard
sudo wg-quick up wgcf-profile

Verify it’s working with wgcf trace or navigate to this page: https://www.cloudflare.com/cdn-cgi/trace. The output should have warp: on.


An extremely fast streaming SAX parser for Node.js

TLDR: I wrote a SAX parser for Node.js. It’s available here on GitHub : https://github.com/tuananh/sax-parser

I get asked about full XML parsing with camaro from time to time, and I haven't managed to find the time to implement it yet.

Initially I thought it should be part of the camaro project, but now I think it makes more sense as a separate package.

The package is still in alpha state and should not be used in production but if you want to try it, it’s available on npm as @tuananh/sax-parser.


The initial benchmark looks pretty good. I just extracted the benchmark script from the node-expat repo and added a few more contenders.

sax x 14,277 ops/sec ±0.73% (87 runs sampled)
@tuananh/sax-parser x 45,779 ops/sec ±0.85% (85 runs sampled)
node-xml x 4,335 ops/sec ±0.51% (86 runs sampled)
node-expat x 13,028 ops/sec ±0.39% (88 runs sampled)
ltx x 81,722 ops/sec ±0.73% (89 runs sampled)
libxmljs x 8,927 ops/sec ±1.02% (88 runs sampled)
Fastest is ltx

The ltx package is the fastest, winning by a factor of almost 2 (~1.8x) over the second fastest (@tuananh/sax-parser). However, ltx is not fully compliant with the XML spec. I still include ltx here for reference. If ltx works for you, use it.

module                 ops/sec
node-xml                 4,335
libxmljs                 8,927
node-expat              13,028
sax                     14,277
@tuananh/sax-parser     45,779
ltx                     81,722


The API looks simple enough and will be familiar to users of other SAX parsers. In fact, I took inspiration from them (sax and node-expat) and mostly copied their APIs to make the transition easier.

An example of using @tuananh/sax-parser to prettify XML would look like this:

const { readFileSync } = require('fs')
const SaxParser = require('@tuananh/sax-parser')

const parser = new SaxParser()

let depth = 0
parser.on('startElement', (name) => {
    let str = ''
    for (let i = 0; i < depth; ++i) str += '  ' // indentation
    str += `<${name}>`
    process.stdout.write(str + '\n')
    depth++
})

parser.on('text', (text) => {
    let str = ''
    for (let i = 0; i < depth; ++i) str += '  ' // indentation
    str += text
    process.stdout.write(str + '\n')
})

parser.on('endElement', (name) => {
    depth--
    let str = ''
    for (let i = 0; i < depth; ++i) str += '  ' // indentation
    str += `</${name}>`
    process.stdout.write(str + '\n')
})

parser.on('startAttribute', (name, value) => {
    // console.log('startAttribute', name, value)
})

parser.on('endAttribute', () => {
    // console.log('endAttribute')
})

parser.on('cdata', (cdata) => {
    let str = ''
    for (let i = 0; i < depth; ++i) str += '  ' // indentation
    str += `<![CDATA[${cdata}]]>`
    process.stdout.write(str + '\n')
})

parser.on('comment', (comment) => {
    process.stdout.write(`<!--${comment}-->\n`)
})

parser.on('doctype', (doctype) => {
    process.stdout.write(`<!DOCTYPE ${doctype}>\n`)
})

parser.on('startDocument', () => {
    process.stdout.write(`<!--=== START ===-->\n`)
})

parser.on('endDocument', () => {
    process.stdout.write(`<!--=== END ===-->`)
})

const xml = readFileSync(__dirname + '/../benchmark/test.xml', 'utf-8')
parser.parse(xml)

camaro v6

I recently discovered the piscina project. It's a very fast and convenient Node.js worker thread pool implementation.

Remember when worker_threads was first introduced? Worker startup was rather slow, and using a pool was generally advised. However, there wasn't any good enough pool implementation until piscina.

Since v4, when I moved to WebAssembly, camaro's performance took a huge hit (about 3x) and I had been trying to find a way to fix this perf regression.

Well, piscina (worker_threads) seems to be the answer to that.

Take a look at the piscina example:

const path = require('path');
const Piscina = require('piscina');

const piscina = new Piscina({
  filename: path.resolve(__dirname, 'worker.js')
});

(async function () {
  const result = await piscina.runTask({ a: 4, b: 6 });
  console.log(result); // Prints 10
})();

and worker.js

module.exports = ({ a, b }) => {
  return a + b;
};
Sure, it looks simple enough, so I wrote a quick script to wrap camaro with piscina. And the performance improvement is sweet: it's about five times faster (ops/sec), and the CPU on my laptop is nicely saturated.

camaro v6: 1,395.6 ops/sec
fast-xml-parser: 153 ops/sec
xml2js: 47.6 ops/sec
xml-js: 51 ops/sec

More importantly, it scales nicely with CPU core count, which camaro v4 with WebAssembly doesn't.

In order to use this, I would have to drop support for Node 11 and older, but a performance improvement of this magnitude should warrant such a breaking change, right?

I published the first alpha build to npm if anyone wants to give it a try.