Frantic Notes

Leading yak shaving expeditions

React is a virus

In early 2012, MVC dominated the world.

In this context, thinking outside the MVC box was impossible, and React disguised itself as the “V in MVC” (quoting the original landing page). But some people felt that wasn’t entirely accurate…

“React is the V” just like my iPhone is just a phone. — @_chenglou

Engineers who used React noticed that there’s no place for a Controller in there — React appeared to handle that responsibility quite well.

Then there was the Model. Traditional models were mutable, and the View would subscribe to updates to change the relevant bits of the UI. This pattern was so common that JavaScript almost added the Object.observe API. With React, however, these granular model updates weren’t needed — just re-render the whole UI, it’s fairly cheap.

In fact, mutable models caused a lot of trouble, and engineers noticed that using immutable data made things much simpler and more performant.

Immutable models changed the way the “business logic” was written. It became more natural to write “pure” functions that take immutable data and return immutable data.

Then React started infecting the “separation of concerns” best practice. The prevailing belief was that presentation should be separated from data, which manifested itself in putting HTML, CSS and JS in separate folders.

But to people who wrote React every day it was clear this practice was no good. React components already bundled state management and presentation very elegantly, and having CSS and data fetching live in separate places felt unnatural. This ignited the boom of CSS-in-JS libraries and declarative data fetching via GraphQL.

And then the virus got to HTML. React could run on the server and render components into a static string, so technically there was no need for the HTML shell that used to host it.

React broke free from the web with the release of React Native. The component model was so elegant and so powerful that the DOM became just an implementation detail. React could now render into mobile primitives, canvas, GL, the terminal, etc.

React also changed the way web apps were bundled. In the old days you could get away with simply putting JS files in a folder. React shipped with a non-standard JS syntax extension (JSX) and required a small build script that transformed the files. It was simple, but it opened the door for more syntax extensions, complex transforms and other sophistication.

React has infected iOS and Android developers too, although they are probably never going to admit it. SwiftUI and Jetpack Compose build on top of the same programming model.

React has infected me with its ideas. It took over the way I think about building systems. And I’m grateful it did.


TODO apps are meant for robots

In my lifetime I’ve tried a dozen todo apps. In the beginning they all seem different, novel and special. Slick UI, shortcuts, tags, subtasks, the list goes on and on.

But the story was always the same: I start using the new app, then after a while I stop using it.

Up until last week I thought the problem was with me (you probably think so too). After all, David Allen seems to have figured this shit out. And there are people leaving long 5-star reviews on every major todo app; they discuss them on forums and recommend them to friends.

But then I read Andy Matuschak’s notes, and it really resonated with me. What if I’m a left-handed person in the world of right-handed tools? All popular todo apps out there have the same problems:

  1. Willpower needed to make decisions is a limited resource. And most TODO apps are lazy and don’t consider the impact on your willpower. You want to postpone a task? Please enter the exact date to postpone it to. Which project to add this to? Tags? Subtasks? The number of things one can customize is really large, but making all these decisions has a cost.
  2. Long lists are overwhelming. TODO apps are all about lists. And these lists tend to get large when the inflow of tasks exceeds the outflow (i.e. every modern knowledge worker’s queue). Looking at an ever-growing list of things that need to get done is not inspiring, to say the least. As the list gets longer, there’s less and less chance that anything on it will get done, which further decreases the motivation to look at it. Removing stuff without getting it done is also painful: it requires a complex emotional and rational decision (see the point about willpower above).
  3. A sense of accomplishment is important but rare in the digital world. When you mark a task as done in your TODO app, it just hides it. That’s it: no reward, no sense of accomplishment (unless you make your own). I think that’s why some people like Trello or a pen-and-paper TODO list: when you get something done, you can see a card move or a line get crossed out. An artifact that proves there was a task here, and now it’s done. Now you are one step closer to your goal.
  4. We need to trust our systems. GTD works only when you follow the rules. If you let your inbox grow unbounded, the whole point of GTD gets lost and you start losing trust in it. Another negative feedback loop. I’ve never seen a TODO app that lets you recover from this downward spiral.
  5. Tasks are not all the same. Get milk, write an essay, plan a vacation, reconnect with a friend. These are things of different magnitude, different emotional connection, different context and time commitment. Some tasks aren’t even tasks, e.g. simply items to keep track of or be reminded about. But TODO apps treat them all the same: they get similar-looking rows neatly organized in a unified interface.
  6. Sometimes humans need help. A little nudge here and there can make a huge difference. It’s also very personal: different things work for different types of people. I’ve made a list of strategies that help me get things done and ended up with 13 items (things like “extract the next smallest step as a separate task” or “work on it for just 2 minutes”). Thirteen! Guess how many nudges all my TODO apps have? Zero (except the deadline push notification, which just adds anxiety).
  7. Context is important. We are tired in the evening and have less willpower. Getting a small task done first thing in the morning can boost our confidence and energy levels. Work tasks are better hidden during the weekend. Sophisticated TODO apps have the flexibility to do this, but they require a lot of investment in configuration.

I now see all TODO apps as a shallow copy-pasta of the same rigid, inhuman, anxiety-inducing template.

But there’s hope!

In fact, the most advanced technology for this lies in the hands of productivity’s enemies: social media apps and games. Instagram, TikTok and Candy Crush have figured all this out. They know how to make you do something with very little willpower. They know how to present information in a way that’s not overwhelming. They give you rewards for doing things. Hints, nudges, suggestions.

I think there’s plenty of room for TODO innovations.

As for me, I’m not registering a domain name for a new pet project. Not yet :)


TODO file for personal projects

If you know me, you know that I’m not a very organized person. I hate rigid productivity systems. I’ve tried many things: Trello, Things, Github Issues, Pivotal Tracker, etc. But they all end up in the same state — detached from the real work I’m doing.

Here’s what worked for me.

In most of my personal projects I have a file called TODO. I use it like this.

When I have an idea about a feature or a bug, I just open the TODO file (Cmd+P → TODO → Enter), go to the end (Cmd+↓) and start typing.

If I’m away from my computer, I’ll use Things to capture the ideas and then move them to the TODO file.

My TODO file captures a whole bunch of things related to the project. I don’t have to actually do anything about these things at the moment, just capture items in my backlog.

Later on, when I have time to reflect on the progress, I plan a new milestone from the backlog.

A milestone is just a section in the TODO file that looks like this:

# v1.5 Polished in-game UI

The game screen looks tidy and clean, the player should
be able to figure out what state the game is in and what
should happen next. No new features!

A milestone has a title and a short description. The text describes the desired outcome, not how to get there. That helps me narrow down my focus.

I force myself to have only one milestone active at a time. All random items I want to do go to the Backlog section first.

I add milestones in the reverse order, the newest one is always at the top. This way when I open the file I see the most important thing first. Also I can still use the append workflow to add items to my backlog (which is always at the end).

Inside each milestone I have a bunch of todos, they look like this:

[ ] *•• Display user avatars

The first pair of square brackets is a “checkbox”. I don’t remove items when they are done; instead, I put an “x” into the space between the square brackets.

Then goes the estimate of how much effort I think the task will take. The scale is logarithmic: one star for simple straightforward tasks, two for a cross-file change or a little refactoring, three for a task that will take me a couple of hours. If the task needs four stars, I should break it down.

To make the text of the todos align nicely, I prepend a corresponding number of dots or spaces.
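For example, a few (made-up) items with one-, two- and three-star estimates line up like this:

```
[x] *•• Display user avatars
[ ] **• Refactor the end-game screen
[ ] *** Animate state transitions
```

Since the estimate column is always three characters wide, the task text starts at the same column on every line.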

Here’s what it looks like in one of the projects I work on:

Why does this work for me?

  1. There’s no context switch. The TODO file is much faster to open than any external tool I’ve used, and all my editor shortcuts just work there the same way they work in my code.
  2. It gives me a sense of progress. As I mentioned earlier, I don’t delete done items; they just get a nice X next to them.
  3. The history is maintained with the project via the same source control. I can blame the file and see what I did when.
  4. When I’m about to commit something, the message is ready (I just copy-paste the TODO line).
  5. It’s better than inline // TODO comments because I can organize the file the way I want. Also, different editors have different plugins for this, and I don’t want to depend on a concrete IDE plugin.
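As a quick illustration of point 3, here is a throwaway sketch (repo and task names are made up) showing that the file’s history comes for free with git:

```shell
# Throwaway demo: a TODO file tracked in git, then blamed
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

printf '[x] *•• Display user avatars\n' > TODO
git add TODO
git commit -qm 'Display user avatars'   # commit message copy-pasted from the TODO line

# "blame the file and see what I did when"
git blame TODO
```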

When does it not work?

For one, sometimes I just want to explore and have fun. I don’t have a TODO item for that, and I let myself poke around and learn new things in an unstructured way.

I’ve also noticed that the milestones don’t work for me when I keep adding new items to the milestone I’ve already started. I’m still trying to get better at this.


Debugging home internet connection

Having spotty internet connection is worse than having no internet at all.

In the apartment we are living in now, the internet is great 95% of the time. The remaining 5% was annoying enough to make me get serious about fixing the problem.

I should note that I’m a noob when it comes to networks. In retrospect I should have figured it out sooner. But it was a fun yak shaving expedition I want to share with you.

Step 1: Understand the problem

The symptom was always the same: at random times the internet connection would just disappear. The WiFi signal was strong, but no traffic was getting through.

We called the ISP but got nothing useful: they said the metrics on their end looked good, no disruptions in service.

I needed a way to prove that something was wrong.

I built a script that downloaded a 25MB file every 5 minutes and recorded the download speed. It also logged errors. Finally I could put the Raspberry Pi I had to good use!


speed=$(curl -Lo /dev/null -skw "%{speed_download}" "$url")

if [ $? -eq 0 ]; then
  echo "Speed: $speed" >> /var/log/dload.txt
else
  echo "Error" >> /var/log/dload.txt
fi

I added the script to crontab:

$ crontab -e

*/5 * * * * /home/pi/inet/

Unfortunately, the first half hour of running this didn’t reveal any problems (except the underwhelming connection speed). I decided to up my game a little and use production-grade tools.

Installing and configuring Grafana

I always wanted to learn more about Grafana, and this sounded like a perfect opportunity. I thought plotting the results of the download script would help me investigate the problem.

I’ll skip the part where I tried different backends for storing the data. I didn’t care much about any particular solution; I just needed something basic to store enough data to plot a simple bar chart. However, every tool tries to sell itself as enterprise-level, high-scale, etc., and comes with five million services that make up an advanced distributed architecture.

In the end I settled on influxdb (the 1.x branch, because 2.x didn’t have binaries for armv7l).

$ echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
$ echo "deb $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
$ wget -qO - | sudo apt-key add -
$ wget -qO - | sudo apt-key add -
$ sudo apt update
$ sudo apt install grafana influxdb
$ sudo systemctl daemon-reload
$ sudo systemctl unmask influxdb.service
$ sudo systemctl enable grafana-server.service
$ sudo systemctl start grafana-server
$ sudo systemctl start influxdb

I confirmed Grafana worked on 192.168.1.XXX:3000 and that it could connect to the local Influxdb instance.

Logging data to Influxdb was pretty easy; it’s just a POST request to its built-in HTTP server. When we log data, Influx requires a table name (a “measurement” in Influx terms), a list of 0 or more key-value pairs (tags) and a list of 1 or more values (fields). We could also give it a timestamp; if we skip it, Influx just uses the event’s time of arrival.

table,tag1=foo,tag2=bar value=42
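So a couple of (made-up) points for this setup would look like this; the `host` tag is hypothetical, and the trailing number on the second line is the optional timestamp in nanoseconds since the epoch:

```
speed,host=pi dl_bps=1250000
speed,host=pi dl_bps=1310000 1597000000000000000
```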

With this in place, I changed the download script to log proper events to Influx:

if [ $? -eq 0 ]; then
  curl -XPOST 'http://localhost:8086/write?db=inet' --data-binary "speed dl_bps=${speed}"
else
  curl -XPOST 'http://localhost:8086/write?db=inet' --data-binary "error value=1"
fi

Note that I used a different event name for errors; this makes it easier to plot them on the graph (zero or negative speed values would have made the graph less pretty and messed with things like average/p90 speed calculations).

This is the query in Grafana. A few things to note here:

  • I used the `math (* 8)` operation to get bits per second, since that’s how the ISP refers to the value
  • The errors are plotted as a different graph. Since the error value from the download script can only be 1, I had to tell Grafana to use a different Y axis

After letting this script run for a while, here are the results:

Clearly something was not right. I called the ISP again and gave them more info, time frames, etc. They still played innocent.

Looking at the hardware

To my surprise, the WiFi I was testing this on was served by an additional router that’s closer to the living rooms. I traced the cable, found the “main” switch this router was connected to and plugged my Raspberry Pi directly into an ethernet port. The results were almost perfect:

D’oh! I should have started with this and saved myself a lot of time.

I upgraded the in-room router (802.11g → 802.11n) and reconfigured it to be a dumb access point. My WiFi problem was fixed.

Since I went this far, I figured I’d make this dashboard thing even better.

Making dashboard even better

The main router has a web-based UI. It’s not a very pretty one, but it’s definitely workable. It has all this useful info, like total bytes sent/received, info about clients, etc.

Old school Web UI with HTTP basic auth

We are all spoiled by REST and GraphQL APIs these days, but the web of the past had its own charm. It was so simple.

Here’s what’s going on when I click the “Refresh stats” button:

Chrome DevTools has a feature that allows you to copy the request as a curl command

Gluing a few curls and greps together, I came up with this. It works, but after figuring out how to turn something into an array in Bash, I wished I’d just gone with Python.


ROW=$(curl -sS '' -H 'Authorization: Basic aHR0cHM6Ly9jdXR0Lmx5L3Z0aG1TaGE=' -H 'Accept: text/html' | grep 'var statistList = new Array' -A 1 | tail -n 1)

IFS=', ' read -r -a stats <<< "$ROW"
curl -XPOST 'http://localhost:8086/write?db=inet' --data-binary "router_stats bytes_received=${stats[0]},bytes_sent=${stats[1]},packets_received=${stats[2]},packets_sent=${stats[3]}"
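The IFS trick is the part that took me the longest to figure out; here is a minimal, standalone sketch of it with a made-up stats row in place of the router’s real output:

```shell
# A hypothetical stats row, the way the router's page embeds it:
# comma-separated counters (bytes received/sent, packets received/sent)
ROW="123456, 654321, 1000, 2000"

# Split on commas (and the adjacent spaces) into a bash array
IFS=', ' read -r -a stats <<< "$ROW"

echo "bytes_received=${stats[0]} bytes_sent=${stats[1]}"
# → bytes_received=123456 bytes_sent=654321
```

Bash collapses the whitespace next to each comma, so the array ends up with exactly four clean fields.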

Note that the router doesn’t give me the speed, just the total bytes. Fortunately, Grafana can take a derivative of that value, giving me an approximate speed at a point in time:

I did the same for the per-device stats page, ping and a few other things.

Currently, the end result looks like this:


Reconsidering the way I explain programming

“Do you know a recipe for a recursive salad?” – I asked. “It consists of tomatoes, olives and a recursive salad”.

My joke falls flat. Michael’s eyes are confused, waiting for an explanation. I regroup and try a different strategy – sketching:

One of my failed attempts at explaining recursion

I’ve explained a lot of programming concepts to different people. From high school students who are just getting started, to experienced engineers who are quickly diving into a new programming language.

I used to take a lot of pride in the clever explanations I came up with. “Your UI is just a function of state”, “the closure hugs your variables in scope”, “Prolog function arguments can be in or out”. I also loved the visuals: formatting code and showing clever animations.

But many times the people on the receiving end would not be as excited. I thought my delivery was poor.

Now after so many years, it finally hit me.

Programming is complex and abstract. Like advanced math, it’s removed from the everyday things we normally deal with. What I was describing was only my mental model. Words, pictures and a lot of hand-waving were the way I had internalized these abstract concepts.

Unfortunately, understanding them is not enough to explain them.

Andy Matuschak recently had a beautiful piece on the status quo format of lectures:

“the lecturer says words describing an idea; the class hears the words and maybe scribbles in a notebook; then the class understands the idea.” In learning sciences, we call this model “transmissionism.” It’s the notion that knowledge can be directly transmitted from teacher to student, like transcribing text from one page onto another. If only! The idea is so thoroughly discredited that “transmissionism” is only used pejoratively, in reference to naive historical teaching practices. Or as an ad-hominem in juicy academic spats.

All my “perfect” models were beautiful only in my head. They did strike a chord with others, sometimes, but it was sheer luck – their intuition was in tune with mine for that particular problem.

So what do I do differently now?

Listen very carefully

First, I try to listen very carefully to their question. If they are not talking much I’ll ask questions and keep listening. I’ll keep notes on how they explain themselves:

  1. What names do they use to refer to abstract concepts? I’ll try to use the same.
  2. What kind of modality do they operate in? Do they “see” things or “listen” to them?
  3. How deep do they need to go? Do they just want to fix something and move on, or understand it on a more fundamental level?

Operate within their mental model

Second, I resist the urge to explain it exactly how I understand it. I try to accept their mental model of the world, even if I believe it’s not super accurate. As long as it’s not hurting their understanding, I’m willing to skip over non-essential bits.

I also adopt their language and use the same names in my examples.

Let them explore

If the environment allows for it, I’ll encourage them to use the debugger, logs or experiment with the code. For this to work, sometimes I need to reduce the problem space to a much smaller one.

The key is to let them poke the real world (in this case, the way the compiler or the programming language runtime works) and tune their own model. The goal is to help them develop their own intuition instead of conveying my own.

This can be generalized: successful communication is so much about understanding the context and the people on the receiving end.

Back to Michael. I showed him the IntelliJ debugger and asked him to trace a very simple recursive program he had written.

“So it’s like the staircase!”, he exclaimed. “Every time we go in is like taking a step, and then we return all the way down”. Well, I guess it is kind of like a staircase... I never thought about recursion this way myself. Now I think I can explain tail call optimization via an escalator analogy.

This is not the model I had in my mind, but I’ll definitely use it in the future.