Blog

Presenting Code Trove

It's been some months in the making, but I now feel comfortable presenting this new small project that I call Code Trove, or trove for short.

The premise is simple: it's a repository of little scripts and code snippets, centralized in a single place and accessible through the CLI or your text editor. Something similar to the scratch pads in IntelliJ, but available "everywhere". And with some inspiration from Obsidian, borrowing the concept of a vault.

I've been using it for a few months and rewrote the whole thing about 200 times, but I think it's now in a state where I can iterate on it without rewriting it all over again.

It's written in Go, and for now the only extension available is for VS Code, but I plan to implement it at least for IntelliJ and Neovim.

I have no intention of making it available on the VS Code Marketplace for now, so you will have to download it from the repository or build it yourself. Check it out:
https://github.com/rafamoreira/trove

The value of it is kind of diminished in these AI times, as throwaway scripts have become even more throwaway, but I still think it's a neat tool, and I enjoyed the process of building it in a totally new language, using code agents.

Not everything needs to be an essay

I have made this one longer only because I have not had the leisure to make it shorter.

Blaise Pascal

I don't know if this is an artifact of SEO, LLM output, or a pushback against the microblogging format, but every time I want to read something interesting, it feels like the piece is five thousand words long, full of preambles, digressions, and all sorts of padding.

If it's a pushback against the microblog, I understand; I despise that format too. If anything, it made me very good at being succinct, though in Monkey's Paw fashion it probably destroyed my attention span as well.

I suspect, however, that this is more an effect of SEO, now 'roided up by the LLM flood.

And to be clear, there's nothing wrong with essays; long-form pieces can be very good and entertaining. But that's the catch: if you are just doing plain and simple exposition of a concept or idea, and the text is informative without padding, then that's what the text is - something that informs, and can be judged only by that.

It is, however, a proverbial pain in the ass to find an interesting idea or concept buried in a very long essay written by a mediocre writer with no capacity for entertaining.

Still, I sympathize with the person; maybe they really like writing but never bothered to get better at it. That's fine. The big caveat, though, is that it has to be a person. If the "author" of said piece is an LLM, and the human behind it didn't want to spend the time and effort to actually make it enjoyable to read - inverting the balance so that creating something takes 5 minutes while consuming it takes 30 - then I'll not read that shit.

There are too many good things to be read to waste our time with long form LLM output.

Vibe coding is just accelerated Extreme Go Horse

I use LLMs for coding; everyone is using LLMs for coding. This is changing the craft forever, and will change it even more. That's a fact; the genie is out of the bottle. But this has been said a million times already, and will be said many more. Nothing new.

Another often-repeated phrase is something like:

using LLMs responsibly and just vibing to your heart's content are two very different things.

But I don't think that's new either; people were creating slop before the big slop started. The difference is just the scale, and the area damage a single individual can inflict.

Any Brazilian programmer, especially the old-timers like me, will be familiar with Extreme Go Horse (XGH), and I'm very happy to see that this old formalization - this institution of coding, dare I say - is finally being recognized as the superior software development methodology.

For those not familiar with it, enjoy:

eXtreme Go Horse (XGH) Process

Original

Translation Source

  1. I think therefore it's not XGH. In XGH you don't think, you do the first thing that comes to your mind. There's not a second option as the first one is faster.

  2. There are 3 ways of solving a problem: the right way, the wrong way and the XGH way which is exactly like the wrong one but faster. XGH is faster than any development process you know (see Axiom 14).

  3. You'll always need to do more and more XGH. For every solved problem using XGH 7 more are created. And all of them will be solved using XGH. Therefore XGH tends to the infinite.

  4. XGH is completely reactive. Errors only come to exist when they appear.

  5. In XGH anything goes. It solves the problem? It compiled? You commit and don't think about it anymore.

  6. You commit always before updating. If things go wrong your part will always be correct... and your colleagues will be the ones dealing with the problems.

  7. XGH don't have schedules. Schedules given to you by your clients are anything but important. You will ALWAYS be able to implement EVERYTHING in time (even if that means accessing the DB through some crazy script).

  8. Be ready to jump off when the boat starts sinking. Or blame someone else. For people using XGH someday the boat sinks. As time passes by the system grows into a bigger monster. You better have your resume ready for when the thing comes down. Or have someone else to blame.

  9. Be authentic. XGH don't follow patterns. Write code as you may want. If it solves the problem you must commit and forget about it.

  10. There's no refactoring just rework. If things ever go wrong just use XGH to quickly solve the problem. Whenever the problem requires rewriting the whole software it's time for you to drop off before the whole thing goes down.

  11. XGH is anarchic. There's no need for a project manager. There's no owner and everyone does whatever they want when the problems and requirements appear.

  12. Always believe in improvement promises. Putting TODO comments in the code as a promise that the code will be improved later helps the XGH developer. He/She won't feel guilt for the shit he/she did. Sure there won't be no refactoring (see Axiom 10).

  13. XGH is absolute. Delivery dates and costs are absolute things. Quality is relative. Never think about quality but instead think about the minimum time required to implement a solution. Actually, don't think. Do!

  14. XGH is not a fad. Scrum, XP? Those are just trends. XGH developers don't follow temporary trends. XGH will always be used by those who despise quality.

  15. XGH is not always WOP (Workaround-oriented programming). Many WOP require smart thinking. XGH requires no thinking (see Axiom 1).

  16. Don't try to row against the tide. If your colleagues use XGH and you are the only sissy who wants to do things the right way then quit it! For any design pattern that you apply correctly your colleagues will generate 10 times more rotten code using XGH.

  17. XGH is not dangerous until you see some order in it. This axiom is very complex but it says that a XGH project is always in chaos. Don't try to put order into XGH (see Axiom 16). It's useless and you'll spend a lot of precious time. This will make things go down even faster. Don't try to manage XGH as it's auto-sufficient (see Axiom 11) as it's also chaos.

  18. XGH is your bro. But it's vengeful. While you want it XGH will always be at your side. But be careful not to abandon him. If you start something using XGH and then turn to some trendy methodology you will be fucked up. XGH doesn't allow refactoring (see Axiom 10) and your new sissy system will collapse. When that happens only XGH can save you.

  19. If it's working don't bother. Never ever change - or even think of question - a working code. That's a complete waste of time even more because refactoring doesn't exist (see Axiom 10). Time is the engine behind XGH and quality is just a meaningless detail.

  20. Tests are for pussies. If you ever worked with XGH you better know what you're doing. And if you know what you're doing why test then? Tests are a waste of time. If it compiles it's good.

  21. Be used to the 'living on the edge' feeling. Failure and success are really similar and XGH is no different. People normally think that a project has greater chances of failing when using XGH. But success is just a way of seeing it. The project failed. You learned something with it? Then for you it was a success!

  22. The problem is only yours when your name is on the code docs. Never touch a class of code of which you're not the author. When a team member dies or stays away for too long the thing will go down. When that happens use Axiom 8.

Community addition

  1. More is more. With XGH you thrive on code duplication - code quality is meaningless and there's no time for code reviews or refactoring. Time is of the essence, so copy and paste, quickly!

The joy of making things from scratch

If you wish to make an apple pie from scratch, you must first invent the universe.
Carl Sagan

I've been a programmer for many years at this point - professionally alone I'm about to cross the 20-year mark. Most of my career was built on a pragmatic foundation: always trying to use the best tools for the job, reducing development time, and using tools that wouldn't necessarily bring me joy but would do the job well. Lately, though, I've started to go on a tangent with my personal projects, where I would previously apply these same rules of pragmatism.

I have a nice job that I enjoy; I can pay my bills and save some money, and while I would never be able to afford a Lambo (not that I want one anyway), I'm very happy with my career. This realization freed me from trying to make my side projects profitable endeavors, and gave me a freedom in programming that I needed but didn't know I was missing. I can finally feel the joy of programming for fun again, just like it used to be.

This website, for example, although still using Flask as a framework, was something different and non-pragmatic enough that I wanted to try anyway. Right now, I'm in the process of removing the dependencies and doing everything from "scratch"—as much as possible and as much as my interest permits.

It needs to be a step-by-step approach: first removing the CSS lib, then removing Flask itself, maybe removing all Python dependencies (although I'm not sure I want to implement a markdown parser—and if I don't, that's fine). Then I can think about creating a new toy web server, maybe a proxy? Is that a good use of my time? Probably not, but I don't care. I don't need to be productive as long as I'm having fun with it. The best part? When it stops being fun, I can simply stop and move on to another thing.
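
As a taste of what "removing Flask" can look like, the Python standard library already ships enough to serve a page. This is only a sketch under my own assumptions - the route and content are placeholders, not the real site:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # A framework-free WSGI application: route on the raw path, nothing else.
    path = environ.get("PATH_INFO", "/")
    if path == "/":
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<h1>Hello, from scratch</h1>"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not Found"]

# To actually serve it (the job gunicorn does for the Flask version):
# make_server("", 8000, app).serve_forever()
```

It's obviously nowhere near feature parity with Flask, but that's the point: each removed dependency becomes its own small project.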

Another great side effect is that it's a good way to escape the inundation of AI tools that have lately been sucking the life out of everything around code. I don't feel pressured to be x-times more productive when working on these tasks; I just want to enjoy, learn new things, type the code, and see it working (or breaking).

Mind you, I'm not an anti-AI person in general, but the pressure that AI tools have been exerting on every programmer to be ultra-productive all the time, with no time to think, just shipping code constantly, is really starting to pile up. Things are starting to look a bit bleak from this perspective.

What's the limit? Going and implementing a toy language just to bootstrap everything? A new OS and all that comes with it? A new protocol? Probably not. For now, I'm happy to pick my apples, flour, sugar, etc., at the grocery store and still say that I'm making my pies from scratch - but who knows, one day I may have a garden and plant an apple tree :)

What should I test? - The Cascade Testing Method

For a long time, convincing people to create automated tests during development wasn't easy. Whether the argument was longer development time or lack of usefulness, there was always pushback. Nowadays, this has changed for the better: whatever methodology you choose for development, tests are an integral part of it in the vast majority of cases. The problem, at least as I see it now, isn't whether we should write tests, but what we should write tests for, in a realistic and pragmatic way.

Of course, we could be purists and try to test not just every line of code, but every logical outcome of each line. The first problem with this approach is that I don't think any tool actually enables you to do that. We have test coverage tools, but at least the ones I know only cover code execution, not logical outcomes. More importantly, this approach is unattainable, too burdensome, and would likely lead to test breakage with each and every code change.
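
To make that concrete, here's a tiny Python sketch (the function and test are hypothetical, mine, not from any tool): a single physical line holds two logical outcomes, and one test marks that line as fully covered by a line-coverage tool even though only one outcome ever ran.

```python
def parity(n: int) -> str:
    # Two logical outcomes share one physical line.
    return "even" if n % 2 == 0 else "odd"

def test_parity():
    # This single assertion reports the line as 100% covered,
    # yet the "odd" outcome was never exercised.
    assert parity(2) == "even"
```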

When discussing testing approaches, TDD (Test-Driven Development) is the first thing that comes to mind: write the acceptance test, then integration tests, then unit tests, iterating and refactoring at each step. And to be honest, on paper it sounds amazing. In practice, I've never managed to fully accomplish this. It could be a "skill issue" on my part, but to me, the TDD approach usually fails the pragmatism smell test, especially in the day-to-day reality of incoming tasks and deadlines. Even discounting that, at some point I really feel overwhelmed by it - it's just too much effort for very little gain in my experience.

Enter the Cascade Testing Method

Lately, I've developed something I'm calling the "Cascade Testing Method." It works like this: develop your code (either together with tests or tests after—that doesn't matter), but always start by testing the "happy path." Test that your main functionality works as envisioned.

Here's a very simple example: imagine a CLI tool that takes two numbers and outputs their sum. Write an integration test that calls the entrypoint function with 2, 2 and expects 4.

After establishing the happy path, create tests for obvious - and I mean very obvious - corner cases. In the same example, write a test that passes "2, R" as parameters and verify that the program behaves nicely, fails gracefully, or tells the user that the input isn't supported (whatever the expected behavior should be).
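
The sum-CLI example above can be sketched in Python like this; the function names and the error message are mine, purely illustrative:

```python
def add(a: str, b: str) -> str:
    # Entrypoint logic: sum two arguments, fail gracefully on bad input.
    try:
        return str(int(a) + int(b))
    except ValueError:
        return "error: inputs must be integers"

def main(argv: list) -> str:
    # The real CLI would call main(sys.argv[1:]).
    return add(argv[0], argv[1])

def test_happy_path():
    # Step 1: the main functionality works as envisioned.
    assert main(["2", "2"]) == "4"

def test_obvious_corner_case():
    # Step 2: a very obvious corner case fails gracefully.
    assert main(["2", "R"]) == "error: inputs must be integers"
```

Later, when a real bug shows up, a third regression test gets added in the same style before the fix.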

Finally, when bugs arrive (and they will) after the code starts being used, then apply a TDD-like approach to solve each bug.

Why This Works Better

This approach addresses what I find to be the most tedious part of TDD: writing tests before the code exists. With Cascade Testing, the interfaces and behaviors are already in place, and you know your "happy path" is working. When a bug appears, you write a test to replicate it—which isn't always easy, but often replicating the bug is more than half the work of solving it. Then you modify the code with the new test in place until it passes. Of course, this simple example can branch out significantly during bug investigation.

I've been using this method for some time now, and I'm pleased with the results. It really removes a lot of the cognitive load around what to test, which was always something that bothered me. It provides a practical balance between coverage and maintainability while ensuring that real-world issues are properly addressed through test-driven bug fixes.

How to upgrade your BIOS on an Asus motherboard without a compatible CPU, or the sad state of search

Here's how you do it, because apparently Asus can't be bothered to tell you:

  1. Go to the Asus website and download the correct BIOS file for your motherboard. If you use Windows, run the renamer.exe that comes with it; otherwise, check the listing to see how the file should be named (too difficult to provide a file with the correct name already, I guess).
  2. Format a USB drive as FAT and put the file on the root of the drive.
  3. Put the USB drive in the designated BIOS port.
  4. Turn off the PC, if it's still on.
  5. Hold the BIOS FlashBack button for a few seconds.
  6. The LED will start to blink.
  7. After a few seconds the green LED will start to blink faster.
    1. If, instead of blinking faster, the LED goes back to solid or shuts off, there's a problem with your USB drive. It could be anything: bad format, bad BIOS file, wrong name. Pay attention to every requirement.
  8. After a few minutes (around 5 in my case) the blinking LED will stop and the BIOS should be flashed with the correct version.

Why am I writing this?

A few weeks ago we got some new heavy-duty machines at work with the brand-new Ryzen 9 9950X and an Asus B650 motherboard, and although the B650 is compatible with Zen 5, a new BIOS is necessary to use the processor. These motherboards were manufactured in December 2023, so out of the box they are only compatible up to the Ryzen 7000 series.

We had the privilege of choosing the components for these machines, and we carefully selected motherboards that could be updated for Zen 5 without a Ryzen 7000 on hand. The model in question? The Asus TUF GAMING B650M-PLUS WIFI. It does indeed support this feature, but good luck figuring out how.

We're programmers, not IT wizards or system integrators. We were just fooling around building these PCs like regular consumers would. I've built countless systems in the past, but I've been out of the loop for a few years. I thought, "What could go wrong?" In the end, nothing did, but holy smokes, what a crappy journey.

ASUS manuals are a waste of trees

First off, hats off to Asus for constantly changing feature names with subtle differences. It's like they're trying to confuse us on purpose. The feature in question is now called "BIOS FlashBack™ button," but it used to be "BIOS USB FlashBack," or something equally forgettable.

Another point for Asus for not including a single word about this feature in the manual. They only bothered to mention the regular process of updating the BIOS through the interface, which is super helpful when you don't have a compatible processor.

There are a few scattered pages around their website, mostly with old instructions that don't apply to this MOBO - only the previous(?) process I was familiar with from my personal X470 CH7.

The sad state of search

Now for the most infuriating part. All I wanted was a simple written guide or manual on how this works. But no, that would be too easy. The current state of search engines is so depressingly bad that nothing actually useful came up in any of my queries. Just outdated results from the official website and a ton of SEO spam trying to make me click their affiliate links. I don't even doubt that one of those sites had the answers I needed, buried between 23 pages of AI-generated word salad.

After a lot of frustration I found a video describing the exact problem I wanted to solve. And while I respect the creator, it's really frustrating to have to sit through a 12-minute video about something that can be described in 194 words, which is the first part of this post.

Everything about this process annoys me to no end. Writing this post brought back all those feelings of frustration and disbelief. I'm even starting to question myself: am I being unrealistic to think that an expensive piece of hardware should come with a decent manual? Is it too much to ask for clear, accessible information about a critical feature? Apparently, for Asus, the answer is "yes."

Kamal v2 is awesome

A bit more than one year ago I wrote the post "Deploying Django with static files using Docker and Nginx proxy", and my opening line was:

This task is more complicated than I first thought it would be.

And it really was. I even dreaded putting more services on the website that would depend on that flimsy setup, but I was reading about Rails 8, and then I read that Kamal would have a new version as well, so I went to take a look at it.

When first announced, Kamal - then called MRSK - enticed me, but one limitation was a deal breaker for me: you could run only a single "service" per server. My understanding was that this limitation was related to Traefik and how it was set up, but I'm not sure about the details. I'm not even sure if it was a real limitation, or if there were workarounds, but it left me with the impression that Kamal was meant for bigger services than I needed.

Enter Kamal v2 and Kamal Proxy

Kamal v2 dropped Traefik in favor of Kamal Proxy, an in-house proxy that orchestrates routing between apps and even handles zero-downtime deployments - not a requirement for this website, but always nice to have.

Everything feels like a perfect fit; the user-facing complexity is minimal, and with two config files and two commands - kamal setup and kamal deploy - I had my app up and running.

This website's source code is available here, with all Kamal configurations and more. The current stack is much simpler than in my previous post too: a very small Flask app rendering HTML directly from Markdown, running on gunicorn, behind a Caddy reverse proxy that serves the static files (overkill for this site, but why not - Caddy is simple enough), and finally Kamal Proxy in front, dealing with auto-renewing Let's Encrypt certificates.

Let's take a brief look at the Kamal config file:

# Name of your application. Used to uniquely configure containers.
service: rafaelmc

# Name of the container image.
image: rafamoreira/rafaelmc.net

# Deploy to these servers.
servers:
  web:
    - 49.12.202.223

# Enable SSL auto certification via Let's Encrypt (and allow for multiple apps on one server).
# Set ssl: false if using something like Cloudflare to terminate SSL (but keep host!).
proxy:
  ssl: true
  hosts:
    - www.rafaelmc.net
    - rafaelmc.net
  # kamal-proxy connects to your container over port 80, use `app_port` to specify a different port.
  app_port: 80

# Credentials for your image host.
registry:
  server: ghcr.io
  username: rafamoreira
  password:
    - KAMAL_REGISTRY_PASSWORD

# Configure builder setup.
builder:
  arch: amd64

That's it. I won't go over the details as it's a very basic config, but realistically all you need for a simple setup is to change the IP, the image name, the hosts, and the registry user/pass; it can't get much simpler than that. The documentation on their website is very complete, so don't skimp on it.

I'm really hooked on Kamal. It's one of the most polished and streamlined experiences I've ever had with deployment tooling, like a 100x better version of old Capistrano (does Capistrano still exist?).

I know very well that my use case is not very sophisticated, and probably Kubernetes does everything better than Kamal, but for my use case, the simplicity and polish are unbeatable.

Kamal rekindled my interest in the 37signals sphere of influence. For ~12 years I did my main development in Ruby, using a lot of Rails, but when I got a new job at a Python shop, I basically abandoned Ruby and went all in on Python. But Kamal, combined with DHH's recent talks about no-build, makes me want to give Rails 8 a shot and see if it still feels like home, or if it will be an awkward reencounter.

Deploying Django with static files using Docker and Nginx proxy

This task is more complicated than I first thought it would be. For someone accustomed to Docker and all its intricacies it probably wouldn't be, but here we are. Before anything else, here are the two main docker-compose files used to achieve this setup. The first is for the acme/nginx-proxy:

---
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /root/docker/nginx/htpasswd:/etc/nginx/htpasswd
      - /root/docker/nginx/certs:/etc/nginx/certs
      - /root/docker/nginx/vhost:/etc/nginx/vhost.d
      - /root/docker/nginx/html:/usr/share/nginx/html
    networks:
      - proxy
  nginx-proxy-acme:
    image: nginxproxy/acme-companion
    volumes_from:
      - nginx-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /root/docker/nginx/acme:/etc/acme.sh
    environment:
      - DEFAULT_EMAIL=me@rafaelmc.net
    networks:
      - proxy
networks:
  proxy:

The second one takes care of Django and serves the static files.

# docker-compose.yaml
version: "3"

services:
  web:
    image: rfmc28/rafaelmc.net:latest
    command: bash -c "python manage.py migrate && python manage.py collectstatic --no-input && gunicorn rafaelmc.wsgi -b 0.0.0.0:8000"
    container_name: rafaelmc
    volumes:
      - sqlite_data:/app/sqlite_data/
      - /root/sqlite_backups/:/sqlite_backups/
      - /root/docker/envfiles/rafaelmc.net.env:/app/.env
      - static_files:/app/static/
    expose:
      - "8000"
    environment:
      - VIRTUAL_HOST=rafaelmc.net # www.rafaelmc.net
      - VIRTUAL_PORT=8000
      - LETSENCRYPT_HOST=rafaelmc.net # www.rafaelmc.net
      - VIRTUAL_PATH=/
      # - VIRTUAL_DEST=/static
    networks:
      - proxy

  static:
    image: nginx
    expose:
      - "80"
    environment:
      - VIRTUAL_HOST=rafaelmc.net #,www.rafaelmc.net
      - VIRTUAL_PORT=80
      # - LETSENCRYPT_HOST=rafaelmc.net #,www.rafaelmc.net
      - VIRTUAL_PATH=/static/
      - VIRTUAL_DEST=/
    volumes:
      - static_files:/usr/share/nginx/html/
    networks:
      - proxy
    depends_on:
      - web
networks:
  proxy:
    name: nginx-proxy_proxy
volumes:
  sqlite_data:
  static_files:

I think it's better to lay out all the files first, then go into a little more detail on the whys and hows.

The first thing to note is that I'm separating this into two different docker-compose files because I'm running them on Portainer, so each compose file will be its own stack. Portainer is entirely optional for this task, and it's possible and easy to combine the two compose files into one.

The most critical variables - and the most challenging to decipher from the documentation - are VIRTUAL_PATH and VIRTUAL_DEST, defined in the Django compose file. Although they are documented on the nginx-proxy image, understanding their meaning can be tricky if you're not familiar with Docker and Nginx terminology; at least it was for me.

Let's examine the web container, which is designed to serve the entire website, except for everything that resides in /static. The VIRTUAL_PATH for the web is /, meaning it serves everything. If you don't split your Nginx proxy into multiple containers, you never need to set this, and everything will operate as expected.

Next, we have the static container—a basic Nginx container. This one contains both VIRTUAL_PATH and VIRTUAL_DEST env variables. Here, VIRTUAL_PATH is equivalent to defining a location /static in nginx.conf.

While nginx can interpret // as the root, that might be confusing for people like me. To clear this up, I've defined VIRTUAL_DEST=/, which means nginx receives the requests rewritten as if they were relative to the root /.

For instance, say we have a file at example.com/static/css/style.css (the complete path). The request will be routed to the static container, as it's part of /static/. Upon reaching the static container, the request appears as if it were for /css/style.css at the root.
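
The routing described above can be illustrated with a small function. To be clear, this is only a sketch of the behavior that the VIRTUAL_PATH/VIRTUAL_DEST pair configures, not nginx-proxy's actual implementation:

```python
def rewrite(path: str, virtual_path: str = "/static/", virtual_dest: str = "/") -> str:
    # A request matching VIRTUAL_PATH is forwarded to the backend
    # container with the matched prefix replaced by VIRTUAL_DEST.
    if path.startswith(virtual_path):
        return virtual_dest + path[len(virtual_path):]
    # Anything else falls through untouched (handled by the web container).
    return path

# rewrite("/static/css/style.css") -> "/css/style.css"
```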

The final point to note is the structure of the static_files volume. It's declared and used by both the static nginx container and the web container. When we mount the volume, it will initially be empty; however, the command defined on the web container - python manage.py collectstatic --no-input - copies the necessary files there, so all the static files become accessible to both containers. The downside is that, to achieve some sort of automation, you need to run collectstatic on every deployment, even if nothing changed. It shouldn't be a problem; for this simple website it's very fast on my local machine:

time ./manage.py collectstatic --no-input

0 static files copied to '/Users/rmc/code/python/rafaelmc/staticfiles', 132 unmodified.
./manage.py collectstatic --no-input  0.14s user 0.04s system 95% cpu 0.186 total

A new future dead blog; so it goes

About 10 years ago I decided to axe my personal website and technical blog.
At that moment, it felt like a good choice. The 'death of the blog' narrative
was ubiquitous, and it seemed that the end of blogging was inevitable. Twitter
was roaring, and the untimely demise of Google Reader delivered what looked like
the final blow to blogs. We adapted and moved on; so it goes.

The Shift to Twitter and Beyond

Twitter, especially in its technical sphere, felt like a
natural successor to my tech blog. At that time, Stack Overflow was in its
glory days, and Google's SERP was amazing. Information was easily discoverable,
and I saw an opportunity to simplify my digital existence. My exhaustion with
Wordpress nudged me in this direction, so I said fuck it; so it goes.

As years passed, the sweet taste of social media started to turn to ash.
Twitter became unbearable, and I can't pinpoint the exact moment or reason that
tipped me over the edge, but it was definitely before the infamous purchase
era. I did revisit the platform after the takeover, not out of love, or hope,
but by the morbid curiosity akin to watching a train wreck unfold. Eventually,
even that lost its appeal; so it goes.

Despite my bitterness, the digital world still hosts numerous amazing
communities. Platforms like Mastodon, Discord, Matrix, IRC, and various forums
continue to thrive. But now I view them through a different lens: they are
mostly rapid-communication platforms. IRC isn't going the way of the Dodo
anytime soon; the others may yet follow the path of their predecessors;
so it goes.

Rebuilding my Blog

The term 'blog' might not accurately describe this platform.
It feels more akin to a personal tech pulpit - a space for me to project my
thoughts into the void, mixed with a playground of projects.
There's a certain comfort in this: even if no one else finds it useful,
it still serves a purpose for me, whether as a future reference or simply
as a means of organizing thoughts in a way that is comprehensible to
others - including a future version of myself.
After all, the present version will too cease to exist; so it goes.
