detfalskested

AI imposter syndrome

Colton Voege in his blog post about curing your AI 10x engineer imposter syndrome:

It's okay to sacrifice some productivity to make work enjoyable. More than okay, it's essential in our field. If you force yourself to work in a way you hate, you're just going to burn out. Only so much of coding is writing code, the rest is solving problems, doing system design, reasoning about abstractions, and interfacing with other humans. You are better at all those things when you feel good. It's okay to feel pride in your work and appreciate the craft. Over the long term your codebase will benefit from it.

Birthday cake for dfs

25 years of dfs

Today it is 25 years since I registered the domain detfalskested.dk. Since then it has served as my permanent home on the internet, for both web and e-mail.

It feels both like yesterday and like an almost unfathomably long time.

The blog has been a recurring element. At times also for friends who needed a place of their own. Unfortunately a lot has been lost, due to server crashes and poor backup routines. Especially in the early years, dfs was also a playground and a laboratory for my coding, and over the years it has offered things like fatwageneratoren and gladlisten. That led to me getting my first job as a professional IT guy a little over 20 years ago, which has been my line of work ever since.

Thanks to Morten and Peter for the inspiration.

🎂🇩🇰

Coding with AI

Thomasorus tried coding with AI, but realised it made him dumber:

When I tried to fix the security issues, I quickly realized how this whole thing was a trap. Since I didn't write it, I didn't have a good bird's eye view of the code and what it did. I couldn't make changes quickly, which started to frustrate me. The easiest route was asking the LLM to deploy the fixes for me, so I did. More code was changed and added. It worked, but again, I could not tell if it was good or not.

That's when I stopped the experiment.

Elasticsearch: reconcile-desired-balance

I've been struggling all day to get my GitLab pipelines running properly again.

It turns out the combination of Docker happily eating all your disk space, and Elasticsearch being very cautious about working when there is not plenty of disk available, made things break down.

My setup is GitLab runners running inside Docker, testing a Django app with Elasticsearch (ES) attached as a service. This has been working flawlessly forever. But at some point recently, I started getting errors for my tests that depend on ES. And the weird thing is that it started during a quiet period where I did not touch the setup or the code.
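For context, the job definition is roughly this shape (a sketch, not my actual .gitlab-ci.yml; the image tags, service alias and test command here are made up):

    # .gitlab-ci.yml (sketch)
    test:
      image: python:3.12
      services:
        - name: docker.elastic.co/elasticsearch/elasticsearch:8.14.0
          alias: elasticsearch
          command: ["bin/elasticsearch", "-Ediscovery.type=single-node", "-Expack.security.enabled=false"]
      script:
        - python manage.py test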

From Python, I was getting the error message:

elastic_transport.ConnectionTimeout: Connection timed out

This seemed weird, as I was perfectly able to connect to the ES container from the app container: I do a check at the beginning of my test script by simply curling the ES host. And even the line of code right before the one that timed out was successfully connecting to ES. What those two lines do (sketched below) is:

  1. Remove the search index.
  2. Create the search index again.
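
In code, that is just two index management calls; roughly this (a sketch using the elasticsearch Python client, with a made-up index name and host, not my actual code):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://elasticsearch:9200")

    es.indices.delete(index="search", ignore_unavailable=True)  # step 1: worked fine
    es.indices.create(index="search")                           # step 2: elastic_transport.ConnectionTimeout

In hindsight the split makes sense: deleting an index only frees things up, while creating one needs new shards allocated, and allocating shards turned out to be exactly what Elasticsearch was refusing to do.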

With all the things running inside GitLab runners running inside Docker, it seemed a bit like a black box. But I figured out a way to get the logs from the ES container:
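Roughly like this (a sketch; it assumes the runner talks to the host's Docker daemon, so the job's service containers are visible in plain docker while the job runs):

    docker ps --filter "name=elasticsearch"   # find the service container GitLab started for the job
    docker logs --tail 50 <container-id>      # dump its most recent log lines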

Looking at the log I found the very last entry to contain a clue:

{"@timestamp":"2025-06-06T11:47:27.110Z", "log.level": "INFO",  "current.health":"RED","message":"Cluster health status changed from [YELLOW] to [RED] (reason: [reconcile-desired-balance]).","previous.health":"YELLOW","reason":"reconcile-desired-balance" , "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[5a28270d5238][masterService#updateTask][T#1]","log.logger":"org.elasticsearch.cluster.routing.allocation.AllocationService","elasticsearch.cluster.uuid":"dX-ZZeLEQ-OMCDKm2PjoCQ","elasticsearch.node.id":"r0CL-D0LT5u-tpTeHujQwQ","elasticsearch.node.name":"5a28270d5238","elasticsearch.cluster.name":"docker-cluster"}

It wasn't super clear what "reconcile-desired-balance" meant, but I fortunately found a forum post from someone having the same problem, suggesting it's because of lack of disk space.

Checking the disk (df -h), I had more than 30 GB free, but the usage percentage had crawled past 90%, which I assume could be a red flag for ES.
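Which it is: Elasticsearch's default high disk watermark is 90% (with a flood-stage watermark at 95%), and those are percentages of the whole filesystem, not absolute free space. You can ask ES how it sees the disk, and for a throwaway CI cluster even loosen the check entirely (a sketch; the host name is whatever your service alias is, and disabling the threshold is not something I would do on a real cluster):

    curl -s 'http://elasticsearch:9200/_cat/allocation?v'
    curl -s -X PUT 'http://elasticsearch:9200/_cluster/settings' \
      -H 'Content-Type: application/json' \
      -d '{"persistent": {"cluster.routing.allocation.disk.threshold_enabled": false}}'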

I do know that Docker will happily eat all your disk space over time. That has caused me problems before. And yes, it had also had a feast this time. Running docker system prune -a reclaimed 265 GB of disk.
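If you want to see where the space went before nuking it, docker system df breaks usage down by images, containers, volumes and build cache:

    docker system df         # summary of what Docker is holding on to
    docker system prune -a   # remove stopped containers and every image not used by a running container

prune -a is deliberately heavy-handed, so the next pipeline run has to pull its images again.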

After this, tadaaa! Elasticsearch no longer goes to a RED health status and thus does not time out: my tests are passing again. Oh, the joys of modern development.

Talk nicely

Amanda Bachman pretends to be the AI-friendly tech CEO we all know too well, in A Company Reminder for Everyone to Talk Nicely About the Giant Plagiarism Machine:

I guess I understand. I, too, was once a little skeptical of the Giant Plagiarism Machine™. But that was before I attended The Conference for Big Boy Business Owners™. Here, I learned that my fellow titans of industry have been re-orging to “leverage plagiarism” and “minimize thought-waste.”

(...)

The way I see it, we’re family. It really does disappoint me that so many brilliant colleagues—whose genuine breakthroughs I’ve profited from for years—would be so quick to condemn this newer, stupider way that I and others like me can make money off your life’s work, through stealing.