Recent Posts

The best hardware setup for software engineers

When I'm writing software I usually have the following windows open, all at the same time:

  • VS Code in a 2-column layout (window = 2560 x 2160)
  • A fullscreen-equivalent browser with a usable console, to see what I'm working on (window = 1920 x 2160)
  • A large terminal window (window = 2560 x 1440)
  • ChatGPT (window = 2560 x 1440)
  • A full-screen browser with all the stuff I'm researching to help do whatever I'm doing (window = 2560 x 2880)

I went through a number of iterations when it came to monitors. For a long time I used dual 32" 4K IPS screens, but even that wasn't quite enough pixels. It's hard to physically fit more than two 32" screens on a desk - they're too wide already, and mounting one above the other wouldn't be ergonomic.

About a year ago I discovered these 16:18 ratio screens and bought two of them as side screens flanking the central 32" 4K screen. They're 2560 x 2880, providing ~7.4 million pixels each, compared to ~8.3 million on a 4K screen. Another way to look at it: each one is like two 1440p screens stacked on top of each other.

This means that I can fit a full-height browser window on them without having to scroll, which is a huge productivity boost. Sometimes I'll stack two windows on top of each other if I want to see multiple references at once.

Continue reading

Loading Fast and Slow: async React Server Components and Suspense

When the web was young, HTML pages were served to clients running web browser software that would turn the HTML text response into rendered pixels on the screen. At first these were static HTML files, but then things like PHP and others came along to allow the server to customize the HTML sent to each client.

CSS came along to change the appearance of what got rendered. JavaScript came along to make the page interactive. Suddenly the page was no longer the atomic unit of the web experience: pages could modify themselves right there inside the browser, without the server being in the loop at all.

This was good because the network is slow and less than 100% reliable. It heralded a new golden age for the web. Progressively, less and less of the HTML content was sent to clients as pre-rendered HTML, and more and more was sent as JSON data that the client would render into HTML using JavaScript.

This all required a lot more work to be done on the client, though, which meant the client had to download a lot more JavaScript. Before long we were shipping MEGABYTES of JavaScript down to the web browser, and we lost the speediness we had gained by not reloading the whole page all the time. Page transitions were fast, but the initial load was slow. Megabytes of code shipped to the browser can multiply into hundreds of megabytes of device memory consumed, and not every device is a state-of-the-art MacBook Pro.

Continue reading

Using Server Actions with Next JS

React and Next.js introduced Server Actions a while back, as a new/old way to call server-side code from the client. In this post, I'll explain what Server Actions are, how they work, and how you can use them in your Next.js applications. We'll look at why they both are and aren't APIs, why they can make your frontend code cleaner, and why they can make your backend code messier.

Everything old is new again

In the beginning, there were <form>s. They had an action, and a method, and when you clicked the submit button, the browser would send a request to the server. The server would then process the request and send back a response, which could be a redirect. The action was the URL of the server endpoint, and the method was usually either GET or POST.

<form action="/submit" method="POST">
  <input type="text" name="name" />
  <button type="submit">Submit</button>
</form>

Then came AJAX, and suddenly we could send requests to the server without reloading the page. This was a game-changer, and it opened up a whole new world of possibilities for building web applications. But it also introduced a lot of complexity, as developers had to manage things like network requests, error handling, and loading states. We ended up building React components along these lines (sketched here with the same /submit endpoint as the form above):
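
import { useState } from 'react';

function SubmitForm() {
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);

  async function handleSubmit(event) {
    event.preventDefault();
    setLoading(true);
    setError(null);
    try {
      // The same endpoint the <form> posted to, now called by hand
      const res = await fetch('/submit', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ name: event.target.elements.name.value }),
      });
      if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
    } catch (err) {
      setError(err.message);
    } finally {
      setLoading(false);
    }
  }

  return (
    <form onSubmit={handleSubmit}>
      <input type="text" name="name" />
      <button type="submit" disabled={loading}>Submit</button>
      {error && <p>{error}</p>}
    </form>
  );
}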

Continue reading

Avoiding Catastrophe by Automating OPNSense Backups

tl;dr: a Backups API exists for OPNSense. opnsense-autobackup uses it to make daily backups for you.

A few months ago I set up OPNSense on my home network, to act as a firewall and router. So far it's been great, with a ton of benefits over the eero mesh system I was replacing - static DHCP assignments, pretty local host names via Unbound DNS, greatly increased visibility and monitoring possibilities, and of course manifold security options.

However, it's also become a victim of its own success. It's now so central to the network that if it were to fail, most of the network would go down with it. The firewall rules, VLAN configurations, DNS setup, DHCP assignments and so on are all very useful and deeply embedded - if they go away, most of my network services go down with them: internet access, home automation, NAS, cameras, more.

OPNSense lets you download a backup via the UI; sometimes I remember to do that before making a sketchy change, but I once wiped out the box without a recent backup and ended up spending several hours getting things back up again. That was before I had really embraced things like local DNS and static DHCP assignments, which a bunch of my automation and configuration now relies on.
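
Hence the automation. Here's a sketch of the core idea in Node - not opnsense-autobackup itself, and the endpoint assumes the os-api-backup plugin, so check your version's API docs:

const fs = require('fs');

async function downloadBackup() {
  // OPNSense API credentials are a key/secret pair, sent via HTTP Basic auth
  const auth = Buffer.from(
    `${process.env.OPNSENSE_KEY}:${process.env.OPNSENSE_SECRET}`
  ).toString('base64');

  // Assumed endpoint from the os-api-backup plugin; adjust host/path to your setup.
  // A home box often has a self-signed cert, which Node will reject unless trusted.
  const res = await fetch('https://opnsense.local/api/backup/backup/download', {
    headers: { Authorization: `Basic ${auth}` },
  });
  if (!res.ok) throw new Error(`Backup request failed: ${res.status}`);

  // The body is the full config.xml - date-stamp it and keep it somewhere safe
  const stamp = new Date().toISOString().slice(0, 10);
  fs.writeFileSync(`opnsense-backup-${stamp}.xml`, await res.text());
}

downloadBackup().catch((err) => {
  console.error(err);
  process.exit(1);
});

Run something like that daily from cron and the sketchy-change problem mostly takes care of itself.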

Continue reading

Automating a Home Theater with Home Assistant

I built out a home theater in my house a couple of years ago, in what used to be a bedroom. From the moment it became functional, we started spending most evenings in there, and got into the rhythm of how to turn it all on:

  • Turn on the receiver, make sure it's on the right channel
  • Turn on the ceiling fan at the right setting
  • Turn on the lights just right
  • Turn on the projector, but not too soon or it will have issues
  • Turn on the Apple TV, which we consume most of our content on

Not crazy difficult, but there is a little dance to perform here. Turning on the projector too soon leaves it unable to talk to the receiver for some reason (probably to prevent me from downloading a car), so it has to be delayed by the right number of seconds; otherwise you have to go through a lengthy power cycle to get it working.

It also involves locating no fewer than 5 remote controls, 3 of which use infrared. The receiver is hidden away in a closet, so you have to go in there to turn it on, remote control or no. Let's see if we can automate all of this so you can turn the whole thing on with a single button.
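
For a flavor of what "a single button" ends up orchestrating, here's a sketch of that sequence against Home Assistant's REST API - the entity IDs and delay are invented, and the post itself builds the automation inside Home Assistant rather than in external code:

const HA = 'http://homeassistant.local:8123'; // hypothetical host
const headers = {
  Authorization: `Bearer ${process.env.HA_TOKEN}`,
  'Content-Type': 'application/json',
};

// Home Assistant exposes service calls at /api/services/<domain>/<service>
const call = (domain, service, entity_id) =>
  fetch(`${HA}/api/services/${domain}/${service}`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ entity_id }),
  });

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function theaterOn() {
  await call('media_player', 'turn_on', 'media_player.receiver');
  await call('fan', 'turn_on', 'fan.theater_ceiling_fan');
  await call('light', 'turn_on', 'light.theater');
  await sleep(15_000); // give the receiver a head start, or the projector sulks
  await call('switch', 'turn_on', 'switch.projector');
  await call('media_player', 'turn_on', 'media_player.apple_tv');
}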

Continue reading

How to make Bond fans work better with Home Assistant

I have a bunch of these nice Minka Aire fans in my house.

They're nice to look at and, crucially, silent when running (so long as the screws are nice and tight). They also have some smart home capabilities using the Smart by Bond stack. This gives us a way to integrate our fans with things like Alexa, Google Home and, in my case, Home Assistant.

Connecting Bond with Home Assistant

In order to connect anything to these fans you need a Bond Wi-Fi Bridge, which acts as the link between your fans and your network. Once you've got it set up and connected to your Wi-Fi network, you'll need to figure out which IP address it's on. You can then send a curl request to the device to get the Access Token you'll need to add it to Home Assistant - something like this, against the Bond local API's token endpoint:

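# Sketch: query the Bond local API's token endpoint (substitute your bridge's IP)
curl http://192.168.1.50/v2/token
# Returns your token as JSON once the bridge is unlocked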

If you get an access denied error, it's probably because the Bond bridge needs a proof-of-ownership signal. The easiest way to provide one is to power the bridge off and on again - run the curl again within 10 minutes of it coming back up and you'll get your token.

Continue reading

Demystifying OpenAI Assistants - Runs, Threads, Messages, Files and Tools

As I mentioned in the previous post, OpenAI dropped a ton of functionality recently, with the shiny new Assistants API taking center stage. In this release, OpenAI introduced the concepts of Threads, Messages, Runs, Files and Tools - all higher-level concepts that make it a little easier to reason about long-running discussions involving multiple human and AI users.

Prior to this, most of what we did with OpenAI's API was call the chat completions API (setting all the non-text modalities aside for now), but to do so we had to keep passing the entire context of the conversation to OpenAI on each API call. That meant persisting conversation state on our end, which is fine, but the Assistants API and related functionality make it easier for developers to get started without reinventing the wheel.

OpenAI Assistants

An OpenAI Assistant is defined as an entity with a name, description, instructions, default model, default tools and default files. Creating one via the Node SDK looks something like this (a sketch against the original Assistants API shape - the name, instructions, model and file IDs are illustrative):
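
import OpenAI from 'openai';

const openai = new OpenAI();

const assistant = await openai.beta.assistants.create({
  name: 'Data Wrangler',                 // illustrative name
  description: 'Answers questions about uploaded CSV files',
  instructions: 'You are a careful data analyst. Use the attached files.',
  model: 'gpt-4-1106-preview',           // default model; can be overridden per Run
  tools: [{ type: 'code_interpreter' }], // default tools
  file_ids: ['file-abc123'],             // default files, uploaded earlier via the Files API
});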

Let's break this down a little. The name and description are self-explanatory - you can change them later via the modify Assistant API, but they're otherwise static from Run to Run. The model and instructions fields should also be familiar to you, but in this case they act as defaults and can be easily overridden for a given Run, as we'll see in a moment.

Continue reading

Using ChatGPT to generate ChatGPT Assistants

OpenAI dropped a ton of cool stuff in their Dev Day presentations, including some updates to function calling. There are a few function-call-like things that currently exist within the OpenAI ecosystem, so let's take a moment to disambiguate:

  • Plugins: introduced in March 2023; allowed GPT to understand and call your HTTP APIs
  • Actions: an evolution of Plugins that makes setup easier, but still calls your HTTP APIs
  • Function Calling: ChatGPT understands your functions and tells you how to call them, but does not actually call them

It seems like Plugins are likely to be superseded by Actions, so we end up with two ways to have GPT call your functions - Actions for automatically calling HTTP APIs, and Function Calling for indirectly calling anything else. We could call the latter Guided Invocation - despite the name, it doesn't actually call the function; it just tells you how to.

That second category of calls is going to include anything that isn't an HTTP endpoint, so it gives you a lot of flexibility to call internal APIs that never learned how to speak HTTP. Think legacy systems, private APIs that you don't want to expose to the internet, and other places where this can act as highly adaptable glue.
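
To make that concrete, here's a sketch of Function Calling via the Node SDK - the function, its schema and the order-status scenario are made up for illustration. The key point: the model hands back a description of the call it wants made, and your code performs it.

import OpenAI from 'openai';

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: 'gpt-4-1106-preview',
  messages: [{ role: 'user', content: 'What is the status of order 42?' }],
  tools: [
    {
      type: 'function',
      function: {
        name: 'getOrderStatus', // hypothetical internal function
        description: 'Look up the status of an order by its ID',
        parameters: {
          type: 'object',
          properties: { orderId: { type: 'number' } },
          required: ['orderId'],
        },
      },
    },
  ],
});

// GPT doesn't call anything - it describes the call it wants made
const toolCall = completion.choices[0].message.tool_calls?.[0];
if (toolCall) {
  const args = JSON.parse(toolCall.function.arguments);
  // It's on our code to invoke the real getOrderStatus(args.orderId)
}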

Continue reading

Distributed Tracing with Node JS

The microservice architecture pattern solves many of the problems inherent in monolithic applications. But microservices also bring challenges of their own, one of which is figuring out what went wrong when something breaks. There are at least 3 related challenges here:

  • Log collection
  • Metric collection
  • Distributed tracing

Log and metric collection is fairly straightforward (we'll cover these in a separate post), but only gets you so far.

Let's say your 20-microservice application starts behaving badly - you start getting timeouts on a particular API and want to find out why. The first place you look may be your centralized metrics service. It will likely confirm that you have a problem, as hopefully you have one or more metrics that are now showing out-of-band numbers.

But what if the issue only affects part of your user population, or worse, a single (but important) customer? In these cases your metrics - assuming you have the right ones in the first place - probably won't tell you much.

In cases like these, where you have minimal or no guidance from your configured metrics, you start trying to figure out where the problem may be. You know your system architecture, and you're pretty sure you've narrowed the issue down to three or four of your services.
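
This is where distributed tracing earns its keep: each request carries a trace ID across service boundaries, so you can follow one slow request through every service it touched. Here's a minimal sketch of the idea using the OpenTelemetry API for Node - one instrumentation option among several, with the service and function names made up:

// Assumes the OpenTelemetry SDK has been initialized elsewhere in the app
const { trace } = require('@opentelemetry/api');

const tracer = trace.getTracer('checkout-service'); // hypothetical service name

async function handleCheckout(order) {
  // Each unit of work becomes a span; every span created while handling
  // this request - in any service - shares the same trace ID
  return tracer.startActiveSpan('handle-checkout', async (span) => {
    try {
      // With HTTP auto-instrumentation, outgoing calls carry the trace
      // context, so downstream services join the same trace
      return await callInventoryService(order); // hypothetical downstream call
    } finally {
      span.end();
    }
  });
}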

Continue reading

A New Stack for 2016: Getting Started with React, ES6 and Webpack

A lot has changed in the last few years when it comes to implementing applications using JavaScript. Node JS has revolutionized how many of us create backend apps, React has become a widely-used standard for creating the frontend, and ES6 has come along and completely transformed JavaScript itself, largely for the better.

All of this brings new capabilities and opportunities, but also new challenges when it comes to figuring out what's worth paying attention to, and how to learn it. Today we'll look at how to set up my personal take on a sensible stack in this new world, starting from scratch and building it up as we go. We'll focus on getting to the point where everything is set up and ready for you to create the app.

The stack we'll be setting up today is as follows:

  • React - to power the frontend
  • Babel - allows us to use ES6 syntax in our app
  • Webpack - bundles our application files and dependencies into a single build

Although we won't be setting up a Node JS server in this article, we'll use npm to put everything else in place, so adding a Node JS server using Express or any other backend framework is trivial. We're also going to omit setting up a testing infrastructure in this post - this will be the subject of the next article.
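
As a preview of where we'll end up, a minimal webpack config for this stack looks roughly like this - a sketch assuming the webpack 1 config format and the 2016-era Babel presets, with illustrative entry and output paths:

// webpack.config.js
module.exports = {
  entry: './src/index.js',   // illustrative entry point
  output: {
    path: __dirname + '/build',
    filename: 'bundle.js',   // the single bundled build artifact
  },
  module: {
    loaders: [               // 'rules' in webpack 2 and later
      {
        test: /\.jsx?$/,     // run .js and .jsx files through Babel
        exclude: /node_modules/,
        loader: 'babel',
        query: { presets: ['es2015', 'react'] },
      },
    ],
  },
};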

Continue reading

All Posts by Tag