I've been having great fun recently with Mini PCs built around older laptop chips from one of the many sellers on Amazon, installing Proxmox on them, and using them as hosts for virtual machines and Docker containers.
First, you put the Proxmox ISO on a USB stick, boot the machine from it, and install it as your operating system. It'll wipe the Windows installation that came with the device. It'll then give you the local network address so you can connect to its control panel in a web browser.
Within Proxmox you can then easily upload ISO images and create virtual machines from them, like Home Assistant. I always find it easier to manage when you install it as the Operating System version rather than as a Docker container, as you get better support for auto-updating and add-ons. Proxmox also supports Linux Containers, or LXCs, which I'd say are a less popular version of the container concept than Docker is. There are several easy scripts to help you launch popular apps in this format.
If you do want to use Docker, then Portainer is also a good option. Create a VM with a Portainer install within it, and you can use that VM to host all your Docker instances with a nice UI.
Proxmox also lets you set up easy backup routines, so you can have all your VMs back up every night to an external disk or NAS.
If you have more than one device you can easily make them a cluster by entering the IP address of your second device into the Proxmox control panel. Then both of them will appear within a single interface. A cluster means that if one of them fails, or you need to switch it off and move it, the VMs running on it can start on another device. This will work best if you can offload the VM storage onto a NAS.
What seems like magic is moving a VM from one host to another while it's running, without needing to stop it. I was using VMware ESX with this feature many, many years ago. It was magic then and it's magic now.
As well as HAOS, I'm also trying out Immich for photo hosting, Hoarder for bookmarking, ErsatzTV for custom IPTV channels from my own content, and a homepage dashboard so I don't have to remember all the URLs.
In the early days (or years) of a SaaS product you are fighting for product market fit. Even when you know without a doubt that your product serves a purpose that people are willing to pay for, there's a long road between Minimum Viable Product (MVP) and a mature piece of software.
If all it takes to onboard a new customer is filling in a couple of fields on a form and choosing some configuration options - whether done by the customer or one of your customer success team - that's Product Delivery.
But if you're doing bespoke development, adding features just for them, building new reports, or even doing on-site training - that's Project Delivery.
They may seem similar at first. After all, either way, you have to deliver something to the customer. This may be compounded if some of your customers want a product and some want a project.
But while treating product delivery like a project may result in some additional paperwork but otherwise no harm done, trying to perform project delivery with a product delivery toolset will result in a very disappointed customer.
Projects need plans. They need tooling that can show you when things are going wrong. They need discovery, goals, risks and a clear list of responsibilities.
Products just need billed. Which is why everyone just wants to deliver products.
In 2012, as a jab against the recently released Microsoft Surface - a 2-in-1 tablet/laptop combo - Tim Cook said:
"You can converge a toaster and a refrigerator, but those things are probably not going to be pleasing to the user."
When Meta demoed Orion for the press last year, what I found most interesting was not that their approach to product announcements is different from Apple's (since it was a press demo of a product they're not releasing), but that they recognise that AR and VR are different products.
Meanwhile, the Vision Pro is the toaster-fridge. A product that is always going to be compromised by the fact it's trying to be more than one thing.
That's why it's a VR device that can't actually play most existing VR content because it doesn't have hand-controllers.
I've heard Jason Snell describe the Vision Pro as an AR emulator, since the technology doesn't yet exist to build the product Apple wanted. I think that's a fair assessment, but the Vision Pro currently offers:
- Immersive environments for watching videos, playing games, etc (VR)
- A virtual Mac display (AR)
- Various games (AR and VR)
- Immersive video on Apple TV (VR)
- Spatial computing (AR)
- Apps like Jigspace for exploring 3D content (AR)
Which means at some point in the development journey, the AR emulator also became a VR device. But if they really could build Orion-style glasses then features like immersive environments and video would no longer be possible. The immersion disappears if I can still see the real world out the side of my glasses. I want to watch NBA games and feel like I'm in the arena, not that the players are in my living room.
This is an area I haven't seen discussed. Is their lack of effort in producing regular immersive content simply because they don't want to produce a VR device? Are they eventually going to have to admit that Meta are right, and these are two different product lines?
Like every software engineer, I spend a lot of time looking online for solutions to problems (usually via Google, but more recently Kagi and Claude). I am eternally grateful to other engineers who have taken the time to write about their solutions. Especially those who cover the why as well as the how.
In that spirit, I'm going to break down the process I've been using for the past few years to ship code into production for the SaaS products I'm involved with. They're all Python and hosted in AWS, but that's not required for this approach.
Local Development
I spent many years dealing with a development server and database in the cloud, where the whole team used svn (with a fantastic homegrown web front-end that preceded Github by many years) to create branches and do their work. It generally worked fine, until somebody broke the database or the development server collapsed.
I'm now a huge fan of Docker-based local development. Whatever language and database you're working with, if you can get everyone running your application locally through Docker, it's going to be a positive experience. A simple compose file to get everything launched, VS Code for editing, and no central resource for everyone to worry about.
I'm increasingly a fan of making our software 100% local, back to that "famous" 5-minute Wordpress install I talked about yesterday. That means a developer should be able to get the whole thing running locally via Docker, and the whole application should work without any AWS access. That may sound simple, but it means abstracting certain features and sometimes not using AWS services. For example, if you upload to S3 in production you'll need to abstract that in your code so that somebody running locally can still upload and retrieve files from local disk.
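As a rough sketch of what that abstraction can look like (the class names and the FILE_STORE/S3_BUCKET settings are invented for this example), the rest of the application only ever calls a file store, and configuration decides which backend it gets:

```python
import os
from pathlib import Path


class LocalFileStore:
    """Stores files on local disk, e.g. a volume mounted into the Docker container."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def save(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def load(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


class S3FileStore:
    """Stores files in S3; only used in the deployed environments."""

    def __init__(self, bucket: str) -> None:
        # Imported lazily so local development never needs AWS libraries or credentials.
        import boto3

        self.bucket = bucket
        self.client = boto3.client("s3")

    def save(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)

    def load(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()


def get_file_store():
    """Choose the backend from configuration; local Docker just sets FILE_STORE=local."""
    if os.environ.get("FILE_STORE", "local") == "s3":
        return S3FileStore(os.environ["S3_BUCKET"])
    return LocalFileStore(os.environ.get("FILE_STORE_ROOT", "/data/uploads"))
```

With something like that in place, the compose file for local development just sets `FILE_STORE=local` and nothing in the local stack ever touches AWS.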
They should also be able to run it without an internet connection. If your team want to work in the middle of nowhere, they should be able to. That means vendoring all your JS and CSS libraries, rather than loading them from public CDNs.
You'll be surprised how you naturally get better practices from following that approach.
Git Process
I want the interaction with Git to be as minimal as possible (one of the reasons for not using GitFlow) and am a happy user of the Github Desktop application. I want to minimise merge conflicts and I want to minimise the cognitive load for the team.
With that in mind, our process is simple. The development team branch off `main`, do their work in a feature branch, and then merge back to `main` again.
We always consider the `main` branch to be deployable. Now what does deployable mean?
- We don't merge half-complete implementations. And if we do, they're hidden behind feature flags/configuration values (there's a sketch of that after this list).
- We have automated tests (both unit tests and integration tests via Playwright) to make sure that nothing is broken.
- Every job goes through a peer-review process before being merged (via Github).
- There's no cherry-picking. Once it's in `main`, it's going to be deployed. You may occasionally roll something back again if we're not quite ready for that, but you can't decide to only move some of `main` forward into production.
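The feature-flag side of that first point doesn't need anything fancy. A minimal sketch, assuming flags live in environment variables (the flag and route names are illustrative):

```python
import os


def feature_enabled(name: str) -> bool:
    """Read a flag such as FEATURE_NEW_REPORTS=on from the environment."""
    return os.environ.get(f"FEATURE_{name.upper()}", "off") == "on"


# In a Flask view, an unfinished feature merged to main simply stays hidden:
#
#     if not feature_enabled("new_reports"):
#         abort(404)
```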
Once a job is merged to `main`, we run a build process that creates a Docker image. This happens via a combination of CodePipeline and CodeBuild, and the resulting image is uploaded to ECR. But you could easily do this with Github Actions and any private Docker registry.
The image gets tagged with the commit-id from Github and the newest image is always tagged with `latest`.
We have separate development and production AWS accounts (and you should too), but the important thing is that the resulting image from the build process is shared to both environments. ECR lets you create those cross-account sharing rules.
However, the image is only deployed automatically to the development environment because we have stakeholders.
Stakeholders
What/Who are stakeholders?
These are the people that care whether or not your software is any good. They may be project managers. They may be product owners. They may be a QA department. They may be customers, depending on how you sell your software.
Despite all your tests, peer-reviews and developer best-efforts - stakeholders want to see your hard work before it's pushed out to the whole world. Just because the code passes your tests doesn't mean it's good.
They're also not technical, so they can't pull the code down themselves and build their own Docker containers locally. Or maybe they could, but I'm assuming the thought of having to support that gives you the fear.
So we have a development environment they can access to see the latest version of the application. They can clearly see what's going to be deployed next.
And because the Docker image is shared between the development and production environments, it's exactly the same code that's going to be deployed to both. There's no chance of them seeing something in dev, and then something else getting deployed to production. Your QA team will like that.
Deployment
We use CodeDeploy with Fargate on ECS. It's blue/green, so we deploy a complete set of new containers and make sure they're OK before updating the load balancer to point at the new IP addresses and destroying the old ones. CodeDeploy manages all of this for us, which is fantastic.
However... this would work just as well with Github Actions SSHing into a box in Digital Ocean. It's just a Docker image you need to put on your server.
Production Deployment
Assuming everyone is happy and it's time to do a deploy, we add a `release` tag to the image within ECR in production. This automatically starts another CodePipeline/CodeDeploy combo that deploys that image with the same blue/green process.
We try and deploy once or twice a week, but that does rely a lot on the stakeholders.
Rollback
Easy: just tag a previous image with the `release` tag to deploy that one instead.
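Both the release tagging and the rollback are the same ECR operation: fetch the manifest of an existing image and push it back under the `release` tag. A sketch with boto3 (the repository name and commit id are placeholders, and it assumes the repository uses mutable tags):

```python
import boto3

ecr = boto3.client("ecr")


def tag_as_release(repository: str, commit_id: str) -> None:
    """Point the `release` tag at the image already built for this commit."""
    image = ecr.batch_get_image(
        repositoryName=repository,
        imageIds=[{"imageTag": commit_id}],
    )["images"][0]
    ecr.put_image(
        repositoryName=repository,
        imageManifest=image["imageManifest"],
        imageTag="release",
    )


# Deploy:   tag_as_release("my-app", "abc1234")
# Rollback: call it again with the previous commit id.
```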
Database Migrations
We use Alembic (with Flask), and we run the Alembic upgrade command within CodeBuild in the CodePipeline before CodeDeploy. If this step fails, we stop the pipeline and don't deploy the image; a DBA then needs to take a look at the database to see what happened. Since this is all within CodeBuild, it's easy to look at the logs in CloudWatch and see why the command errored.
This also means that we consider our database migrations to be a pre-deploy step, not a post-deploy step. The important thing to know about that is that database migrations have to be backwards compatible, as your database is going to change before the code does. If the migrations work, but the deploy fails, you still want the application to continue running even though you added additional columns.
The alternative would be a post-deploy step, but then your new code goes live and it needs to be able to cope with the columns it relies on not being there yet. And that's harder to work with (tried that!).
Pre-deploy also means that you should do migrations that drop columns separately. Consider migrations to be additive, and then worry about cleanup migrations to remove deprecated fields later.
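In Alembic terms, an additive, backwards-compatible migration looks something like the sketch below (the table, column, and revision ids are invented for the example); the matching `drop_column` would live in a later cleanup revision once no deployed code still reads the old schema.

```python
"""Add orders.shipped_at (additive, safe to apply before the new code ships)."""
from alembic import op
import sqlalchemy as sa

revision = "a1b2c3d4e5f6"
down_revision = "0f1e2d3c4b5a"
branch_labels = None
depends_on = None


def upgrade():
    # Nullable (or server-defaulted), so the currently running code,
    # which knows nothing about this column, keeps working.
    op.add_column("orders", sa.Column("shipped_at", sa.DateTime(), nullable=True))


def downgrade():
    op.drop_column("orders", "shipped_at")
```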
Pre-deploy also makes it easier for developers to test locally, as they can run the migrations and then switch back to the `main` branch and run tests to confirm everything is still fine.
And backwards-compatible migrations are what make rollbacks easier.
Half-completed migrations can be a pain to resolve, which is a good reason to use PostgreSQL instead of MySQL, since PostgreSQL supports transactional DDL.
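That PostgreSQL property means a migration either applies completely or not at all. A small illustration with psycopg2 (connection details are placeholders): if the second statement fails, the `ALTER TABLE` is rolled back with it rather than leaving the schema half-changed.

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # placeholder connection string
try:
    with conn:  # commits on success, rolls everything back on any exception
        with conn.cursor() as cur:
            cur.execute("ALTER TABLE orders ADD COLUMN shipped_at timestamptz")
            cur.execute("UPDATE orders SET shipped_at = now() WHERE status = 'shipped'")
finally:
    conn.close()
```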
Hotfixes
Sometimes you need to fix a bug in production but you don't want to do a full deploy, because a full deploy will also drag along everything else that's been deployed to `main` since your last production deploy and your stakeholders aren't ready for that yet.
To resolve this we have a CodeBuild configuration that builds an image whenever a branch with "hotfix" in its name has some code merged into it. It goes something like this:
- Follow the usual process for creating a branch off `main` and fix your bug.
- Merge it back into `main`.
- Create a new branch off `main` called `hotfix-something`.
- Create a branch off `hotfix-something` called `my-urgent-fix`.
- Cherry-pick the commit from `main` containing your fix and put it into your `my-urgent-fix` branch.
- Do a PR to pull from `my-urgent-fix` into `hotfix-something`.
When that PR is merged, CodeBuild sees the name of the branch and creates a new image using the usual process and tags it as normal with the commit id, but doesn't deploy it to development (because the development environment already contains the fix, as the fix is already in `main`). And then we just tag that as `release` and deploy it to production, then delete the hotfix branch.
Alternative Flows
There are plenty of other approaches, so to round this out I'll give a few reasons why we don't use them.
- Github Flow. If you don't have stakeholders, this is as simple as it gets. But if you have any team at all, I think it's useful for people to see something in a preview environment before it goes live. I'm also not ready to trust that tests are 100% reliable.
- GitFlow. It's so complicated! There's so much merging going on between steps, the whole thing just creates a spaghetti mess of commit history, followed by the very high likelihood that you'll screw something up somewhere. Worst of all, what gets deployed to production is not the same as what was deployed to development, because there's a merging step between branches along the way.
- OneFlow. This one is pretty good for the stakeholder lifestyle, because it has release branches that you could deploy for them to see before moving them forward. But it's a bit more complicated than what we're doing, and while release previews might seem straightforward, creating a database for them is incredibly difficult.
Conclusion
It works for us! It may not work for you, but it's been a really positive experience for our team.
Pull request previews are the next step: automatically spinning up a preview environment (using the local docker compose file that developers use) on EC2 when a Github PR is created. I'll write more about that when the time comes.
Picking a platform to host this blog was harder than I expected it to be. And I didn't think my requirements were that tough:
- I just want to write. I'm not interested in images, video, or complex layouts. It's 2025, but I want a blog layout circa-1999. Simplicity matters.
- I'd prefer to do it in a nice front-end. I don't want to write posts in VS Code and deal with files. But if I can still export everything as Markdown, awesome.
- I want to design my own theme, and have a nice experience doing so. That means a good template language and a workflow for previewing changes.
- I don't mind hosting it myself, but not if doing so is going to be overly complex. If I have to run more than one server/container and you don't auto-update, I'm out.
- I'd like to have different post types or post blocks. If I write about a movie, book or video game - it would be great to have a metadata preview of that item in the post automatically.
I think people underestimate how important the "famous" Wordpress 5-minute install was at helping them gain traction. If it's too complicated to start, most people (including me) will just bounce off.
Wordpress is too heavy for what I wanted here. We use it for the Issuebear website because Bricks is amazing for building that kind of SEO-driven brochure website.
I spent a long time trying out different static site generators, but universally found them to be an unsatisfactory experience:
- Writing in a text editor just doesn't do it for me. It looks wrong, the spell checking is terrible, and I don't want to keep remembering how to format the front-matter YAML.
- The Go templating language in Hugo is an abomination.
- I always need to go back to a terminal and remember whatever command I need to run to make the thing work.
- And so many of them are Javascript, which means an npm nightmare.
- I didn't find any that would automatically look up movie details for me. Or any that had a plugin system I could understand well enough to do it myself.
- The Python options are very limited. Pelican seems best, but it never clicked.
I was tempted by Pika, but it was at the opposite end of the scale from Wordpress - too simple.
So I've ended up with Ghost. Which is more "best of a disappointing bunch" than a true recommendation, but there are some positives:
- The editor is nice and easy.
- It has some interesting membership options that could be a direction in the future.
- The templating language is Handlebars and they've done a good job adding functional helpers to it (better than 11ty has).
- Although the theme development experience is really poor: I'd have to install Ghost locally to iterate quickly on changes, so I'm stuck compiling zipped themes with a gulp file, manually uploading them, and crossing my fingers.
No automatic movie/book/video game inserts though. It also has a poor configuration experience and is expensive for what I'm using it for (but very cheap if you're actually running a membership site).
They also used to support SQLite as a database, which made it easier to host yourself. Now they only support MySQL 8.0, which makes it a two-container setup on Render. That pushed it beyond "I'll host it myself".
If we launch a blogging platform this year, you'll know why.