In the interest of eventually moving my personal website off of where it is now: what’s your favorite way to deploy a simple web application these days? Assume I’m doing it to a VPS host.
The remote repository is a bare repository that resides on the same server. It's just a post-receive-hook script that moves files into place after every push.
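A self-contained sketch of that pattern (all paths are a throwaway temp dir here; on a real VPS the bare repo would live somewhere like `~/site.git` and the deploy dir would be your web root):

```shell
#!/usr/bin/env bash
# Demo of the bare-repo + post-receive-hook deploy pattern, entirely in a
# temp dir. On a real server only the hook itself is needed.
set -euo pipefail

TOP="$(mktemp -d)"
BARE="$TOP/site.git"     # stand-in for ~/site.git on the server
WORK="$TOP/work"         # stand-in for your local checkout
DEPLOY_DIR="$TOP/www"    # stand-in for the web root

git init --bare -q "$BARE"

# The hook: check out whatever was just pushed straight into the web root.
cat > "$BARE/hooks/post-receive" <<HOOK
#!/usr/bin/env bash
set -euo pipefail
mkdir -p "$DEPLOY_DIR"
git --work-tree="$DEPLOY_DIR" --git-dir="$BARE" checkout -f main
HOOK
chmod +x "$BARE/hooks/post-receive"

# Simulate a push from a working copy.
git init -q -b main "$WORK"
cd "$WORK"
echo "hello" > index.html
git add index.html
git -c user.email=me@example.com -c user.name=me commit -q -m "add index"
git push -q "$BARE" main

cat "$DEPLOY_DIR/index.html"   # -> hello
```

After the push, the hook has already moved the files into place; nothing else to run on the server.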
my site isn't going to be static, so this was basically the spot i got to. before i start doing the 'and then point the nginx in front at the new deployment' part and all that jazz, i was looking at what else is out there so i don't have to maintain it myself, hah. thanks :)
Pull vs Push. A crontab with a script to git pull your repository, making it accessible to your web server and reloading it might be enough in this case. But you can also use webhooks with a listener or any other method that fits your needs, constraints, and preferences.
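A minimal sketch of the pull side (repo path, service name, and the cron schedule are all placeholders to adapt):

```shell
#!/usr/bin/env bash
# Pull-based deploy, meant to run from cron, e.g.:
#   */5 * * * * /usr/local/bin/pull-deploy.sh >> /var/log/pull-deploy.log 2>&1
# REPO_DIR and SERVICE below are illustrative defaults.
set -euo pipefail

pull_deploy() {
  local repo_dir="${1:-/srv/www/mysite}" service="${2:-mysite}"

  if [ ! -d "$repo_dir/.git" ]; then
    echo "no repo at $repo_dir; nothing to do"
    return 0
  fi

  local old new
  old="$(git -C "$repo_dir" rev-parse HEAD)"
  git -C "$repo_dir" pull --ff-only -q
  new="$(git -C "$repo_dir" rev-parse HEAD)"

  # Only bounce the service when the tree actually changed.
  if [ "$old" != "$new" ]; then
    systemctl reload "$service" || systemctl restart "$service"
  fi
}

pull_deploy "$@"
```

The changed-commit check is the one nicety worth keeping: it means the cron job is a no-op most of the time instead of restarting your app every five minutes.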
My site is deployed by rsync from my machine. Although that’s only one bit of it. Everything else… ssh in and fiddle with it, or scp up a local folder. I have no way to redeploy the whole site at once, but I can’t imagine needing to either.
I have a 22 line bash script that builds the project, rsyncs it to the server, sshes in and does some fiddling on the server, backs up the db, restarts the server process, and purges the cloudflare cache.
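A sketch of what such a script might look like — host, paths, service name, and the Cloudflare bits are all placeholders, and the build/backup commands are guesses at the shape, not the actual script. It defaults to a dry run that just prints commands; set `DRY_RUN=0` to really execute:

```shell
#!/usr/bin/env bash
# Hypothetical build/rsync/restart/purge deploy script. Everything named
# here (host, dirs, sqlite db, zone id) is a placeholder.
set -euo pipefail

HOST="deploy@example.com"       # hypothetical
REMOTE_DIR="/srv/www/mysite"    # hypothetical
SERVICE="mysite"
DRY_RUN="${DRY_RUN:-1}"

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run npm run build                                  # whatever your build is
run rsync -az --delete dist/ "$HOST:$REMOTE_DIR/"
run ssh "$HOST" "sqlite3 $REMOTE_DIR/app.db \".backup $REMOTE_DIR/app.db.bak\""
run ssh "$HOST" "sudo systemctl restart $SERVICE"
# purge the CDN last so stale pages don't get re-cached
run curl -fsS -X POST \
  "https://api.cloudflare.com/client/v4/zones/ZONE_ID/purge_cache" \
  -H "Authorization: Bearer ${CF_TOKEN:-}" \
  --data '{"purge_everything":true}'
```

The `run` wrapper is the only clever part: it makes the whole thing previewable, which is handy for a script you touch twice a year.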
"sandboxing" via dedicated linux user with home dir in /opt/appname, systemd unit for starting/stopping the service, nginx as a reverse-proxy that handles SSL (via certbot/letsencrypt)
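A sketch of the unit file for that setup — app name, user, port, and the hardening directives beyond the dedicated user are illustrative. This writes it to a temp dir for inspection; on a real box it would go in `/etc/systemd/system/`:

```shell
#!/usr/bin/env bash
# Write an example "dedicated user + systemd" unit (names are placeholders).
set -euo pipefail

UNIT_DIR="${UNIT_DIR:-$(mktemp -d)}"

cat > "$UNIT_DIR/appname.service" <<'EOF'
[Unit]
Description=appname web service
After=network.target

[Service]
User=appname
WorkingDirectory=/opt/appname
ExecStart=/opt/appname/bin/appname --port 8080
Restart=on-failure
# a little extra sandboxing on top of the dedicated user
ProtectSystem=strict
ReadWritePaths=/opt/appname
ProtectHome=true
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target
EOF

echo "wrote $UNIT_DIR/appname.service"
# then: sudo systemctl daemon-reload && sudo systemctl enable --now appname
# nginx proxies 443 -> 127.0.0.1:8080 and certbot handles the certificate
```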
I’m curious about this, I wouldn’t think the reward would be there to go to the trouble of writing an apt package, which tells me I’m probably overestimating how difficult it is
At work we use https://fly.io a lot, even for relatively simple applications (assumption: web server running an application, plus a database probably). It's pretty inexpensive if you don't need much, and the ergonomics of it all are pretty fine. I intend to use it for an upcoming side project.
I don't know that I like any of them. I tend to default to GitHub actions, but so far all of those tools feel like exercises in using a ruler to debug YAML (but with a really long feedback loop).
Deno Fresh has been fairly enjoyable to work with, especially when combining it with a Markdown to HTML compiling preprocessor, so I just write MD files and redeploy (or GH Actions do it for me) to push a new blog post.
In that case, you would have a github action that started when you committed your change, but it would run the build and deploy phases on your local host using your local runner.
I use nix to deploy my stuff but you kinda have to be “all in” for it to make sense. But it’s nice to have a simple unified way to represent my packaging and also the various systemd jobs etc that run stuff on my vps and homelab.
Otherwise I’d be using a managed kubernetes offering I think.
My favourite is static and so I use Hugo both personally and professionally for our website. It’s open source and I can just about figure out how to use it. It’s got great post/blog publish support.
I like Vultr + hosted Kubernetes. If you wanted to avoid kubernetes, I'd probably fall back on docker compose with a reverse proxy like traefik as part of the stack that can do the lets encrypt renewal.
If it's static site hosting on a single VPS host I might also consider a single-node garage.
to be more specific: i `git pull` the source to my vps, `compose build` it locally, then `compose up`. no container registry, build pipelines, etc needed.
this doesn't scale at all, but works great for just one box. you could probably put it in a github action but i just do it manually
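The whole single-box compose "deploy" fits in one script — `APP_DIR` is a placeholder, and it only actually runs when docker and a checkout are present:

```shell
#!/usr/bin/env bash
# Pull + rebuild + recreate, all on the VPS itself. No registry involved.
set -euo pipefail

APP_DIR="${APP_DIR:-$HOME/mysite}"

redeploy() {
  cd "$APP_DIR"
  git pull --ff-only
  docker compose build        # build images straight from the pulled source
  docker compose up -d        # recreates only containers whose image changed
  docker image prune -f       # reclaim space from superseded layers
}

if command -v docker >/dev/null && [ -d "$APP_DIR/.git" ]; then
  redeploy
else
  echo "skipping: need docker and a git checkout at $APP_DIR"
fi
```

`compose up -d` is what makes this pleasant: it diffs against the running state and leaves untouched services alone.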
I store most of my code in GitHub, so for deploying GitHub actions are pretty easy. Most popular sites you deploy to have templates. For static sites, I just use GitHub pages, which can make it even easier.
> Most popular sites you deploy to have templates.
Yeah, what I'm after is what is on the servers of those sites, so i can use it on my server. (obviously most are proprietary, but there's open source tools too)
Docker + docker compose pretty much is what I use. Have GitHub action publish the docker container then be able to pull it down however I choose. Usually a webhook to trigger something.
More mainstream alternative: build a container using a Dockerfile, then deploy using your host's tools. https://fly.io is *super* simple, or you could bang out a script that does the minimal thing on AWS ECS in a couple hours. Not hard to script those
i am the person implementing "the host's tools", so that's what i'm asking about. i got halfway through the script and figured maybe i should see what's out there these days so i don't have to maintain that too :)
I maintain my homelab server using NixOS and the linked strategy. It's really delightful being able to have a fully declarative config. I only really touch it once every several months, and I find jumping back in and making a change much, much easier than imperatively configured distros.
normally i use nixos but for node.js specifically ive found the least bad option to be sshing in and doing `git pull && npm i && sudo systemctl restart $SERVICE`
If it’s a static site I just use Netlify or DigitalOcean App Platform, both let you host static sites for free. If it needs a node server or otherwise then I’ll do a cheap DigitalOcean droplet or if you need more power Hetzner is probably most cost effective.
I'm using Deno Fresh for a project currently, mainly because I wanted to try it out. I like Deno for its native Typescript support. I like Fresh for its clear server-side rendering vs. client boundary. Running in Docker with nginx in front for TLS and 80 -> 443 redirect. I'm no web expert though.
I have a bash script in each of my major projects, that basically runs git push to two different places, then logs in the server(s) via ssh, does git pull, removes cache, reloads processes, prints status.
Side note from the deploy tool: I would still lean towards packaging with Docker and deploying that to a host.
It's nice to be (more) hermetic—though I know containers aren't actually all that hermetic...
No problem! My site's pretty simple (it's an Astro site), but I have seelf integrated with my Gitea instance using Gitea actions. When I push a new commit to a certain branch, it'll notify seelf and deploy it either to production or staging. It's a very neat project.
I have tons of experience with backend engineering (data engineering), but never in my life had I made a website before.
I tried out Vercel and had my website up and running in a few hours with almost zero knowledge of what I was doing (other than being familiar with GitHub, CI/CD)
i just run docker with a reverse proxy on my vps. can go as simple as static websites via nginx, but i use traefik and host multiple domains and apps on the same box. i think a $5 host is doing like 10 different websites for me right now.
I'm personally using Podman quadlets (systemd services effectively) which I sync to a dedicated server with Ansible. Simple enough but it works well - to update my server with my changes I just run my playbook and then `sudo systemctl restart service`
the pipeline/action would
ssh into the target server
git add . && git stash && git pull && whatever else you need
for me it's composer update and phinx migrate
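Those steps, sketched as a function the workflow could call (host and path are placeholders; this assumes the runner already has an ssh key for the server):

```shell
# Deploy step run by the pipeline. deploy@example.com and /srv/www/mysite
# are hypothetical; the git/composer/phinx commands mirror the ones above.
deploy_over_ssh() {
  local host="${1:?usage: deploy_over_ssh user@host}"
  ssh "$host" bash -s <<'EOF'
set -euo pipefail
cd /srv/www/mysite
# park any stray server-side edits, then pull
git add . && git stash && git pull
composer update
vendor/bin/phinx migrate
EOF
}
# in the workflow: deploy_over_ssh deploy@example.com
```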
I explored this a couple of months ago, and really liked the experience of setting up Hashicorp's Nomad for this. Felt like systemd with an admin dashboard that orchestrates containers, binaries and more. My feeling was that it's in a similar space to k8s but scales down to a single node much better.
I haven't written things up in detail but my setup looked like:
* Single node nomad cluster running on a VPS that has Tailscale installed
* Nomad's controller bound to Tailscale network: not publicly accessible
* Push containers to docker hub, and target nomad via cli from my laptop for deployments.
I picked caddy as the webserver with 2 instances deployed via nomad. One instance listening on the private Tailscale network, and other in public space. Caddy managed ssl certificates for both public and private domains (Private domains required a caddy plugin for my DNS provider)
I’m reading this to try to catch up and get the actual gems. I love the amount of clarification that you’ve had to do! New role: thread prompting engineer. “No no BlueGPT I need a…”
oh i read your update, well with this setup and something like hugo, your deploy process is just a git push. not sure if you mean more like orchestrating that process from scratch...
What framework/tech stack are you using for your website? Static site? Or require a running server? If you're going the server route, then cheapest option is Raspberry Pi, Cloudflare Tunnel / Tailscale Funnel to expose to the internet, and a CDN with a long TTL
I also have a long bash script that sets up a fresh server for when I do server upgrades.
+1 for Coolify. You can create a GitHub app via Coolify to deploy on git push for private repos, and it handles building the commit and rolling out the updated server. It has other bells and whistles like live logs and automatic SSL setup. The docs are good.
i just have a github action that builds the new site to a single binary, scp’s it over, and then ssh’s in to run `systemctl restart site`. all my static assets are compiled into the final binary, so that’s all i need.
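A sketch of the action's deploy steps — host and paths are hypothetical, and the build command assumes a Go binary with embedded assets (substitute your own); it's left uncalled here so the sketch is inert:

```shell
#!/usr/bin/env bash
# Hypothetical "build binary, scp, restart" deploy, as run from CI.
set -euo pipefail

HOST="deploy@example.com"   # hypothetical

ship() {
  go build -o site .                        # assets embedded at build time
  scp site "$HOST:/usr/local/bin/site.new"
  # move into place atomically, then bounce the unit
  ssh "$HOST" 'mv /usr/local/bin/site.new /usr/local/bin/site && sudo systemctl restart site'
}
# ship   # CI would call this unconditionally
```

Copying to a `.new` path and then `mv`-ing avoids the "text file busy" error you get when overwriting a running binary in place.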
I recently played around with https://coolify.io/ - it's rather simple and works well enough, though it's still somewhat limited when you want to implement a customized flow. I wouldn't accept the overhead for one app, also because you ideally install it on a separate server.
I use GitHub actions to build and publish my docker image + watchtower, and a webhook action to trigger watchtower to update my container.
And docker compose to manage my services, with caddy as the proxy for its cert service.
Haven't used it yet, but https://kamal-deploy.org/ looks like the most promising modern version of capistrano. I'm planning to give it a whirl on a project soon.
I run a Hugo based blog and a portfolio page, and have a GitHub actions pipeline that… rsyncs everything to my server. That’s it, no fancy k8s similar 🤗
The web server is caddy so it’s reasonably simple to set it up with HTTPS.
I run Proxmox on a cheap intel NUC knockoff that manages a few containers and pet VMs, not a lot of automation in my homelab because of the shoemaker’s daughter and all
For personal not too serious stuff, I copied what bluesky does with the self hosted PDS: docker compose running as a systemd service, running three containers: the app, caddy, and watchtower (https://github.com/containrrr/watchtower), then I just need to push new images to my private registry… (1/2)
Just regular docker on the VPS, private docker registry, watchtower to automatically run latest version. Build new version locally (or via something like GHA), push to your registry, done.
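The two halves of that flow, sketched with placeholder image and registry names — watchtower on the VPS polls the registry and swaps the running container itself:

```shell
# Publish side, run locally or from CI (registry.example.com/mysite is a
# placeholder image name).
publish() {
  docker build -t registry.example.com/mysite:latest .
  docker push registry.example.com/mysite:latest
}

# One-time setup on the VPS: watchtower needs the docker socket so it can
# pull new tags and restart the containers that use them.
start_watchtower() {
  docker run -d --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower --interval 300
}
```

After that, "deploy" is just `publish` and a short wait for the next poll.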
Capistrano’s still around, but the hard versioning stuff gets annoying. I just try to make everything run in docker and clone with git on the destination these days
ddns on a rpi. it's been a while since doing that but it was simple and used a shell script for deploys/updates 🙃. unless you need tooling to really help manage a team of people, the simplicity is kind of refreshing. VPS should be similar.
These days when I think of *how* I would deploy a site, it's so intimately tied to the hosting platform that I don't think I could answer this question without naming a host. E.g., my sites get built and deployed by my host with a commit to main; I don't use any other tooling on top of that.
The last time I kinda-sorta needed to do this manually, I wrote a GitHub Action that would SSH into my host whenever a commit landed in my main branch. Then the host would pull the changes and restart everything. I don't necessarily recommend it, but it's a thing that can be done. 🤷
If I were doing it all from scratch, I'd probably have built new images of the app, deployed them, then spun down the previous containers. But it was a single legacy VPS that had been updated manually via SSH for a long time and I had a time-crunch, so I just automated the old process.
Ansible and puppet are kind of old school at this point in the cloud-native era, but they're what I lean on (Ansible specifically) if I don't have kubernetes and need to write deployment scripts.
puppet might fit better if you've written lots of ruby
Hmm. Well, Puppet and Ansible were just about declaring machine provisioning for deployment mostly, right? Where Kubernetes is a different animal managing clusters of nodes and container orchestration. Then serverless is the antithesis of the whole philosophy, never setting up server environments.
So basically is that to say, there are no more modern or better such tools and configuration declarations, but instead an abandonment of the process itself?
i have this uhh cursed setup where i ssh into server `git pull; nix run` and that uhh. it doesn't rebuild nixos, but it executes the build and updates the symlink that nginx is serving to the result
You know how in the "imperative" way you can "install" packages for your user with `nix-env -iA nixos.ncdu`? That actually builds a configuration in ~/.local/state/nix/profiles/profile and you can see that next to it is also profile-1-link and profile-2-link so you could even roll back to a previous one.
The current setup for my website is less elaborate, mostly equivalent to `nix-env --profile /nix/var/nix/profiles/per-user/michcioperz/hugowiki --set $(nix-build)` with that profile path being set as root directory for nginx
Last time I did this I gave the VPS read-only access to a GitHub repo and had a cron job run `git pull` every minute. I’m almost not ashamed to admit it.
My website is just a binary, so scp is enough for me. I’ve got my CI set up to start as an ephemeral node in my Tailscale network which lets me use their SSH integration to not worry about keys. Works pretty well. I run my website as a systemd unit, so I just stop it and restart it after the copy.
Oh man, coordinating ssh calls? What year is this, Steve.
Back in those days I leaned on https://www.fabfile.org/ for coordinating SSH calls, but to be fair that approach is more easily described as "capistrano without batteries or the instruction booklet".
If you drop VPS ("server I can ssh into") as a requirement and we still don't include PaaS like Netlify, I'd be deploying sites to S3/Lambda using either like https://arc.codes, https://sst.dev, or straight CDK.
yeah, in this case, let's just say i have a rack of computers lying around that i found in a garage somewhere, so the VPS thing is what i am working with :)
... i haven't been doing web dev for like ten years :( i'm like the guy that went to prison in the 90's and got out now and everyone has cell phones, lmao
I will say fab really only shines when you're doing sysadmin tasks coordinating across groups of servers. For a single server there's very little fab offers outside of "it's not a bash script"
i am also very interested in the ways in which i didn't phrase my question in a way that people understood what i'm asking for, haha. things sure have changed in the last fifteen years... gonna think about this a bit
One thing you didn't say (but we all threw a bunch of answers at you anyway) was the architecture of the web app? Is it static? Is it a web server with a sqlite file? A separate database service? What does a build artifact look like? (Container? tarball? git pull?) Deploy from laptop or git push?
But beyond that part of this is that it isn’t about the specific thing, because there’s a few of them. So stuff will change. But I think SSG vs “I need to build and reboot a server” is significantly different
i've been searching for good answers for a while too, it really seems like there's nothing in the space and it's all been swarmed with cloud and paas and "pricing" links on the landing page...
It seems like most folks are using some kind of PaaS for this now, but for a VPS, https://dokku.com/ seems interesting, I've never tried it though. I think ansible might be a good option if you want something simpler.
The runtime specifically, obviously, & from what I can tell there's no way to package assets in alongside.
Still definitely containers & a crappy hand written k8s manifest running on k3s, for most stuff. Gitops, someday.
If I'd want to go complex (but not as complex as k8s) - probably mix of ansible, terraform and nomad
Allows me to run multiple things on one server and very easy to use once setup.
https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners
https://github.com/serokell/deploy-rs
https://github.com/waferbaby/usesthis/blob/main/.github/workflows/main.yml
https://kamal-deploy.org/
https://coolify.io/
It's neat for small miscellaneous personal projects and can host just about anything.
100% recommend!
I've written some Ansible playbooks that'll pull and rebuild the containers and bring the new ones up in order to update stuff if needed.
Works for me!
It's not exactly robust or pretty, but like, for personal stuff, it's fine. Depends on having a binary with few/no dependencies though.
Pipelines == github actions
local changes -> yarn deploy
I host ~10 sites for $2/mo like this. usually use something like nuxt to gen static site.
Other than that... I'd probably just sftp a rust binary and use systemd 😁
downside is [gestures vaguely] all of it.
... or markdown + a static site generator like Hugo or 11ty and run Nginx to publish it...
Oh, honestly, just github actions?
actions are great, but im looking for the software that would run on the VPS that the action would talk to. like coolify :)
Other alternatives: https://caprover.com/ https://dokku.com/
(Though personally I stumbled a bit when making it more secure than the default.)
For my website and projects I use Github Actions -> pushing to my private docker registry -> deploying to my single VPS k3s node.
wrote everything down here: https://hilars.dev/blog/my-own-vercel-how-i-deploy-this-website
Fwiw, I have done cloudflare pages/S3/github pages stuff to avoid this specific question so ¯\_(ツ)_/¯
I'm assuming you're running Nginx via NixOS, but then that would mean you can't change its config without a rebuild.
So do you have a hard-coded path to the symlink that the derivation in nix run is overwriting? How are you escaping the builder sandbox?
Your rack of computers can sit comfortably in your garage and be securely connected to the internet.
i remember hearing about fab!!!
coolify works well, but i've had issues getting it set up when ssh is not on port 22 or when root login is disabled too.
Turns out that you don't actually need config management for like... a handful of static sites coming out of hugo lol
Then deploying can be an “ssh host ‘sudo docker pull foo/bar:latest && sudo docker restart’” script or similar with Podman or whatever.
It’s unreasonably effective!
Wouldn't help with the SSG responses though.