PCSalt
Homelab · 4 min read

From Manual Chaos to Automated Deployments: My Home Server CI/CD with GitHub Actions

How I replaced SSH-and-pray deployments with GitHub Actions — path-based triggers, reusable workflows, and selective service deployment for a multi-service home server.


I run a home server with 8+ services — Immich for photos, Nextcloud for cloud storage, Jellyfin for media, N8N for automation, and a few more. For a long time, deploying updates meant SSHing into the server, navigating to the right directory, pulling the latest changes, running docker compose up -d, and hoping nothing broke. Multiply that by 8 services and it gets old fast.

Here’s how I replaced all of that with GitHub Actions — push to main, and only the changed service gets deployed automatically.

What’s Running on the Server

Before diving into the automation, here’s a quick look at what the server hosts:

| Service | What It Does |
| --- | --- |
| Immich | Self-hosted Google Photos alternative with ML-powered search |
| Nextcloud | Cloud storage with Collabora for document editing |
| Jellyfin | Media server for movies and shows |
| N8N | Workflow automation (like Zapier, but self-hosted) |
| Portainer | Docker management UI |
| PairDrop | Local network file sharing |
| Transmission | Torrent client with VPN |
| WordPress | A WordPress site with phpMyAdmin |

Each service lives in its own directory with a docker-compose.yml and any related config files.
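
As a concrete example, a service directory might hold a compose file like this (an illustrative sketch, not the repo's actual file — image, ports, and volume paths are assumptions):

```yaml
# jellyfin/docker-compose.yml — hypothetical example layout
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"          # Jellyfin web UI
    volumes:
      - ./config:/config     # config lives next to the compose file
      - /mnt/media:/media    # media library on the host
```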

The Old Way

The deployment workflow used to look like this:

  1. Make a change to a service config on my laptop
  2. Commit and push to GitHub
  3. SSH into the server
  4. cd to the service directory
  5. git pull
  6. docker compose down && docker compose up -d
  7. Verify the service is running
  8. Repeat for every changed service

This had a few problems:

  • Easy to forget — did I restart the right service? Did I pull the latest?
  • No secrets management — .env files with passwords sitting on the server, manually maintained
  • No visibility — no deployment history, no logs, no rollback path
  • Time consuming — even a small config change required the full SSH dance

The New Way: GitHub Actions with Path-Based Triggers

The repo structure is simple — one directory per service:

home-server/
├── .github/workflows/
├── immich/
├── jellyfin/
├── nextcloud/
├── n8n/
├── pairdrop/
├── portainer/
├── transmission-vpn/
└── test-wordpress-site/

The key insight is path-based triggers. Each service has its own workflow file that watches only its directory:

on:
  push:
    branches: [main]
    paths: ['jellyfin/**']

Change something in jellyfin/? Only the Jellyfin deploy runs. Touch immich/? Only Immich deploys. Change two services in one commit? Both workflows trigger independently.

The Reusable Workflow

Instead of copy-pasting deployment logic across 8 workflow files, there’s one reusable workflow that handles all the heavy lifting. Each service workflow just calls it with the right parameters.

Here’s the core of deploy-service.yml:

name: Deploy Service (Reusable)

on:
  workflow_call:
    inputs:
      service:
        required: true
        type: string
      has_dockerfile:
        required: false
        type: boolean
        default: false
      has_secrets:
        required: false
        type: boolean
        default: false
    secrets:
      DEPLOY_SSH_KEY:
        required: true
      DEPLOY_USER:
        required: true
      DEPLOY_HOST:
        required: true
      ENV_CONTENT:
        required: false

env:
  DEPLOY_PATH: /opt/pcsalt/home-server

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Detect compose file
        id: compose
        run: |
          if [ -f "${{ inputs.service }}/docker-compose.yml" ]; then
            echo "file=docker-compose.yml" >> "$GITHUB_OUTPUT"
          elif [ -f "${{ inputs.service }}/docker-compose.yaml" ]; then
            echo "file=docker-compose.yaml" >> "$GITHUB_OUTPUT"
          else
            echo "No compose file found in ${{ inputs.service }}/" >&2
            exit 1
          fi

      - name: Build Docker image
        if: inputs.has_dockerfile
        run: |
          docker build -t home-server-${{ inputs.service }}:latest \
            ${{ inputs.service }}/
          docker save home-server-${{ inputs.service }}:latest \
            | gzip > /tmp/home-server-${{ inputs.service }}.tar.gz

      - name: Deploy on remote server
        run: |
          # SSH into the server, stop old containers, start new ones
          ssh $SSH_OPTS "$REMOTE" << ENDSSH
            cd $SERVICE_PATH
            docker compose -f $COMPOSE_FILE down
            docker compose -f $COMPOSE_FILE up -d
            docker compose -f $COMPOSE_FILE ps
          ENDSSH

      - name: Cleanup
        if: always()
        run: |
          rm -f ~/.ssh/deploy_key
          rm -f /tmp/home-server-${{ inputs.service }}.tar.gz
          docker rmi home-server-${{ inputs.service }}:latest 2>/dev/null || true

I’ve trimmed the SSH setup, file copy, and secrets injection steps for brevity — but you get the idea. The workflow handles:

  • Compose file detection — supports both .yml and .yaml
  • Optional Docker builds — for services like Nextcloud that use a custom Dockerfile
  • Secrets injection — writes environment variables from GitHub Secrets to .env on the server at deploy time
  • Cleanup — removes SSH keys, temp files, and dangling images after every run
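
The trimmed SSH setup step probably looks something like the following. This is a hedged reconstruction, not the repo's actual step; the ~/.ssh/deploy_key path is taken from the cleanup step above, and the ssh-keyscan line is an assumption:

```yaml
# Hypothetical sketch of the trimmed SSH setup step
- name: Set up SSH
  run: |
    mkdir -p ~/.ssh
    echo "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/deploy_key
    chmod 600 ~/.ssh/deploy_key
    ssh-keyscan -H "${{ secrets.DEPLOY_HOST }}" >> ~/.ssh/known_hosts
```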

A Service Workflow in Action

Here’s what a simple service workflow looks like — Jellyfin, which needs no secrets and no custom build:

name: Deploy Jellyfin

on:
  push:
    branches: [main]
    paths: ['jellyfin/**']
  workflow_dispatch:

jobs:
  deploy:
    uses: ./.github/workflows/deploy-service.yml
    with:
      service: jellyfin
    secrets:
      DEPLOY_SSH_KEY: ${{ secrets.DEPLOY_SSH_KEY }}
      DEPLOY_USER: ${{ secrets.DEPLOY_USER }}
      DEPLOY_HOST: ${{ secrets.DEPLOY_HOST }}

That’s it. 17 lines. For a service that needs secrets, you add has_secrets: true and pass ENV_CONTENT. For one with a custom Dockerfile, add has_dockerfile: true. The reusable workflow handles the rest.

Every workflow also includes workflow_dispatch, so I can manually trigger a deploy from the GitHub UI if needed.

Secrets Management

One of the biggest wins is how secrets are handled. Previously, .env files with database passwords and API keys lived on the server and were maintained manually. Now:

  • All secrets are stored in GitHub Secrets
  • They’re injected into .env on the server at deploy time
  • The .env file on the server only exists while the service is running
  • No secrets in the git repo, ever
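
The injection itself can be as simple as streaming the secret to the server at deploy time. A sketch, assuming the ENV_CONTENT secret holds the full .env body and reusing the SSH variables from the deploy step:

```yaml
# Hypothetical sketch of the secrets-injection step
- name: Inject secrets
  if: inputs.has_secrets
  run: |
    # Write .env on the remote host; the secret never touches the repo
    printf '%s' "${{ secrets.ENV_CONTENT }}" \
      | ssh $SSH_OPTS "$REMOTE" "cat > $SERVICE_PATH/.env && chmod 600 $SERVICE_PATH/.env"
```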

What Changed

| Before | After |
| --- | --- |
| SSH into server manually | Push to main |
| Remember which service to restart | Path triggers handle it |
| .env files maintained by hand | GitHub Secrets injected at deploy |
| No deployment history | Full workflow run history in GitHub |
| Hope you didn't miss a step | Automated, repeatable, every time |

Wrapping Up

This setup isn’t complicated — it’s just a reusable workflow, path-based triggers, and GitHub Secrets. But it removed an entire class of “did I do it right?” moments from my home server management.

The best part? Adding a new service is trivial. Create its directory, add a docker-compose.yml, write a 17-line workflow file, and you’re done. The reusable workflow handles the deployment, and path triggers ensure it only runs when that service actually changes.
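
Scaffolding a new service can itself be scripted. A quick sketch, where the service name and image are examples, not anything from the repo:

```shell
#!/usr/bin/env sh
# Scaffold a new service directory with a starter compose file.
# SERVICE and the image are hypothetical examples.
SERVICE=uptime-kuma
mkdir -p "$SERVICE"
cat > "$SERVICE/docker-compose.yml" <<'EOF'
services:
  app:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"
EOF
echo "Created $SERVICE/docker-compose.yml"
```

From there, the only thing left to write by hand is the short per-service workflow file shown earlier.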

If you’re running a multi-service home server and still deploying manually — set this up. It takes an afternoon and saves you from every future “let me just SSH in real quick” moment.