Running Docker on a Synology NAS: A Practical How-To Guide

11 November 2025

This guide walks you through running Docker on a Synology NAS; I’ll be using the DS1525+ throughout. I’ll cover everything from enabling Container Manager to deploying your first container with Docker Compose.

Running containers directly on a Synology NAS is one of the easiest ways to centralise self-hosted services—without spinning up extra VMs. The DS1525+ has more than enough grunt for media servers, automation tools, dashboards, and lightweight APIs.

This guide assumes DSM 7.x (Synology NAS software) and a DS1525+, but the steps are broadly identical across Plus-series models.

Why Run Docker on a Synology NAS?

  • Always-on platform – your NAS is already running 24/7
  • Low overhead – containers are lighter than full VMs
  • Centralised storage – perfect for bind-mounting volumes
  • Great UI – Synology’s Container Manager removes much of the CLI pain

Typical use cases include:

  • Home Assistant
  • n8n
  • Node-RED
  • Media servers (Plex, Jellyfin)
  • Internal tools and dashboards
  • Small dev/test environments

Prerequisites

Before you start, make sure you have:

  • A Synology NAS (e.g. DS1525+)
  • DSM 7.2+ (Synology NAS software)
  • An admin account for your NAS box
  • At least 4 GB RAM (8 GB+ recommended if you plan to run multiple containers)

Step 1: Install Container Manager

Docker on Synology is now called Container Manager.

1) Log on to your Synology NAS via the browser

2) Open Package Center

3) Search for Container Manager and click Install under the package

4) Wait for the service to install and start

5) Choose the subnet you want Docker to use (it must sit outside your normal network and be a /16). You can use the default, but for my setup I’m going with 10.17.0.1/16. Click Next when you’re done.

6) Check Run after installation and click Done

7) When the installation is complete, the installing message will disappear and Installed will appear next to the package, as shown below:

8) Click the Main Menu button

9) Select Container Manager

You’ll then see this, confirming that everything is installed.
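
If you want to double-check that the Docker engine itself is up, you can optionally do so over SSH (this assumes SSH is enabled under Control Panel → Terminal & SNMP; it isn’t needed for anything else in this guide):

Bash
sudo docker version   # prints client and server versions if the engine is running
sudo docker info      # shows the storage driver, image count and running containers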

Step 2: Understand Synology’s Docker Structure

Synology stores Docker data under:

Bash
/volume1/docker/

You can view these files in File Station too:

Best practice: using File Station, create a sub-folder per service (the structure below is just an example):

Bash
/volume1/docker/
 ├─ n8n/
 ├─ homeassistant/
 ├─ plex/
 └─ portainer/

I created a shared folder called docker, then created the individual folders under it. You will need to check Enable advanced share permissions on that shared folder (e.g. docker) and make sure that your user, administrators and ContainerManager have read/write permissions.

If people get stuck on this part, I’ll expand it with an edit in the future.
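
If you’re comfortable with the command line, you can also create the per-service folders over SSH instead of File Station. A quick sketch, assuming SSH is enabled and the docker shared folder already exists on volume 1:

Bash
# One sub-folder per service under the docker shared folder
sudo mkdir -p /volume1/docker/{n8n,homeassistant,plex,portainer}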

Step 3: Pull Your First Image

Let’s start simple.

1) Click the Main Menu button

2) Select Container Manager

3) Click Registry on the left

4) Search for an image (e.g. portainer)

5) Select latest (or a pinned version)

6) Click Download

The image will now download and be stored locally.
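
For reference, the SSH equivalent is a single docker pull. A sketch, again assuming SSH is enabled; Portainer’s Community Edition image is published as portainer/portainer-ce:

Bash
# Same result as clicking Download in Registry; pin a specific tag instead of latest if you prefer
sudo docker pull portainer/portainer-ce:latest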

Step 4: Deploying Using Docker Compose (Recommended)

For anything non-trivial, Docker Compose is the way to go; it’s far easier to maintain than building containers through the GUI. Be sure to have your folders created first (e.g. /volume1/docker/n8n and /volume1/docker/n8n/n8n_data).

1) Go to Container Manager → Projects

2) Click Create

3) Name the project (e.g. n8n)

4) Choose a project folder (you can use the Set Path button to select the folder):

Bash
/volume1/docker/n8n

5) Create a file called docker-compose.yml on your desktop (or somewhere else; I like a folder linked to a git repo) and add the content below to it (make sure you change the host and encryption key):

YAML
services:
  n8n:
    image: n8nio/n8n:latest 
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678" 
    environment:
      - GENERIC_TIMEZONE=Europe/London
      - N8N_HOST=<your_domain_or_ip>
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
      - N8N_SECURE_COOKIE=false
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_ENCRYPTION_KEY=<your_encryption_key>
    volumes:
      # Keeps all your N8N data (credentials, workflows, etc) persistent
      - n8n_data:/home/node/.n8n
      
volumes:
  n8n_data:

6) Select that file

7) Click Next

8) Check Set up web portal via Web Station and add the values (name, port and protocol) as per below:

9) Click Next

10) Click Done

11) You will see the image download and install

12) When the installation completes, go to Web Station via the Main Menu, or by clicking OK

13) If the config pack doesn’t load, select Web Portal, then Create, then Web service portal

14) Configure the settings as per below:

15) Click Create

Note: this sometimes happens automatically as part of the deployment, but I have found it to be unreliable.

16) Visit http://<NAS name or IP>:5678 and you should see the setup screen

If you see that, then all is good. If not, check that the container is running (and isn’t crash-looping), and check your Web Portal settings to make sure the mapping is correct.
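
A quick way to check both of those things is over SSH (optional, and again assuming SSH is enabled):

Bash
sudo docker ps --filter name=n8n     # the container should be listed with a status of "Up"
sudo docker logs --tail 50 n8n       # recent logs; look for crash loops or permission errors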

Step 5: Networking Basics (What Actually Matters)

I’m not going to dig into networking here, as I’m presuming that you’ll just be running this internally, with no external access or reverse proxies. If you plan to expose this remotely, please research it thoroughly and make sure you configure your setup securely.

Step 6: Backups (Don’t Skip This)

If retaining this data is important to you, then I recommend you back up the below folder regularly:

Bash
/volume1/docker/

Best options:

  • Hyper Backup to USB / another NAS
  • Snapshot Replication (if using Btrfs)
  • Cloud backup for configs only (I use GitHub)
  • Cloud replication for everything

Remember that containers are disposable, but your volumes are not; they hold your persistent data.
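
If you want a scripted copy on top of Hyper Backup, something like the below can be run on a schedule from DSM’s Task Scheduler. A minimal sketch; the destination /volume1/backup is just an example, so point it at wherever you keep backups:

Bash
# Archive the whole docker shared folder with a date stamp
# (stop any containers holding databases first so the files are consistent)
tar czf /volume1/backup/docker-$(date +%F).tar.gz -C /volume1 docker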

Performance & Stability Tips

  • Set memory limits on resource-hungry containers (only necessary if you see problems; see the sketch after this list)
  • Enable auto-restart (this is the restart: unless-stopped part of the YAML config I included above)
  • Don’t over-commit RAM
  • Avoid running databases unless necessary (or back them up carefully)
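
As an example of the first tip, a memory cap can be set directly in the compose file. A sketch only; 512 MB is an arbitrary figure, so size it to the service:

YAML
services:
  n8n:
    image: n8nio/n8n:latest
    mem_limit: 512m   # hard memory cap for this container
    restart: unless-stopped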

How many containers you can run depends on your NAS model, as the processor and memory differ across the range. The DS1525+ seems to handle 10–20 lightweight containers comfortably, but it depends on what those containers are doing.

When Docker on Synology Isn’t the Right Tool

Consider something else if you need:

  • Kubernetes
  • GPU compute
  • Heavy CI pipelines
  • High-IOPS databases

In those cases, a dedicated server or cloud VM is a better fit.

Final Thoughts

Docker on a Synology DS1525+ works for me, but only for lightweight workloads. It’s a low-friction, high-value way to quick-start a home lab, and it works well for individuals, small teams, and automation enthusiasts.

If you’re already comfortable with Docker Compose and Synology’s UI, then it’s probably exactly what you want.
