Around two years ago I bought a new Synology DS918+ and wrote an article about my new media stack. Since then, over a few iterations, I ended up with yet another setup - hopefully the final one ;).
Let's jump straight into the how-to. I'm gonna skip the Ansible syntax this time, as we're gonna use Watchtower to automatically upgrade the running containers on our Synology.
What changed
I started using Video Station a little bit more, as they have quite a nice Android app that you can run on your TV. The app can use an external player (I'm using MX Player, but feel free to use VLC or whatever) to stream files directly with hardware acceleration. There is quite an interesting problem here regarding file indexing, but we will get to that later. As a side note, the fact that it's often easier to download a show from a torrent and stream it from the NAS than to struggle with the unresponsiveness of, say, the HBO Go app is ridiculous. Talk about first-world problems, right?
Medusa was replaced by Sonarr - which is basically Radarr, but for TV ;)
Switched back from ruTorrent to Transmission - I really like the remote Transmission GUI on my Mac (outdated, but it still works)
To deal with auto unpack/extract I added Unpackerr to the mix - ruTorrent handled that for me before, but it was really flaky
To deal with container upgrades I decided to give Watchtower a go - so hopefully it's a fire & forget setup that will stay up to date and not break too soon ;)
Prerequisites
You need SSH access to your Synology with Docker installed (via the package manager); most of the Docker-related commands may need to be run via sudo.
I will be using PUID and PGID from my Synology admin user - you can obtain those values using the id command from within the Synology SSH shell. We set them explicitly so they won't mess up file permissions on your system, as we will be mounting the host file system. Make sure the mounted directories are owned by the user & group of your choice, plus check Read/Write permissions in the Settings -> Shared folders section on Synology - there seems to be an extra security layer on top (at least in the newest DSM 7.x).
Change TZ (timezone) as well, according to your location.
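A quick sketch to grab those values and hand the mounted directories over to that user & group - the paths below are just examples matching this article's layout, so adjust them to your own volumes:

```shell
# Print the numeric user/group ids to use as PUID/PGID
PUID=$(id -u)
PGID=$(id -g)
echo "PUID=${PUID} PGID=${PGID}"

# Give the mounted directories to that user/group (example paths)
for dir in /volume1/docker /volume2/movies /volume2/torrent /volume3/tv; do
  if [ -d "$dir" ]; then
    sudo chown -R "${PUID}:${PGID}" "$dir"
  fi
done
```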
Step 1 - synoindex docker solution / workaround
As mentioned, before we proceed we need to solve one issue regarding Synology indexing the files - so they will be visible in Video Station once downloaded. We need to inform Synoindex about the download event - which is fine, both Radarr and Sonarr support that in their UI, but given that we run them in Docker containers there is a workaround we need to apply. Thankfully a tool called simple-synoindex-server already exists - just follow its readme, create the folder mappings and you can proceed!
Note: in case the process gets killed upon boot, try running it via nohup
or even adding sleep 60
(or more) before the command - I'm not sure what the Synology boot process looks like and what is loaded when, but it works for me and I won't question that.
I will assume that the extracted binaries are located under /volume1/homes/admin/simple-synoindex-server
. You should have simple-synoindex-server.ini
with the folder mappings there, plus the synoindex
and synoindex-server
executable files.
My mappings look like the following:
[mappings]
/tv=/volume3/tv
/movies=/volume2/movies
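One way to survive reboots without babysitting it is a DSM Task Scheduler script (Control Panel -> Task Scheduler, triggered at Boot-up, run as root). A sketch combining the nohup and sleep tricks mentioned above - the server.log file name is just my choice, and whether the server picks up its .ini from the working directory is an assumption, so double-check against the readme:

```shell
#!/bin/sh
# Boot-up task: wait out the boot process, then keep the indexing
# server running in the background after the scheduler's shell exits.
SERVER_DIR=/volume1/homes/admin/simple-synoindex-server
sleep 60
cd "$SERVER_DIR" || exit 1
nohup ./synoindex-server >> "$SERVER_DIR/server.log" 2>&1 &
```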
Step 2 - Docker network
First, let's create a network for all of our containers, as we will need some container-to-container communication. To keep things simple we will use one single network for that purpose. It will be called media
, but feel free to use any name you like:
docker network create media
Step 3 - Jackett
This hasn't changed, I'm still using Jackett as an API proxy for various private tracker sites. It has worked great for years so I don't see the need to change it - once set up you don't have to touch it ever again.
docker create \
--name=jackett \
-e PUID=1024 \
-e PGID=100 \
-e TZ=Europe/Warsaw \
-p 9117:9117 \
-v /volume1/docker/jackett:/config \
--restart unless-stopped \
--network=media \
--log-opt max-size=10m \
--log-opt max-file=5 \
linuxserver/jackett
docker start jackett
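Once an indexer is configured in Jackett's UI, it is exposed as a standard Torznab feed - that's the URL you'll later paste into Radarr and Sonarr. A sketch to sanity-check one from the NAS shell; INDEXER-ID and JACKETT-API-KEY are placeholders you'd copy from your own Jackett instance:

```shell
# t=caps asks the indexer to describe its search capabilities
TORZNAB="http://localhost:9117/api/v2.0/indexers/INDEXER-ID/results/torznab/api"
curl -fsS "${TORZNAB}?apikey=JACKETT-API-KEY&t=caps" || echo "Jackett not reachable (yet?)"
```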
Step 4 - Radarr
The same goes for Radarr - it has worked great for years. One notable change is that we're gonna mount simple-synoindex-server
. That allows us to enable the indexing option found in Settings
-> Connect
-> Synology Indexer
. It should then work out of the box with all the default settings.
docker create \
--name=radarr \
-e PUID=1024 \
-e PGID=100 \
-e TZ=Europe/Warsaw \
-p 7878:7878 \
-v /volume1/docker/radarr:/config \
-v /volume2/movies:/movies \
-v /volume2/torrent:/downloads \
-v /volume1/homes/admin/simple-synoindex-server:/usr/syno/bin:ro \
--network=media \
--restart unless-stopped \
--log-opt max-size=10m \
--log-opt max-file=5 \
linuxserver/radarr
docker start radarr
Step 5 - Sonarr
I moved from Medusa to Sonarr - mostly because Unpackerr supports both. I think Medusa did a pretty great job over the years, but Sonarr is slick and you can just feel that the interface is much more responsive.
Once up & running don’t forget to enable Synology Indexer there as well.
docker create \
--name=sonarr \
-e PUID=1024 \
-e PGID=100 \
-e TZ=Europe/Warsaw \
-p 8989:8989 \
-v /volume1/docker/sonarr:/config \
-v /volume3/tv:/tv \
-v /volume2/torrent:/downloads \
-v /volume1/homes/admin/simple-synoindex-server:/usr/syno/bin:ro \
--network=media \
--restart unless-stopped \
--log-opt max-size=10m \
--log-opt max-file=5 \
linuxserver/sonarr
docker start sonarr
Step 6 - Transmission
In my experience Transmission can lag a little behind rtorrent when it comes to download speeds, but it's good enough for me and I like the remote GUI app on my Mac.
docker create \
--name=transmission \
-e PUID=1024 \
-e PGID=100 \
-e TZ=Europe/Warsaw \
-v /volume1/docker/transmission:/config \
-v /volume2/torrent:/downloads \
-p 9091:9091 \
-p 51413:51413 \
-p 51413:51413/udp \
--network=media \
--restart unless-stopped \
--log-opt max-size=10m \
--log-opt max-file=5 \
linuxserver/transmission
docker start transmission
Step 7 - Unpackerr
Note: We will use the hotio image as it allows setting the user & group id; it seems linuxserver doesn't provide such an image.
The great thing about Unpackerr is that it will clean up the extracted files once they are imported by Sonarr/Radarr. Magic ✨. In short, it can poll various services, check their queues and orchestrate unpacking based on that - straightforward, but smart!
You can find the API keys for Sonarr & Radarr in their Settings
-> General
sections.
docker create \
--name unpackerr \
-v /volume2/torrent:/downloads \
-e PUID=1024 \
-e PGID=100 \
-e TZ=Europe/Warsaw \
-e UN_SONARR_0_URL=http://sonarr:8989 \
-e UN_SONARR_0_API_KEY=SONARR-API-KEY \
-e UN_RADARR_0_URL=http://radarr:7878 \
-e UN_RADARR_0_API_KEY=RADARR-API-KEY \
--log-opt max-size=10m \
--log-opt max-file=5 \
--network=media \
--restart unless-stopped \
hotio/unpackerr:release
docker start unpackerr
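Since all the containers share the media network, Unpackerr reaches Sonarr and Radarr by container name rather than by IP - that's why the URLs above say http://sonarr:8989 and http://radarr:7878. A sketch to sanity-check that name resolution from the NAS shell, assuming SONARR-API-KEY is your real key (curlimages/curl is just a throwaway image that ships curl):

```shell
# Spin up a one-off container on the media network and ask Sonarr for its
# status by name; falls back to a message when nothing is reachable.
status="$(docker run --rm --network=media curlimages/curl -fsS \
  "http://sonarr:8989/api/v3/system/status?apikey=SONARR-API-KEY" 2>/dev/null \
  || echo unreachable)"
echo "$status"
```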
Step 8 - Watchtower
One last step is to add Watchtower on top, which will upgrade all the running containers on a daily basis. We will also configure silent Telegram notifications.
Once you obtain your Telegram bot token and chat id (out of scope for this article, but you can skip that part), replace the TELEGRAM-BOT-TOKEN
and CHANNEL-ID
parts and run:
docker create \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/localtime:/etc/localtime:ro \
-e WATCHTOWER_NOTIFICATIONS=shoutrrr \
-e WATCHTOWER_NOTIFICATION_URL="telegram://TELEGRAM-BOT-TOKEN@telegram/?channels=CHANNEL-ID&notification=no" \
-e WATCHTOWER_CLEANUP=true \
--log-opt max-size=10m \
--log-opt max-file=5 \
--restart unless-stopped \
containrrr/watchtower
docker start watchtower
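If you'd rather not wait a day to find out whether the upgrades actually work, Watchtower has a one-shot mode. A sketch reusing the same Docker socket mount - it performs a single update pass over all running containers and exits (pass the same WATCHTOWER_NOTIFICATION_* variables if you also want to test Telegram):

```shell
# One-off update pass; --cleanup removes the old images afterwards
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --run-once --cleanup
```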
And that's a wrap! By now you should have 4 apps available on their exposed ports and 2 apps just doing their jobs in the background :).
Looking for subtitle support? Check out Bazarr, which also works on top of Radarr and Sonarr!