In a previous post I briefly discussed web framework options for Crystal. Once you have your main web application up and running, pretty soon you usually need some sort of background worker - to do some heavy lifting, fetch some data and whatnot.
In the Ruby world you would probably choose between the two most popular and straightforward Redis-based solutions - Resque and Sidekiq (obviously there is much more to choose from).
Thanks to Mike Perham there is a sweet Ruby-compatible Crystal port of Sidekiq - called, who would have guessed, sidekiq.cr.
I will skip the getting started guide here, as everything is nicely described in the official wiki. Basically, for a very simple project you should end up with up to three different binaries:
- your main web application
- the sidekiq binary for your worker(s) (references)
- an optional sidekiq web UI panel (that is also served by Kemal, BTW) (references)
Seems like a lot of hassle, but actually this is pretty nice - you can compile minimal binaries and have more granular control over deployment. Maybe you will need to change something in your web application that doesn't necessarily affect the workers - then you can modify only the web part, recompile only the web app and re-deploy it without even touching the rest. And in Crystal it's super easy to require only the modules you need (I'm looking at you, go import), so you have that separation out of the box.
You have many options to choose from when it comes to managing your system processes - I used supervisor for a while and init.d scripts a few times along with monit, or you can pack your binary inside a mini docker image and use that (as docker has handy restart policies). This time I went with systemd, as it seems to be the de facto standard in the devops world nowadays - and it turned out to be quite easy to use after all.
What's even better, sidekiq.cr comes with two different examples that you can use for daemonizing your processes.
But let's take a few steps back first.
Prerequisites
- A bare metal/VPS/whatever Linux box with systemd (most major Linux distros have it) and Redis configured - I'm using Ubuntu 16.04
- A dedicated non-root user for your Crystal app (you can read this guide) - let's say that user is called crystalapp, has its own group and home directory, and we will be serving everything from the /home/crystalapp/app folder
- Installed Ansible (I'm running 2.4.3.0) and some basic Ansible knowledge
- Some basic Linux/shell knowledge - there are a lot of topics to cover here, but I will try to keep it very short and not go into details
Assumptions & simplifications
- We will deploy three binaries to a single machine and spawn three processes - this is important because you can increase concurrency for a single sidekiq process, but for Kemal you would have to spawn extra processes and do some load balancing (references/discussion)
- Downtime during deployment is acceptable
- To keep it simple we will always overwrite the old binary (no rollback) and execute (compile & deploy) everything locally
- Once again - this is a single-machine setup for your pet project; you don't need a fleet of virtual servers running Mesos, Kubernetes or whatever, with three load balancers in two physical locations in front of it all. Basic Linux tooling will be more than enough :).
Configuring systemd services
Note: in general you should use Ansible or another automation tool for configuring your server and try not to touch any configuration manually.
Configure three services by putting these three .service files under /etc/systemd/system/:
# /etc/systemd/system/crystalapp-sidekiq.service
[Unit]
Description=crystalapp-sidekiq
# start only once the network and logging subsystems are available
# we should probably wait for redis here as well, but that depends
# how you run redis - eg. redis-server.service or maybe it's running
# in docker so wait for docker.service (in case of docker you would
# probably have to make sure the actual container exists) - feel free
# to tweak it further
After=syslog.target network.target
[Service]
Type=simple
# You can provide some ENVs here, eg. for redis
# Environment="REDIS_PROVIDER=REDIS_URL"
# Environment="REDIS_URL=redis://127.0.0.1:2104/0"
# Or you can load it via something like cr-dotenv
# during application boot
WorkingDirectory=/home/crystalapp/app
# You can provide more sidekiq options here, try running it with -h parameter
ExecStart=/home/crystalapp/app/sidekiq -e production
User=crystalapp
Group=crystalapp
UMask=0002
# restart on failure
RestartSec=1
Restart=on-failure
# and log to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
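One line that might look unfamiliar is UMask=0002 - it makes files created by the service writable by the group as well as the owner. You can see the effect of that umask in a plain shell (the file name here is just for illustration):

```shell
# with umask 0002, newly created files get mode 664 (rw-rw-r--)
d=$(mktemp -d)
(umask 0002; touch "$d/demo")
stat -c '%a' "$d/demo"   # → 664
```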
# /etc/systemd/system/crystalapp-web.service
[Unit]
Description=crystalapp-web
After=syslog.target network.target
[Service]
Type=simple
Environment="KEMAL_ENV=production"
WorkingDirectory=/home/crystalapp/app
ExecStart=/home/crystalapp/app/web -p 3000
User=crystalapp
Group=crystalapp
UMask=0002
RestartSec=1
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/crystalapp-kiqweb.service
# This is our sidekiq web ui that we will run on port 3001
[Unit]
Description=crystalapp-kiqweb
After=syslog.target network.target
[Service]
Type=simple
Environment="KEMAL_ENV=production"
WorkingDirectory=/home/crystalapp/app
ExecStart=/home/crystalapp/app/kiqweb -p 3001
User=crystalapp
Group=crystalapp
UMask=0002
RestartSec=1
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
Ok, that seems like quite a lot, but you can notice it's mostly boilerplate and the configuration itself is very minimal.
So now you can enable your three services via:
systemctl enable {crystalapp-sidekiq,crystalapp-web,crystalapp-kiqweb}
and have control over them via:
systemctl {start,stop,restart} {crystalapp-sidekiq,crystalapp-web,crystalapp-kiqweb}
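The curly braces above are placeholder notation (pick one action and one service), but since systemctl accepts multiple unit names, shell brace expansion lets you target all three services in one go - you can preview what the shell expands to with echo:

```shell
# brace expansion produces one command naming all three units
echo systemctl restart crystalapp-{sidekiq,web,kiqweb}
# → systemctl restart crystalapp-sidekiq crystalapp-web crystalapp-kiqweb
```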
Output will be logged to syslog - for convenience you can tail the logs of a given service with the journalctl -f -u {service_name} command.
So we have the web app running on port 3000, the sidekiq panel running on port 3001 and the sidekiq process running on the system - normally you would proxy those two ports via nginx/haproxy/caddy/traefik or whatever, but this should do the job as a proof of concept.
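For reference, a minimal nginx sketch for proxying both ports could look something like this (the server names are hypothetical, and you would want to restrict access to the sidekiq panel before exposing it):

```nginx
# /etc/nginx/sites-available/crystalapp (hypothetical names and paths)
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name kiq.example.com;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
    }
}
```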
Binaries for Linux 64-Bit
Next we should automate the deployment process. Ideally we would just compile binaries somewhere and put them on the server. If you have a CI environment you can set it up there and automate basically everything, but let's assume we will do everything locally.
This is where it gets a little bit tricky if you're running a different architecture than your server. I'm on a Mac and need to build Linux 64-bit binaries; cross-compilation in Crystal is possible, but not really straightforward or convenient. Also, I don't want Crystal with all its dependencies on my production server.
What I decided to do is simply create a bash script and use the official Crystal docker image. There is another problem: because I'm mounting the whole project directory, the lib/ folder is mounted as well, aaand those packages are for a different architecture (there will be problems when you're using some Crystal C bindings). So what I did - I just moved the folder aside and called it a day :P. Lame? Most definitely. Does it do the job here? Yup.
#!/bin/bash
# put this in bin/compile in the root of your project and chmod +x it
mkdir -p lib_linux # make sure the Linux shards folder exists on the first run
mv lib/ lib_darwin/
mv lib_linux lib/
docker run --rm --volume "$(pwd)":/app --workdir /app crystallang/crystal:0.24.2 \
  /bin/bash -c "crystal deps; crystal build --progress --release --stats src/$1.cr"
mv lib/ lib_linux/
mv lib_darwin lib/
With this simple script I can compile anything by calling bin/compile {crystal_file} and get a Linux binary I can then put on my server. Because I move the directories around during the process, it doesn't mess up my development (macOS) environment.
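If the directory swapping feels too magic, here is the same trick in isolation - run in a scratch directory with placeholder files standing in for real shards, and no docker involved:

```shell
# simulate the lib/ swap from bin/compile in a scratch directory
cd "$(mktemp -d)"
mkdir -p lib lib_linux
touch lib/darwin_dep.cr lib_linux/linux_dep.cr

mv lib lib_darwin        # stash the macOS shards
mv lib_linux lib         # expose the Linux shards to the build
ls lib                   # → linux_dep.cr (the docker build would run here)

mv lib lib_linux         # swap back once the build is done
mv lib_darwin lib
ls lib                   # → darwin_dep.cr
```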
Deployment automation with Ansible
Ok, so now it would be nice to automate the whole process - we have our services, we have our binaries. There is still one problem though: our crystalapp user cannot stop/start those services due to insufficient system permissions!
But there is a simple workaround for that with a sudoers configuration. Let's create a new file under the /etc/sudoers.d/ directory (it's normally auto-loaded - take a look at your /etc/sudoers file for more useful information):
# /etc/sudoers.d/crystalapp (don't put any dots in the filename because it won't be loaded)
Cmnd_Alias MANAGE_APP_CMDS = /bin/systemctl start crystalapp-*, /bin/systemctl stop crystalapp-*, /bin/systemctl restart crystalapp-*
crystalapp ALL=(ALL) NOPASSWD: MANAGE_APP_CMDS
With this simple modification the crystalapp user will be able to run sudo systemctl stop crystalapp-web without a password prompt, and it won't affect any other services/commands, so we're pretty safe here. You can read this Stack Overflow question for more references.
Now back to the Ansible configuration.
My hosts file looks like this - I specify ansible_user as I want to run the playbook as this particular user:
ip-of-the-server ansible_user=crystalapp app_directory=~/web
Because my setup for the workers is quite specific, I will only show you a basic example of how you can deploy your web app - but you can get the general idea from it, nothing fancy is happening. It's worth noting that I'm running this from a deploy/ directory - hence the ../ directory changes you see below.
We're simply compiling the binary locally, stopping the web service on the remote, copying the new binary over and starting the service back up.
# web.yml
- hosts: all
  vars:
    binary_name: web
  tasks:
    - name: Compile binary
      local_action:
        module: command
        args: "bin/compile {{ binary_name }}"
        chdir: ../
    # we need to use the shell module here because of limitations mentioned here:
    # https://docs.ansible.com/ansible/latest/become.html#can-t-limit-escalation-to-certain-commands
    - name: Stop web service
      shell: sudo systemctl stop crystalapp-web
    - name: Copy new binary
      copy:
        src: "../{{ binary_name }}"
        dest: "{{ app_directory }}/{{ binary_name }}"
    - name: Start web service
      shell: sudo systemctl start crystalapp-web
Run ansible-playbook -i hosts web.yml and hopefully it will just work ;).
Now go and build some Crystal apps, cheers!