

Compiling nchan on Ubuntu 14.04

Posted by Jamie Woods

If you’re using Ubuntu 14.04, and want to compile a version of the nginx nchan module that works with Redis, this is the guide for you. This is useful if you want to install security updates without recompiling nginx from scratch every time.
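The whole idea is to rebuild only the module against the exact configuration of the packaged nginx. Rather than trusting a pasted flag list verbatim, you can recover the configure arguments from the installed binary itself. A small sketch (the sample output string below is invented for illustration; in real use you would capture `nginx -V` directly):

```shell
# In real use: V=$(nginx -V 2>&1)
V='nginx version: nginx/1.10.3 (Ubuntu)
built with OpenSSL 1.0.2g  1 Mar 2016
configure arguments: --with-debug --with-pcre-jit --with-http_ssl_module'

# Pull out just the configure arguments line, ready to paste after ./configure
FLAGS=$(printf '%s\n' "$V" | sed -n 's/^configure arguments: //p')
echo "$FLAGS"
```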

  1. Install the nginx PPA for your system: sudo add-apt-repository ppa:nginx/stable && sudo apt-get update
  2. Install nginx from the PPA: sudo apt-get install build-essential nginx-full libnginx-mod-nchan libpcre3-dev libxml2-dev libxslt-dev libgeoip-dev
  3. This’ll install an old version of nchan, but it will install most of the dependencies that we need.
  4. Test the new nginx version to make sure it works. As we’ve just installed a different build, there may be compatibility issues compared to the standard one from the normal repositories.
  5. cd /tmp. Download and unpack the source of the nginx version we have installed, together with the nchan source:
    wget <nginx source zip URL>$(nginx -v 2>&1 | cut -d "/" -f 2).zip && unzip nginx*
    wget https://github.com/slact/nchan/archive/v1.1.14.tar.gz && tar -xf v1.1.14.tar.gz
  6. Change into the unpacked nginx source directory. Run configure with the following:
    ./configure --add-dynamic-module=../nchan-1.1.14 --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -fPIC -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/ --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module
  7. Run make modules
  8. Copy objs/ngx_nchan_module.so to /usr/lib/nginx/modules/
  9. Rename /etc/nginx/modules-enabled/50-mod-nchan.conf to 50-mod-nchan-new.conf, edit it, and change its load_module path to point at the newly built module
  10. Run nginx -t to test that it installed OK.
  11. Restart nginx. Done!
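Step 9’s renamed file typically needs only a single load_module line. A sketch, assuming the build in step 7 produced objs/ngx_nchan_module.so (nchan’s usual dynamic-module name):

```nginx
# /etc/nginx/modules-enabled/50-mod-nchan-new.conf
load_module /usr/lib/nginx/modules/ngx_nchan_module.so;
```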

Spotify for Automation

Posted by Jamie Woods

This isn’t at all what you might think it means.

Automation in broadcast is tricky. Computers aren’t always the best at deciding which two songs sound right when played back-to-back, and can leave you with some horrible segues. However, this can now be improved using cool modern tech.

tl;dr / Overview

This project required a few tweaks to the music scheduling software, and a Nerve module to automatically fill in information. In AutoTrack, one can use “Characteristics” to determine which songs can follow each other without sounding rubbish. The underlying data and tech come from Spotify, via their acquisition of the Echo Nest. Using it ensures that songs with similar tempo, energy, and mood follow each other properly.

Loading the Information

Spotify provide a free web API. Nerve has, for a while, tried to tie every track on the system with a Spotify ID. In automation, the hit rate is around 90%. The other ten percent is made up of tracks with slightly different spellings between the Spotify platform and others, and artists who opted out.

Nerve makes requests to the Spotify search API.
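A hedged sketch of what such a search request looks like: the endpoint and the track:/artist: field filters are Spotify’s documented search API, but the track chosen and the exact query built here are invented, not Nerve’s actual code.

```shell
# Build a Spotify search query for a (made-up) track.
TITLE="Never Gonna Give You Up"
ARTIST="Rick Astley"

# Percent-encode spaces and double quotes in the field-filtered query
QUERY=$(printf 'track:"%s" artist:"%s"' "$TITLE" "$ARTIST" | sed 's/ /%20/g; s/"/%22/g')

echo "https://api.spotify.com/v1/search?q=${QUERY}&type=track&limit=5"
# The real request also needs an OAuth bearer token:
# curl -H "Authorization: Bearer $TOKEN" \
#      "https://api.spotify.com/v1/search?q=${QUERY}&type=track&limit=5"
```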

It picks either the first result that perfectly matches the title and artist, or failing that, the top result. The field filters used in the query cover most use cases, and there is not a single track in that 90% that is completely the wrong song. A few remixes are incorrectly tagged, but these are updated when we notice them.

Nerve then makes a further request to load this audio information.

This information is then written to playout, in this case through the AutoTrack database.

Bonus: Mass Loading Information

When Nerve was created, it did not use Spotify. This was, in reality, an unfortunate decision, as MusixMatch quite frankly sucks. Hence, we had to bootstrap the existing library to load in this information.

Spotify rate limit requests, but that didn’t stop us writing a simple script that, honestly, just iterated through every track in the library, tying it to a Spotify ID where possible.

The next bit (loading audio information) used the batch audio-features API, which returns up to 100 tracks’ information per request, significantly faster than before. Not all tracks on Spotify have an audio analysis, interestingly, so after about 24 hours (the 404 results seemed to enqueue the tracks for analysis) we ran the script again. This time, near enough every track completed.
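The mass loader boils down to chunking track IDs into batches of 100, the documented limit for the batch audio-features endpoint. A sketch of that logic with made-up IDs, and without the actual HTTP call or rate-limit handling:

```shell
# Chunk (fake) track IDs into batches of 100 and emit one
# audio-features request URL per batch.
IDS=$(seq -f "id%g" 1 250)   # stand-ins for real Spotify track IDs

BATCH=""; COUNT=0; REQUESTS=0
emit() {
    echo "https://api.spotify.com/v1/audio-features?ids=$BATCH"
    REQUESTS=$((REQUESTS + 1))
}
for id in $IDS; do
    BATCH="${BATCH:+$BATCH,}$id"
    COUNT=$((COUNT + 1))
    if [ "$COUNT" -eq 100 ]; then
        emit; BATCH=""; COUNT=0
    fi
done
if [ -n "$BATCH" ]; then emit; fi
echo "made $REQUESTS requests"   # 250 IDs -> 3 batches
```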

All code is in the Nerve repository on GitHub, so you can use it free of charge for your own station. Migrating your library fully to Nerve is a bit of a pain. We’re investigating ways to make it easier, but sadly it’s not a priority at the moment. Feel free to hack on the code and make pull requests – we’ll accept ’em.

AutoTrack Scheduling

Four characteristics were defined: Energy, Tempo, Danceability and Mood. Values 1 through 6 were defined as Very Low through Very High.

Within AutoTrack, the global rules were edited to take these characteristics into account.

These rules stayed roughly the same for each characteristic; Energy changed a little from the others.

Next, the Clock Rules were updated. The Characteristic Follow rule became a guide, so it can be broken if necessary. It’s rarely needed, but initially about 10% of slots were being left blank because no song satisfied the rule, and we don’t want that.

That’s it. Through this, we managed to significantly improve the quality of automation. 😁


How-To: Super Scalable Streaming with HLS

Posted by Jamie Woods

This tutorial will run you through how to build a crazy scalable streaming platform for your radio station, using HLS. This is a long one, so grab a cup of coffee or a Coke.

First, you’ll need to sign up to Google’s Project Shield, which is totally free for independent (e.g. community) radio stations. If you want to use another provider, that’s fine also.

You’ll also need a Linux (or other UNIX-y) server. Although nginx builds fine on Windows, we haven’t tested this set-up there.

Although this works on most platforms, there are some exceptions, notably Internet Explorer on Windows 7 and below; for that, you can probably use Flash as a polyfill. We’re not using HE-AAC here, as it’s not supported in all major browsers. If you want it anyway (you’ll break support for Flash/IE, Firefox, and probably Opera, which is far too many users), add “-profile:a aac_he_v2” to the ffmpeg command.

Setting Up the Server

On your Linux server, make sure you have GCC installed.

On CentOS/Fedora/etc., run yum -y install gcc gcc-c++ make zlib-devel pcre-devel openssl-devel git autoconf automake cmake pkgconfig.

On a Debian machine, you can run apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip git autoconf automake cmake pkg-config. If you’re using another flavour of Linux (or UNIX), find the packages above and install ’em.

Next, we need to get and compile nginx with the “nginx-ts-module”. You can also use nginx-rtmp-module, but there’s no point unless you want a full RTMP server.

Move to the tmp directory with cd /tmp

To do this, go to the nginx website and download the latest stable version. For instance, run wget http://nginx.org/download/nginx-1.12.1.tar.gz

Extract it. tar -xf nginx-1.12.1.tar.gz

Next, run git clone https://github.com/arut/nginx-ts-module.git to download nginx-ts-module

Now, cd nginx-1.12.1

Build nginx by running:

./configure \
--user=nginx                          \
--group=nginx                         \
--prefix=/etc/nginx                   \
--sbin-path=/usr/sbin/nginx           \
--conf-path=/etc/nginx/nginx.conf     \
--pid-path=/var/run/         \
--lock-path=/var/run/nginx.lock       \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module        \
--with-http_stub_status_module        \
--with-http_ssl_module                \
--with-pcre                           \
--with-file-aio                       \
--with-http_realip_module             \
--without-http_scgi_module            \
--without-http_uwsgi_module           \
--without-http_fastcgi_module

Then, make && make install

Now, to install it on boot, run these commands:

useradd -r nginx

wget -O /etc/init.d/nginx <init script URL> && chmod +x /etc/init.d/nginx

The on-boot commands may differ based on your operating system. Google for “build nginx” plus your distribution’s name and see what it advises.

Excellent. We’ve built it!

Configuring nginx

In /etc/nginx/nginx.conf, add the following server block:

server {
	listen 80 default;
	root /srv/stream;
	client_max_body_size 0;

	location /stream {
		# assumed: only the local ffmpeg relay may publish here
		allow 127.0.0.1;
		deny all;
		ts;
		ts_hls path=/srv/stream/hls segment=10s segments=30;
		ts_dash path=/srv/stream/dash segment=10s segments=30;
	}

	location ~ \.(mpd|m3u8)$ {
		add_header 'Access-Control-Allow-Origin' '*';
		add_header 'Cache-Control' 'max-age=9';
	}

	location ~ \.(ts|mp4)$ {
		add_header 'Access-Control-Allow-Origin' '*';
		add_header 'Cache-Control' 'max-age=86400';
	}
}
Next, run mkdir -p /srv/stream/hls/stream /srv/stream/dash/stream to make our web folders for nginx to serve. Then, chown -R nginx: /srv/stream so the worker process can write the segments.

Now, we can start nginx with service nginx start, and enable it on boot with chkconfig nginx on.

Our HLS stream becomes available at http://host-name/hls/stream/index.m3u8, and DASH at http://host-name/dash/stream/index.mpd.

Adding the Plumbing

We have our origin server configured now. Next, we need to pipe some audio into it.

First, install ffmpeg. We need it built with “libfdk-aac” so we can get a good quality AAC stream at low bitrates. You can follow a guide for CentOS on how to do so. For Ubuntu/Debian, make sure you have installed autoconf, automake, cmake, and pkg-config (we did this above); you can then follow the CentOS guide, which should be mostly the same.

In /usr/local/, add a script containing the following (replacing the placeholder with your regular stream URL):

while true; do
    # input: your existing Icecast stream
    # output: the nginx ts module (URL assumed from the nginx config above)
    ffmpeg -i <your Icecast stream URL> -c:a libfdk_aac -b:a 128k -f mpegts http://127.0.0.1/stream/stream
    sleep 0.1
done

These commands tell ffmpeg to grab a copy of your Icecast stream, encode it with AAC, and send it to the server so that it can be repackaged for browsers. If ffmpeg crashes, it will automatically restart after a tenth of a second (the sleep is intentional: on an ill-configured system, an instant restart loop would choke the available system resources).

If you want to get even better quality, you can use an RTP stream or something else in the ffmpeg command. The reason I’m not using this in the example above is that, to achieve that, it requires an overhaul of your streaming architecture to use PCM on ingest. If you have a Barix based STL, you can configure it to RTP send to your streaming server, thus saving the need for an extra audio interface.

Append to /etc/rc.local a line running our script in the background (the script path followed by “&”), to instruct the system to automatically start it on boot:


(You could do this properly using systemd or SysV init for production, but this works well enough)
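If you do want the systemd route, a minimal unit would look something like the following. The unit name and script path are placeholders, since the script’s name was lost from this post:

```ini
# /etc/systemd/system/hls-relay.service  (name is a placeholder)
[Unit]
Description=Icecast to HLS/DASH relay
After=network.target nginx.service

[Service]
ExecStart=/usr/local/your-relay-script
Restart=always
RestartSec=1

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now hls-relay, and systemd takes over the restart-on-crash duty from the while loop.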

Excellent. When you run the script (or reboot), you should be able to access your streams.

Setting up the CDN

CDNs are designed to copy your content so that it can scale. For instance, instead of having one server (our origin) serving 100,000 clients, we can spread the load across lots of “edge” servers. CDN providers make this easy and are far more cost-efficient than scaling the origin yourself.

We’re using Project Shield because it’s totally free for indie media and harnesses the power of Google’s infrastructure. Other free options exist, like CloudFlare, but CloudFlare’s terms of service don’t let you use it for multimedia.

The nginx config we used above should allow your CDN to work properly from the outset. It caches the manifest/index files (which tell the client which audio segments to fetch) for 10 seconds; after 10 seconds the files will have changed, so we want the edges to update. The media files are cached for a day, so once the CDN grabs a segment it will never need to fetch it again.

Usage & Testing

I’ll post some sample player code on GitHub, but you probably want to use the HLS stream with hls.js. Why not DASH? nginx-ts-module, at the time of writing, leaves tiny gaps between audio segments in its DASH output. These are pretty audible to the average listener, so until that is fixed we might as well stick with HLS.

Once again, the streaming URLs will be http://host-name/hls/stream/index.m3u8 (HLS) and http://host-name/dash/stream/index.mpd (DASH).

Do consider setting up SSL. It requires a couple of tweaks in your nginx configuration, and Let’s Encrypt makes it much easier. Google are working on support for Let’s Encrypt in Shield; hopefully, by the time you read this, it will be much easier to use, instead of having to manually replace SSL/TLS certificates every 90 days.

Going Further

The nginx-rtmp-module, which is similarly easy to install, provides adaptive-bitrate HLS (but not DASH). However, there is a bug in Mobile Safari that adds silent gaps between the segments. The TS module will soon support adaptive HLS; this guide will be updated when that happens. You can also hack it in yourself.


Email: Vulnerable To Blacklist

Posted by Jamie Woods

This morning I received an email informing me that the Insanity Radio mail server had been added to two email blacklists.

*Groan* go most system administrators. This usually means one of our users has had their email account compromised, probably by using a weak password. Most of the time, that assumption is right. However, the reasons behind this incident were more alarming than you’d expect, and actually point to a vulnerability in most Postfix/SpamAssassin set-ups.

A bit of background: Postfix is a mail server. SpamAssassin is a service which is fed emails and responds with a score indicating how spammy each one is. Combine the two and you have a pretty good way to filter out spam. However, this is where it gets a bit more tricky.

Postfix has a concept of filters – a way of running email through a bit of code. There are two main types: before-queue and after-queue. A before-queue filter runs before the email is accepted by the server; an after-queue filter runs after the email has been accepted.

The problem is that most SpamAssassin set-ups work after-queue. This means your incoming mail server accepts spam emails before it scans them. If it then rejects an email, it has to send a bounce message back to the envelope sender saying the mail was rejected, and why.
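To make the distinction concrete, here is a sketch of the two wirings in Postfix’s master.cf. The filter transport name and port are illustrative, not from the original post, and the spamassassin transport would need defining elsewhere in master.cf:

```text
# /etc/postfix/master.cf
# After-queue (the common, vulnerable pattern): accept first, scan later
smtp      inet  n       -       y       -       -       smtpd
  -o content_filter=spamassassin

# Before-queue alternative: proxy the live SMTP session through a filter,
# so spam can be rejected while the sender is still connected
#smtp     inet  n       -       y       -       -       smtpd
#  -o smtpd_proxy_filter=127.0.0.1:10025
```

The key property of the before-queue wiring is that spam is refused during the SMTP session itself, so no bounce message ever needs generating.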

You’ve probably seen emails from “MAILER DAEMON” saying stuff like “The email you tried to send your thing to doesn’t exist”. It’s pretty much the same.

Now, there’s actually a surprisingly easy way to exploit this. This could be used by an attacker to quite comfortably destroy the reputation of a mail server. Scary, huh? The attack can be done in a few simple steps.

  1. Connect to your target mail server.
  2. Send the most horrible spammy email you can, one that checks all the boxes on SpamAssassin
  3. Set the envelope sender (the MAIL FROM address) to a honeypot address
  4. Rinse & Repeat

The mail server will send a bounce email to the honeypot address, and will get added to an email blacklist as a result.
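Concretely, one round of the attack is just a vanilla SMTP session. Every address below is invented; GTUBE is SpamAssassin’s standard test string, which any working set-up scores as spam:

```text
$ telnet target-mail.example 25
HELO attacker.example
MAIL FROM:<trap@honeypot.example>        <- envelope sender is the spam trap
RCPT TO:<someone@target-mail.example>
DATA
Subject: totally legitimate offer

XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X
.
QUIT
```

The after-queue server accepts the message at DATA, scans it, decides it is spam, and dutifully bounces it to the honeypot address, which is exactly the behaviour blacklist operators are watching for.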

The best solution is simply to install amavisd and let it handle the interconnection with SpamAssassin.



Posted by Jamie Woods

This blog details the technical changes that occur under-the-hood at Insanity Radio 103.2FM, a community radio station based in Surrey.

Why? Our technical operations are very fast paced, and we do a lot of things that aren’t industry standard. But we want them to be.

Feel free to pinch any of the ideas on this site for your own use, commercial or otherwise. You’re also welcome to criticise them – we’d honestly love to hear feedback.