
Using WordPress Sessions Everywhere In PHP


Like a huge number of sites, Insanity’s website runs off of WordPress.

Although we use custom WordPress plugins all over the place, there’s plenty of traditional PHP dotted around our website (for example, on our listen pages).

There might be times when you need to get access to session data (e.g. to check if someone’s logged in) from outside of WordPress. Why add another login form when you can reuse the session you already have?

You get the point. Let’s get straight into the code.

<?php
// Load just enough of WordPress to read the login session – no plugins or themes.
define('SHORTINIT', true);
define('ABSPATH', '/path/to/wordpress/');
define('WP_PLUGIN_URL', '');

require_once ABSPATH . 'wp-load.php';
require_once ABSPATH . WPINC . '/link-template.php';

define('SITEROOT', get_site_url() . '/');

// Pull in only the classes and helpers that authentication needs.
require_once ABSPATH . WPINC . '/class-wp-user.php';
require_once ABSPATH . WPINC . '/class-wp-roles.php';
require_once ABSPATH . WPINC . '/class-wp-role.php';
require_once ABSPATH . WPINC . '/class-wp-session-tokens.php';
require_once ABSPATH . WPINC . '/class-wp-user-meta-session-tokens.php';
require_once ABSPATH . WPINC . '/capabilities.php';
require_once ABSPATH . WPINC . '/user.php';

// Define the COOKIE_* constants so WordPress can locate its auth cookies.
wp_cookie_constants();

require_once ABSPATH . WPINC . '/vars.php';
require_once ABSPATH . WPINC . '/kses.php';
require_once ABSPATH . WPINC . '/rest-api.php';
require_once ABSPATH . WPINC . '/pluggable.php';
require_once ABSPATH . WPINC . '/general-template.php';

if ( !is_user_logged_in() ) {
        // Not logged in – bounce to the WordPress login page, then back here afterwards.
        header('Location: ' . wp_login_url(SITEROOT . $_SERVER['REQUEST_URI']));
        exit;
}

$wp_user = wp_get_current_user();
// Do whatever you want with $wp_user here, e.g. check $wp_user->user_login
// or $wp_user->has_cap('manage_options').
?>


LetsEncrypt and Cloudflare: Generating Wildcard SSL Certificates Securely


If you’re looking for a simple command to do this, look no further, there’s a tl;dr below!

If you use Cloudflare to run your website, chances are you use them to handle your website’s DNS records too. Which is great. However, say you want to add SSL support to your website without using the one built into Cloudflare (it doesn’t always work best for us).

This is do-able. LetsEncrypt has been around for a while, and recently it began to support wildcard SSL – essentially a way of saying you own the whole of *.insanityradio.com instead of just insanityradio.com and www.insanityradio.com.

Great! To generate a wildcard certificate, you have to prove that you own your domain, and the only truly robust way to do this is with DNS records – they’re the glue that holds your domain together. But there’s a caveat, and what most online tutorials will tell you to do opens up a HUGE security vulnerability. Not kidding.

Before we delve deeper into a solution, let’s have a quick recap so we understand exactly what we’re trying to achieve.

How does LetsEncrypt work?

LetsEncrypt will give you an SSL certificate if you can prove that you have enough access to a fully-qualified domain name. Often, proving you control the content on the website is enough. But to get a wildcard certificate and prove you own the whole domain, you need to prove you can control the website’s DNS records.

How does it prove you have enough access to the domain? It uses a mechanism called challenge and response. We can nicely sum it up in 5 simple steps.

  1. We ask LetsEncrypt to validate that we own x.insanityradio.com.
  2. LetsEncrypt sends us a challenge text for this domain – think of it like a 2-factor authentication token – it changes every time.
  3. We put this challenge text up on the website, and tell LetsEncrypt to go and check it.
  4. LetsEncrypt tells us that they’ve been able to verify it.
  5. We ask LetsEncrypt to generate an SSL certificate for the domain(s) we’ve just validated.

Traditionally, LetsEncrypt would just prove we control the content on the website. If you remember having to upload those google_xaohdksjahdsakjdhas.html files to prove to Google Analytics that you own the site, it’s similar, but just highly automated.

Let’s go a step further. If we want to prove we own [everything].insanityradio.com, we’d have to scan every possible subdomain for this code, because verifying one doesn’t imply we own all of insanityradio.com (think – if we owned myblog.wordpress.com and tried to verify wordpress.com, that wouldn’t make sense). But there are infinitely many possible subdomains, so this obviously isn’t possible – we can’t do those 5 steps an infinite number of times.

The best solution? Prove we own insanityradio.com’s DNS, and hence physically control all of these sites underneath it.

This works exactly the same as above, but instead of shoving a file on the web server, we have to change its DNS records.
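For illustration, the record LetsEncrypt ends up checking is just a TXT record along these lines – the token here is made up, as a fresh one is issued for every challenge:

_acme-challenge.insanityradio.com. 300 IN TXT "gfj9Xq...Rg85nM"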

Enter Cloudflare – our DNS provider.

Cloudflare DNS

For this to work best, we need to automate changing these records – unless you want to do this whole process by hand every 60 days. The answer seems immediately obvious, and is well documented everywhere:

Just feed your Cloudflare API key and email into the LetsEncrypt automation script, and let it handle everything.

But hang on – there’s a security hole!

Say someone hacks your web server somehow. Traditionally, this is probably not the end of the world.

But hang on, we’ve saved our Cloudflare API key on the server. When the attacker steals this, it’s game over. They have full control over our Cloudflare account, and the worst bit is Cloudflare won’t email us to tell us someone’s using our key.

Use your imagination to guess what an attacker could do with this access. They could:

  • Redirect your email traffic and intercept it, reset your passwords, and permanently steal your domain.
  • Add malicious code to your website without tampering with the web server.
  • Disable your Web Application Firewall.
  • Rack up a huge bill by using billable Cloudflare features.

Big yikes.

What do now?

Cloudflare doesn’t let you generate a key that’s limited to just this task – you have to feed the script your entire account’s key.

However, there’s a better solution. It’s not perfect, but it completely mitigates the risk of the above.

Say we have a second domain we don’t care much about – insanityradio.co.uk. Nothing runs on it, but we keep it registered because #brand. If you don’t, you can simply register a new one – the name itself doesn’t matter, it could be anything. We’re going to set up this secondary domain so we can use it to verify we own insanityradio.com’s DNS.

What?!? Yup. That’s a thing.

Let’s sign up for a new Cloudflare account (avoid using an email on the secondary domain), and set up our secondary domain with it. As we don’t use this domain for much, it’s less of a concern if an attacker gains access to it. They can’t really do much except generate SSL certificates for our main website – and even if they do, they don’t have the means to deploy them.

How does this work?

To validate DNS (step 3), we normally create an _acme-challenge.insanityradio.com TXT record. This contains the challenge response from step 2.

Instead of creating a record with text, we can create an alias record (a DNS CNAME) that tells LetsEncrypt to look at another domain for this response.

For instance, to verify `insanityradio.com`, we can set up the following:

_acme-challenge.insanityradio.com CNAME _acme-challenge.insanityradio.com.insanityradio.co.uk

The domain name doesn’t need to be as huge as it is – in fact, we can point the record at any domain. However, if you’re verifying multiple sites in one go, you can’t reuse the same target for all of them, because DNS propagation and caching are slow. It’s best practice to use something unique per domain, and the easiest way to do that is just to use the domain itself.
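Before issuing anything, it’s worth sanity-checking that the alias record is in place. Something like this (substitute your own domains) should print the target you configured:

dig +short _acme-challenge.insanityradio.com CNAME
# should print: _acme-challenge.insanityradio.com.insanityradio.co.uk.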

The Takeaways

Some valuable lessons have been learnt.

  1. Avoid giving scripts full access to your domain’s DNS.
  2. Cloudflare’s API keys are extremely dangerous.

tl;dr (aka the code)

  1. Sign up for a new Cloudflare account with a secondary domain and find its API keys
  2. Add an _acme-challenge.insanityradio.com CNAME record pointing to _acme-challenge.insanityradio.com.insanityradio.co.uk (that’s a big boy). If you want to verify a deeper subdomain like *.cor.insanityradio.com, add a matching record: _acme-challenge.cor.insanityradio.com CNAME _acme-challenge.cor.insanityradio.com.insanityradio.co.uk
  3. Install acme.sh: curl https://get.acme.sh | sh
  4. Run the following to generate a certificate:
    export CF_Key="my cf api key"
    export CF_Email="[email protected]"
    acme.sh --issue \
    	-d insanityradio.com --challenge-alias insanityradio.com.insanityradio.co.uk \
    	-d '*.insanityradio.com' --challenge-alias insanityradio.com.insanityradio.co.uk \
    	-d '*.cor.insanityradio.com' --challenge-alias cor.insanityradio.com.insanityradio.co.uk \
    	--dns dns_cf
    
  5. Add the following cronjob, to ensure your certificates are auto-renewed close to expiry. Make sure to replace `user` with the user you installed acme.sh as.
    0 0 * * * /home/user/.acme.sh/acme.sh --cron --home /home/user/.acme.sh > /dev/null

Studio Lighting On The Cheap: A Quick Overview


As we’ve written about before, visualised radio is one heck of a way to connect with your audience through social media.

But making your studio look good on camera isn’t as easy as re-painting your studio and putting the cameras in good places – your lighting needs to not suck. Here’s some quick lighting advice that will make your radio studio look ten times better without investing in expensive studio lighting.

You can apply this without investing in any specialist lights if your studio has ceiling spotlights, as these can easily be re-angled.

Camera Colours

It’s probably a good idea to configure your studio cameras to have the same colour temperature and gain. The lower the gain, the less noisy the image will be.

Make sure that your colour temperature isn’t too warm or cool.

Light Positioning

The main thing is that you position your lights properly. You want some sort of light behind your presenters to avoid large shadows, and you’ll need a light on the presenters so that you can see them.

The most important thing is the vertical angle of the light – you shouldn’t have it facing directly down onto a presenter. Instead, aim for a diagonal (45º is good) by angling the lights above one presenting position towards the other.

Our studio has spotlights. Many standard fitting lights allow you to angle the lamp. You can use this to point the lamp in a different direction.

If you can’t re-angle your lights, you can cheat and cut a piece of black plastic into a semi-circle and use this to direct light.

Lighting Colour

Remember that cameras capture images using three channels – red, green and blue. This is very different to how our eyes perceive colour, so some lights will simply not look good on camera.

Fluorescent (and compact fluorescent) lamps only really emit a small range of colours, and tend to show up very green on camera.

Halogen lamps don’t emit much blue light, but by slightly adjusting the colour temperature you can somewhat compensate. You’ll still likely get a very warm image, which might not be what you expect.

White LED lamps produce light at a lot of different frequencies, and have a relatively good balance of colour. As you’d expect, cool LEDs produce more blue light than warm lamps, so depending on the feel of your studio/station choose wisely. Warmer LEDs, however, produce a lot of green light.

LEDs also have a good lifespan and are more environmentally friendly.

Comparison

Before we made these simple changes to our room lighting, our picture quality was very poor. It was hard to watch as the studio looked very pink, even with colour temperature adjustments on halogen bulbs. After swapping to cooler bulbs and re-angling lamps to direct light diagonally, the quality of picture increased dramatically.


Insanity At Reading – Visual OB Write-Up


Insanity successfully undertook its first ever visualised radio Outside Broadcast. A joint effort between a hardworking studio and remote team, we managed to achieve something only previously done by the big national stations with flashy budgets. And it didn’t look too bad on air either.

Here’s how we did it.

Studio End – Receiving Content

Before we even started, we had to work out how possible it would be to ingest video from a remote site, and air it both on FM and on our online visual stream.

Luckily, we have a spare computer in our main studio. It’s powerful enough to receive and stream at the same time – great. We normally use this machine to receive outside broadcast content, so it has a mix-minus bus. To output video, we installed the Open Broadcaster Software (OBS) and Dual Monitor Tools.

A DisplayPort to HDMI converter allowed us to connect the OB1 machine to one of the HDMI inputs on the Blackmagic ATEM – which is how we broadcast our normal visual stream.

Dual Monitor Tools allowed us to lock the cursor to the left screen, which avoided any possibility of it appearing on the fake monitor.

OBS was also configured to individually stream to Facebook Live, allowing us to air our show on this platform without using our main studio programme feed (which contains copyrighted music). This allowed us to just broadcast the video inserts and playout with some royalty-free music.

OBS Setup

OBS was configured with 3 different scenes. These were:

  • Slate (and a clone, to allow editing of text)
  • Live
  • Video Tape

Live and Video Tape contain a VLC source. The live source is set to always be active, and is fed by the RTMP feed from remote. The slate is just an image.

The Video Tape scene is configured to play a video file, and will automatically restart/cue when the scene is made live. Frustratingly, it is not possible to get playback time information from OBS/VLC, so we had to rely on in/out cues.

OBS has the Stereo Tool plugin loaded, allowing us to process (as one) the return microphone feeds (there are five microphones). These are then mono summed before being outputted to the studio mixer. To use Stereo Tool as a VST, you need a license. The cheapest one is £30, and we’d definitely recommend the software as it’s fantastic. This is the preset that we used.

We set up Stereo Tool as a filter on the Live and Video Tape scenes. Letting OBS process played-out audio instead of pre-processing it allowed us to speed up the turnaround time of clips.

Festival End – Sending Content Back

Interviews were recorded (1080p 35Mbps MPEG2/MXF), rendered into a package, and transmitted to the studio over the internet (encoded as a 4Mbps MP4 file). All pre-recorded content was played out from the studio end, so we’ll touch on that later.

This was surprisingly straightforward. Massive thanks to Smoke Media who were able to loan us a HDMI camera, we’d otherwise have had to hack a smartphone.

A Teradek VidiU (although any RTMP sender could be used – this one happened to be small enough to fit comfortably inside our flight case) accepted an HDMI input from our camera, and sent video to the server sitting back at our studio site at about 3Mbps. This was done over the public internet (Festival Republic provided us with a wired ethernet connection, shaped to 10Mbps up / 10Mbps down).

Audio

A mixer was set up with 6 microphone inputs from some SM58s, and the stereo output connected to the camera’s XLR inputs. The mixer’s second bus was used to send incoming talkback over Sennheiser belt packs to the presenter, although we didn’t really end up using this. Stereo Tool on the studio end processed these microphone feeds to appropriate levels, to ensure they sounded good on air. Care was taken to ensure that the audio input had the necessary headroom – the processing has AGC, which would bring levels back up at the studio end. However, as we learnt, the analogue-to-digital audio converters on cameras are quite noisy, so don’t give yourself too much headroom.

This allowed us to send back audio and video over the same channel, in sync. Due to budgeting reasons, we didn’t have an alternative transmission path, but thankfully as we were not broadcasting all content from site this was not as big an issue as it could have been.

Playing Out Interviews

We ended up using MEGA to send back MP4s from the festival. The audio on these is unprocessed (so you’d average about -30 dB during speech, depending on how far away guests would hold the microphone), but was edited remotely to ensure that any bad language was cut.

Sadly, no radio playout software that we’re aware of (and can afford) can also play out video content, so we could not rely on automation to air it. Darn.

OBS allowed us to insert interviews into a scene (which was then streamed as well as fed to the studio), and to not loop them, so that we could manage playout. As a result, the studio operator could cut from the live feed to the playout feed when given the cue by our presenter. OBS doesn’t provide timers for its media players, however, so we had to rely on known out-cues from clips to segue out. It seems possible that we could develop an OBS plugin in the future that does exactly this.

Some continuity inserts were pre-recorded in a similar way, as we really wanted to go and see a live band that was playing at the same time as the broadcast.

OBS does not have any media/asset management system, which made it more complicated than should be necessary to load in content for playing out. We could have created multiple scenes to do this, but if anything needed changing we would have had to duplicate these changes by hand in every scene.

The Actual Running

It was a bit hectic. We had several people unfortunately pull out from being able to assist in the studio, but it all got sorted in the end.

Some interviews were recorded 15 minutes before they were due to air (for various admin reasons, we couldn’t air the interviews live), so in some cases it was a scramble to quickly chop up video, edit audio for language, encode/render, and load this into playout. This was mostly done using ffmpeg on the command line, as we don’t have a license for Adobe Premiere or other video editing software.
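To give a rough idea of what that looked like, trimming and re-encoding a single clip for playout was a one-liner along these lines – the filenames and timecodes here are made up:

ffmpeg -i interview_raw.mxf -ss 00:00:12 -to 00:14:30 \
    -c:v libx264 -b:v 4M -c:a aac -b:a 192k interview_edit.mp4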

For YouTube, audio is loaded into Reaper, processed with Stereo Tool (although we do have an internal web-based tool for this now), and then re-attached to the video. No re-encode necessary.
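That re-attach step is just a remux – the video stream is copied untouched and only the audio is encoded. A sketch, with assumed filenames:

ffmpeg -i interview_edit.mp4 -i processed_audio.wav \
    -map 0:v -map 1:a -c:v copy -c:a aac -b:a 192k interview_youtube.mp4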

Phew.

Next year, anyone?

We couldn’t have done this without the help of some fantastic organisations. Special thanks to Festival Republic and LD Communications, for allowing us to do this, to Smoke Media, for letting us pinch a HDMI cam, rhubarbTV, for the video encoder, and to BBC R&D, just for being fantastic.


Visual Radio Metapost: Looking Back


This year, Insanity launched its visual platform – mostly to showcase how radio can be professionally visualised on a shoestring budget.

This post aims to solve some of the less technical problems with launching a visual radio platform, and how we solved them.


When To Stream

Big question: when do you have the cameras on? In fact, it’s not so simple. Don’t forget that with visual radio you have lots of different platforms to stream on.

For us, we almost always stream on our website. As we don’t market this stream extensively, it doesn’t subtract from the impact of the platform.

For special events, we stream on Facebook and YouTube. Facebook draws our biggest audience engagement figures, since you can already target your audience – they’ve probably liked your page.


Licensing Woes

This was the biggest issue for us.

Community radio stations in the UK, like all other stations, hold PRS and PPL licenses to cover music streaming, both terrestrial and online. The wording of these license terms is very vague, but our interpretation is that a visualised radio stream, with the original station audio, counts as a simulcast. The only downside is that this limits our distribution on third-party platforms – when we do use them, we need to be very careful not to include music. As long as you have some factor of control (even if that’s just the ability to start or stop your stream) over the platform you’re broadcasting on, you are probably within the terms of the license.

Although the services we stream on have music licenses of their own, automated filters are unforgiving and overzealous.

But on-demand, we can completely avoid that issue: as per our social media guidelines, OD content should ideally be a single link or idea.

(Remember, we are not your lawyers – please seek legal advice on the terms of your music licensing contract if you’re unsure!)


Getting The Presenters Onboard

Not everyone wants to live stream their show. During the first scheduling term after launch, about ten of our hundred shows decided not to stream themselves on the platform. After a few months, that number dropped to one.

Remember, the radio studio isn’t becoming a TV studio – there’s no pressure on looking amazing on camera.

With the rise of social media, video has become the first-class content online – not audio. Providing just something to go with that audio is exactly what visual radio is about.


Dockerizing Radio: AudioEngine


It’s been known for a while that Docker, and containers in general, are slowly creeping into IT infrastructures.

Research shows they are stable enough to use exclusively in production – so why don’t we hop on the bandwagon for radio?

Insanity Tech is developing AudioEngine – a collection of scripts and utilities for virtualising a radio station’s streaming stack. We’ve just released v0.1.0-alpha, as a proof of concept.

To get started, install Ruby, Docker and docker-compose on a server, and clone https://github.com/InsanityRadio/AudioEngine.git.

Create a config.yml (from config.yml.dist). Insert your configuration, and run ruby scripts/build_config to build the docker-compose configuration. Then you can build and launch your streaming system.
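End to end, the bootstrap looks roughly like this – a sketch that assumes you copy the dist config as a starting point and use docker-compose to build and launch:

git clone https://github.com/InsanityRadio/AudioEngine.git
cd AudioEngine
cp config.yml.dist config.yml    # then fill in your stream details
ruby scripts/build_config        # generates the docker-compose configuration
docker-compose build && docker-compose up -d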

Not bad, huh?

The development code is up on GitHub, and developers are, as always, invited to contribute.


Compiling nchan on Ubuntu 14.04


If you’re using Ubuntu 14.04, and want to compile a version of the nginx nchan module that works with Redis, this is the guide for you. This is useful if you want to install security updates without recompiling nginx from scratch every time.

  1. Install the nginx PPA for your system: sudo add-apt-repository ppa:nginx/stable && sudo apt-get update
  2. Install nginx from the PPA: sudo apt-get install build-essential nginx-full libnginx-mod-nchan libpcre3-dev libxml2-dev libxslt-dev libgeoip-dev
  3. This’ll install an old version of nchan, but it will install most of the dependencies that we need.
  4. Test the new nginx version to make sure it works. As we’ve just installed a different version, we might have caused some compatibility issues compared to the standard one (i.e. what’s in the normal repositories)
  5. cd /tmp, then download and unpack the source for the nginx version we have installed, along with nchan:
    wget http://nginx.org/download/nginx-$(nginx -v 2>&1 | cut -d "/" -f 2).tar.gz && tar -xf nginx-*.tar.gz
    wget https://github.com/slact/nchan/archive/v1.1.14.tar.gz && tar -xf v1.1.14.tar.gz
  6. Change directory to the extracted nginx source. Run configure with the following:
    ./configure --add-dynamic-module=../nchan-1.1.14 --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -fPIC -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module
  7. Run make modules
  8. Copy objs/ngx_nchan_module.so to /usr/lib/nginx/modules/ngx_nchan_module_new.so.
  9. Rename /etc/nginx/modules-enabled/50-mod-nchan.conf to 50-mod-nchan-new.conf, edit it, and change ngx_nchan_module.so to ngx_nchan_module_new.so (see the example just after this list).
  10. Run nginx -t to test that it installed OK.
  11. Restart nginx. Done!
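For reference, the edited 50-mod-nchan-new.conf from step 9 should contain just a single load_module line, something like the following (the exact module path may differ on your system):

load_module modules/ngx_nchan_module_new.so;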

Visual Radio Part 2: Automated Vision Mixing


So you want to get into visualised radio. Great! That’s what we’re doing, too.

This is a very beefy post. When I have more time, I’ll update it to include more detail and justification.

All the national stations have automatic vision mixing – so all their videos are automatically generated with a lot of complexity (and also with some huge license costs – commercial products are super expensive). How can we achieve this on the cheap, and with high quality?

Insanity worked on this last summer, using a cheap vision mixer and our existing analogue mixing desk (Sonifex S2) with no auxiliaries. Sadly I didn’t take any photos before writing the article, so it is just a big wall of text.

Our shopping list for this project includes:

  • A Blackmagic ATEM switcher (you’ll see why this brand specifically later)
  • Some cameras (we used Marshall CV500MBs) – make sure they have a (stereo) audio input that works alongside their SDI output
  • A server with a free USB port
  • A joystick controller port (USB, in this case)
  • Solder, some D-sub 9’s, XLR connectors, 3.5mm jacks, and lots of wire

I’ll assume you’ve set up all the cameras how you want them. We’ve used three – a wide angle, presenter side view, and guest side view.

Here’s our full system diagram:

Firstly, we need to make some cables. For each microphone, a Y-splitter. We use this to split the return signal from the processor into two – one goes back to the desk, the other goes to the cameras. We’re not actually using the sound from the cameras, we’re just going to measure its levels, so using a Y-splitter doesn’t actually impact the quality.

The next cables we need to make (one per “focused” camera) are odd ones – female XLR to 3.5mm jack (replace the 3.5mm with whatever inputs your cameras have). Leave the cold core in this cable completely disconnected – don’t pull it to ground like you normally would when converting balanced to unbalanced. As above, we don’t care about the audio quality going into the camera, and doing this won’t affect the audio return to your mixer. Connect it up neatly, and do a few sanity checks on your mics to make sure you have the wiring correct.

In our cameras, we had to adjust the audio setting so that it used the line input [from the 3.5mm jack]. The audio levels should then become visible in the ATEM mixer, pre-fade. Don’t turn the channels on – this being pre-fade doesn’t matter too much. As the 3.5mm jack is stereo, and (hopefully) our camera supports stereo audio, we can wire two microphones up to each camera to avoid having to use over-the-top mixing circuits or the like.

Note: to actually get audio directly out of the ATEM mix, we provided it a copy of our PGM from the distribution amplifier to make it happy – we don’t want to use the audio we’re getting from the mics as it’s pre-fade and hence always on, and also it sounds somewhat bad.

That’s great, but how do we monitor events like fader starts, and, most importantly, which microphones are live? The solution: a simple joystick controller.

We created some cables to connect the opto-isolated MIC CUE lights for each channel to the joystick port. This is very simple on the S2, as the cue lights behave exactly like virtual switches. The outputs on the S2 can be connected directly to buttons 1-4 on a joystick port. Make sure you get the polarity the correct way around, otherwise it’ll leave you scratching your head as to why it only sometimes works. Once you’ve made one for each channel, connect it to your joystick port, open up a test application (HTML5 Gamepad Tester is excellent for this), and knock a fader slightly to see if you have a connection. On the S2, we had to fit jumpers to the mic channels (Jumper 1) so that the cue light was latching and not momentary.

Boom! The ATEM can now see the camera audio levels, and our server can see what mics are live. Now, we need some software to tie it all together. Enter libatem.

We wrote libatem to address the alarming lack of ATEM APIs – there are a few existing ones, but they’re all in low-level languages and designed for Arduinos and other embedded devices. It’s used in a simple Ruby script that combines it with RJoystick, a Ruby library for interfacing with joysticks on Linux. RJoystick only runs on Ruby < 2.2, as it hasn’t been updated in 6 years, so we used 2.0.0 on the server. If using CentOS or RHEL, update your kernel, as only the most recent revision contains the correct drivers.

This is the software we use to tie it all together. It mixes based somewhat on audio levels and dice throws. It also responds in real time to faders opening and closing. Of course, season to taste.

And there you go! Automatic, operator-less vision mixing, using regular video kit, and more kit we had lying around gathering dust.


How-To: Super Scalable Streaming with HLS


We’ve just launched AudioEngine, a free Docker-ecosystem based system for completely managing your stream deployment. This includes HLS and DASH, as we outlined in this tutorial. You can read up on it, and get the code, here.

This tutorial will run you through how to build a crazy scalable streaming platform for your radio station, using HLS. This is a long one, so grab a cup of coffee or a Coke.

First, you’ll need to sign up to Google’s Project Shield. This is totally free for independent (e.g. community) radio stations. You can do that here. If you want to use another provider, that’s fine also.

You also require a Linux (or other UNIX-y) server. Although nginx builds fine on Windows, we haven’t tested it.

Although this works on most platforms, there are some exceptions – notably Internet Explorer on Windows 7 and below. For that, you can probably use Flash as a polyfill. We’re not using HE-AAC here, as it’s not supported in all major browsers. If you want to use it anyway (you’ll break support for Flash/IE, Firefox, and probably Opera – far too many users), add “-profile:a aac_he_v2” to the ffmpeg command later on.

Setting Up the Server

On your Linux server, make sure you have GCC installed.

On CentOS/Fedora/etc., run yum -y install gcc gcc-c++ make zlib-devel pcre-devel openssl-devel git autoconf automake cmake pkgconfig.

On a Debian machine, you can run apt-get -y install build-essential zlib1g-dev libpcre3 libpcre3-dev libbz2-dev libssl-dev tar unzip git autoconf automake cmake pkg-config. If you’re using another flavour of Linux (or UNIX), find the packages above and install ’em.

Next, we need to get and compile nginx with the “nginx-ts-module”. You can also use nginx-rtmp-module, but there’s no point unless you want a full RTMP server.

Move to the tmp directory with cd /tmp

To do this, go to the nginx website and download the latest stable version. For instance, run wget http://nginx.org/download/nginx-1.12.1.tar.gz

Extract it. tar -xf nginx-1.12.1.tar.gz

Next, run git clone https://github.com/arut/nginx-ts-module.git to download nginx-ts-module

Now, cd nginx-1.12.1

Build nginx by running:

./configure \
--user=nginx                          \
--group=nginx                         \
--prefix=/etc/nginx                   \
--sbin-path=/usr/sbin/nginx           \
--conf-path=/etc/nginx/nginx.conf     \
--pid-path=/var/run/nginx.pid         \
--lock-path=/var/run/nginx.lock       \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module        \
--with-http_stub_status_module        \
--with-http_ssl_module                \
--with-pcre                           \
--with-file-aio                       \
--with-http_realip_module             \
--without-http_scgi_module            \
--without-http_uwsgi_module           \
--without-http_fastcgi_module         \
--add-module=/tmp/nginx-ts-module

Then, make && make install

Now, to install it on boot, run these commands:

useradd -r nginx

wget -O /etc/init.d/nginx https://gist.github.com/sairam/5892520/raw/b8195a71e944d46271c8a49f2717f70bcd04bf1a/etc-init.d-nginx

The on-boot commands may differ based on your operating system. Google for “build nginx” plus your distribution’s name and see what it advises there.

Excellent. We’ve built it!

Configuring nginx

In /etc/nginx/nginx.conf, add the following server block:

server {
	listen 80 default;
	root /srv/stream;
	client_max_body_size 0;
	location /stream {
		allow 127.0.0.1;
		deny all;
	
		ts;
		ts_hls path=/srv/stream/hls segment=10s segments=30;
		ts_dash path=/srv/stream/dash segment=10s segments=30;
	}

	location ~ \.(h|mpd|m3u8)$ {
		add_header 'Access-Control-Allow-Origin' '*';
		add_header 'Cache-Control' 'max-age=9';
	}

	location ~ \.(ts|mp4)$ {
		add_header 'Access-Control-Allow-Origin' '*';
		add_header 'Cache-Control' 'max-age=86400';
	}

}

Next, run mkdir /srv/stream /srv/stream/hls /srv/stream/dash /srv/stream/hls/stream /srv/stream/dash/stream to make our web folders for nginx to serve. Then, chown -R nginx: /srv/stream so that nginx can write segments to both the hls and dash directories.

Now, we can start nginx with service nginx start and chkconfig nginx on.

Our HLS stream becomes available at http://host-name/hls/stream/index.m3u8, and DASH at http://host-name/dash/stream/index.mpd.

Adding the Plumbing

We have our origin server configured now. Next, we need to pipe some audio into it.

First, install ffmpeg. We need it built with “libfdk-aac” so we can get a good quality stream (it also supports HE-AACv2, should you want the highest quality at the lowest bitrates). You can follow a guide here for CentOS on how to do so. For Ubuntu/Debian, make sure you have installed autoconf, automake, cmake, and pkg-config (we did this above). You can then follow the CentOS guide, which should be mostly the same.

In /usr/local/run.sh, add the following (replacing the stream.cor bit with your regular stream URL):

#!/bin/bash
# Keep ffmpeg running forever; if the stream drops, restart it after a short pause.
while true; do
    ffmpeg -i https://stream.cor.insanityradio.com/path_to_icecast_mp3 -c:a libfdk_aac -b:a 128k -f mpegts http://127.0.0.1/stream
    sleep 0.1
done

These commands tell ffmpeg to grab a copy of your Icecast stream, encode it with AAC, and send it to the server so that it can repackage it for browsers. If ffmpeg crashes, the loop restarts it after a tenth of a second – the short sleep is intentional, so that a misconfigured system ends up in a slow restart loop rather than choking all available system resources.

If you want even better quality, you can use an RTP stream or something else as the input in the ffmpeg command. The reason we’re not doing that in the example above is that it requires an overhaul of your streaming architecture to use PCM on ingest. If you have a Barix-based STL, you can configure it to send RTP to your streaming server, saving the need for an extra audio interface.

Append to /etc/rc.local the following line to instruct the system to automatically restart our script on boot:

/usr/local/run.sh&

(You could do this using systemd or a SysV init script for production, but this works well enough.)

Excellent. When you run /usr/local/run.sh& or reboot, you should be able to access your streams.
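A quick way to sanity-check the origin before pointing a CDN at it – substitute your own hostname – is to fetch the manifest and probe the stream:

curl -s http://host-name/hls/stream/index.m3u8     # should list several .ts segments
ffprobe http://host-name/hls/stream/index.m3u8     # should report a single AAC audio stream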

Setting up the CDN

CDNs are designed to copy your content so that it can scale. For instance, instead of having one server (our origin) serving 100,000 clients, we can push the content out to lots of “edge” servers. CDN providers make this easy, and it’s far more cost efficient than scaling your own servers.

We’re using Project Shield because it’s totally free for indie media, and harnesses the power of Google’s infrastructure. Other freebies exist, like Cloudflare, but Cloudflare’s terms of service don’t let you use it for multimedia.

The nginx config we used above should allow your CDN to work properly from the outset. It will cache the manifest/index files (which instruct the client which audio segments to fetch) for 10 seconds; after 10 seconds the files will have changed, so we want the edges to update. The media files are cached for a day, so once the CDN grabs a segment it will never need to fetch it again.

Usage & Testing

I’ll post some sample player code on GitHub, but you probably want to use the HLS stream with hls.js. You can test it on this page. Why not DASH? nginx-ts-module, at the time of writing, has tiny gaps between audio segments. These are pretty audible to the average listener, so until that is fixed we might as well continue using HLS.

Once again, the streaming URLs will be http://yourcdnhost.com/hls/stream/index.m3u8 (HLS), and http://yourcdnhost.com/dash/stream/index.mpd (DASH).

Do consider setting up SSL. It requires a couple of tweaks in your nginx configuration, and LetsEncrypt makes it much easier. Google are working on support for LetsEncrypt in Shield – hopefully when you read this it will be much easier to use, instead of having to manually replace SSL/TLS certificates every 60 days.

Going Further

The nginx-rtmp-module, which is similarly easy to install, provides adaptive-bitrate HLS (but not DASH). However, there is a bug in Mobile Safari that adds silent gaps between the segments. The TS module now supports adaptive HLS, and this guide will be updated once we’ve tested it. You can also hack it yourself.