AutoMix is a proof-of-concept software vision mixer built on the libatem library.

It works by capturing joystick input events (from a mixer's mic cue lights) and audio levels from the microphones, then guessing who is talking and cutting to that mic, as well as algorithmically inserting reaction shots and wide angles.
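The cutting decision above can be sketched roughly as follows. This is a minimal illustration, not AutoMix's actual code: the function name `pick_shot`, the level/threshold values, and the `"wide"` camera name are all assumptions for the example.

```python
import random

def pick_shot(levels, current, threshold=0.1, wide_chance=0.15, rng=random.random):
    """Choose which camera to cut to from recent mic levels (hypothetical sketch).

    levels:  dict mapping camera/mic name -> smoothed audio level (0..1)
    current: the camera currently on air
    """
    loudest = max(levels, key=levels.get)
    if loudest == current:
        return current
    # Hysteresis: only cut away when the loudest mic clearly beats
    # the one currently on air, to avoid rapid back-and-forth cuts.
    if levels[loudest] - levels.get(current, 0.0) < threshold:
        return current
    # Occasionally insert a wide shot instead of another close-up.
    if rng() < wide_chance:
        return "wide"
    return loudest
```

In practice the chosen shot would then be sent to the ATEM switcher (via libatem), and the cue-light joystick events would feed into the per-mic levels.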

View on GitHub

A write-up on how we actually use it is available here.

If you want to help us improve the software, please do. We’d really like to make it more accessible to stations who don’t have software engineers on-site.