Audio Live Streaming with Icecast2

Icecast2 is our choice for TSRN’s live audio streaming because of the inherent delay of HLS, the protocol that delivers NHL video to our sports fans. Icecast2 uses HTTP at the transport layer and adds a few headers at the application layer, for example “Icy-MetaData: 1”, which requests metadata interleaved with the audio stream.
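As a sketch of how that interleaving works on the wire: when the client sends “Icy-MetaData: 1”, the server answers with an “icy-metaint: N” header, and the body then alternates N audio bytes with one length byte L followed by L*16 bytes of metadata (e.g. “StreamTitle='...';”, zero-padded). The demuxIcy function below is illustrative, not Icecast2 source code:

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// De-interleave an ICY stream buffer, assuming the server's response
// carried "icy-metaint: <metaint>".
struct IcyDemuxed {
    std::vector<unsigned char> audio;   // raw audio bytes
    std::vector<std::string>   titles;  // metadata blocks found in the stream
};

IcyDemuxed demuxIcy(const std::vector<unsigned char>& stream, std::size_t metaint) {
    IcyDemuxed out;
    std::size_t pos = 0;
    while (pos < stream.size()) {
        // Copy up to metaint audio bytes.
        std::size_t audioEnd = std::min(pos + metaint, stream.size());
        out.audio.insert(out.audio.end(),
                         stream.begin() + pos, stream.begin() + audioEnd);
        pos = audioEnd;
        if (pos >= stream.size()) break;
        // Next byte is the metadata length in 16-byte units (0 = no metadata).
        std::size_t metaLen = static_cast<std::size_t>(stream[pos++]) * 16;
        if (metaLen > 0 && pos + metaLen <= stream.size()) {
            std::string meta(stream.begin() + pos, stream.begin() + pos + metaLen);
            meta.erase(meta.find_last_not_of('\0') + 1);  // strip zero padding
            out.titles.push_back(meta);
        }
        pos += metaLen;
    }
    return out;
}
```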

There are other low-latency protocols, but none meets our needs as well: SHOUTcast is proprietary, WebRTC is peer-to-peer and practical only for a small number of listeners, RTMP is tied to the dying Flash ecosystem, and RTP is poorly supported on mobile devices.

The PPA repository we use is


Convert to Bayer Format

Image sample files are everywhere, but files in Bayer format are not easy to find. In fact, I didn’t have one until we ordered a GenICam camera, so I made this tool to convert ordinary images to Bayer-formatted files.

Bayer files (I name them .bay) hold one or more raw Bayer frames; like .yuv files, they contain no metadata. I convert to .bay with the OpenCV SDK. OpenCV has functions to convert from Bayer to other formats, e.g. CV_BayerRG2RGB, but no support for the conversion the other way, so I manipulate the bytes manually, in MatToMatOrBits_MatToBits in gui.cpp in the download link below.
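The idea of the manual byte manipulation can be sketched as sampling each RGB pixel down to the single channel its mosaic position keeps. The RGGB layout below (R at even row/even column, B at odd row/odd column) is an assumption for illustration; the actual pattern in MatToMatOrBits_MatToBits depends on what the camera, and CV_BayerRG2RGB on the round trip, expect:

```cpp
#include <cstddef>
#include <vector>

// Sample an interleaved 8-bit RGB image down to a single-channel Bayer
// mosaic (RGGB layout), i.e. the reverse direction OpenCV does not provide.
std::vector<unsigned char> rgbToBayerRGGB(const std::vector<unsigned char>& rgb,
                                          std::size_t width, std::size_t height) {
    std::vector<unsigned char> bayer(width * height);
    for (std::size_t y = 0; y < height; ++y) {
        for (std::size_t x = 0; x < width; ++x) {
            std::size_t src = (y * width + x) * 3;  // R,G,B triplet for this pixel
            unsigned char v;
            if (y % 2 == 0)
                v = (x % 2 == 0) ? rgb[src]     : rgb[src + 1];  // row: R G R G ...
            else
                v = (x % 2 == 0) ? rgb[src + 1] : rgb[src + 2];  // row: G B G B ...
            bayer[y * width + x] = v;
        }
    }
    return bayer;
}
```

A .bay video in this scheme is then just these mosaics written back to back, one per frame.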

Command line to run the tool, mux.exe, to convert to *.bay:

  1. From .jpg: converts to a .bay picture: mux.exe --FileIn pic.jpg --FileOut pic.bay --Action 2
  2. From .mp4: converts to a .bay video, which simply means multiple frames stored one after another: mux.exe --FileIn video.mp4 --FileOut video.bay --Action 1

Source code and executable:

Runtime DLLs required: Visual Studio 2015 Debug x64, OpenCV 2.4 for VC14

MaxxSports and TSRN are now one

By welcoming TSRN to our family, our video content extends from NHL hockey to baseball, football, and basketball. TSRN’s camera configuration is different, though: our cameras are permanently installed and therefore on all the time, while TSRN’s cameras are set up only during game time. Here are the two integration challenges, followed by the enhancements we plan.

  1. For live streaming, there has to be a mechanism for us to know when the cameras are ready.
  2. For DVR, we used to assume the video from a camera is continuous, but this is no longer true for TSRN; we have to somehow handle gaps in the video.
  3. Video advertising. Meridix was handling ads for TSRN; now it becomes our own responsibility.
  4. Audio. Our audio was not well synchronized with video; for TSRN, this has to be fixed.
  5. Live subtitles to show the score and the commentator’s input. Subtitles for live streams are not as straightforward as for VOD.
  6. The commentator has control to switch cameras so that viewers can watch from different angles.
  7. AI/AR-based advertising. This is twofold:
    1. Find a proper location to place the ad, e.g. a bench on the sideline.
    2. Render a movable 3D ad object, e.g. an airplane pulling a banner above the baseball field.


Subtitle Live Streaming

Demo page:  Mouse over the player and click the CC button to choose a language.
Even though this video is pre-recorded, the components are set up in a way that is well-suited for live subtitles.
With regular WebVTT, the .vtt file name goes in “tracks:”, a property of the player parallel to the “file:” property, and it is loaded during player initialization; therefore it only supports VOD subtitles.
With segmented WebVTT, newly generated .vtt files are appended to the subtitle .m3u8 file, just as new .ts files are appended to the video .m3u8 file. The subtitle .m3u8 is an EXT-X-MEDIA entry in the master playlist, so the subtitle playlist is refreshed along with the video playlist to render live video and subtitles together.
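As a sketch of that playlist wiring (file names, bandwidth, and durations are placeholders, not the actual demo’s values):

```
#EXTM3U
# master.m3u8 -- ties the subtitle rendition to the video
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",LANGUAGE="en",URI="subs_en.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,SUBTITLES="subs"
video.m3u8

#EXTM3U
# subs_en.m3u8 -- keeps growing during the live event, like video.m3u8
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
sub0.vtt
#EXTINF:10.0,
sub1.vtt
```

Because the player re-fetches subs_en.m3u8 on the same cadence as video.m3u8, each newly generated .vtt segment appears under the CC button with no player restart.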