Transcoding GeniCam to HLS


Goal: GeniCam cameras output raw video in Bayer format; we need H.264 delivered over HLS.
Hardware setup at MaxxSports:
The camera connects to a low-end Windows laptop over a 3-foot USB3 cable. The laptop is not capable of encoding 1080p video, so it sends the raw video to a Linux server over an optical fiber LAN.

The general flow:
An in-house Windows OpenCV application, mux.exe, reads and processes the raw video from the camera and pipes it to ffmpeg.
ffmpeg sends the raw video to ffserver over HTTP.
ffserver encodes it to H.264 and exposes an RTSP endpoint.
ffmpeg re-packages the RTSP stream into HLS and exposes it as an HTTP URL.

Sample commands are listed below:
Raw video processing: mux.exe
Send raw video to ffserver:
mux.exe | ffmpeg.exe -re -f rawvideo -vcodec rawvideo -s 1920x1080 -r 30 -pix_fmt bayer_rggb8 -i pipe:0 -g 1 http://awsq.MaxxSports.cc:555/feed.ffm
ffserver.conf parameters (abridged; the File directive sits inside the <Feed> block):
HttpPort 555
RtspPort 554
<Feed feed.ffm>
File /mnt/ebs/ffbuf/feed.ffm
</Feed>
<Stream bvsc.mp4>
Feed feed.ffm
Format rtp
VideoCodec libx264
</Stream>
Re-package from RTSP to HLS:
ffmpeg -i rtsp://awsq.MaxxSports.cc:554/bvsc.mp4 -f hls -hls_segment_filename /mnt/ebs/strm_%05d.ts -hls_time 6 -hls_list_size 12000 -hls_flags delete_segments -start_number 0 /mnt/ebs/strm.m3u8

Final HLS output URL:  http://awsq.MaxxSports.cc/strm.m3u8


Server Push


There are two kinds of server push.

1. A video server keeps pushing JPEG image frames as a way for the browser to play video.

Also known as M-JPEG over HTTP, or a “server push stream”. The response’s content type is multipart/x-mixed-replace;boundary=boundary-name, and the images are separated by boundary-name.

It is really still a pull rather than a push, because the images are sent as one resource in response to a single GET. The URL looks like this: http://QeyeCamera.com/now.jpg?snap=spush
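
A sketch of what such a response looks like on the wire (the boundary name and the lengths are made up):

HTTP/1.1 200 OK
Content-Type: multipart/x-mixed-replace; boundary=myframe

--myframe
Content-Type: image/jpeg
Content-Length: 24310

...JPEG bytes of frame 1...
--myframe
Content-Type: image/jpeg
Content-Length: 24522

...JPEG bytes of frame 2...
--myframe

The connection stays open, and the browser replaces the displayed image each time a new part arrives.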

2. The other kind of server push is truly a push. For example, the browser requests 1.htm, but the server returns 2.htm, which the browser did not request, in addition to 1.htm. This is part of the HTTP/2 standard.
Nginx started supporting it in version 1.13.9.
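
A minimal nginx sketch of this, reusing the 1.htm/2.htm names from the example above (http2_push is the directive added in 1.13.9; TLS configuration is omitted):

server {
    listen 443 ssl http2;
    server_name example.com;      # hypothetical host

    location = /1.htm {
        # In addition to answering the request for 1.htm, push 2.htm to the client.
        http2_push /2.htm;
    }
}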

Start code in PES


There are two types of start codes preceding each NALU in an H.264 PES: a three-byte and a four-byte version; I call them 001 and 0001 for short.
This has confused me for years, and I am still not clear on when to use one over the other.
Some say 0001 is used only before the first NALU in a coded video sequence. However, based on my tests, most NALUs are preceded by 0001, the four-byte version, regardless of the NALU's type and location.
I examined three .h264 files: Party.h264 from my IP camera, Gym2017bshort.h264 from my Nexus phone camera, and BengSi.h264 from a Flash file.
In all three, the vast majority of NALUs are preceded by 0001. Very few are preceded by 001; for example, the SEI and the first I-slice start with 001.
Sample files:
http://riowing.net/media/Party.h264
http://riowing.net/media/Gym2017bshort.h264
http://riowing.net/media/BengSi.h264
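
A minimal sketch of how the two kinds of start codes can be counted in an Annex B file like the samples above (this script is only an illustration, not the exact tool behind the observations here):

#!/usr/bin/env python3
# Count 3-byte (00 00 01) vs 4-byte (00 00 00 01) start codes in an Annex B .h264 file.
# Usage: python3 count_start_codes.py Party.h264   (the script name is hypothetical)
import sys

def count_start_codes(path):
    data = open(path, "rb").read()
    three = four = 0
    i = 0
    while i + 3 <= len(data):
        # Look for the 00 00 01 prefix; a zero byte right before it makes it the 4-byte form.
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            if i > 0 and data[i - 1] == 0:
                four += 1
            else:
                three += 1
            i += 3
        else:
            i += 1
    return three, four

if __name__ == "__main__":
    three, four = count_start_codes(sys.argv[1])
    print("001  (3-byte):", three)
    print("0001 (4-byte):", four)

Emulation prevention (00 00 03) keeps the 00 00 01 pattern out of NALU payloads, so this simple scan only ever hits real start codes.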


Subtitle Options


There are five places where subtitles can live, listed from the innermost layer outward.
1. In the VCL (Video Coding Layer), burned into the video frames themselves, known as hardsubs, e.g. with the ffmpeg drawtext filter (a minimal sketch follows after this list).
2. In SEI NAL units in the elementary stream (NAL: Network Abstraction Layer), e.g. CEA-708 captions. ffmpeg cannot generate this kind.
3. In a subtitle track inside a container file, e.g. mp4:
ffmpeg -i Party.mp4 -i cap.srt -c:v copy -c:a copy -c:s mov_text -metadata:s:s:0 language=eng PartyCap.mp4
Here is a sample video, playable in VLC by clicking Menu – Subtitle – Sub Track:
[ http://riowing.net/local/subtitle/track/PartyCap.mp4 ]
4. WebVTT: subtitles in a separate file, segmented or not, for both VOD and live (a minimal sketch also follows after this list).
This sample page uses segmented VTT: http://riowing.net/local/subtitle/caption.htm
5. Subtitles as a layer in the web page, outside the video player, e.g. outside JW Player.
This is no longer video technology but HTML.
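
For option 1, a minimal hardsub sketch with drawtext (the caption text and the output name PartyHard.mp4 are made up; depending on the ffmpeg build, a fontfile= parameter may also be needed):
ffmpeg -i Party.mp4 -vf "drawtext=text='Sample caption':fontcolor=white:fontsize=36:x=(w-text_w)/2:y=h-text_h-20" -c:a copy PartyHard.mp4
Because the text is drawn into the pixels, the video is re-encoded; only the audio is copied.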
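
For option 4, a minimal non-segmented sketch, with a made-up cue in cap.vtt and the HTML5 track element that attaches it to the browser's built-in player:

cap.vtt:
WEBVTT

00:00:01.000 --> 00:00:04.000
Hello from the first cue

In the page:
<video src="Party.mp4" controls>
  <track kind="subtitles" src="cap.vtt" srclang="en" label="English" default>
</video>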

 
