GStreamer Conference 2019
As in the previous year, this year's GStreamer Conference took place in the same city as the ELC-E. Therefore, Michael Olbrich and Michael Tretter of the Pengutronix Graphics Team took the opportunity of already being in Lyon to also attend this conference.
While there were a lot of interesting talks, I want to point out a few notable ones.
GStreamer State of the Union
As last year, the conference was opened by Tim-Philipp Müller's State of the Union talk about recent developments in the GStreamer universe.
The most prominent changes of the past year were the move from Bugzilla to GitLab, including a major rework of the CI infrastructure; the move from autotools to Meson, which resulted in various improvements to the Meson build system; and the move from gtk-doc to hotdoc for generating the documentation.
Apart from that, Tim quickly went over many other changes in the plugins.
Alicia Boya García presented the validateflow plugin, which allows recording the output of gst-validate-1.0 pipelines on their first run and comparing subsequent runs against the recording. This relieves test authors of the burden of manually coding the expected test output: they can simply verify the output of the first run and compare subsequent runs against that golden run.
The test author writes the gst-validate pipelines as usual and attaches the validateflow plugin to a pad in the pipeline.
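In practice, validateflow is attached through a GstValidate configuration file rather than in the pipeline description itself. The following is a minimal sketch, assuming a pipeline that ends in a fakesink named fakesink0; the exact parameter names (pad, record-buffers) are to the best of my knowledge and should be checked against the gst-validate documentation:

```
# Hypothetical validateflow.config: record the buffers and events
# flowing into the sink pad of the element named fakesink0.
validateflow, pad=fakesink0:sink, record-buffers=true
```

The pipeline is then run under this configuration, for example with `GST_VALIDATE_CONFIG=validateflow.config gst-validate-1.0 filesrc location=test.mp4 ! decodebin ! fakesink name=fakesink0` (the media file is a placeholder). The first run records the expectation file; later runs fail if their output deviates from it.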
HDR: Seeing the World As It Is
Another highlight was Edward Hervey's talk on HDR video. After explaining the basics of how colors are represented in computer systems and why the presentable colors cover only a small part of what the human eye can actually perceive, he explained how the situation changes completely with the BT.2020/BT.2100 standards, Wide Color Gamut, and High Dynamic Range.
However, the technology has not yet reached everyday practice because of hardware limitations and incompatibilities. For example, due to physical limits, displays cannot show everything that can be represented in the standard and instead try to show as much as they can.
He concluded that supporting HDR in software is mostly about correctly conveying the colorspace in use, but as HDR is not universally supported yet, there are a lot of ugly details that must be handled.
Wim Taymans gave an update on the recent development of the PipeWire daemon, especially from a user's perspective. He showed the interaction with JACK applications and PulseAudio and presented features that he wants to implement, such as transports for controls, MIDI, and video processing with Vulkan.
At Pengutronix, we are using PipeWire for sharing video between client applications and for streaming Weston output over the network. Thus, we are looking forward to the further development of PipeWire.
20 Years of GStreamer
Wim Taymans also started Friday with a history lesson about the last 20 years of GStreamer.
He told the story of GStreamer: how people founded companies around GStreamer or stopped working on it altogether, and how GNOME and Nokia funded GStreamer and applied pressure to stabilize it and release versions. He backed up his claims with e-mails from significant events in GStreamer's history. The highlight of the talk was a dump of the GStreamer pipeline that was used in the LIGO project.
As the cherry on top, the talk ended with a Skype call with Erik Walthinsen, the founder of GStreamer.
Which Network Streaming Protocol Should I Pick?
If you want to know more about the entire zoo of streaming protocols, you should look at Olivier Crête's talk.
He presented every streaming protocol supported in GStreamer and explained its use cases and origins. If you ever wondered about things like SDI, RTP, SRT, RIST, SIP, WebRTC, HLS, RTSP, and others, this talk will enlighten you.
Enabling a Different Piece of Hardware, a Story
Guillaume Desmottes presented Collabora's work on the Xilinx Zynq UltraScale+, which is able to transcode multiple 4k streams at 60 frames per second in parallel.
They are using Xilinx's downstream driver, which exposes the pretty much dead OpenMAX API to GStreamer. On top of this interface, they built the infrastructure for sub-frame latency using slices, alternate interlacing, pushing packets before encoding is finished, and zero-copy dmabufs.
It would be interesting to see these features on the mainline Allegro V4L2 encoder driver as well.
Home Automation with GStreamer
Jan Schmidt closed the conference by presenting the ongoing work on the home automation system that he has been working on for several years now. While he already has multi-room audio in place, he also wants to control the system by voice. In particular, the system should be able to locate the speaker in the house to properly react to commands.
While his previous experiments did not work out, microphone arrays are readily available now. He showed the ODAS tool for locating the speaker and how he uses libquiet to calibrate the positions of the microphone arrays relative to each other.
The major topic in the GStreamer universe was fundamental changes to the infrastructure. The move from Bugzilla to GitLab and the new GitLab build and test infrastructure caused a few ruptures, but overall most people seem pretty content with the move. While the change was pretty straightforward for Linux, making the Windows builds work was rather complicated, which showed up in a few talks at this year's conference.
The attendance at the conference indicates that GStreamer has truly arrived in the professional industry. The various technologies in use, e.g. SRT (Secure Reliable Transport), SDI (Serial Digital Interface), and standards-conforming MPEG-TS streams, indicate that the classic broadcast industry is switching to GStreamer and preparing it for production use. Furthermore, the talks about PipeWire in automotive and about using Audio Video Bridging in automotive show that the automotive industry, too, is looking into GStreamer and related technologies for its use cases.