
Audio/Video transcoding server

I’m pleased to announce Nullx – our standalone audio/video/subtitles transcoding server. It accepts files to be transcoded via HTTP and either returns the transcoded files in the reply or optionally uploads the files (and their metadata) into Elliptics storage.

So far it is quite a simple service: it does not change stream parameters like height/width or bitrate, but instead transcodes streams into h264/aac/mov_text, the only format suitable for HLS adaptive streaming. Since we plan to add real-time downscaling for adaptive streaming, the service will be extended with per-request HTTP controls that tell it exactly how a given stream should be transcoded; so far I believe only the h264 profile and height/width are needed for video streams, and bitrate for audio streams.
That will be our next release.
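As a rough sketch, such per-request controls could be passed as plain query parameters on the upload request; the endpoint and parameter names below are hypothetical, not a final API:

```
POST /transcode?vprofile=baseline&width=1280&height=720&abitrate=128000 HTTP/1.1
Host: transcoding.example.com
Content-Type: application/octet-stream

<media file body>
```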

Nullx – the transcoding server – is used in our broadcast service, which allows you to group audio/video streams uploaded into Elliptics storage, mux them together with different options (order, timings, splitting at various time positions, sound from a different source and so on) and adaptively stream the resulting data using the MPEG-DASH/HLS protocols (natively supported by Chrome/IE/Firefox and by Safari on desktop and mobile).

Ever thought of working with ffmpeg?

Transcoding an audio track into mp4 (aac) is just about 30kb of hardcore <a href="https://github.com/bioothod/nullx/commit/2c378e8644176fde220bb6689ceed4f38d4832a1">C code</a>.

That’s a small highlight of our progress on the streaming service: we are building a system which accepts user content and allows you to create long-lived broadcasts containing many tracks in one stream.

For example, your stream may start with the file ‘introduction’, then ‘adv1’, then part of the file ‘main content’, then ‘adv2’, more of ‘main content’, the ‘final part’ and so on.
The only thing you need to do is upload your audio/video tracks to our service and create your stream using our interface. If you prefer, you can set up a different audio track for your stream.
We will use adaptive HLS/DASH streaming for different client bandwidths.
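For reference, adaptive HLS streaming across client bandwidths is driven by a master playlist that lists one variant stream per bitrate; the player picks and switches between them. A minimal example (the URIs, bitrates and codec strings here are illustrative, not our actual output):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,CODECS="avc1.42e01e,mp4a.40.2"
low/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
mid/stream.m3u8
```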

We do not concatenate your video files together; instead we use real-time stream composition on our streaming servers, which builds the video content directly out of the files you uploaded.

Here is an initial presentation (MPEG-DASH) which muxes 2 video streams (5 seconds each, sample after sample) and 2 completely different audio streams: http://video.reverbrain.com/index.html