FIFA 2014 World Cup live stream architecture

We were given the task of streaming the 2014 FIFA World Cup and I think this was an experience worth sharing. This is a quick overview of the architecture, the components, the pain points, the lessons learned, the open source pieces, and so on.

The numbers

  • GER 7×1 BRA (yeah, we’re not proud of it)
  • 0.5M simultaneous users @ a single game – ARG x SUI
  • 580Gbps @ a single game – ARG x SUI
  • =~ 1,600 years of video watched @ the whole event

The core overview

The project was to receive an input stream, generate an HLS output stream for hundreds of thousands of simultaneous users, and provide a great experience for end users:

  1. Fetch the RTMP input stream
  2. Generate HLS and send it to Cassandra
  3. Fetch binary and metadata from Cassandra and rebuild the HLS playlists with Nginx+Lua
  4. Serve and cache the live content in a scalable way
  5. Design and implement the player

If you want to understand why we chose HLS, check this presentation (pt-BR only). tip: sometimes you need to rebuild some things from scratch.

The input

The live stream reaches our servers as RTMP, and we were using EvoStream (now we're moving to nginx-rtmp) to receive this input and generate HLS output in a known folder. Then some Python daemons, running on the same machine, watch this folder, parse the m3u8 playlists, and post the data to Cassandra.

To watch for file modifications and be notified of these events, we first tried watchdog, but for some reason we weren't able to make it as fast as we expected, so we switched to pyinotify.
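Here is a minimal sketch of that kind of watcher; the watch directory and the post_to_cassandra helper are illustrative placeholders, not our production daemon:

import pyinotify

WATCH_DIR = "/data/evostream/hls"  # illustrative path to the known HLS output folder
MASK = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_MOVED_TO

def post_to_cassandra(path):
    # placeholder: the real daemon parses the m3u8/chunk and writes it to Cassandra
    print("would push", path, "to Cassandra")

class PlaylistHandler(pyinotify.ProcessEvent):
    def process_default(self, event):
        if event.pathname.endswith((".m3u8", ".ts")):
            post_to_cassandra(event.pathname)

wm = pyinotify.WatchManager()
wm.add_watch(WATCH_DIR, MASK, rec=True)
pyinotify.Notifier(wm, PlaylistHandler()).loop()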

Another challenge we had to overcome was making the Python program scale across CPU cores; we ended up creating multiple Python processes and using asynchronous execution.
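The fan-out ended up roughly with this shape (a sketch with made-up paths; the real worker parses the playlist and posts to Cassandra instead of returning a string):

import multiprocessing

def handle_playlist(path):
    # placeholder for the real work: parse the m3u8 and push its data out
    return "processed %s" % path

if __name__ == "__main__":
    # one worker process per core; apply_async keeps the watcher loop from blocking
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    results = [pool.apply_async(handle_playlist, (p,))
               for p in ("/hls/stream_720p.m3u8", "/hls/stream_480p.m3u8")]
    pool.close()
    pool.join()
    for r in results:
        print(r.get())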

tip: maybe the best language / tool is in another castle.

The storage

We were previously using Redis to store the live stream data, but we decided Cassandra was needed to offer DVR functionality easily (although we still use Redis a lot). At first, Cassandra's response time increased with load to the point where clients started to time out and video playback stopped completely.

We were using it as a queue, which turns out to be an anti-pattern. We then denormalized our data, changed to LeveledCompactionStrategy, and set durable_writes to false, since we could treat our live stream as ephemeral data.

Finally, and most importantly, since we knew the maximum size a playlist could have, we could bound the slice by its start column (filtering with id > minTimeuuid(now – playlist_duration)). This really mitigated the effect of tombstones on reads. After these changes, we were able to achieve latencies in the order of 10ms at the 99th percentile.
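With the Python driver, that bounded read looks roughly like the following; the keyspace, table, column names and the 60-second window are assumptions for illustration (the real schema also used LeveledCompactionStrategy and durable_writes = false):

from datetime import datetime, timedelta
from cassandra.cluster import Cluster  # pip install cassandra-driver

session = Cluster(["127.0.0.1"]).connect("livestream")

# assumption: stream_id is the partition key and id is a timeuuid clustering column
playlist_duration = timedelta(seconds=60)
start = datetime.utcnow() - playlist_duration

rows = session.execute(
    "SELECT id, chunk FROM playlist_entries "
    "WHERE stream_id = %s AND id > minTimeuuid(%s)",
    ("arg_x_sui_720p", start))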

tip: limit your queries + denormalize your data + send instrumentation data to graphite + use SSD.

The output

With all the data and metadata we could build the HLS manifests and serve the video chunks. The only thing we were struggling with was that we didn't want to add an extra server just to fetch data and build the manifests.

Since we had already invested a lot of effort into Nginx+Lua, we thought it would be possible to use Lua to fetch and build the manifests; it was just a matter of building a Lua driver for Cassandra and using it. One good thing about this approach (rebuilding the manifest) was that in the end we realized we were almost ready to serve DASH as well.
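The real code for this lives in Lua inside Nginx, but the manifest-rebuilding idea is roughly the following (sketched here in Python with made-up segment metadata):

def build_media_playlist(segments, target_duration=6):
    # segments: (sequence, duration_in_seconds, uri) tuples, already ordered,
    # as they would come back from the storage query
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:%d" % target_duration,
        "#EXT-X-MEDIA-SEQUENCE:%d" % segments[0][0],
    ]
    for _, duration, uri in segments:
        lines.append("#EXTINF:%.3f," % duration)
        lines.append(uri)
    # a live playlist has no #EXT-X-ENDLIST, so players keep polling for new chunks
    return "\n".join(lines) + "\n"

print(build_media_playlist([(100, 6.0, "chunk_100.ts"), (101, 6.0, "chunk_101.ts")]))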

tip: test your lua scripts + check the lua global vars + double check your caching config

The player

In order to provide a better experience, we chose to build Clappr, an extensible open-source HTML5 video player. With Clappr – and a few custom extensions like PiP (Picture In Picture) and Multi-angle replays – we were able to deliver a great experience to our users.

tip: open source it from day 0 + follow the flow: issue -> commit FIX#123

The sauron

To keep an eye on all these systems, we built a monitoring dashboard using mostly open source projects like Logstash, Elasticsearch, Graphite, Grafana, Kibana, Seyren, Angular, MongoDB, Redis, Rails and many others.

tip: use SSD for graphite and elasticsearch

The bonus round

Although we didn't open source the entire solution, you can check out most of its components:


Discussion / QA @ HN

How to start learning high scalability


When we're interested in scalability, we usually look for links, explanations, books, and references. This mini-article links to the references I think might help you on this journey.

DISCLAIMER:

You don't need N physical machines to build and test a cluster or a highly scalable system; nowadays you can use Vagrant or Docker to spin up N machines easily.
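For instance, with the Docker SDK for Python (just one way to do it; the image and container names here are arbitrary) you can bring up a few disposable "nodes" on a laptop and tear them down again:

import docker  # pip install docker; assumes a local Docker daemon is running

client = docker.from_env()

# spin up three disposable Redis "nodes" to experiment with clustering locally
nodes = [client.containers.run("redis", name="node-%d" % i, detach=True)
         for i in range(3)]

for node in nodes:
    print(node.name, node.short_id)

# tear everything down when you're done
for node in nodes:
    node.remove(force=True)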

THE REFERENCES:

Now that you know you can empower yourself with virtual servers, I challenge you not only to read these links but to put them into practice.

Good questions to test your knowledge:

  • Why scale? How do people usually do it?
  • How do you deal with user sessions kept in RAM across N servers? How does the LB know which server is up? How does the LB know which server to send each request to?
  • Isn't the LB another SPOF? How can we provide failover for the LB?
  • Isn't my OS limited to 64K ports? Is Linux capable of handling that out of the box?
  • How does MongoDB handle failover and high scalability? How about Cassandra? How does Cassandra handle sharding when a new node joins the cluster?
  • What is a cache lock? What caching policies should I use?
  • How can a single domain have multiple IP addresses (e.g. $ host www.google.com)? What is BGP? How can we use DNS or BGP to serve users geographically?

Bonus round: sometimes simple things can achieve your goals, even something like an A/B test.

Please let me know about any mistakes; I'll be happy to fix them.

Testing your lua scripts for nginx is easy

A friend and I were extending Nginx using Lua scripts. Just in case you don't know, to enable Lua scripting in Nginx you can use the Lua nginx module; you can read more about how to push Nginx to its limits with Lua.

Anyway we were in a cycle:

  • Write some Lua code.
  • Restart Nginx.
  • Test it manually.

And this cycle repeated until we had what we wanted. It was time consuming and pretty boring as well.

We then thought we could try writing some unit tests for the scripts, and it was amazingly simple. We created a file called tests.lua and imported the code we were using in the Nginx config.

package.path = package.path .. ";puppet/modules/nginx/functions.lua.erb"
require("functions")

We also created a simple assertion handler which outputs the calling function's name when it fails or passes.

function should(assertive)
  local test_name = debug.getinfo(2, "n").name
  assert(assertive, test_name .. " FAILED!")
  print(test_name .. " OK!")
end

Then we could create a test suite to run.

function it_sorts_hls_playlist_by_bitrate()
  local unsorted_playlist = [[#EXTM3U
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1277952
    stream_1248/playlist.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=356352
    stream_348/playlist.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=485376
    stream_474/playlist.m3u8]]

  local expected_sorted = [[#EXTM3U
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=356352
    stream_348/playlist.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=485376
    stream_474/playlist.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1277952
    stream_1248/playlist.m3u8]]

  should(sort_bitrates(unsorted_playlist) == expected_sorted)
end

I think that helped speed up our cycle a lot. Once again, isolating a component and testing it is a great way to make ourselves more productive.

Testing in python can be fun too!

Humble beginning

I come (lately) from Ruby, Java and Clojure, and currently I'm working on some Python projects where I was missing the way I used to test my code with RSpec. After some quick research, I found three great projects that help make tests more readable: py.test, Mock and sure.

Better assertions

I was missing a better way to write asserts in my tests. Plain assert is good but not enough, and by using sure I could write some awesome-looking code, similar to RSpec.

#instead of plain old assert
assert add(1, 4) == 5
#I'm empowered by
add(1, 4).should.be.equals(5)
[].should.be.empty
['chip8', 'schip8', 'chip16'].shouldnt.be.empty
[1, 2, 3, 4].should.have.length_of(4)

And this makes all the difference; my test code is now more expressive.

Struggling with monkeypatch

The other challenge I was facing was understanding and using monkeypatch for mocks and stubs. I found the Mock library easier to use; its @patch looks similar to monkeypatch, but I could grasp it quickly.

# Stubbing
def test_simple_math_remotely_stubed():
  server = Mock()
  server.computes_add = Mock(return_value=3)

  add_remotely(1, 2, server).should.be.equals(3)

#Mocking
def test_simple_math_remotely_mocked():
  server = Mock()

  add_remotely(1, 2, server)

  server.computes_add.assert_called_once_with(1, 2)

# Stubbing an internal dependency
@patch('cmath.nasa.random')
def test_simple_math_with_random_generated_by_nasa(nasa_random_generator):
  nasa_random_generator.configure_mock(return_value=42)

  add_and_sum_with_rnd(3, 9).should.be.equals(54)

#Mocking an internal dependency
@patch('cmath.mailer.send')
def test_simple_math_that_sends_email(mailer_mock):
  add_and_sends_email(3, 9)

  mailer_mock.assert_called_once_with(
          to='master@math.com',
          subject='Complex addition',
          body='The result was 12')

Make sure you

  1. Are using virtualenv for better library version management
  2. Have installed pytest, sure and mock
  3. Have git cloned the project above to understand it

Redundancy and failover on your life

Very simplified introduction: failover is the ability to keep using a service or device in case it fails, and you usually achieve that by having redundancy, i.e. more than one service or device at a time. A silly example: when the power goes out in your house, you can handle it (failover) by having and using a flashlight (redundancy) as a backup light system.

In my life I face similar problems all the time, and I think they are worth sharing. I'll start with a very basic (but nowadays very necessary) service: the Internet. Suppose we're not at home, or we're travelling, or our beloved ISP is down; I deal with that by having an extra 3G modem and a Kindle with free worldwide 3G.

I travel quite often, and sometimes I face issues with power outlets. My notebook is meant for the Brazilian outlet pattern, but when I go to the US I need an outlet adapter. It's not a very precise failover mechanism, but travelling with a universal outlet adapter can save you some pain.

The main way I have fun is playing games; in case my console breaks or the power goes out, I have a portable. Again, it's not exactly an accurate failover mechanism, but for my purposes it works.

Another area is TV: suppose my TV were stolen, I could keep watching by using my USB TV tuner or even the TV on my GPS.

Moving to the digital world, I could tell you endless stories about ways to have failover. The most obvious is having your files on your computer while also keeping them in cloud storage. I love this about buying digitally: I buy my games digitally (Steam, eShop, PSN), and even if I change computers I don't need dirty old DVDs to recover them; they are all associated with my account.

These last two are the best, IMHO: make digital copies of all your documents (this is easy today, since any smartphone can take pictures) and try to keep them in the cloud (email, storage…); it has saved me a lot of time. Also, keep at least two phone numbers for each service you rely on (food delivery, cab, hospital, etc.), because sometimes you won't have easy access to that info.

And you, what do you do to have failover in your life?

Clojure session #01

Since I haven't been posting anything here lately, I've decided to write down some thoughts and examples on a given subject; this time it's Clojure.

Multiple arities

(defn sum
  "sum given numbers"
  ([] 0)
  ([x] (+ x))
  ([x y] (+ x y)))

Variable number of arguments

(defn sum [& all] (reduce + all))

Local bindings

This can help you bind some values and make your code more readable

(let [x 2] (+ x 2)) ; x only exists within let context

Doing side effects (you can think of this like blocks)

(do
  (println "hey")
  (def a (read))
  (println " you typed " a))

Closures in clojure

(defn validnumber? [max] #(if (> % max) false true))
; or (defn validnumber? [max] (fn [number] (if (> number max) false true)))

(def below100? (validnumber? 100))

(below100? 9) ; true
(below100? 200) ; false

Currying

(def plus-one (partial + 1))
(plus-one 2)

Compose functions

(def composed (comp #(+ 2 %) +))
(composed 1 2)

Thrush operators ( -> and ->> )

-> threads the value as the first argument of each following form
->> threads the value as the last argument of each following form

(-> 4 (- 2) (- 2))
;expands to
(- (- 4 2) 2) => (- 2 2) => 0

(->> 4 (- 2) (- 2))
;expands to
(- 2 (- 2 4)) => (- 2 -2) => 4

Another good use of ->

(def person {:name "name" :age 120 :location { :country "US" } })
;instead of (:country (:location person)) a more readable way could be
(-> person :location :country)

PS: largely based on the good book Practical Clojure

Will we only create and use dynamic languages in the future?

Since I've been playing with some dynamic languages (Ruby and Clojure), I have been wondering why anybody would create a new statically typed language, and I haven't found an answer.

I started programming in Visual Basic and I tasted its roots, which are almost entirely procedural commands (a bunch of do, goto and end). Then I moved to C# which, sharper, trades the end's for }'s and gives us a little more power based on one premise: we can treat two different things in the same way (polymorphism). The last static language, but not the least, that I used (and still use) is Java, abusing its way of treating a set of things equally, the interfaces, and using its "powers" of reflection.

When I started to use Ruby, though, I saw that I could treat a group of things equally without doing any extra work. I still need to code models and composite types, even though we can create or change them dynamically using the "real power" of metaprogramming.

When I started to study and apply Clojure and its principles, my first reaction was rejection: how can I go on without my formal objects, how can I design software without a model in my head, and so on. But I wasn't thinking about how I actually build software: nowadays I use TDD to design it, and I don't think about which models I need to have; I think in terms of "what I want". At a minimum, Clojure made me ask: do we really need objects to design software? A few days ago I saw an amazing video along similar lines: Some thoughts on Ruby after 18 months of Clojure.

Summarizing, with my limited knowledge of these languages: suppose we use a function (whose source code we don't have) and we want to do something before that function is executed (intercept it). In VB I'd need to check every single piece of code that calls this function and make it call another one first; in Java we can use an AOP framework; in Ruby we can use the spells of metaprogramming. It seems that some frameworks, patterns and extra work are no longer needed, thanks to this dynamic-language evolution.
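To make the interception point concrete in another dynamic language (Python, which appears elsewhere on this blog), here is a small sketch where math.sqrt stands in for the function whose source we can't touch:

import math

_original_sqrt = math.sqrt

def sqrt_with_hook(x):
    print("about to call sqrt(%r)" % x)  # the "do something before" step
    return _original_sqrt(x)

math.sqrt = sqrt_with_hook  # every caller that goes through math.sqrt is now intercepted
print(math.sqrt(16.0))      # prints the hook message, then 4.0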

My conclusion from using dynamic languages (Clojure/Ruby) so far is: I write less code and reuse it more easily, so I don't see any reason to create or use a new statically typed language. Do you see any motivation to do that?

PS: When I used C# (.NET Framework 1.3 – 2.0) it was not as super cool as it is today.