This post was inspired by Ilya Grigorik and his amazing efforts to promote performance knowledge at almost every level of the stack (application, network, and so on). But before we start to explore these topics, let’s review the “golden rules” of high-performance web sites. (some of them will be better off with HTTP/2 :) )
- Make Fewer HTTP Requests
- Use a Content Delivery Network
- Add an Expires Header
- Gzip Components
- Put Stylesheets at the Top
- Put Scripts at the Bottom
- Avoid CSS Expressions
- Reduce DNS Lookups
- Avoid Redirects
- Remove Duplicate Scripts
- Configure ETags
- Make AJAX Cacheable
This is a series of articles about modern network performance:
- HTTP 1.x / HTTP/2
- TCP / QUIC (UDP)
- IP / IPv6
If you’re lazy about reading, watch this short video on how browsers work.
It’s crucial to understand how browsers work so you can optimize your page to load fast; believe me, speed is a feature. Suppose your browser is getting a response from example.com. It receives a stream of bytes, converts them to characters (following the adopted encoding), parses the characters into tokens, and finally builds the nodes which constitute the DOM. A picture is worth a thousand words. A similar process happens to build the CSSOM.

But we’re not done yet: a page usually requires dozens of external resources (mostly images, JS and CSS), and some of these resources are render blocking. For example, take a simple page with CSS and JS as external resources. The browser first gets the HTML and builds the DOM; then it finds that it needs to download the CSS and JS. After these files are downloaded, it needs to build the CSSOM, run the JS, and rebuild the DOM; only after all these steps will the browser render the page. But the same page using non-blocking CSS (media type/query) and JS (the async attribute) will render quicker, because the steps between the first download (the HTML) and render are reduced: the page is rendered right after the first DOM build.
A video (from Umar Hansa) that summarizes this
- All the great images above were stolen from Google’s web fundamentals.
- HTML and CSS are render blocking.
- For CSS you can specify media types and media queries to avoid render blocking.
- Avoid CSS @import
- Inline critical, render-blocking CSS
- That’s all folks
```html
<!-- this will block (you still can inline it) -->
<link href="style.css" rel="stylesheet">

<!-- this will block -->
<script src="app.js"></script>

<!-- this won't block -->
<link href="style.css" rel="stylesheet" media="print">

<!-- these won't block -->
<script src="user.js" async></script>
<script src="vendor.js" async></script>
```
We were given the task of streaming the 2014 FIFA World Cup and I think this was an experience worth sharing. This is a quick overview of the architecture, the components, the pain, the lessons learned, the open source, and so on.
- GER 7×1 BRA (yeah, we’re not proud of it)
- 0.5M simultaneous users @ a single game – ARG x SUI
- 580Gbps @ a single game – ARG x SUI
- ~1600 watched years @ the whole event
The core overview
The project was to receive an input stream, generate an HLS output stream for hundreds of thousands of simultaneous users, and provide a great experience for end users:
- Fetch the RTMP input stream
- Generate HLS and send it to Cassandra
- Fetch binary and meta data from Cassandra and rebuild the HLS playlists with Nginx+lua
- Serve and cache the live content in a scalable way
- Design and implement the player
If you want to understand why we chose HLS, check this presentation (pt-BR only). tip: sometimes we need to rebuild some things from scratch.
The live stream arrives at our servers as RTMP, and we were using EvoStream (now we’re moving to nginx-rtmp) to receive this input and generate HLS output to a known folder. Then we have some Python daemons, running on the same machine, watching this folder, parsing the m3u8 files and posting the data to Cassandra.
To watch for file modifications and be notified of these events, we first tried watchdog, but for some reason we weren’t able to make it work as fast as we expected, so we switched to pyinotify.
Another challenge we had to overcome was making the Python program scale across CPU cores; we ended up creating multiple Python processes and using async execution.
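The multi-process idea can be sketched in a few lines. This is a minimal illustration, not our actual daemon: the function name, the playlist strings, and the use of `multiprocessing.Pool` are all illustrative.

```python
# Illustrative sketch: fan CPU-bound m3u8 parsing out across cores with
# a process pool, sidestepping the GIL. Names and data are hypothetical.
from multiprocessing import Pool

def parse_m3u8(text):
    """Extract (duration, uri) pairs from an HLS media playlist."""
    segments, duration = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:10.0," -> 10.0
            duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line and not line.startswith("#"):
            segments.append((duration, line))
    return segments

playlists = [
    "#EXTM3U\n#EXTINF:10.0,\nseg0.ts\n#EXTINF:10.0,\nseg1.ts",
    "#EXTM3U\n#EXTINF:4.0,\nchunk0.ts",
]

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per CPU core by default
        results = pool.map(parse_m3u8, playlists)
    print(results[0])  # [(10.0, 'seg0.ts'), (10.0, 'seg1.ts')]
```

A pool of processes (rather than threads) is what actually buys parallelism here, since parsing is CPU-bound.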
tip: maybe the best language / tool is in another castle.
We were previously using Redis to store the live stream data, but we thought Cassandra was needed to offer DVR functionality easily (although we still use Redis a lot). Cassandra’s response time was increasing with load, to the point where clients started to time out and video playback completely stopped.
We were using it as a queue, which turns out to be an anti-pattern. We then denormalized our data, changed to LeveledCompactionStrategy, and set durable_writes to false, since we could treat our live stream as ephemeral data.
Finally, and most importantly, since we knew the maximum size a playlist could have, we could specify the start column (filtering with id > minTimeuuid(now – playlist_duration)). This really mitigated the effect of tombstones on reads. After these changes, we were able to achieve latency on the order of 10ms at the 99th percentile.
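The shape of that bounded query can be sketched as follows. Table and column names here are hypothetical; the point is that the lower bound computed from the playlist duration keeps the read from scanning expired (tombstoned) columns.

```python
import time

# Illustrative constant: seconds of live window we keep around.
PLAYLIST_DURATION = 60

def bounded_query(now=None):
    """Build a CQL string that only scans the live window.
    Table/column names (hls_chunks, stream, id) are hypothetical."""
    now = time.time() if now is None else now
    start_ms = int((now - PLAYLIST_DURATION) * 1000)
    return (
        "SELECT id, chunk FROM hls_chunks "
        "WHERE stream = ? AND id > minTimeuuid(%d) LIMIT 200" % start_ms
    )
```

Bounding both the start column and the row count means expired data is simply never visited, instead of being read and discarded.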
tip: limit your queries + denormalize your data + send instrumentation data to graphite + use SSD.
With all the data and metadata we could build the HLS manifests and serve the video chunks. The only thing we were struggling with was that we didn’t want to add an extra server to fetch and build the manifests.
Since we had already invested a lot of effort in Nginx+Lua, we thought it would be possible to use Lua to fetch and build the manifest. It was a matter of building a Lua driver for Cassandra and using it. One good thing about this approach (rebuilding the manifest) was that in the end we realized we were almost ready to serve DASH.
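The real rebuilding happened inside Nginx in Lua, but the logic itself is simple enough to sketch in Python. Row layout and tag choices below are illustrative, not the production format.

```python
def build_media_playlist(rows, target_duration):
    """Rebuild a live HLS media playlist from stored rows of
    (sequence_number, duration, uri). Format is simplified and the
    row shape is hypothetical."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:%d" % target_duration,
        "#EXT-X-MEDIA-SEQUENCE:%d" % rows[0][0],
    ]
    for _, duration, uri in rows:
        lines.append("#EXTINF:%.1f," % duration)
        lines.append(uri)
    return "\n".join(lines) + "\n"
```

Because the manifest is just text regenerated from metadata on every request, swapping the output format (e.g. for DASH's MPD) mostly means swapping this template layer.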
In order to provide a better experience, we chose to build Clappr, an extensible open-source HTML5 video player. With Clappr – and a few custom extensions like PiP (Picture In Picture) and Multi-angle replays – we were able to deliver a great experience to our users.
tip: open source it from day 0 + follow the flow: issue -> commit FIX #123
To keep an eye on all these systems, we built a monitoring dashboard using mostly open source projects like Logstash, Elasticsearch, Graphite, Grafana, Kibana, Seyren, Angular, Mongo, Redis, Rails and many others.
tip: use SSD for graphite and elasticsearch
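Sending instrumentation data to Graphite (as the tip above suggests) is about as simple as monitoring gets: one line of text per datapoint over TCP. A minimal sketch, with a hypothetical host name:

```python
import socket
import time

def graphite_line(metric, value, ts=None):
    """Format one datapoint for Graphite's plaintext protocol:
    '<metric.path> <value> <unix_timestamp>\\n'."""
    ts = int(time.time()) if ts is None else int(ts)
    return "%s %s %d\n" % (metric, value, ts)

def send_metric(metric, value, host="graphite.internal", port=2003):
    """Ship a datapoint; host is a placeholder, 2003 is Graphite's
    default plaintext listener port."""
    with socket.create_connection((host, port), timeout=1) as s:
        s.sendall(graphite_line(metric, value).encode())
```

Because the protocol is just text, anything that can open a socket (a daemon, a cron job, an Nginx Lua handler) can feed the dashboards.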
The bonus round
Although we didn’t open source the entire solution, you can check out most of its components:
- Python HLS manifest parser / utility
- Pure lua Cassandra client driver
- Live thumbnail
- Audio only from HLS nginx module
- Python lock (redis)
- P2P and HTTP live streaming
When we’re interested in scalability, we usually look for links, explanations, books, and references. This mini-article links to the references I think might help you on this journey.
Now that you know you can empower yourself with virtual servers, I challenge you to not only read these links but put them into practice.
- First of all, motivate yourself by watching this tutorial using Node.js + Nginx + static caching + load balancing + testing, all in 7 minutes.
- Add these words and their meaning to your vocabulary: scalability, failover, single point of failure (SPOF), sharding, replication and load balancing; even if you don’t understand them completely.
- In order to have a general overview and the reasons/whys about scalable systems, I strongly recommend you to read Scalable Web Architecture and Distributed Systems. This is a great introduction.
- After you get the general idea, you can move on to understanding how to use a load balancer and what decisions and problems you will face. Then you can try running HAProxy and making it not a single point of failure either.
- Challenge yourself to serve 3 million requests per second, but for this task you’ll need to generate 3 million requests, fine-tune your web server, and finally scale and test it.
- Your application is now scalable; next you need to scale your databases. They are a very important part of your application. Here I recommend you read at least how MongoDB scales with sharding and replication, and how Cassandra offers almost linear scalability and the ease of adding nodes to the cluster.
- Since your application and database are scalable and fault tolerant, it’s good to spare your servers unnecessary workload and also make responses to the user faster. Learn that the best request is the one that never reaches the “real server”.
- Let’s assume we’re deploying the whole infrastructure in a single data center; now we have another SPOF. Since all servers are in the same place, a natural disaster might strike, or even a simple power outage. The good news is that Cassandra supports multiple data centers out of the box, and you can see how Google faces this issue. If your user is in Brazil, don’t make their requests travel farther than they need to, and remember that even in the best case we still have latency.
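Sharding, one of the vocabulary words above, is worth seeing in code. Consistent hashing is the technique Cassandra (among others) builds on to spread keys across nodes so that adding a node only remaps a small fraction of them. A toy sketch, not any database's real implementation:

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: each node gets many virtual points on
    the ring, and a key maps to the first node point clockwise of its
    own hash. Adding a node remaps only ~1/N of the keys."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash("%s:%d" % (node, i)), node))
        self.ring.sort()
        self._hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]
```

Compare this with naive `hash(key) % N` sharding, where changing N remaps almost every key at once, a disaster for a cache or a live cluster.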
Good questions to test your knowledge:
- Why scale? How do people usually do it?
- How do you deal with user sessions held in RAM across N servers? How does the LB know which server is up? How does the LB decide which server to send a request to?
- Isn’t the LB another SPOF? How can we provide failover for the LB?
- Isn’t my OS limited to 64K ports? Is Linux capable of handling that out of the box?
- How does MongoDB solve failover and high scalability? How about Cassandra? How does Cassandra handle sharding when a new node joins the cluster?
- What is a cache lock? What caching policies should I use?
- How can a single domain have multiple IP addresses (e.g. $ host www.google.com)? What is BGP? How can we use DNS or BGP to serve users geographically?
Bonus round: sometimes simple things can achieve your goals, even running an A/B test.
Please let me know of any mistakes; I’ll be happy to fix them.
A friend and I were extending Nginx using Lua scripts. In case you don’t know, to enable Lua scripting in Nginx you can use a Lua module; you can read more about how to push Nginx to its limits with Lua.
Anyway, we were stuck in a cycle:
- Write some Lua code.
- Restart nginx.
- Test it manually.
And this cycle repeated until we had what we wanted. It was time consuming and pretty boring as well.
We then thought we could try writing some unit tests for the scripts. And it was amazingly simple. We created a file called tests.lua and imported the code we were using in the nginx config.
```lua
package.path = package.path .. ";puppet/modules/nginx/functions.lua.erb"
require("functions")
```
We also created a simple assertion handler which outputs the function name when it fails or passes.
```lua
function should(assertive)
  local test_name = debug.getinfo(2, "n").name
  assert(assertive, test_name .. " FAILED!")
  print(test_name .. " OK!")
end
```
Then we could create a test suite to run.
```lua
function it_sorts_hls_playlist_by_bitrate()
  local unsorted_playlist = [[#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1277952
stream_1248/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=356352
stream_348/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=485376
stream_474/playlist.m3u8]]

  local expected_sorted = [[#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=356352
stream_348/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=485376
stream_474/playlist.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1277952
stream_1248/playlist.m3u8]]

  should(sort_bitrates(unsorted_playlist) == expected_sorted)
end
```
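The `sort_bitrates` function itself lived in the Lua file under test; purely as an illustration, a Python sketch of the same idea (sorting the master playlist's variant streams by their BANDWIDTH attribute, ascending) might look like this. The parsing here is simplified and assumes each #EXT-X-STREAM-INF line is followed by its URI.

```python
import re

def sort_bitrates(playlist):
    """Sort a master playlist's variants by BANDWIDTH, ascending.
    A sketch of the Lua function under test, not its real code."""
    lines = playlist.strip().splitlines()
    header, rest = lines[0], lines[1:]
    # Pair each #EXT-X-STREAM-INF line with the URI on the next line.
    variants = [(rest[i], rest[i + 1]) for i in range(0, len(rest), 2)]

    def bandwidth(variant):
        return int(re.search(r"BANDWIDTH=(\d+)", variant[0]).group(1))

    variants.sort(key=bandwidth)
    out = [header]
    for inf, uri in variants:
        out.extend([inf, uri])
    return "\n".join(out)
```

Listing variants lowest-bandwidth first matters in practice: many HLS players start playback with the first variant in the master playlist.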
I think that sped up our cycle a lot. Once again, isolating a component and testing it is a great way to stay productive.