Computer Networking resources for DevOps and Software Engineers


Learning computer networking in depth can make you a better software engineer. I argue in favor of that because nearly all the software we produce, use, and maintain relies on the network to communicate.

Very often we trust things we only think we know, and computer networking is one of them. The cloud, or even a “full application platform” such as Kubernetes, may widen this knowledge gap, for better and for worse.


Here are the resources I find most useful for understanding how computer networking works:

  • Computer Networking 101: https://iximiuz.com/en/posts/computer-networking-101/
  • How TCP Sockets Work

The Brazilian challenges for media streaming

Here at Globo, our mission is to build resilient systems that provide an optimal experience for our domestic and international customers, allowing them to watch their favorite international or local TV show, a soap opera, or any of our live streaming channels on globoplay.

The architecture supporting globoplay’s media platform heavily relies on open-source software such as GNU/Linux, nginx, tsuru, kubernetes, clappr, on-the-fly packaging and much more. Another exciting aspect of Globo is that we build and manage our own CDN. Such a task pushes us to broaden our knowledge and understanding of Brazil’s networking infrastructure.

Nowadays, people expect to stream on any device they own, which means we need to support most of these devices. There are hundreds of smartphones, dozens of smart TVs, many tablets, browsers, casters, etc. Providing steady, successful playback in this scenario inevitably adds a lot of complexity to the architecture.

landscape

We live-stream the famous program Big Brother over the Internet and broadcast it over the air as well. On one of those days, we reached 2 million simultaneous users watching the BBB live stream. To serve this many users, we must be present on the majority of stream-capable platforms.

Before we discuss the challenges of dealing with this plethora of devices, here’s the breakdown of plays and watch-session percentages grouped by major platform.

The data is from one week in January 2021. The play metric means the percentage of plays for a specific device. Playtime represents the fraction (%) of total time watched on a given platform. Big screen accommodates all casters, and HTML also counts mobile browsers.

In the US, the market share seems to gravitate around the casters (also known as OTT devices, like FireTV, Roku, ATV, etc). In contrast, here in Brazil, mobile devices and connected TVs share most of the home presence. We have a diverse environment of connected TV brands as well as a significant number of old TVs running legacy, outdated firmware/OS versions.

In the Big Brother case, big screens accounted for fewer plays, but people kept watching on them for longer sessions. It could be due to the convenience these devices offer, but it’s hard to draw a conclusion.

This data may differ from that of other players in Brazil. Here are some reasons why, in the Brazilian context:

  • Netflix, due to its premium, VOD-only nature;
  • YouTube, because it’s natively present on most platforms;
  • Twitter / Facebook, given their massive presence, mostly on mobile.

Although it wouldn’t be unexpected if the data presented here aligned with these players.

challenges

While having people watching everywhere sounds good, it also poses a hard technical challenge. To support the majority of devices, we need to adjust almost every component of our platform’s workflow.

Here we list some of the problems we faced and how we fixed them:

http protocol

  • HTTP Cookies
    • problem: some devices can’t manage cookies
    • solution: persist data over the URI or HTTP headers
  • HTTP CORS
    • problem: some native players don’t follow the entire CORS specification
    • solution: adapt to their usage
  • chunked transfer
    • problem: a specific device fails silently when dealing with chunked transfer
    • solution: buffer the intermediate response and serve the full file (see the sketch after this list)
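Here’s a minimal OpenResty sketch of that last buffering workaround, illustrative only and not our production code; the /upstream location and backend name are hypothetical. A buffered subrequest reads the entire upstream response, so we can set an explicit Content-Length and the client never sees chunked transfer:

location /upstream {
  internal;
  proxy_pass http://backend;  # hypothetical origin
}

location / {
  content_by_lua_block {
    -- buffered subrequest: nginx reads the whole upstream response first
    local res = ngx.location.capture("/upstream" .. ngx.var.request_uri)
    ngx.status = res.status
    -- with the full body in hand, an explicit Content-Length replaces
    -- the chunked transfer encoding on the way to the client
    ngx.header["Content-Length"] = #res.body
    ngx.print(res.body)
  }
}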

media encoding

  • video frame rate
    • problem: mixing different frame rates (i.e. 30fps and 60fps) causes glitches on rendition swap in some players
    • solution: offer a fixed frame rate ladder for them
  • audio sampling
    • problem: 48kHz sampling causes audio glitches/sync issues on some devices
    • solution: stick to 44.1kHz
  • AES HLS encryption
    • problem: some devices handle AES HLS keys as if they were always at the same level as the master variant (if they aren’t, it’s a 404)
    • solution: put the full URI for each key, or always keep your variants at the same level as your master
  • syncing by HLS media sequence
    • problem: some players use the HLS media sequence attribute to sync among the renditions
    • solution: always sync your media sequence
  • high resolutions
    • problem: some devices can’t play high resolution renditions (even if they advertise otherwise)
    • solution: filter out resolutions based on observed capacity (see the sketch after this list)
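As an illustration of that last item, here’s a hedged Lua sketch (not our actual code) that strips renditions above a hypothetical max_height from an HLS master playlist:

-- drop every #EXT-X-STREAM-INF entry (and the variant URI line that
-- follows it) whose RESOLUTION height exceeds max_height; max_height
-- would come from observed device capacity
local function filter_master_playlist(playlist, max_height)
  local out, skip_next = {}, false
  for line in playlist:gmatch("[^\r\n]+") do
    local height = line:match("RESOLUTION=%d+x(%d+)")
    if height and tonumber(height) > max_height then
      skip_next = true    -- drop this #EXT-X-STREAM-INF line...
    elseif skip_next then
      skip_next = false   -- ...and its variant URI
    else
      out[#out + 1] = line
    end
  end
  return table.concat(out, "\n") .. "\n"
end

-- usage sketch: cap a legacy TV at 720p
-- local filtered = filter_master_playlist(master, 720)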

network

Brazil is a continental country with sub-optimal internet infrastructure. One way to offer a smooth experience for media streaming users is to bring the content closer to them. We are continually expanding our footprint with new edge locations, open caches, ISPs, IXs, etc.

Overview of Telecommunication in Brazil

Although we have an average broadband speed of 24 Mb/s, we still have a lot of users below the 2 Mb/s mark. It might be due to ISP fragmentation (Brazil has at least six thousand ISPs).

We improved our streaming QoE by adding a 144p rendition to our current ladders. In live streaming, such a rendition may require around 200kbps, which consumes circa 77MB per hour.

Such a complex scenario made us introduce an extra level of congestion control. When we notice that a group of users behind an ISP has started to fill their link to our CDN, we eagerly adjust the renditions suggested to these users. Once the link’s saturation starts to decrease, we re-adjust the suggested bitrates.

With this server-side “ABR”, we act much more preventively than the player’s ABR algorithms or the (almost purely reactive) TCP mechanisms.
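Here’s a minimal sketch of the idea; the thresholds, ladder values, and link_utilization signal are made up for illustration, and the real system certainly uses more inputs:

-- given the observed utilization (0.0-1.0) of an ISP's link to our CDN,
-- return the bitrates (kbps) we suggest to the users behind that link
local full_ladder = {4500, 2500, 1200, 600, 200}

local function suggested_ladder(link_utilization)
  if link_utilization > 0.9 then
    return {600, 200}          -- link nearly saturated: only low renditions
  elseif link_utilization > 0.75 then
    return {1200, 600, 200}    -- getting full: drop the top bitrates
  end
  return full_ladder           -- healthy link: offer everything
end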

it’s all entertainment

Dealing with outdated devices; varying Internet speed, latency, and bandwidth; plus the already complex media streaming surroundings requires teams to be well integrated. People working together help alleviate the hard task of debugging in this environment.

Brazilians love to spend time on the Internet. The variety of devices (brands, OSes, build years, etc) and the complex Internet infrastructure create opportunities to be creative while designing systems for Brazil’s context.

Fake Tweet Screenshot Proliferation

I was following some interesting discussions about how no one should trust a screenshot of a tweet, and it got me thinking about how to overcome such a challenge.

What if we could somehow attach a verifiable piece of text (or an image QR code) to a tweet that could quickly disprove a fake screenshot, or optimistically indicate that the screenshot might be legit?

simulating a verifiable tweet card

I thought about using HMAC to solve this problem, where the secret is something that only Twitter knows, tied to each user. Every tweet would carry an HMAC-SHA256 signature: signature = HMAC(user.secret, tweet).

For the sake of UX/UI, this signature can show only its first ten hex characters; even a short hash like this retains a good number of permutations (16^10 ≈ 1.1 × 10^12).

Now, when you see a screenshot claiming that @someone said “this and that”, you can go somewhere like twitter.com/check, type the username and the message plus its alleged signature, and verify whether it’s fake or might be legit.

Sounds good and simple, done:

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	handle := "@someone"
	secret := "secret" // belonging to handle
	tweet := "tweeted a cat"

	h := hmac.New(sha256.New, []byte(secret))
	h.Write([]byte(tweet))
	sign := hex.EncodeToString(h.Sum(nil))

	sep := "—————————————"
	fmt.Printf("%s\n%s \n%s \n{sign=%s}\n%s\n", sep, handle, tweet, sign[0:10], sep)
}

Of course not; such an idea needs to be refined and proven safe. I foresee many challenges. One of them: someone trying to check a screenshot might not be able to replicate the images, emojis, or special characters, or even match the number of spaces/newlines.

For that reason, an allow list filtering out everything else might be handy. The question is, what characters would such a set hold? The basic Latin character set [a-z, A-Z] is not enough. We have accented characters in languages such as Portuguese, Spanish, and French. And languages such as Hebrew, Japanese, and Chinese are best represented by Unicode/UTF-8 characters.
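Purely to illustrate the idea (this is not a vetted design), a hypothetical normalization step could canonicalize a tweet before signing and before checking, so a retyped tweet reproduces the exact bytes that were signed:

-- collapse whitespace runs and drop anything outside a small allow list;
-- a real allow list would have to cover Unicode scripts, this ASCII-only
-- version just sketches the mechanism
local function normalize(tweet)
  local s = tweet:gsub("%s+", " ")  -- spaces/newlines become one space
  s = s:gsub("[^%w %p]", "")        -- keep ASCII letters, digits,
                                    -- spaces and punctuation only
  return s
end

print(normalize("tweeted   a\n cat 🐱"))  -- "tweeted a cat "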

Also, since I’m not a security specialist in any capacity, I don’t know what happens if the secret stays the same over time with a ton of tweets and signatures exposed. Is it possible for an observer to break this scheme?

Well, that’s my silly approach to fighting fake tweet screenshots. What do you think? Is it feasible? Is it even necessary to build such a system? Or is it just a plain stupid idea?

Building an edge computing platform

Attention: I released resty-dynacode, an OpenResty library enabling users to add Lua code to Nginx dynamically.

This is the last post in the series; now we’re going to design an edge computing platform using Lua, Nginx, and a Redis cluster. Previously, we explained how to add code, albeit not dynamically, to nginx.

Platformize the computations

The platform is meant to provide a way to attach Lua code dynamically to the edge servers. A starting point can be to take the authentication code and port it to this solution.

At the bare minimum, we need a Lua phase name, an identifier, and the Lua code. Let’s call this abstraction the computing unit (CU).
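Sketched as a plain Lua table (the field names here are illustrative, not the platform’s actual schema), a CU looks like this:

-- the minimal shape of a computing unit (CU)
local cu = {
  id    = "authentication",                      -- identity (the Redis key)
  phase = "access_by_lua_block",                 -- the nginx phase to hook
  code  = "return ngx.exit(ngx.HTTP_FORBIDDEN)", -- the Lua source to run
}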

If we plan to add a computing unit to the NOTT dynamically, we need to persist it somewhere. A fast data store for that could be Redis.


We also need to find a way to encode the computing unit into one of the Redis data types. What we can do is use a string to store the computing unit: the key will be the identity, and the value will hold the Lua phase and code separated by the string “||”.

# run this in one tab
docker run --rm -it --name some-redis redis
# run this in another tab
docker exec -it some-redis redis-cli
127.0.0.1:6379> set authentication "access_by_lua_block||if token ~= 'token' then \n return ngx.exit(ngx.HTTP_FORBIDDEN) \n else \n ngx.header['Set-Cookie'] = {'superstition=token'} \n end"
OK
127.0.0.1:6379> get authentication
"access_by_lua_block||if token ~= 'token' then \n return ngx.exit(ngx.HTTP_FORBIDDEN) \n else \n ngx.header['Set-Cookie'] = {'superstition=token'} \n end"

The platform needs to know all of the computing units; therefore, we need to list them. We could use the KEYS command, but it can be very slow depending on how much data we have.

A somewhat better solution is to store all the identities in a set data type, giving an O(N) solution, where N is the number of CUs.

A KEYS pattern would also be O(N), but with N being the total number of keys in the data store.

# run this in one tab
docker run --rm -it --name some-redis redis
# run this in another tab
docker exec -it some-redis redis-cli
127.0.0.1:6379> set authentication "access_by_lua_block||if token ~= 'token' then \n return ngx.exit(ngx.HTTP_FORBIDDEN) \n else \n ngx.header['Set-Cookie'] = {'superstition=token'} \n end"
OK
127.0.0.1:6379> get authentication
"access_by_lua_block||if token ~= 'token' then \n return ngx.exit(ngx.HTTP_FORBIDDEN) \n else \n ngx.header['Set-Cookie'] = {'superstition=token'} \n end"
127.0.0.1:6379> sadd coding_units authentication
(integer) 1
127.0.0.1:6379> smembers coding_units
1) "authentication"
127.0.0.1:6379> sadd coding_units anewcomputingunit
(integer) 1
127.0.0.1:6379> smembers coding_units
1) "authentication"
2) "anewcomputingunit"

Now that we know how we’re going to encode the computing unit, we need a method to parse this string separated by || and also a way to evaluate the string as Lua code.

Finding a proper and safe delimiter is difficult, so we need to make sure that no Lua code or string will ever contain ||.
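As a hypothetical registration-side guard (a sketch, not part of the platform’s code), we could refuse any CU whose code contains the delimiter before it ever reaches Redis:

-- refuse code containing the delimiter at registration time
local function encode_cu(phase, code)
  assert(not code:find("||", 1, true),
         "CU code must not contain the delimiter '||'")
  return phase .. "||" .. code
end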

-- running luajit in a repl will be useful for exploration
-- docker run -it --rm akorn/luajit:2.1-alpine
-- inspired by
-- https://stackoverflow.com/questions/1426954/split-string-in-lua
function mysplit(inputstr, sep)
  if sep == nil then
    sep = "%s"
  end
  local t = {}
  for str in string.gmatch(inputstr, "([^" .. sep .. "]+)") do
    table.insert(t, str)
  end
  return t
end

codeparts = mysplit("luaphase||luacode", "||")
print(codeparts[1])
-- luaphase
print(codeparts[2])
-- luacode

luacode = "a = 0 \n a = a + 1 \n print(a) \n"
luacodefunction = loadstring(luacode)
luacodefunction()
-- 1

To split a string, we’ll use a function taken from Stack Overflow, and for the code evaluation, Lua offers the loadstring function.

But now new questions arise: what happens if the code is syntactically invalid, or when the CU raises a runtime error? How can we deal with these issues?

-- Error during evaluation: syntactically invalid
code, err = loadstring(" a = 1 \n a a \n pring(a) \n")
print(code)
-- nil
print(err)
-- [string " a = 1 …"]:2: '=' expected near 'a'

-- Error during runtime: syntactically valid
stringcode = " a = taketime.menu "
code, err = loadstring(stringcode)
print(err) -- no evaluation error
-- nil
print(code) -- a runtime error will happen once we execute this function
-- function: 0x40a068a0

function err_handler(err_msg)
  print("an err was raised", err_msg)
  return err_msg
end

status, ret = xpcall(code, err_handler)
print(status)
-- false
print(ret)
-- [string " a = taketime.menu "]:1: attempt to index global 'taketime' (a nil value)

To deal with syntax errors, we need to validate the values returned by loadstring, where the first value is the compiled function (or nil) and the second is the error.

And to protect against runtime errors, Lua has a built-in function called xpcall (pcall meaning protected call) that receives a function to execute and, as a second argument, an error handler function.

With all this in mind, we can develop the core of our platform: somehow we need to get all the computing units from Redis, parse them into something easier to consume, and finally execute the given function.

Before we start to code, we can create a prototype that replicates the authorization token system we built before, but now using Redis to add and fetch the computing unit, as well as shielding the platform from broken code.

-- location /test {
--   content_by_lua_block {
local redis_client = redis_cluster:new(config)

-- simulating a dynamic code being stored at Redis --
redis_client:set('authentication', "access_by_lua_block||local token = ngx.var.arg_token or ngx.var.cookie_superstition \n if token ~= 'token' then \n return ngx.exit(ngx.HTTP_FORBIDDEN) \n else \n ngx.header['Set-Cookie'] = {'superstition=token'} \n end")
redis_client:sadd('coding_units', 'authentication')
-- simulating a dynamic code being stored at Redis --

-- fetch --
local resp, err = redis_client:smembers("coding_units")
local raw_coding_units = {}
for _, coding_unit_key in ipairs(resp) do
  local resp, err = redis_client:get(coding_unit_key)
  table.insert(raw_coding_units, resp)
end
-- fetch --

-- parse --
local coding_units = {}
for _, raw_coding_unit in ipairs(raw_coding_units) do
  local parts = mysplit(raw_coding_unit, "||")
  local cu = {}
  cu['phase'] = parts[1]
  cu['code'] = parts[2]
  table.insert(coding_units, cu)
end
-- parse --

-- execute --
for _, coding_unit in ipairs(coding_units) do
  ngx.log(ngx.ERR, "phase ", coding_unit['phase'])
  ngx.log(ngx.ERR, "code ", coding_unit['code'])
  local function_code, err = loadstring(coding_unit['code'])
  ngx.log(ngx.ERR, "loadstring error ", err)
  local status, ret = xpcall(function_code, err_handler)
  ngx.log(ngx.ERR, "xpcall status ", status, " ret ", ret)
end
-- execute --

ngx.say(" R$ 45,567,900,00 ") -- a random value
--   }
-- }

To test these lines of code, we can go to the terminal and simulate calls to the proper nginx location, checking whether the expected behavior shows up.


Since we’re comfortable with our experiment, we can start to think through the code design and performance trade-offs.

Querying Computing Units

The first decision concerns when we fetch all the computing units (CUs). For the sake of simplicity, we could gather all the CUs on every request, but then we pay extra latency on every client request.

Step 2 can generate several round trips in the Redis cluster.

To overcome these challenges, we’ll rely on two well-known techniques: caching and background processing.

We’ll move the fetch and parse logic to run in the background. With that running periodically, we store the CUs in a shared space where the edge computing core can look them up without additional network round trips.

OpenResty has a function called ngx.timer.every(delay, callback); it runs the callback function every delay seconds in a “light thread” completely detached from the original request. This background job will do the fetch/parse instead of doing it for every request.

Once we’ve got the CUs, we need a buffer where our fetcher background function will store them for later execution. OpenResty offers at least two ways to share data:

  • a declared shared memory (lua_shared_dict) with all the Nginx workers
  • encapsulate the shared data into a Lua module and use the require function to import the module

The nature of the first option requires software locking. To keep it scalable, we need to avoid this lock contention.

The Lua module sharing model also requires some care:

“to share changeable data among all the concurrent requests of each Nginx worker, there must be no nonblocking I/O operations (including ngx.sleep) in the middle of the calculations. As long as you do not give the control back to the Nginx event loop and ngx_lua’s light thread scheduler (even implicitly), there can never be any race conditions in between.”

source
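A minimal sketch of that module-sharing approach (illustrative only; fetch_and_parse, the 20-second default, and the module layout are not the actual resty-edge-computing API): a module-level table caches the parsed CUs, and ngx.timer.every refreshes it per worker.

-- cu_cache.lua: per-worker cache of computing units, refreshed in the
-- background so requests never pay the Redis round trips
local _M = { cus = {} }

local function fetch_and_parse(redis_client)
  local keys = redis_client:smembers("coding_units")
  local cus = {}
  for _, key in ipairs(keys) do
    local raw = redis_client:get(key)
    local sep = raw:find("||", 1, true)
    cus[#cus + 1] = { phase = raw:sub(1, sep - 1), code = raw:sub(sep + 2) }
  end
  return cus
end

local function refresh(premature, redis_client)
  if premature then return end  -- nginx is shutting down
  local ok, cus = pcall(fetch_and_parse, redis_client)
  if ok then
    _M.cus = cus  -- plain table swap: no locks, requests just read _M.cus
  end
end

function _M.start(redis_client, interval)
  -- one detached "light thread" per worker
  return ngx.timer.every(interval or 20, refresh, redis_client)
end

return _M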

Edge Computing Bootstrapping

Using this edge computing Lua library requires you to start the background process and also to explicitly call an execution function for each location and Lua phase you want to add it to.

http {
  init_by_lua_block {
    config = "redis-cluster-config"
    redis_cluster = require "resty-redis-cluster"
    edge_computing = require "resty-edge-computing"
  }

  server {
    listen 8080;

    location /platform {
      alias /usr/local/openresty/nginx/;
    }

    rewrite_by_lua_block {
      local redis_client = redis_cluster:new(config)
      local status, err = edge_computing.start(redis_client)
      if not status then
        ngx.log(ngx.ERR, " edge_computing.start error ", err)
      end
    }

    access_by_lua_block {
      local status, errs = edge_computing.execute()
      if errs ~= {} then
        for _, err in ipairs(errs) do
          ngx.log(ngx.ERR, " edge_computing.execute error ", err)
        end
      end
    }
  }
}

In the example above, we start the library on the first request, at the rewrite phase; this also initiates the background job that updates the CUs every X seconds.

In the access phase, we’re going to execute the available CUs. If a second request comes in, it’ll “skip” the start and just execute all the cached CUs.


While using these APIs solves our challenge of avoiding unnecessary latency, it comes at a price: when we add a new CU to the data store, it may now take up to X seconds to become available to the workers.

The rewrite_by_lua_block was used because it’s the first phase where we can start a background job that can access cosockets and works with lua-resty-lock (a library used by resty-redis-cluster).

A funny behavior will happen, related to the eventually consistent nature of this solution: if a client issues a request1, at a given time, to Worker1, and later the same user issues a request2 that a different Worker2 accepts, the time at which each worker runs the update function will be different.

This means that the effective deployment of your CUs will differ even within a single server. The practical consequence is that the server might answer differently from one worker to another. It will eventually be consistent, given the X-second delay declared as the update interval.


Nginx worker load balancing relies on the event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes.

How to use it

Adding the CU via the Redis cluster will make it work.

git checkout 1.0.4
# in tab1 - run the NOTT
make run
# in tab2 - run the tv show
make broadcast_tvshow
# in tab3 - test
http http://localhost:8080/hls/colorbar.m3u8
# in tab4 - let's add the token CU
# to act in the rewrite phase
# first we need to discover the redis cluster id
docker ps | grep redis
# then let's connect to the redis cluster
docker exec -it f44ed71b3056 redis-cli -c -p 7000
# inside redis-cluster let's add the CU
set authentication "rewrite||local token = ngx.var.arg_token or ngx.var.cookie_superstition \n if token ~= 'token' then \n return ngx.exit(ngx.HTTP_FORBIDDEN) \n else \n ngx.header['Set-Cookie'] = {'superstition=token'} \n end"
sadd coding_units authentication
# go back to tab3 - you should eventually (after max 20s)
# receive 403 as response
http http://localhost:8080/hls/colorbar.m3u8
# then add the token and it'll work again
http http://localhost:8080/hls/colorbar.m3u8?token=token

Edge Computing Use Cases

Let’s list some of the possible usages for this platform so we can think a little bit ahead of time.

  • access control – tokens, ACLs, origin checks
  • change response
  • decorate headers
  • generate content
  • traffic redirect
  • advanced caching

The options are endless, but let’s try to summarize the features we haven’t added to the code yet.

When we implemented the request counter, we used Redis as our data store, so it’s safe to assume that the CUs might need Redis to persist data. Another thing we could offer is sampling: instead of executing a task for all requests, we could run it for, say, 3% of them.

Another feature could be filtering by host: we may want a given CU to execute only on a specific set of machines. We can also achieve that in the CU itself if we need to.

The persistence client needs to be passed to the CU. We can achieve that by wrapping the provided raw string code in a function that receives an input, and passing this argument through the pcall call.

local wrapper_loadstring = function(str_code)
  local api_fun, err = loadstring("return function (edge_computing) " .. str_code .. " end")
  if api_fun then return api_fun() else return api_fun, err end
end

-- … compile
cu["code"] = wrapper_loadstring(raw_code)
local status, ret = pcall(cu["code"], {redis_client=edge_computing.redis_client})

-- CU in Redis --
-- set authentication "rewrite||local resp, err = edge_computing.redis_client:incr(\"key\") \n ngx.log(ngx.ERR, \" >>>> \", resp) \n"
local resp, err = edge_computing.redis_client:incr("key")
ngx.log(ngx.ERR, " >>>> ", resp)

We’ll have access to edge_computing in our CUs as if it were a global variable.

And finally, we can mimic the sampling technique by using a random function. Let’s say we want a given CU to be executed 50% of the time.

We need to encode the desired state at the datastore level, and before we run the CU, we check whether a random number, ranging from 1 to 100, is smaller than or equal to the desired sampling rate.

if math.random(100) <= sampling then
  pcall(code)
end

A pure random distribution is not the most adequate; maybe in the future we can use an algorithm similar to the power of two choices.

Future

Any platform is a complex piece of software that needs to be flexible, accommodate many kinds of usage, be easy to use, and be safe, while remaining open to future innovations/changes such as:

  • a control plane where you can create CUs using a UI
  • if the execution order is important, then change the current data types
  • add usage metrics per CU
  • wrap the execution with a timeout/circuit breaker mechanism

I hope you enjoyed these blog posts. They were meant mostly to show some trade-offs and to deepen nginx and Lua knowledge.

Empowering Nginx with Lua code

Attention: I released resty-dynacode, an OpenResty library enabling users to add Lua code to Nginx dynamically.

This is the second post in the series where we develop an edge computing platform. In this post, we’ll add some code/behavior to the front end servers. Here’s a link to the previous entry.

Add code inside the front end

The OTT service we built before doesn’t employ any kind of authentication, thus users can watch the streams for free. To solve this authentication issue, we can embed Lua code into nginx.

OpenResty – an Nginx with support for LuaJIT 2.0/2.1 code.

To run Lua code inside nginx, you need to understand a little bit about the request phases within the server. The request travels across different stages where you can intercept it with nginx directives and add your code.

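Roughly, these are the main Lua hooks OpenResty exposes, in request order (a partial list, shown as a plain sketch):

server {
  # Lua can be attached at several request phases:
  rewrite_by_lua_block       { }  # rewrite: change URI/args, start jobs
  access_by_lua_block        { }  # access: authentication, ACLs
  content_by_lua_block       { }  # content: generate the response
  header_filter_by_lua_block { }  # tweak response headers
  body_filter_by_lua_block   { }  # rewrite response body chunks
  log_by_lua_block           { }  # logging: metrics, counters
}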

Just for the sake of learning, the authentication logic will be a straightforward token system. During the access phase, we’ll deny access to those without the proper token. Once a user has the required token, it’s persisted in the form of a cookie.

A fixed token with no expiration is unsafe for production usage; you should look at something like JWT.

server {
  location / {
    proxy_cache my_cache;
    proxy_cache_lock on;
    proxy_pass http://backend;

    access_by_lua_block {
      local token = ngx.var.arg_token or ngx.var.cookie_superstition
      if token ~= "token" then
        return ngx.exit(ngx.HTTP_FORBIDDEN)
      else
        ngx.header['Set-Cookie'] = {'superstition=token'}
      end
    }
  }
}

Now that the edge server can run useful behavior/code, let’s lay out some examples that demonstrate the power of executing functions at the front end.

Suppose a hacker behind the IP 192.168.0.253 is exploiting a known issue that is going to be fixed soon. We can stop them by forbidding their IP. Adding Lua code to the same access phase can fix this problem.

You can access all the nginx variables using the api ngx.var.VARIABLE.

if ngx.var.remote_addr == "192.168.0.253" then
  return ngx.exit(ngx.HTTP_FORBIDDEN)
end

Nginx has the deny directive to solve this problem, although it doesn’t offer a dynamic way to update the IP list; we would need to reload the server every time we update the IPs.
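A hypothetical way around that (a sketch, not from the original setup) is to keep the block list in a shared dict, which can be updated at runtime without a reload:

# nginx.conf: lua_shared_dict blocklist 1m;

access_by_lua_block {
  -- deny any IP present in the shared dict; another endpoint (or a CU)
  -- could add/remove entries at runtime, with no nginx reload
  if ngx.shared.blocklist:get(ngx.var.remote_addr) then
    return ngx.exit(ngx.HTTP_FORBIDDEN)
  end
}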

We want to prevent other domains from consuming our streams. To do that, we’ll examine the Referer header and reject all requests not originating from our domain.

CORS and CSP are safer ways to solve this issue.

local headers = ngx.req.get_headers()
-- reject when the Referer header is absent or doesn't mention our domain
if not headers["Referer"] or not string.find(headers["Referer"], "localhost") then
  return ngx.exit(ngx.HTTP_FORBIDDEN)
end

To change the response from the backend on the fly, we’ll add a custom HLS tag to the playlist.

-- header_filter_by_lua_block
-- since we're going to change the content,
-- we need to clear the Content-Length header
ngx.header.content_length = nil

-- body_filter_by_lua_block
-- append only once, on the last chunk (eof), so the tag isn't repeated
if ngx.arg[2] then
  ngx.arg[1] = (ngx.arg[1] or "") .. "\n#COPYRIGHT: mysite.com"
end

To decorate the HTTP headers, we’ll attach new ones exposing some metrics from the server; for that, we can rely on the ngx.header['name'] API.

-- header_filter_by_lua_block
ngx.header['X-Metrics-upstream_response_time'] = ngx.var.upstream_response_time
ngx.header['X-Metrics-upstream_connect_time'] = ngx.var.upstream_connect_time
ngx.header['X-Metrics-request_time'] = ngx.var.request_time
ngx.header['X-Metrics-tcpinfo_rtt'] = ngx.var.tcpinfo_rtt
ngx.header['X-Metrics-time_iso8601'] = ngx.var.time_iso8601

Finally, we’ll count how many requests a given user (based on their IP) makes and expose it through a custom HTTP header. The counter is stored in Redis.

-- counting how many requests a given IP did
local redis_client = redis_cluster:new(config)
local resp, err = redis_client:incr(ngx.var.remote_addr)
ngx.header['X-Request-Counter'] = resp

All this is working; if you want, you can test it yourself.

# make sure you have docker
git clone https://github.com/leandromoreira/nott.git
cd nott
git checkout 1.0.2
# in a tab
make run
# wait until the platform is up and running
# and in another tab run
make broadcast_tvshow
# ^ for linux users, use --network=host and your
# IP instead of docker.for.mac.host.internal
# for windows users I dunno =(
# but you can use OBS and point to your own machine
# open your browser and point it to http://localhost:8080/app
# observe the metrics in the network console window
# or even our custom hls manifest
# try to play our stream at clappr.io/demo/ - it won't work.

Conclusion

Did you notice a pattern? Every time we want to add a new feature, we need to:

  • write a little bit of Lua code
  • attach it to a request phase directive in nginx
  • deploy and reload the edge servers

That’s why we need to build an edge computing platform: to put code into production faster and avoid server reloads.