What Goes Into Deployment?

February 18, 2019

Brad Woodfin

As of this writing, all of PublicNTP’s deployed time servers are virtual (i.e., running in the cloud) -- except for Salt Lake City, because Scott is a fan of turning things up to 11. This approach has served us well so far, but it’s becoming clear that we’re reaching the point of diminishing returns for cloud deployments. In the underserved countries we’re now trying to target, cloud simply won’t be an option.

Before we talk about the deployment avenues that remain viable for us, let’s take a minute to “talk stratums.”

If you read up on Network Time Protocol (NTP), you’ll quickly hit the term “stratum,” which is followed by a number. You’ll see “stratum zero” servers, “stratum one” servers, etc.

What does that mean?

Stratum is a term that describes how close a device is to an authoritative NTP time source.

To oversimplify, you can view it as “the number of wires you need to cross before you can reach an authoritative time source.”

In other words, stratum zero devices are the authoritative references themselves -- GPS receivers, atomic clocks, and the like. No wires have to be crossed to reach the high-precision time source; they ARE the time source.

A device like a Linux server connected directly to a stratum zero source is considered a stratum one device (there’s “one wire” from the server to its GPS receiver). The math follows on down the stratum tiers: a stratum two device obtains its time from stratum one servers (usually across the Internet), and so on.
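
If you’re curious, the stratum number isn’t an abstraction -- it’s carried in every NTP reply. Here’s a minimal Python sketch (the hostname is a placeholder, not one of our servers) that sends a bare client request and prints the stratum the responding server advertises:

    import socket
    import struct

    NTP_SERVER = "time.example.org"  # placeholder hostname -- substitute a real NTP server
    NTP_TO_UNIX_EPOCH = 2208988800   # seconds between the NTP (1900) and Unix (1970) epochs

    # 48-byte client request: first byte 0x1B = leap indicator 0, version 3, mode 3 (client)
    request = b"\x1b" + 47 * b"\x00"

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        sock.sendto(request, (NTP_SERVER, 123))
        reply, _ = sock.recvfrom(48)

    stratum = reply[1]  # the second byte of the reply is the server's stratum
    transmit_secs = struct.unpack("!I", reply[40:44])[0] - NTP_TO_UNIX_EPOCH
    print(f"stratum: {stratum}, server transmit time (Unix seconds): {transmit_secs}")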

The cool thing is that using NTP, devices can connect to multiple sources of time -- meaning that a stratum two server can query multiple stratum one servers at once. That web of connections allows a device that’s two steps removed from an “authoritative” source to be much more reliable, as it can “double-check” the answers it’s getting from multiple upstream sources.
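
To make that “double-checking” idea concrete, here’s a rough sketch using the third-party ntplib package and made-up upstream hostnames. A real NTP daemon does far more sophisticated filtering and clock discipline than this, but comparing offsets across several sources captures the spirit:

    import statistics
    import ntplib  # third-party package: pip install ntplib

    # Placeholder hostnames -- point these at real stratum one servers.
    UPSTREAM_SOURCES = ["s1-a.example.net", "s1-b.example.net", "s1-c.example.net"]

    client = ntplib.NTPClient()
    offsets = []
    for host in UPSTREAM_SOURCES:
        try:
            resp = client.request(host, version=3, timeout=2)
            print(f"{host}: stratum {resp.stratum}, offset {resp.offset * 1000:+.3f} ms")
            offsets.append(resp.offset)
        except Exception as exc:
            print(f"{host}: no usable reply ({exc})")

    if offsets:
        # Taking the median lets one bad upstream answer get outvoted by the rest.
        print(f"median offset across sources: {statistics.median(offsets) * 1000:+.3f} ms")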

All of the PublicNTP servers (other than Utah!) are stratum two servers. Each one syncs to at least five stratum one sources, which keeps it within 1 to 10 milliseconds of the international time standard. So far we’ve been very pleased with the results. Deploy a server in a developed part of the world with major cloud infrastructure, such as Europe or East Asia, and you’ll find a plethora of upstream time sources to keep your clocks accurate.

When PublicNTP set out to deploy in lesser-developed regions of the world, we figured that NTP would still be resilient enough to work well.

That was a tad optimistic -- at least by our definition of “working well.” :)

Turns out that absolutely everything is stacked against having highly accurate time in much of the world. For example, we deployed a virtual server in Lagos, Nigeria, and discovered first-hand that the severe lack of terrestrial cables across Africa is a huge obstacle. If you traced the path of data traveling out of Lagos from our time server, the only two routes available were submarine fiber optic cables to either South Africa or London.

When you pause to look at a map, that’s an enormous distance to cross, even for data moving at the speed of light. Testing quickly showed we couldn’t get within 100 milliseconds of UTC, due to the lack of upstream time sources within several thousand miles of Nigeria.
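
As a back-of-envelope illustration (the figures below are assumptions, not measurements), light in optical fiber travels at roughly two-thirds of its speed in vacuum, so even the raw propagation delay over a cable route of that length is substantial. NTP can compensate for delay when the outbound and return paths match, but at these scales small asymmetries and routing detours add up:

    # Rough numbers only: assumed cable-route length and typical speed of light in fiber.
    speed_in_fiber_km_per_s = 200_000   # ~2/3 of the speed of light in vacuum
    assumed_path_km = 6_000             # ballpark Lagos-to-London cable route

    one_way_ms = assumed_path_km / speed_in_fiber_km_per_s * 1_000
    print(f"one-way propagation: ~{one_way_ms:.0f} ms, round trip: ~{2 * one_way_ms:.0f} ms")
    # Tens of milliseconds of unavoidable delay before routing hops, congestion,
    # or path asymmetry -- the things that actually erode NTP accuracy -- enter the picture.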

For humans, a tenth of a second is almost negligible. But for digital infrastructure that requires synchronization within 1/1,000th of a second or better, it was a rough situation.

We quickly realized we were facing a situation where our current approach simply wasn’t going to work.

Enter Project Ikenga and Project Tonatiuh: PublicNTP’s first two physical deployment campaigns. We’ll talk about both of these projects in future articles, but in short, we realized that we had to bring our own stratum one sources, as there weren’t any in the region that could make stratum two servers viable. Physical deployments come with their own brain teasers, which we’ve been digging into for the last several months.

PublicNTP is excited to continue expanding our deployment footprint, installing servers wherever they’re most needed. After hundreds of combined hours investigating physical deployments, we’ve gradually developed a checklist of sorts that we use to quickly vet the viability of a potential deployment location. Watch for an article coming soon that breaks that checklist down -- and consider yourself invited to help us improve it. Like everything about this project, it can only get better!
