The Twitter Engineering team has recently provided insight into the evolution and scaling of the core technologies behind the custom data center infrastructure that powers the social media service. Core lessons shared included: architect beyond the original specifications and requirements, and make quick and bold changes if traffic trends toward the upper end of designed capacity; there is no such thing as a “temporary change or workaround”, as workarounds are technical debt; focus on providing the right tool for the job, which requires legitimately understanding all possible use cases; and documenting internal and community best practices has been a “force multiplier”.
The social networking and online news service Twitter was created in 2006, when hardware from physical enterprise vendors “ruled the data center”, according to a recent post on the Twitter Engineering Blog. In the ten years since the launch, the rapid growth of Twitter’s user base has presented many engineering challenges at the hardware layer. Although Twitter has a “good presence” within the public cloud, they have primarily invested in their own private infrastructure; in 2010 Twitter migrated from third-party colocated hosting to their own private data center infrastructure, which over the following six years has been “continually engineered and refreshed […] to take advantage of the latest open standards in technology and hardware efficiency”.
By late 2010 Twitter had finalised their first in-house network architecture, designed to address the scale and service issues encountered with the existing third-party colocated infrastructure. Initially, the use of deep-buffer Top of Rack (ToR) switches and carrier-grade core network switches allowed Twitter to support previously unseen transactions-per-second (TPS) volumes generated by global events such as the football World Cup in 2014.
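To make the first lesson concrete, below is a minimal sketch of the kind of capacity-headroom check that flags when observed traffic trends toward the upper end of designed capacity. All of the figures used here (the designed TPS capacity, the 75% warning threshold, and the sample peaks) are hypothetical and are not drawn from Twitter's post.

```python
# Minimal sketch of a capacity-headroom check, illustrating the lesson of
# watching for traffic that trends toward the upper end of designed capacity.
# All figures (DESIGNED_CAPACITY_TPS, the 75% threshold, and the sample peaks)
# are hypothetical and not taken from Twitter's engineering post.

DESIGNED_CAPACITY_TPS = 150_000   # hypothetical designed peak capacity
WARNING_THRESHOLD = 0.75          # flag when a peak exceeds 75% of capacity


def headroom_report(observed_peak_tps: int) -> str:
    """Compare an observed traffic peak against the designed capacity."""
    utilisation = observed_peak_tps / DESIGNED_CAPACITY_TPS
    if utilisation >= WARNING_THRESHOLD:
        return (f"Peak {observed_peak_tps:,} TPS is {utilisation:.0%} of designed "
                f"capacity: plan capacity changes now, before the limit is reached.")
    return f"Peak {observed_peak_tps:,} TPS is {utilisation:.0%} of designed capacity."


if __name__ == "__main__":
    # Hypothetical peaks: ordinary traffic versus a spike during a global event.
    for peak in (60_000, 130_000):
        print(headroom_report(peak))
```

The design choice the sketch reflects is simply to act on trend rather than on exhaustion: the warning fires well before the designed limit, leaving time to make the "quick and bold changes" the Twitter engineers describe.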