high_scalability high_scalability-2012 high_scalability-2012-1316 knowledge-graph by maker-knowledge-mining

1316 high scalability-2012-09-04-Changing Architectures: New Datacenter Networks Will Set Your Code and Data Free


meta info for this blog

Source: html

Introduction: One consequence of IT standardization and commodification has been Google’s “datacenter is the computer” view of the world. In that view all compute resources (memory, CPU, storage) are fungible. They are interchangeable and location independent; individual computers lose their identity and become just part of a service. Thwarting that nirvana has been the abysmal performance of commodity datacenter networks, which has caused a preference for architectures that collocate state and behaviour on the same box. MapReduce famously ships code over to storage nodes for just this reason. Change the network and you change the fundamental assumption driving collocation-based software architectures. You are then free to store data anywhere and move compute anywhere you wish. The datacenter becomes the computer. On the host side, with an x8 slot running at PCI-Express 3.0 speeds able to push 8GB/sec (that’s bytes) of bandwidth in both directions, we have


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 One consequence of IT standardization and commodification has been Google’s “datacenter is the computer” view of the world. [sent-1, score-0.236]

2 Thwarting that nirvana has been the abysmal performance of commodity datacenter networks which have caused the preference of architectures that favor the collocation of state and behaviour on the same box. [sent-4, score-0.684]

3 Change the network and you change the fundamental assumption driving collocation based software architectures. [sent-6, score-0.342]

4 You are then free to store data anywhere and move compute anywhere you wish. [sent-7, score-0.465]

5 On the host side, with an x8 slot running at PCI-Express 3.0 speeds able to push 8GB/sec (that’s bytes) of bandwidth in both directions, we have enough IO to feed Moore’s progeny, wild packs of hungry hungry cores. [sent-10, score-0.369]
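
The “8GB/sec (that’s bytes)” figure can be checked with a little arithmetic: PCI-Express 3.0 runs each lane at 8 GT/s with 128b/130b line coding, so an x8 slot delivers just under 8 GB/s per direction. A minimal sketch of that calculation:

```python
# Checking the "8GB/sec (that's bytes)" claim for a PCI-Express 3.0 x8 slot:
# 8 GT/s per lane, 8 lanes, 128b/130b encoding, per direction.

lanes = 8
gigatransfers = 8e9           # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130          # 128b/130b line-coding overhead
bytes_per_sec = lanes * gigatransfers * encoding / 8  # bits -> bytes

print(round(bytes_per_sec / 1e9, 2))  # -> 7.88, i.e. ~8 GB/s per direction
```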

6 Why we are still using TCP and shoving data through OS stacks in the datacenter is a completely separate question. [sent-12, score-0.3]

7 If you draw a line somewhere in a network, bisectional bandwidth is the rate of communication at which servers on one side of the line can communicate with servers on the other side. [sent-16, score-0.797]

8 With enough bisectional bandwidth any server can communicate with any other server at full network speeds. [sent-17, score-0.738]
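
The definition in sentences 7 and 8 is easy to make concrete: pick a bipartition of the servers (the “line”) and sum the capacity of every link that crosses it. A minimal sketch, using a hypothetical toy topology:

```python
# Bisection bandwidth: draw a "line" (a bipartition of nodes) and sum the
# capacity of every link crossing it. Topology and capacities are hypothetical.

def bisection_bandwidth(links, left):
    """links: dict mapping (a, b) -> capacity in Gbps (undirected edges).
    left: the set of nodes on one side of the line."""
    return sum(cap for (a, b), cap in links.items()
               if (a in left) != (b in left))  # count only crossing links

# A toy tree: s1,s2 hang off switch A; s3,s4 off switch B; one A-B uplink.
links = {
    ("s1", "A"): 10, ("s2", "A"): 10,
    ("s3", "B"): 10, ("s4", "B"): 10,
    ("A", "B"): 10,  # the lone uplink is the bottleneck
}
print(bisection_bandwidth(links, {"s1", "s2", "A"}))  # -> 10
```

With two 10 Gbps servers on each side, full bisection would need 20 Gbps across the cut; the single uplink caps it at 10, which is exactly why tree topologies fall short.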

9 Wait, don’t we have high bisectional bandwidth in datacenters now? [sent-18, score-0.516]

10 To support mostly North-South traffic with a little East-West traffic, datacenters used a tree topology with core, aggregation, and access layers. [sent-29, score-0.242]

11 The idea being that the top routing part of the network has enough bandwidth to handle all the traffic from all the machines lower down in the tree. [sent-30, score-0.436]
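
Whether the top of the tree really has “enough bandwidth” is captured by the oversubscription ratio: host-facing capacity divided by uplink capacity at each layer. A sketch with hypothetical but typical numbers:

```python
# Oversubscription in a tree topology: how much host-facing bandwidth an
# access switch has per unit of uplink bandwidth. Port counts are hypothetical.

def oversubscription(host_ports, host_gbps, uplinks, uplink_gbps):
    return (host_ports * host_gbps) / (uplinks * uplink_gbps)

# 48 x 1 Gbps host ports, 4 x 10 Gbps uplinks to the aggregation layer.
print(oversubscription(48, 1, 4, 10))  # -> 1.2, i.e. 1.2:1 oversubscribed
```

Ratios above 1:1 mean East-West traffic between racks contends for the uplinks, and in practice the ratio compounds at each layer up the tree.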

12 Creating an affordable high bisectional bandwidth network requires a more thoughtful approach. [sent-35, score-0.68]

13 The general idea is to create a flat L2 network using a CLOS topology. [sent-40, score-0.365]
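
One concrete instance of a CLOS topology is the fat-tree built from identical k-port commodity switches (as in the “A Scalable, Commodity Data Center Network Architecture” post listed below): k pods of k/2 edge and k/2 aggregation switches plus (k/2)² core switches support k³/4 hosts at full bisection bandwidth. A sketch of that sizing arithmetic:

```python
# Fat-tree sizing for a CLOS topology built from identical k-port switches:
# k pods, (k/2)^2 core switches, k^3/4 hosts at full bisection bandwidth.

def fat_tree(k):
    assert k % 2 == 0, "fat-tree needs an even port count"
    return {
        "pods": k,
        "core_switches": (k // 2) ** 2,
        "total_switches": k * k + (k // 2) ** 2,  # k switches per pod + core
        "hosts": k ** 3 // 4,
    }

# Commodity 48-port switches are enough for a very large flat network.
print(fat_tree(48)["hosts"])  # -> 27648 hosts
```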

14 Now that you have this super cool datacenter topology what do you do with it? [sent-48, score-0.34]

15 First, we give each storage node network bandwidth that matches its storage bandwidth. [sent-55, score-0.64]

16 SAS disks have read performance of about 120MByte/sec, or about 1 gigabit/sec, so in our FDS cluster a storage node is always provisioned with at least as many gigabits of network bandwidth as it has disks. [sent-56, score-0.635]
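
Sentence 16’s provisioning rule is simple arithmetic: 120 MByte/sec of disk read bandwidth is about 0.96 gigabit/sec, so an FDS storage node needs at least one gigabit of network bandwidth per disk. A sketch of that rule (disk count here is hypothetical):

```python
import math

# FDS provisioning rule: a storage node's network bandwidth should match its
# aggregate disk read bandwidth, rounded up to whole gigabits per disk.

DISK_MBYTE_PER_SEC = 120  # SAS read performance cited in the text

def required_gbps(num_disks):
    per_disk_gbps = DISK_MBYTE_PER_SEC * 8 / 1000  # 120 MByte/s = 0.96 Gbit/s
    return math.ceil(num_disks * per_disk_gbps)

# A hypothetical node with 10 disks needs at least 10 Gbps of network.
print(required_gbps(10))  # -> 10
```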

17 Second, we connect the storage nodes to compute nodes using a full bisection bandwidth network—specifically, a CLOS network topology, as used in projects such as Monsoon. [sent-57, score-0.56]

18 The combination of these two factors produces an uncongested path from remote disks to CPUs, giving the  system an aggregate I/O bandwidth essentially equivalent to a system such as MapReduce that uses local storage. [sent-58, score-0.372]

19 With 10/100 Gbps networks on the way and technologies like VL2 and FDS, we’ve made good progress at making CPU, RAM, and storage fungible pools of resources within a datacenter. [sent-62, score-0.374]

20 Software Defined Networking will help networks become first-class objects, which seems close, but for performance reasons networks can never really be disentangled from their underlying topology. [sent-64, score-0.303]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('fds', 0.438), ('bisectional', 0.313), ('datacenter', 0.236), ('bandwidth', 0.203), ('flat', 0.201), ('network', 0.164), ('anywhere', 0.155), ('networkby', 0.125), ('networks', 0.117), ('clos', 0.107), ('topology', 0.104), ('storage', 0.102), ('arp', 0.102), ('collocation', 0.102), ('disks', 0.097), ('fungible', 0.095), ('nirvana', 0.095), ('compute', 0.091), ('fanout', 0.086), ('hungry', 0.083), ('architectures', 0.077), ('change', 0.076), ('remote', 0.072), ('broadcast', 0.071), ('traffic', 0.069), ('seems', 0.069), ('mostly', 0.069), ('node', 0.069), ('levels', 0.067), ('data', 0.064), ('pools', 0.06), ('flexible', 0.06), ('somewhere', 0.059), ('communicate', 0.058), ('networkingdata', 0.057), ('asymmetry', 0.057), ('wang', 0.057), ('fizz', 0.057), ('vswitch', 0.057), ('sidestepped', 0.057), ('abysmal', 0.057), ('theflat', 0.057), ('even', 0.055), ('microsoft', 0.055), ('centralized', 0.054), ('communicated', 0.053), ('crushes', 0.053), ('uniformly', 0.053), ('les', 0.053), ('articlesa', 0.053)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000004 1316 high scalability-2012-09-04-Changing Architectures: New Datacenter Networks Will Set Your Code and Data Free


2 0.18044566 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud

Introduction: "But it is not complicated. [There's] just a lot of it." \--Richard Feynmanon how the immense variety of the world arises from simple rules.Contents:Have We Reached the End of Scaling?Applications Become Black Boxes Using Markets to Scale and Control CostsLet's Welcome our Neo-Feudal OverlordsThe Economic Argument for the Ambient CloudWhat Will Kill the Cloud?The Amazing Collective Compute Power of the Ambient CloudUsing the Ambient Cloud as an Application RuntimeApplications as Virtual StatesConclusionWe have not yet begun to scale. The world is still fundamentally disconnected and for all our wisdom we are still in the earliest days of learning how to build truly large planet-scaling applications.Today 350 million users on Facebook is a lot of users and five million followers on Twitter is a lot of followers. This may seem like a lot now, but consider we have no planet wide applications yet. None.Tomorrow the numbers foreshadow a newCambrian explosionof connectivity that will look as

3 0.18036655 1355 high scalability-2012-11-05-Gone Fishin': Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud

Introduction: All in all this is still my favorite post and I still think it's an accurate vision of a future. Not everyone agrees, but I guess we'll see..."But it is not complicated. [There's] just a lot of it." \--Richard Feynmanon how the immense variety of the world arises from simple rules.Contents:Have We Reached the End of Scaling?Applications Become Black Boxes Using Markets to Scale and Control CostsLet's Welcome our Neo-Feudal OverlordsThe Economic Argument for the Ambient CloudWhat Will Kill the Cloud?The Amazing Collective Compute Power of the Ambient CloudUsing the Ambient Cloud as an Application RuntimeApplications as Virtual StatesConclusionWe have not yet begun to scale. The world is still fundamentally disconnected and for all our wisdom we are still in the earliest days of learning how to build truly large planet-scaling applications.Today 350 million users on Facebook is a lot of users and five million followers on Twitter is a lot of followers. This may seem like a lot now, but c

4 0.16816975 768 high scalability-2010-02-01-What Will Kill the Cloud?

Introduction: This is an excerpt from my article Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud. If datacenters are the new castles, then what will be the new gunpowder? As soon as gunpowder came on the scene, castles, which are defensive structures, quickly became the future's cold, drafty hotels. Gunpowder fueled cannon balls make short work of castle walls. There's a long history of "gunpowder" type inventions in the tech industry. PCs took out the timeshare model. The cloud is taking out the PC model. There must be something that will take out the cloud. Right now it's hard to believe the cloud will one day be no more. They seem so much the future, but something will transcend the cloud. We even have a law that says so: Bell's Law of Computer Classes which holds that roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of

5 0.15811846 538 high scalability-2009-03-16-Are Cloud Based Memory Architectures the Next Big Thing?

Introduction: We are on the edge of two potent technological changes: Clouds and Memory Based Architectures. This evolution will rip open a chasm where new players can enter and prosper. Google is the master of disk. You can't beat them at a game they perfected. Disk based databases like SimpleDB and BigTable are complicated beasts, typical last gasp products of any aging technology before a change. The next era is the age of Memory and Cloud which will allow for new players to succeed. The tipping point will be soon. Let's take a short trip down web architecture lane: It's 1993: Yahoo runs on FreeBSD, Apache, Perl scripts and a SQL database It's 1995: Scale-up the database. It's 1998: LAMP It's 1999: Stateless + Load Balanced + Database + SAN It's 2001: In-memory data-grid. It's 2003: Add a caching layer. It's 2004: Add scale-out and partitioning. It's 2005: Add asynchronous job scheduling and maybe a distributed file system. It's 2007: Move it all into the cloud. It's 2008: C

6 0.15613481 1116 high scalability-2011-09-15-Paper: It's Time for Low Latency - Inventing the 1 Microsecond Datacenter

7 0.15183851 1240 high scalability-2012-05-07-Startups are Creating a New System of the World for IT

8 0.13739671 1392 high scalability-2013-01-23-Building Redundant Datacenter Networks is Not For Sissies - Use an Outside WAN Backbone

9 0.13649333 796 high scalability-2010-03-16-Justin.tv's Live Video Broadcasting Architecture

10 0.13605136 1359 high scalability-2012-11-15-Gone Fishin': Justin.Tv's Live Video Broadcasting Architecture

11 0.13126911 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

12 0.13042848 778 high scalability-2010-02-15-The Amazing Collective Compute Power of the Ambient Cloud

13 0.12955754 1186 high scalability-2012-02-02-The Data-Scope Project - 6PB storage, 500GBytes-sec sequential IO, 20M IOPS, 130TFlops

14 0.12663171 687 high scalability-2009-08-24-How Google Serves Data from Multiple Datacenters

15 0.1209033 1157 high scalability-2011-12-14-Virtualization and Cloud Computing is Changing the Network to East-West Routing

16 0.12045136 371 high scalability-2008-08-24-A Scalable, Commodity Data Center Network Architecture

17 0.11859196 761 high scalability-2010-01-17-Applications Become Black Boxes Using Markets to Scale and Control Costs

18 0.11858414 448 high scalability-2008-11-22-Google Architecture

19 0.1179132 960 high scalability-2010-12-20-Netflix: Use Less Chatty Protocols in the Cloud - Plus 26 Fixes

20 0.11601756 879 high scalability-2010-08-12-Think of Latency as a Pseudo-permanent Network Partition


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.221), (1, 0.104), (2, 0.034), (3, 0.054), (4, -0.117), (5, -0.013), (6, 0.076), (7, 0.021), (8, -0.071), (9, 0.042), (10, 0.007), (11, -0.027), (12, -0.014), (13, 0.009), (14, 0.06), (15, 0.097), (16, -0.015), (17, 0.052), (18, -0.053), (19, -0.008), (20, -0.013), (21, 0.106), (22, -0.027), (23, -0.028), (24, 0.02), (25, 0.04), (26, -0.003), (27, -0.04), (28, -0.02), (29, -0.049), (30, -0.015), (31, -0.04), (32, 0.025), (33, -0.0), (34, 0.013), (35, 0.034), (36, -0.005), (37, 0.022), (38, -0.059), (39, 0.036), (40, 0.019), (41, -0.024), (42, -0.002), (43, -0.016), (44, 0.001), (45, 0.033), (46, -0.054), (47, -0.047), (48, -0.034), (49, -0.002)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96925116 1316 high scalability-2012-09-04-Changing Architectures: New Datacenter Networks Will Set Your Code and Data Free


2 0.85947996 371 high scalability-2008-08-24-A Scalable, Commodity Data Center Network Architecture

Introduction: Looks interesting... Abstract: Today’s data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Nonuniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodi

3 0.80901664 374 high scalability-2008-08-30-Paper: GargantuanComputing—GRIDs and P2P

Introduction: I found the discussion of the available bandwidth of tree vs higher dimensional virtual networks topologies quite, to quote Spock, fascinating : A mathematical analysis by Ritter (2002) (one of the original developers of Napster) presented a detailed numerical argument demonstrating that the Gnutella network could not scale to the capacity of its competitor, the Napster network. Essentially, that model showed that the Gnutella network is severely bandwidth-limited long before the P2P population reaches a million peers. In each of these previous studies, the conclusions have overlooked the intrinsic bandwidth limits of the underlying topology in the Gnutella network: a Cayley tree (Rains and Sloane 1999) (see Sect. 9.4 for the definition). Trees are known to have lower aggregate bandwidth than higher dimensional topologies, e.g., hypercubes and hypertori. Studies of interconnection topologies in the literature have tended to focus on hardware implementations (see, e.g., Culler et

4 0.79088908 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework

Introduction: Here's a really good article in the Communications of the ACM on reducing network packet processing overhead by redesigning the network stack:  Revisiting Network I/O APIs: The Netmap Framework  by  Luigi Rizzo . As commodity networking performance increases operating systems need to keep up or all those CPUs will go to waste. How do they make this happen?   Abstract: Today 10-gigabit interfaces are used more and more in datacenters and servers. On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks. The netmap framework is a promising step in this direction. Thanks to a careful design and the engineering of a new packet I/O API, netmap eliminates much unnecessary overhead and moves

5 0.78514779 1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?

Introduction: You program in a dynamic language, that runs on a JVM, that runs on a OS designed 40 years ago  for a completely different purpose, that runs on virtualized hardware. Does this make sense? We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch , where the vision is to treat cloud virtual hardware as a compiler target, and converting high-level language source code directly into kernels that run on it. As new technologies evolve the friction created by our old tool chains and architecture models becomes ever more obvious. Take, for example, what a team at UCSD  is releasing: a phase-change memory prototype   - a   solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). However, PCM has access latencies several times slower than DRAM. This technology has obvious mind blowing implications, but an

6 0.78468466 1256 high scalability-2012-06-04-OpenFlow-SDN is Not a Silver Bullet for Network Scalability

7 0.77648699 645 high scalability-2009-06-30-Hot New Trend: Linking Clouds Through Cheap IP VPNs Instead of Private Lines

8 0.75787777 1338 high scalability-2012-10-11-RAMCube: Exploiting Network Proximity for RAM-Based Key-Value Store

9 0.75162637 1116 high scalability-2011-09-15-Paper: It's Time for Low Latency - Inventing the 1 Microsecond Datacenter

10 0.75087643 823 high scalability-2010-05-05-How will memristors change everything?

11 0.74661988 1594 high scalability-2014-02-12-Paper: Network Stack Specialization for Performance

12 0.74206799 1392 high scalability-2013-01-23-Building Redundant Datacenter Networks is Not For Sissies - Use an Outside WAN Backbone

13 0.74108094 1310 high scalability-2012-08-23-Economies of Scale in the Datacenter: Gmail is 100x Cheaper to Run than Your Own Server

14 0.73455274 1651 high scalability-2014-05-20-It's Networking. In Space! Or How E.T. Will Phone Home.

15 0.73301661 284 high scalability-2008-03-19-RAD Lab is Creating a Datacenter Operating System

16 0.73040706 1584 high scalability-2014-01-22-How would you build the next Internet? Loons, Drones, Copters, Satellites, or Something Else?

17 0.72906244 1627 high scalability-2014-04-07-Google Finds: Centralized Control, Distributed Data Architectures Work Better than Fully Decentralized Architectures

18 0.72739792 266 high scalability-2008-03-04-Manage Downtime Risk by Connecting Multiple Data Centers into a Secure Virtual LAN

19 0.72226197 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

20 0.72079778 726 high scalability-2009-10-22-Paper: The Case for RAMClouds: Scalable High-Performance Storage Entirely in DRAM


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.088), (2, 0.189), (10, 0.069), (40, 0.039), (47, 0.012), (50, 0.143), (61, 0.059), (77, 0.024), (79, 0.16), (85, 0.054), (94, 0.068)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.92590421 657 high scalability-2009-07-16-Scaling Traffic: People Pod Pool of On Demand Self Driving Robotic Cars who Automatically Refuel from Cheap Solar

Introduction: Update 17 : Are Wireless Road Trains the Cure for Traffic Congestion? BY   ADDY DUGDALE . The concept of road trains--up to eight vehicles zooming down the road together--has long been considered a faster, safer, and greener way of traveling long distances by car Update 16: The first electric vehicle in the country powered completely by ultracapacitors . The minibus can be fully recharged in fifteen minutes, unlike battery vehicles, which typically takes hours to recharge. Update 15: How to Make UAVs Fully Autonomous . The Sense-and-Avoid system uses a four-megapixel camera on a pan tilt to detect obstacles from the ground. It puts red boxes around planes and birds, and blue boxes around movement that it determines is not an obstacle (e.g., dust on the lens). Update 14: ATNMBL is a concept vehicle for 2040 that represents the end of driving and an alternative approach to car design. Upon entering ATNMBL, you are presented with a simple question: "Where can I take you

same-blog 2 0.92326361 1316 high scalability-2012-09-04-Changing Architectures: New Datacenter Networks Will Set Your Code and Data Free


3 0.92095959 747 high scalability-2009-11-26-What I'm Thankful For on Thanksgiving

Introduction: I try to keep this blog targeted and on topic. So even though I may be thankful for the song of the tinniest sparrow at sunrise , I'll save you from all that. It's hard to tie scalability and the giving of thanks together, especially as it sometimes occurs to me that this blog may be a self-indulgent waste of time. But I think I found a sentiment in  A New THEORY of AWESOMENESS and MIRACLES  by James Bridle that manages to marry the topic of this blog and giving thanks meaningfully together: I distrust commercial definitions of innovation, and particularly of awesomeness. It’s an overused term. When I think of awesomeness, I want something awe-inspiring, vast and mind-expanding. So I started thinking about things that I think are awesome, or miraculous, and for me, it kept coming back to scale and complexity. We’re not actually very good about thinking about scale and complexity in real terms, so we have to use metaphors and examples. Douglas Adams writes s

4 0.8839274 1484 high scalability-2013-06-28-Stuff The Internet Says On Scalability For June 28, 2013

Introduction: Hey, it's HighScalability time: (Leandro Erlich's super cool scaling illusion ) Who am I? I have 50 petabytes of data stored in Hadoop and Teradata, 400 million items for sale, 250 million queries a day, 100,000 pages served per second, 112 million active users, $75 billions sold in 2012...If you guessed eBay  then you've won the auction. Quotable Quotes: Controlled Experiments at Large Scale : Bing found that every 100ms faster they deliver search result pages yields 0.6% more in revenue Luis Bettencourt : A city is first and foremost a social reactor. It works like a star, attracting people and accelerating social interaction and social outputs in a way that is analogous to how stars compress matter and burn brighter and faster the bigger they are. @nntaleb : unless you understand that fat tails come from concentration of errors, you should not discuss probability & risk  Need to make Hadoop faster?  Hadoop + GPU

5 0.88137311 809 high scalability-2010-04-13-Strategy: Saving Your Butt With Deferred Deletes

Introduction: Deferred Deletes is a technique where deleted items are marked as deleted but not garbage collected until some days or preferably weeks later .   James Hamilton talks describes this strategy in his classic On Designing and Deploying Internet-Scale Services: Never delete anything. Just mark it deleted. When new data comes in, record the requests on the way. Keep a rolling two week (or more) history of all changes to help recover from software or administrative errors. If someone makes a mistake and forgets the where clause on a delete statement (it has happened before and it will again), all logical copies of the data are deleted. Neither RAID nor mirroring can protect against this form of error. The ability to recover the data can make the difference between a highly embarrassing issue or a minor, barely noticeable glitch. For those systems already doing off-line backups, this additional record of data coming into the service only needs to be since the last backup. But, bein

6 0.8767131 289 high scalability-2008-03-27-Amazon Announces Static IP Addresses and Multiple Datacenter Operation

7 0.8730827 1612 high scalability-2014-03-14-Stuff The Internet Says On Scalability For March 14th, 2014

8 0.87225395 716 high scalability-2009-10-06-Building a Unique Data Warehouse

9 0.86720902 1357 high scalability-2012-11-12-Gone Fishin': Hilarious Video: Relational Database Vs NoSQL Fanbois

10 0.86693609 1545 high scalability-2013-11-08-Stuff The Internet Says On Scalability For November 8th, 2013

11 0.86593497 1098 high scalability-2011-08-15-Should any cloud be considered one availability zone? The Amazon experience says yes.

12 0.86511862 1112 high scalability-2011-09-07-What Google App Engine Price Changes Say About the Future of Web Architecture

13 0.86494738 1647 high scalability-2014-05-14-Google Says Cloud Prices Will Follow Moore’s Law: Are We All Renters Now?

14 0.86459786 789 high scalability-2010-03-05-Strategy: Planning for a Power Outage Google Style

15 0.86446869 1382 high scalability-2013-01-07-Analyzing billions of credit card transactions and serving low-latency insights in the cloud

16 0.86293221 1589 high scalability-2014-02-03-How Google Backs Up the Internet Along With Exabytes of Other Data

17 0.86143386 1649 high scalability-2014-05-16-Stuff The Internet Says On Scalability For May 16th, 2014

18 0.86131257 1355 high scalability-2012-11-05-Gone Fishin': Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud

19 0.86125273 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud

20 0.86051989 526 high scalability-2009-03-05-Strategy: In Cloud Computing Systematically Drive Load to the CPU