high_scalability high_scalability-2009 high_scalability-2009-661 knowledge-graph by maker-knowledge-mining

661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it


meta info for this blog

Source: html

Introduction: Update 8: The Cost of Latency by James Hamilton. James summarizes some latency info from Steve Souders, Greg Linden, and Marissa Mayer. Speed [is] an undervalued and under-discussed asset on the web. Update 7: How do you know when you need more memcache servers? Dathan Pattishall talks about using memcache not to scale, but to reduce latency and reduce I/O spikes, and how to use stats to know when more servers are needed. Update 6: Stock Traders Find Speed Pays, in Milliseconds. Goldman Sachs is making record profits off a 500 millisecond trading advantage. Yes, latency matters. As an interesting aside, Libet found 500 msecs is about the time it takes the brain to weave together an experience of consciousness from all our sensor inputs. Update 5: Shopzilla's Site Redo - You Get What You Measure. At the Velocity conference Phil Dixon, from Shopzilla, presented data showing a 5 second speed up resulted in a 25% increase in page views, a 10% increas


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Dathan Pattishall talks about using memcache not to scale, but to reduce latency and reduce I/O spikes, and how to use stats to know when more servers are needed. [sent-6, score-0.64]

2 And latency is one of those quantifiable qualities that takes real engineering to create. [sent-67, score-0.61]

3 Latency Explained: The best explanation of latency I've ever read is still It's the Latency, Stupid by admitted network wizard Stuart Cheshire. [sent-82, score-0.615]

4 A wonderful and detailed rant explaining latency as it relates to network communication, but the ideas are applicable everywhere. [sent-83, score-0.615]

5 So if we want to increase interactivity we have to address every component in the system that introduces latency and minimize or remove its contribution. [sent-87, score-0.665]

6 If component A calls component B then the latency is the sum of the latency for each component and overall availability is reduced. [sent-111, score-1.048]

7 To reduce latency your only choice is to reduce the distance between endpoints. [sent-116, score-0.712]

8 Draw out the list of every hop a client request takes and the potential number of latency gremlins is quite impressive. [sent-126, score-0.732]

9 Remove and/or minimize all latency sources that are found. [sent-136, score-0.665]

10 With latency, variability is the name of the game, but that doesn't mean that variability can't be better controlled and managed. [sent-142, score-0.634]

11 As memory is an order of magnitude faster than disk it's hard to argue that latency in such a system wouldn't plummet. [sent-172, score-0.697]

12 Use Ajax to minimize perceived latency to the user. [sent-205, score-0.605]

13 A high speed InfiniBand link can have an end-to-end latency of about 1 microsecond. [sent-208, score-0.626]

14 One way to minimize the impact of garbage collection on latency is to use more VMs and less memory in each VM instead of a few VMs with a lot of memory each. [sent-216, score-0.767]

15 Some extra distance is required, based on the availability of fiber routes and interconnects, but much more attention should be given to minimizing latency as we design our network topologies and routing. [sent-228, score-0.687]

16 Block for any reason and your performance tanks because not only do you incur the latency of the operation but there's added rescheduling latency as well. [sent-266, score-1.048]

17 The number and location of network hops a message has to travel through is a big part of the end-to-end latency of a system. [sent-280, score-0.784]

18 If you want to minimize latency then the clear strategy is to colocate your service in the London Stock Exchange. [sent-282, score-0.605]

19 ACTIV Financial, for example, uses a custom FPGA for low latency processing of high speed financial data flows. [sent-332, score-0.682]

20 end-to-end latency by Nati Shalom, Low-Latency Delivery Enters Mainstream; But Standard Measurement Remains Elusive by Andrew Delaney, and The three faces of latency by Scott Parsons, Chief Scientist at Exegy, Inc. [sent-355, score-1.048]
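
A few of the sentences above compress concrete techniques from the post; the hedged sketches below illustrate them, and none of this code comes from the original article. Sentence 1's point is to watch memcache statistics rather than guess at capacity. The sketch assumes the third-party pymemcache library and a memcached instance on localhost:11211; the thresholds you act on are your own.

```python
# Hedged sketch for sentence 1: read memcached counters to judge whether the
# cache tier needs more servers. Assumes pymemcache and memcached on localhost.
from pymemcache.client.base import Client

client = Client(("localhost", 11211))
stats = client.stats()  # raw memcached counters, keyed by byte strings

hits = int(stats.get(b"get_hits", 0))
misses = int(stats.get(b"get_misses", 0))
evictions = int(stats.get(b"evictions", 0))

hit_rate = hits / (hits + misses) if (hits + misses) else 0.0
print(f"hit rate : {hit_rate:.2%}")
print(f"evictions: {evictions}")

# A falling hit rate or a climbing eviction count suggests the working set no
# longer fits in cache, i.e. it may be time to add memcache capacity.
```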
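
Sentence 6 packs two rules into one line: along a call chain latencies add while availabilities multiply. A minimal sketch with made-up per-component numbers (not figures from the post):

```python
# Hedged sketch for sentence 6: component A calls B calls C, so the end-to-end
# latency is the sum of the parts and the availability is the product.
components = [
    {"name": "load balancer", "latency_ms": 0.5,  "availability": 0.9999},
    {"name": "app server",    "latency_ms": 20.0, "availability": 0.999},
    {"name": "database",      "latency_ms": 5.0,  "availability": 0.999},
]

total_latency_ms = sum(c["latency_ms"] for c in components)  # latencies add
overall_availability = 1.0
for c in components:
    overall_availability *= c["availability"]                # availabilities multiply

print(f"end-to-end latency  : {total_latency_ms:.1f} ms")
print(f"overall availability: {overall_availability:.4%}")   # lower than any single hop
```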
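
Sentence 10's claim that variability can be controlled starts with measuring it: the mean hides the tail, so track percentiles. A small self-contained sketch with synthetic samples, for illustration only:

```python
# Hedged sketch for sentence 10: compare mean vs. tail latency from samples.
import random

random.seed(42)
# Mostly fast responses plus a few slow outliers (e.g. a GC pause or disk hit).
samples_ms = [random.gauss(20, 3) for _ in range(980)] + \
             [random.uniform(200, 400) for _ in range(20)]

def percentile(values, p):
    """Nearest-rank percentile; fine for a back-of-the-envelope check."""
    ordered = sorted(values)
    rank = max(1, min(len(ordered), round(p / 100 * len(ordered))))
    return ordered[rank - 1]

mean_ms = sum(samples_ms) / len(samples_ms)
print(f"mean: {mean_ms:.1f} ms")
print(f"p50 : {percentile(samples_ms, 50):.1f} ms")
print(f"p99 : {percentile(samples_ms, 99):.1f} ms")  # the tail the mean hides
```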
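
Sentence 16 is the argument for not blocking: a blocked request pays the downstream latency plus rescheduling latency, and serial calls pay every downstream latency in turn. A hedged asyncio sketch follows; the fetch() helper is a stand-in for a real network call, not an API from the post.

```python
# Hedged sketch for sentence 16: issue downstream calls concurrently so the
# request pays roughly the slowest latency instead of the sum of all of them.
import asyncio
import time

async def fetch(name: str, latency_s: float) -> str:
    await asyncio.sleep(latency_s)  # stand-in for waiting on a remote service
    return name

async def main():
    calls = [("users", 0.05), ("recommendations", 0.08), ("inventory", 0.06)]

    start = time.perf_counter()
    for name, lat in calls:                                    # serial: latencies add
        await fetch(name, lat)
    print(f"serial    : {time.perf_counter() - start:.3f} s")  # ~0.19 s

    start = time.perf_counter()
    await asyncio.gather(*(fetch(n, l) for n, l in calls))     # concurrent: ~max latency
    print(f"concurrent: {time.perf_counter() - start:.3f} s")  # ~0.08 s

asyncio.run(main())
```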


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('latency', 0.524), ('fpgas', 0.18), ('latencyby', 0.178), ('fpga', 0.149), ('dan', 0.102), ('speed', 0.102), ('pritchett', 0.097), ('standard', 0.092), ('network', 0.091), ('memory', 0.089), ('takes', 0.086), ('faster', 0.084), ('minimize', 0.081), ('zero', 0.076), ('less', 0.073), ('cope', 0.072), ('distance', 0.072), ('packet', 0.066), ('hop', 0.063), ('topic', 0.062), ('microprocessor', 0.061), ('sources', 0.06), ('milliseconds', 0.06), ('increase', 0.06), ('asynch', 0.059), ('stupidby', 0.059), ('number', 0.059), ('minimized', 0.059), ('hops', 0.059), ('tcp', 0.059), ('reduce', 0.058), ('objects', 0.057), ('paging', 0.057), ('page', 0.057), ('data', 0.056), ('means', 0.055), ('variability', 0.055), ('respond', 0.054), ('bw', 0.054), ('shopzilla', 0.054), ('stuart', 0.054), ('containing', 0.052), ('graphics', 0.052), ('communication', 0.052), ('bandwidth', 0.052), ('stock', 0.051), ('message', 0.051), ('bcp', 0.051), ('adapters', 0.051), ('loosely', 0.05)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it


2 0.3212871 1207 high scalability-2012-03-12-Google: Taming the Long Latency Tail - When More Machines Equals Worse Results

Introduction: Likewise the current belief that, in the case of artificial machines the very large and the very small are equally feasible and lasting is a manifest error. Thus, for example, a small obelisk or column or other solid figure can certainly be laid down or set up without danger of breaking, while the large ones will go to pieces under the slightest provocation, and that purely on account of their own weight. -- Galileo Galileo observed how things broke if they were naively scaled up. Interestingly, Google noticed a similar pattern when building larger software systems using the same techniques used to build smaller systems.  Luiz André Barroso , Distinguished Engineer at Google, talks about this fundamental property of scaling systems in his fascinating talk,  Warehouse-Scale Computing: Entering the Teenage Decade . Google found the larger the scale the greater the impact of latency variability. When a request is implemented by work done in parallel, as is common with today's service

3 0.25299981 1266 high scalability-2012-06-18-Google on Latency Tolerant Systems: Making a Predictable Whole Out of Unpredictable Parts

Introduction: In Taming The Long Latency Tail we covered Luiz Barroso ’s exploration of the long tail latency (some operations are really slow) problems generated by large fanout architectures (a request is composed of potentially thousands of other requests). You may have noticed there weren’t a lot of solutions. That’s where a talk I attended, Achieving Rapid Response Times in Large Online Services ( slide deck ), by Jeff Dean , also of Google, comes in: In this talk, I’ll describe a collection of techniques and practices lowering response times in large distributed systems whose components run on shared clusters of machines, where pieces of these systems are subject to interference by other tasks, and where unpredictable latency hiccups are the norm, not the exception. The goal is to use software techniques to reduce variability given the increasing variability in underlying hardware, the need to handle dynamic workloads on a shared infrastructure, and the need to use lar

4 0.2477233 1387 high scalability-2013-01-15-More Numbers Every Awesome Programmer Must Know

Introduction: Colin Scott, a Berkeley researcher, updated Jeff Dean's famous Numbers Everyone Should Know with his Latency Numbers Every Programmer Should Know interactive graphic. The interactive aspect is cool because it has a slider that lets you see numbers back from as early as 1990 to the far far future of 2020. Colin explained his motivation for updating the numbers: The other day, a friend mentioned a latency number to me, and I realized that it was an order of magnitude smaller than what I had memorized from Jeff's talk. The problem, of course, is that hardware performance increases exponentially! After some digging, I actually found that the numbers Jeff quotes are over a decade old. Since numbers without interpretation are simply data, take a look at Google Pro Tip: Use Back-Of-The-Envelope-Calculations To Choose The Best Design. The idea is back-of-the-envelope calculations are estimates you create using a combination of thought experiments and common perfor

5 0.23715171 1558 high scalability-2013-12-04-How Can Batching Requests Actually Reduce Latency?

Introduction: Jeremy Edberg gave a talk on Scaling Reddit from 1 Million to 1 Billion–Pitfalls and Lessons and one of the issues they had was that they: Did not account for increased latency after moving to EC2. In the datacenter they had submillisecond access between machines so it was possible to make 1000 calls to memcache for one page load. Not so on EC2. Memcache access times increased 10x to a millisecond which made their old approach unusable. Fix was to batch calls to memcache so a large number of gets are in one request. Dave Pacheco had an interesting question about batching requests and its impact on latency: I was confused about the memcached problem after moving to the cloud. I understand why network latency may have gone from submillisecond to milliseconds, but how could you improve latency by batching requests? Shouldn't that improve efficiency, not latency, at the possible expense of latency (since some requests will wait on the client as they get batched)?

6 0.23105341 538 high scalability-2009-03-16-Are Cloud Based Memory Architectures the Next Big Thing?

7 0.20691717 687 high scalability-2009-08-24-How Google Serves Data from Multiple Datacenters

8 0.20626104 1413 high scalability-2013-02-27-42 Monster Problems that Attack as Loads Increase

9 0.20386657 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud

10 0.20378365 1355 high scalability-2012-11-05-Gone Fishin': Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud

11 0.20347449 920 high scalability-2010-10-15-Troubles with Sharding - What can we learn from the Foursquare Incident?

12 0.19522001 879 high scalability-2010-08-12-Think of Latency as a Pseudo-permanent Network Partition

13 0.19211929 1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?

14 0.18938738 1509 high scalability-2013-08-30-Stuff The Internet Says On Scalability For August 30, 2013

15 0.18681511 960 high scalability-2010-12-20-Netflix: Use Less Chatty Protocols in the Cloud - Plus 26 Fixes

16 0.1862184 645 high scalability-2009-06-30-Hot New Trend: Linking Clouds Through Cheap IP VPNs Instead of Private Lines

17 0.18386333 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution

18 0.18101285 1359 high scalability-2012-11-15-Gone Fishin': Justin.Tv's Live Video Broadcasting Architecture

19 0.18087372 796 high scalability-2010-03-16-Justin.tv's Live Video Broadcasting Architecture

20 0.17706302 834 high scalability-2010-06-01-Web Speed Can Push You Off of Google Search Rankings! What Can You Do?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.351), (1, 0.189), (2, -0.007), (3, 0.012), (4, -0.074), (5, 0.013), (6, 0.079), (7, 0.156), (8, -0.161), (9, -0.024), (10, 0.003), (11, -0.035), (12, -0.02), (13, 0.02), (14, 0.01), (15, 0.024), (16, 0.039), (17, 0.029), (18, 0.015), (19, -0.049), (20, 0.046), (21, 0.042), (22, 0.056), (23, -0.015), (24, -0.012), (25, 0.042), (26, -0.038), (27, -0.032), (28, -0.011), (29, -0.035), (30, 0.076), (31, -0.024), (32, 0.034), (33, -0.03), (34, 0.05), (35, 0.052), (36, 0.023), (37, 0.034), (38, -0.079), (39, -0.033), (40, 0.05), (41, 0.035), (42, 0.026), (43, -0.013), (44, -0.0), (45, -0.089), (46, 0.049), (47, -0.057), (48, -0.029), (49, -0.01)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98484397 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it


2 0.93878084 1207 high scalability-2012-03-12-Google: Taming the Long Latency Tail - When More Machines Equals Worse Results

Introduction: Likewise the current belief that, in the case of artificial machines the very large and the very small are equally feasible and lasting is a manifest error. Thus, for example, a small obelisk or column or other solid figure can certainly be laid down or set up without danger of breaking, while the large ones will go to pieces under the slightest provocation, and that purely on account of their own weight. -- Galileo Galileo observed how things broke if they were naively scaled up. Interestingly, Google noticed a similar pattern when building larger software systems using the same techniques used to build smaller systems.  Luiz André Barroso , Distinguished Engineer at Google, talks about this fundamental property of scaling systems in his fascinating talk,  Warehouse-Scale Computing: Entering the Teenage Decade . Google found the larger the scale the greater the impact of latency variability. When a request is implemented by work done in parallel, as is common with today's service

3 0.86554849 1558 high scalability-2013-12-04-How Can Batching Requests Actually Reduce Latency?

Introduction: Jeremy Edberg gave a talk on Scaling Reddit from 1 Million to 1 Billion–Pitfalls and Lessons and one of the issues they had was that they: Did not account for increased latency after moving to EC2. In the datacenter they had submillisecond access between machines so it was possible to make 1000 calls to memcache for one page load. Not so on EC2. Memcache access times increased 10x to a millisecond which made their old approach unusable. Fix was to batch calls to memcache so a large number of gets are in one request. Dave Pacheco had an interesting question about batching requests and its impact on latency: I was confused about the memcached problem after moving to the cloud. I understand why network latency may have gone from submillisecond to milliseconds, but how could you improve latency by batching requests? Shouldn't that improve efficiency, not latency, at the possible expense of latency (since some requests will wait on the client as they get batched)?

4 0.8441962 1266 high scalability-2012-06-18-Google on Latency Tolerant Systems: Making a Predictable Whole Out of Unpredictable Parts

Introduction: In Taming The Long Latency Tail we covered Luiz Barroso ’s exploration of the long tail latency (some operations are really slow) problems generated by large fanout architectures (a request is composed of potentially thousands of other requests). You may have noticed there weren’t a lot of solutions. That’s where a talk I attended, Achieving Rapid Response Times in Large Online Services ( slide deck ), by Jeff Dean , also of Google, comes in: In this talk, I’ll describe a collection of techniques and practices lowering response times in large distributed systems whose components run on shared clusters of machines, where pieces of these systems are subject to interference by other tasks, and where unpredictable latency hiccups are the norm, not the exception. The goal is to use software techniques to reduce variability given the increasing variability in underlying hardware, the need to handle dynamic workloads on a shared infrastructure, and the need to use lar

5 0.83331788 960 high scalability-2010-12-20-Netflix: Use Less Chatty Protocols in the Cloud - Plus 26 Fixes

Introduction: In 5 Lessons We’ve Learned Using AWS, Netflix's John Ciancutti says one of the big lessons they've learned is to create less chatty protocols: In the Netflix data centers, we have a high capacity, super fast, highly reliable network. This has afforded us the luxury of designing around chatty APIs to remote systems. AWS networking has more variable latency. We’ve had to be much more structured about “over the wire” interactions, even as we’ve transitioned to a more highly distributed architecture. There's not a lot of advice out there on how to create protocols. Combine that with a rush to the cloud and you have a perfect storm for chatty applications crushing application performance. Netflix is far from the first to be surprised by the less than stellar networks inside AWS. A chatty protocol is one where a client makes a series of requests to a server and the client must wait on each reply before sending the next request. On a LAN this can work great. LANs are typically

6 0.82816041 1387 high scalability-2013-01-15-More Numbers Every Awesome Programmer Must Know

7 0.8213464 1237 high scalability-2012-05-02-12 Ways to Increase Throughput by 32X and Reduce Latency by 20X

8 0.80381632 1048 high scalability-2011-05-27-Stuff The Internet Says On Scalability For May 27, 2011

9 0.79388982 1174 high scalability-2012-01-13-Stuff The Internet Says On Scalability For January 13, 2012

10 0.79213172 1588 high scalability-2014-01-31-Stuff The Internet Says On Scalability For January 31st, 2014

11 0.78358072 1413 high scalability-2013-02-27-42 Monster Problems that Attack as Loads Increase

12 0.77669191 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework

13 0.76622391 1116 high scalability-2011-09-15-Paper: It's Time for Low Latency - Inventing the 1 Microsecond Datacenter

14 0.76511759 1190 high scalability-2012-02-10-Stuff The Internet Says On Scalability For February 10, 2012

15 0.7638948 1572 high scalability-2014-01-03-Stuff The Internet Says On Scalability For January 3rd, 2014

16 0.75901252 981 high scalability-2011-02-01-Google Strategy: Tree Distribution of Requests and Responses

17 0.75702655 1051 high scalability-2011-06-01-Why is your network so slow? Your switch should tell you.

18 0.75664622 1460 high scalability-2013-05-17-Stuff The Internet Says On Scalability For May 17, 2013

19 0.7566421 1475 high scalability-2013-06-13-Busting 4 Modern Hardware Myths - Are Memory, HDDs, and SSDs Really Random Access?

20 0.75577015 1612 high scalability-2014-03-14-Stuff The Internet Says On Scalability For March 14th, 2014


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.096), (2, 0.239), (10, 0.059), (27, 0.014), (30, 0.027), (40, 0.018), (43, 0.012), (47, 0.032), (50, 0.073), (51, 0.013), (61, 0.088), (77, 0.016), (79, 0.118), (85, 0.033), (94, 0.052)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97709006 1484 high scalability-2013-06-28-Stuff The Internet Says On Scalability For June 28, 2013

Introduction: Hey, it's HighScalability time: (Leandro Erlich's super cool scaling illusion) Who am I? I have 50 petabytes of data stored in Hadoop and Teradata, 400 million items for sale, 250 million queries a day, 100,000 pages served per second, 112 million active users, $75 billion sold in 2012... If you guessed eBay then you've won the auction. Quotable Quotes: Controlled Experiments at Large Scale: Bing found that every 100ms faster they deliver search result pages yields 0.6% more in revenue. Luis Bettencourt: A city is first and foremost a social reactor. It works like a star, attracting people and accelerating social interaction and social outputs in a way that is analogous to how stars compress matter and burn brighter and faster the bigger they are. @nntaleb: unless you understand that fat tails come from concentration of errors, you should not discuss probability & risk. Need to make Hadoop faster? Hadoop + GPU

2 0.97419167 657 high scalability-2009-07-16-Scaling Traffic: People Pod Pool of On Demand Self Driving Robotic Cars who Automatically Refuel from Cheap Solar

Introduction: Update 17: Are Wireless Road Trains the Cure for Traffic Congestion? by Addy Dugdale. The concept of road trains--up to eight vehicles zooming down the road together--has long been considered a faster, safer, and greener way of traveling long distances by car. Update 16: The first electric vehicle in the country powered completely by ultracapacitors. The minibus can be fully recharged in fifteen minutes, unlike battery vehicles, which typically take hours to recharge. Update 15: How to Make UAVs Fully Autonomous. The Sense-and-Avoid system uses a four-megapixel camera on a pan tilt to detect obstacles from the ground. It puts red boxes around planes and birds, and blue boxes around movement that it determines is not an obstacle (e.g., dust on the lens). Update 14: ATNMBL is a concept vehicle for 2040 that represents the end of driving and an alternative approach to car design. Upon entering ATNMBL, you are presented with a simple question: "Where can I take you

same-blog 3 0.96736723 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it


4 0.96662003 1316 high scalability-2012-09-04-Changing Architectures: New Datacenter Networks Will Set Your Code and Data Free

Introduction: One consequence of IT standardization and commodification has been Google’s datacenter is the computer view of the world. In that view all compute resources (memory, CPU, storage) are fungible. They are interchangeable and location independent, individual computers lose identity and become just a part of a service. Thwarting that nirvana has been the abysmal performance of commodity datacenter networks which have caused the preference of architectures that favor the collocation of state and behaviour on the same box. MapReduce famously ships code over to storage nodes for just this reason. Change the network and you change the fundamental assumption driving collocation based software architectures. You are then free to store data anywhere and move compute anywhere you wish. The datacenter becomes the computer. On the host side with an x8 slot running at PCI-Express 3.0 speeds able to push 8GB/sec (that’s bytes) of bandwidth in both directions, we have

5 0.96260971 1112 high scalability-2011-09-07-What Google App Engine Price Changes Say About the Future of Web Architecture

Introduction: When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things. -- Corinthians. With this new pricing, developments will be driven by the costs. I like to optimize my apps to make them better or faster, but to optimize them just to make them cheaper is a waste of time. -- Sylvain on Google Groups. The dream is dead. Google App Engine's bold pay for what you use dream dies as it leaves childish things behind and becomes a real product. Pricing will change. Architectures will change. Customers will change. Hearts and minds will change. But Google App Engine will survive. Google is shutting down many of its projects. GAE is not among them. Do we have GAE's pricing change to thank for it surviving the more wood behind more deadly arrows push? Without a radical and quick shift towards profitability GAE would no doubt be a historical footnote in the long scroll of good ideas. The urgency involve

6 0.95849252 1460 high scalability-2013-05-17-Stuff The Internet Says On Scalability For May 17, 2013

7 0.95836419 1010 high scalability-2011-03-24-Strategy: Disk Backup for Speed, Tape Backup to Save Your Bacon, Just Ask Google

8 0.95793146 1207 high scalability-2012-03-12-Google: Taming the Long Latency Tail - When More Machines Equals Worse Results

9 0.95745635 849 high scalability-2010-06-28-VoltDB Decapitates Six SQL Urban Myths and Delivers Internet Scale OLTP in the Process

10 0.95606911 1439 high scalability-2013-04-12-Stuff The Internet Says On Scalability For April 12, 2013

11 0.95559126 1117 high scalability-2011-09-16-Stuff The Internet Says On Scalability For September 16, 2011

12 0.95540845 1509 high scalability-2013-08-30-Stuff The Internet Says On Scalability For August 30, 2013

13 0.95436883 1475 high scalability-2013-06-13-Busting 4 Modern Hardware Myths - Are Memory, HDDs, and SSDs Really Random Access?

14 0.95424885 76 high scalability-2007-08-29-Skype Failed the Boot Scalability Test: Is P2P fundamentally flawed?

15 0.95354033 825 high scalability-2010-05-10-Sify.com Architecture - A Portal at 3900 Requests Per Second

16 0.95309925 920 high scalability-2010-10-15-Troubles with Sharding - What can we learn from the Foursquare Incident?

17 0.9530766 1233 high scalability-2012-04-25-The Anatomy of Search Technology: blekko’s NoSQL database

18 0.9526909 1572 high scalability-2014-01-03-Stuff The Internet Says On Scalability For January 3rd, 2014

19 0.95265657 1564 high scalability-2013-12-13-Stuff The Internet Says On Scalability For December 13th, 2013

20 0.9526096 1279 high scalability-2012-07-09-Data Replication in NoSQL Databases