high_scalability high_scalability-2009 high_scalability-2009-507 knowledge-graph by maker-knowledge-mining

507 high scalability-2009-02-03-Paper: Optimistic Replication


meta info for this blog

Source: html

Introduction: To scale in the large you have to partition. Data has to be spread around, replicated, and kept consistent (keeping replicas sufficiently similar to one another despite operations being submitted independently at different sites). The result is a highly available, well-performing, and scalable system. Partitioning is required, but it's a pain to do efficiently and correctly. Until quantum teleportation becomes a reality, how data is kept consistent across a bewildering number of failure scenarios is a key design decision. This excellent paper by Yasushi Saito and Marc Shapiro takes us on a wild ride (OK, maybe not so wild) of different approaches to achieving consistency. What's cool about this paper is they go over some real systems that we are familiar with and cover how they work: DNS (single-master, state-transfer), Usenet (multi-master), PDAs (multi-master, state-transfer, manual or application-specific conflict resolution), Bayou (multi-master, operation-transfer, epidemic


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Data has to be spread around, replicated, and kept consistent (keeping replicas sufficiently similar to one another despite operations being submitted independently at different sites). [sent-2, score-0.355]

2 Until quantum teleportation becomes a reality, how data is kept consistent across a bewildering number of failure scenarios is a key design decision. [sent-5, score-0.237]

3 This excellent paper by Yasushi Saito and Marc Shapiro takes us on a wild ride (OK, maybe not so wild) of different approaches to achieving consistency. [sent-6, score-0.488]

4 The paper then goes on to explain in detail the different approaches to achieving consistency. [sent-8, score-0.383]

5 The abstract: Data replication is a key technology in distributed data sharing systems, enabling higher availability and performance. [sent-10, score-0.234]

6 This paper surveys optimistic replication algorithms that allow replica contents to diverge in the short term, in order to support concurrent work practices and to tolerate failures in low-quality communication links. [sent-11, score-1.059]

7 The importance of such techniques is increasing as collaboration through wide-area and mobile networks becomes popular. [sent-12, score-0.156]

8 Optimistic replication techniques are different from traditional “pessimistic” ones. [sent-13, score-0.39]

9 Instead of synchronous replica coordination, an optimistic algorithm propagates changes in the background, discovers conflicts after they happen and reaches agreement on the final contents incrementally. [sent-14, score-0.79]

10 We explore the solution space for optimistic replication algorithms. [sent-15, score-0.533]

11 This paper identifies key challenges facing optimistic replication systems — ordering operations, detecting and resolving conflicts, propagating changes efficiently, and bounding replica divergence — and provides a comprehensive survey of techniques developed for addressing these challenges. [sent-16, score-1.238]

12 Optimistic, asynchronous data replication is an appealing technique; it indeed improves networking flexibility and scalability. [sent-18, score-0.402]

13 Some environments or application areas could simply not function without optimistic replication. [sent-19, score-0.299]

14 Traditional, pessimistic replication, with many off-the-shelf solutions, is perfectly adequate in small-scale, fully connected, reliable networking environments. [sent-24, score-0.268]

15 Where pessimistic techniques are the cause of poor performance or lack of availability, or do not scale well, try single-master replication: it is simple, conflict-free, and scales well in practice. [sent-25, score-0.424]

16 State transfer using Thomas’s write rule works well for many applications. [sent-26, score-0.187]

17 Advanced techniques such as version vectors and operation transfer should be used only when you need flexibility and semantically rich conflict resolution. [sent-27, score-0.815]

18 While connected, propagate often and keep replicas in close synchronization. [sent-29, score-0.222]

19 Commutativity should be the default; design your system so that non-commutative operations are the uncommon case. [sent-32, score-0.173]

20 When operations are dependent upon each other, represent the invariants explicitly. [sent-35, score-0.175]
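
Point 9 above describes the heart of the optimistic approach: propagate changes in the background and discover conflicts after the fact. Below is a minimal sketch of that idea in Python; the class and method names are hypothetical and conflict resolution is deliberately left to the caller, so treat it as an illustration rather than an implementation from the paper.

```python
# Hypothetical sketch (not code from the paper): writes are applied locally without
# coordination; replicas trade pending updates in the background and only then
# discover that both sides changed the same key.

class OptimisticReplica:
    def __init__(self):
        self.data = {}        # key -> value, the locally visible state
        self.pending = {}     # key -> value, local writes not yet propagated

    def write(self, key, value):
        # No synchronous coordination: the write is immediately visible here.
        self.data[key] = value
        self.pending[key] = value

    def exchange_with(self, other):
        # Background / anti-entropy style exchange; conflicts are found after the fact.
        conflicts = set(self.pending) & set(other.pending)
        for key, value in other.pending.items():
            if key not in conflicts:
                self.data[key] = value
        for key, value in self.pending.items():
            if key not in conflicts:
                other.data[key] = value
        self.pending.clear()
        other.pending.clear()
        return conflicts      # left for an application- or policy-level resolver

a, b = OptimisticReplica(), OptimisticReplica()
a.write("profile", "v1 from a")
b.write("profile", "v2 from b")      # concurrent update to the same key
print(a.exchange_with(b))            # {'profile'} -- conflict discovered only at sync time
```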
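
Point 15 recommends single-master replication as the simple, conflict-free option. A toy sketch, assuming one master that assigns a global sequence number and read-only replicas that apply updates in that order (all names are illustrative):

```python
# Toy single-master replication: one writer assigns a total order, read-only replicas
# apply updates in that order, so there is nothing to reconcile.

class ReadReplica:
    def __init__(self):
        self.applied = 0
        self.data = {}

    def apply(self, seq, key, value):
        assert seq == self.applied + 1   # single writer -> gap-free, conflict-free log
        self.applied = seq
        self.data[key] = value

class Master:
    def __init__(self, replicas):
        self.seq = 0
        self.replicas = replicas

    def write(self, key, value):
        self.seq += 1
        for r in self.replicas:
            r.apply(self.seq, key, value)   # push updates in sequence order

replicas = [ReadReplica(), ReadReplica()]
master = Master(replicas)
master.write("config", "v1")
master.write("config", "v2")
print(replicas[0].data)   # {'config': 'v2'} on every replica, no conflict resolution needed
```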
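
Point 16 refers to state transfer with Thomas's write rule, i.e. per-item last-writer-wins. A minimal sketch, assuming each item carries a timestamp and that timestamp ties are broken by some other means such as a replica id; the data and key names are made up for illustration.

```python
# Sketch of state transfer under Thomas's write rule: when merging another replica's
# state, keep whichever version of each item has the newer timestamp and silently
# drop the older one.

def merge_states(local, remote):
    """local/remote map key -> (timestamp, value); returns the merged state."""
    merged = dict(local)
    for key, (ts, value) in remote.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)     # newer write wins
    return merged

replica_a = {"host1": (100, "10.0.0.1")}
replica_b = {"host1": (130, "10.0.0.2"), "host2": (90, "10.0.0.3")}
print(merge_states(replica_a, replica_b))
# {'host1': (130, '10.0.0.2'), 'host2': (90, '10.0.0.3')}
```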
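
Point 17 mentions version vectors. The sketch below shows the comparison they enable: a vector that is greater than or equal in every component subsumes the other, while two vectors that don't subsume each other indicate concurrent updates that need semantic conflict resolution. The replica ids and counts are invented for the example.

```python
# Illustrative version-vector comparison: each replica counts its own updates, and a
# component-wise comparison tells us whether one state subsumes the other or whether
# the updates were concurrent (a conflict).

def compare(vv_a, vv_b):
    keys = set(vv_a) | set(vv_b)
    a_covers_b = all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)
    b_covers_a = all(vv_b.get(k, 0) >= vv_a.get(k, 0) for k in keys)
    if a_covers_b and b_covers_a:
        return "equal"
    if a_covers_b:
        return "a is newer"          # a has seen everything b has
    if b_covers_a:
        return "b is newer"
    return "concurrent"              # neither subsumes the other: resolve semantically

print(compare({"r1": 3, "r2": 1}, {"r1": 2, "r2": 1}))   # a is newer
print(compare({"r1": 3, "r2": 1}, {"r1": 2, "r2": 2}))   # concurrent -> conflict
```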
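
Point 19 argues for commutativity by default. A toy demonstration (not code from the paper): replicas that apply the same set of commutative operations, such as increments, in different orders converge, while plain overwrites do not.

```python
# Increments commute, so delivery order doesn't matter; overwrites don't commute, so
# replicas that see them in different orders end up with different values.

increments = [5, -2, 7]
print(sum(increments))                 # 10
print(sum(reversed(increments)))       # 10 -- same result either way

overwrites = ["v1", "v2"]
print(list(overwrites)[-1])            # 'v2'
print(list(reversed(overwrites))[-1])  # 'v1' -- replicas diverge; such ops need ordering
```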
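
Point 20 asks that dependencies between operations be represented explicitly. One way to sketch that, with a hypothetical structure rather than anything prescribed by the paper: each operation names the operations it depends on, and a replica defers applying it until those prerequisites have been applied.

```python
# Sketch of explicit operation dependencies: an operation carries the ids of the
# operations it depends on, and a replica holds it back until all of them are applied.
# Names and the example ops are made up for illustration.

class DependencyAwareReplica:
    def __init__(self):
        self.applied = set()   # ids of operations already applied
        self.waiting = []      # (op_id, deps, action) tuples not yet applicable

    def receive(self, op_id, depends_on, action):
        self.waiting.append((op_id, set(depends_on), action))
        self._apply_ready()

    def _apply_ready(self):
        progressed = True
        while progressed:
            progressed = False
            for op in list(self.waiting):
                op_id, deps, action = op
                if deps <= self.applied:         # every prerequisite already applied
                    action()
                    self.applied.add(op_id)
                    self.waiting.remove(op)
                    progressed = True

r = DependencyAwareReplica()
r.receive("reply-7", depends_on=["post-3"], action=lambda: print("apply reply"))
r.receive("post-3", depends_on=[], action=lambda: print("apply post"))
# Prints "apply post" then "apply reply", even though the ops arrived in the other order.
```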


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('optimistic', 0.299), ('pessimistic', 0.268), ('replication', 0.234), ('resolution', 0.223), ('conflict', 0.223), ('transfer', 0.187), ('paper', 0.166), ('techniques', 0.156), ('propagate', 0.131), ('conflicts', 0.123), ('contents', 0.117), ('approaches', 0.115), ('wild', 0.105), ('achieving', 0.102), ('operations', 0.1), ('consistent', 0.095), ('replicas', 0.091), ('replica', 0.09), ('disconnection', 0.089), ('revisitedby', 0.089), ('saito', 0.089), ('usenet', 0.089), ('yasushi', 0.089), ('flexibility', 0.088), ('discovers', 0.084), ('semantically', 0.084), ('shapiro', 0.084), ('appealing', 0.08), ('diverge', 0.08), ('epidemic', 0.08), ('manual', 0.079), ('propagating', 0.077), ('bounding', 0.077), ('monotonically', 0.077), ('propagates', 0.077), ('vectors', 0.077), ('connected', 0.077), ('divergence', 0.075), ('invariants', 0.075), ('surveys', 0.073), ('reviewing', 0.073), ('uncommon', 0.073), ('bewildering', 0.073), ('commutativity', 0.073), ('identifies', 0.071), ('commutative', 0.071), ('thomas', 0.069), ('kept', 0.069), ('resolving', 0.068), ('marc', 0.067)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 507 high scalability-2009-02-03-Paper: Optimistic Replication


2 0.18891723 963 high scalability-2010-12-23-Paper: CRDTs: Consistency without concurrency control

Introduction: For a great Christmas read, forget The Night Before Christmas, a heartwarming poem written by Clement Moore for his children that created the modern idea of Santa Claus we all know and anticipate each Christmas Eve. Instead, curl up with some potent eggnog, nog being any drink made with rum, and read CRDTs: Consistency without concurrency control by Mihai Letia, Nuno Preguiça, and Marc Shapiro, which talks about CRDTs (Commutative Replicated Data Type), a data type whose operations commute when they are concurrent. From the introduction, which also serves as a nice concise overview of distributed consistency issues: Shared read-only data is easy to scale by using well-understood replication techniques. However, sharing mutable data at a large scale is a difficult problem, because of the CAP impossibility result [5]. Two approaches dominate in practice. One ensures scalability by giving up consistency guarantees, for instance using the Last-Writer-Wins (LWW) approach [

3 0.12472314 357 high scalability-2008-07-26-Google's Paxos Made Live – An Engineering Perspective

Introduction: This is an unusually well-written and useful paper. It talks in detail about experiences implementing a complex project, something we don't see very often. They shockingly even admit that creating a working implementation of Paxos was more difficult than just translating the pseudocode. Imagine that, programmers aren't merely typists! I particularly like the explanation of the Paxos algorithm and why anyone would care about it, working with disk corruption, using leases to support simultaneous reads, using epoch numbers to indicate a new master election, using snapshots to prevent unbounded logs, using MultiOp to implement database transactions, how they tested the system, and their openness with the various problems they had. A lot to learn here. From the paper: We describe our experience building a fault-tolerant database using the Paxos consensus algorithm. Despite the existing literature in the field, building such a database proved to be non-trivial. We describe selected alg

4 0.11495657 1146 high scalability-2011-11-23-Paper: Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS

Introduction: Teams from Princeton and CMU are working together to solve one of the most difficult problems in the repertoire: scalable geo-distributed data stores. Major companies like Google and Facebook have been working on multiple datacenter database functionality for some time, but there's still a general lack of available systems that work for complex data scenarios. The ideas in this paper-- Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS --are different. It's not another eventually consistent system, or a traditional transaction oriented system, or a replication based system, or a system that punts on the issue. It's something new, a causally consistent system that achieves ALPS system properties. Move over CAP, NoSQL, etc, we have another acronym: ALPS - Available (operations always complete successfully), Low-latency (operations complete quickly (single digit milliseconds)), Partition-tolerant (operates with a partition), and Scalable (just a

5 0.11327565 936 high scalability-2010-11-09-Facebook Uses Non-Stored Procedures to Update Social Graphs

Introduction: Facebook's Ryan Mack gave a MySQL Tech Talk where he talked about using what he called Non-stored Procedures for adding edges to Facebook's social graph. The question is: how can edges quickly be added to the social graph? The answer is ultimately one of deciding where logic should be executed, especially when locks are kept open during network hops. Ryan explained that a key element of the Facebook data model is the connections between people, things they've liked, and places they've checked in. A lot of their writes are adding edges to the social graph. Currently this is a two-step process, run inside a transaction: add a new edge into the graph; if the add was successful, then increment the number of edges on a node. This approach works until there's a very hot node that is being added to rapidly. For example, a popular game adds a new character and everyone likes it at the same time, or a new album comes out and everyone likes it at the same time. They were limited to

6 0.10941105 1022 high scalability-2011-04-13-Paper: NoSQL Databases - NoSQL Introduction and Overview

7 0.10938574 687 high scalability-2009-08-24-How Google Serves Data from Multiple Datacenters

8 0.10791803 139 high scalability-2007-10-30-Paper: Dynamo: Amazon’s Highly Available Key-value Store

9 0.10330026 1041 high scalability-2011-05-15-Building a Database remote availability site

10 0.10254118 1374 high scalability-2012-12-18-Georeplication: When Bad Things Happen to Good Systems

11 0.10108087 1017 high scalability-2011-04-06-Netflix: Run Consistency Checkers All the time to Fixup Transactions

12 0.10018949 1514 high scalability-2013-09-09-Need Help with Database Scalability? Understand I-O

13 0.098940305 1345 high scalability-2012-10-22-Spanner - It's About Programmers Building Apps Using SQL Semantics at NoSQL Scale

14 0.098925032 1153 high scalability-2011-12-08-Update on Scalable Causal Consistency For Wide-Area Storage With COPS

15 0.098318391 1596 high scalability-2014-02-14-Stuff The Internet Says On Scalability For February 14th, 2014

16 0.096066959 925 high scalability-2010-10-22-Paper: Netflix’s Transition to High-Availability Storage Systems

17 0.094737053 195 high scalability-2007-12-28-Amazon's EC2: Pay as You Grow Could Cut Your Costs in Half

18 0.093618207 1604 high scalability-2014-03-03-The “Four Hamiltons” Framework for Mitigating Faults in the Cloud: Avoid it, Mask it, Bound it, Fix it Fast

19 0.092713393 705 high scalability-2009-09-16-Paper: A practical scalable distributed B-tree

20 0.091798216 188 high scalability-2007-12-19-How can I learn to scale my project?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.159), (1, 0.081), (2, 0.008), (3, 0.066), (4, -0.009), (5, 0.09), (6, 0.025), (7, -0.035), (8, -0.063), (9, -0.02), (10, -0.017), (11, 0.047), (12, -0.075), (13, -0.04), (14, 0.026), (15, 0.061), (16, 0.062), (17, 0.009), (18, 0.03), (19, -0.023), (20, 0.077), (21, 0.076), (22, -0.038), (23, 0.046), (24, -0.078), (25, -0.031), (26, -0.009), (27, 0.001), (28, 0.053), (29, -0.045), (30, 0.012), (31, 0.003), (32, -0.038), (33, 0.011), (34, -0.044), (35, -0.033), (36, 0.002), (37, -0.028), (38, 0.034), (39, 0.066), (40, -0.024), (41, 0.034), (42, -0.013), (43, -0.019), (44, 0.032), (45, 0.014), (46, -0.044), (47, 0.02), (48, 0.016), (49, -0.066)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95511556 507 high scalability-2009-02-03-Paper: Optimistic Replication


2 0.88070041 963 high scalability-2010-12-23-Paper: CRDTs: Consistency without concurrency control

Introduction: For a great Christmas read, forget The Night Before Christmas, a heartwarming poem written by Clement Moore for his children that created the modern idea of Santa Claus we all know and anticipate each Christmas Eve. Instead, curl up with some potent eggnog, nog being any drink made with rum, and read CRDTs: Consistency without concurrency control by Mihai Letia, Nuno Preguiça, and Marc Shapiro, which talks about CRDTs (Commutative Replicated Data Type), a data type whose operations commute when they are concurrent. From the introduction, which also serves as a nice concise overview of distributed consistency issues: Shared read-only data is easy to scale by using well-understood replication techniques. However, sharing mutable data at a large scale is a difficult problem, because of the CAP impossibility result [5]. Two approaches dominate in practice. One ensures scalability by giving up consistency guarantees, for instance using the Last-Writer-Wins (LWW) approach [

3 0.86342382 1273 high scalability-2012-06-27-Paper: Logic and Lattices for Distributed Programming

Introduction: Neil Conway from Berkeley CS is giving an advanced level talk at a meetup today  in San Francisco on a new paper:  Logic and Lattices for Distributed Programming  - extending set logic to support CRDT-style lattices.  The description of the meetup is probably the clearest introduction to the paper: Developers are increasingly choosing datastores that sacrifice strong consistency guarantees in exchange for improved performance and availability. Unfortunately, writing reliable distributed programs without the benefit of strong consistency can be very challenging.   In this talk, I'll discuss work from our group at UC Berkeley that aims to make it easier to write distributed programs without relying on strong consistency. Bloom is a declarative programming language for distributed computing, while CALM is an analysis technique that identifies programs that are guaranteed to be eventually consistent. I'll then discuss our recent work on extending CALM to support a broader range of

4 0.84343457 1374 high scalability-2012-12-18-Georeplication: When Bad Things Happen to Good Systems

Introduction: Georeplication is one of the standard techniques for dealing when bad things--failure and latency--happen to good systems. The problem is always: how do you do that?  Murat Demirbas , Associate Professor at SUNY Buffalo, has a couple of really good posts that can help: MDCC: Multi-Data Center Consistency  and Making Geo-Replicated Systems Fast as Possible, Consistent when Necessary .  In  MDCC: Multi-Data Center Consistency  Murat discusses a paper that says synchronous wide-area replication can be feasible. There's a quick and clear explanation of Paxos and various optimizations that is worth the price of admission. We find that strong consistency doesn't have to be lost across a WAN: The good thing about using Paxos over the WAN is you /almost/ get the full CAP  (all three properties: consistency, availability, and partition-freedom). As we discussed earlier (Paxos taught), Paxos is CP, that is, in the presence of a partition, Paxos keeps consistency over availability. But, P

5 0.82287204 1146 high scalability-2011-11-23-Paper: Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS

Introduction: Teams from Princeton and CMU are working together to solve one of the most difficult problems in the repertoire: scalable geo-distributed data stores. Major companies like Google and Facebook have been working on multiple datacenter database functionality for some time, but there's still a general lack of available systems that work for complex data scenarios. The ideas in this paper-- Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS --are different. It's not another eventually consistent system, or a traditional transaction oriented system, or a replication based system, or a system that punts on the issue. It's something new, a causally consistent system that achieves ALPS system properties. Move over CAP, NoSQL, etc, we have another acronym: ALPS - Available (operations always complete successfully), Low-latency (operations complete quickly (single digit milliseconds)), Partition-tolerant (operates with a partition), and Scalable (just a

6 0.81157082 890 high scalability-2010-09-01-Paper: The Case for Determinism in Database Systems

7 0.81089723 1243 high scalability-2012-05-10-Paper: Paxos Made Moderately Complex

8 0.77638727 1153 high scalability-2011-12-08-Update on Scalable Causal Consistency For Wide-Area Storage With COPS

9 0.77629656 357 high scalability-2008-07-26-Google's Paxos Made Live – An Engineering Perspective

10 0.77434599 1459 high scalability-2013-05-16-Paper: Warp: Multi-Key Transactions for Key-Value Stores

11 0.76653516 1611 high scalability-2014-03-12-Paper: Scalable Eventually Consistent Counters over Unreliable Networks

12 0.76531821 1450 high scalability-2013-05-01-Myth: Eric Brewer on Why Banks are BASE Not ACID - Availability Is Revenue

13 0.75710118 1629 high scalability-2014-04-10-Paper: Scalable Atomic Visibility with RAMP Transactions - Scale Linearly to 100 Servers

14 0.75252587 1463 high scalability-2013-05-23-Paper: Calvin: Fast Distributed Transactions for Partitioned Database Systems

15 0.74541593 676 high scalability-2009-08-08-Yahoo!'s PNUTS Database: Too Hot, Too Cold or Just Right?

16 0.74078029 705 high scalability-2009-09-16-Paper: A practical scalable distributed B-tree

17 0.72697884 108 high scalability-2007-10-03-Paper: Brewer's Conjecture and the Feasibility of Consistent Available Partition-Tolerant Web Services

18 0.72632563 950 high scalability-2010-11-30-NoCAP – Part III – GigaSpaces clustering explained..

19 0.72605771 529 high scalability-2009-03-10-Paper: Consensus Protocols: Paxos

20 0.72268701 844 high scalability-2010-06-18-Paper: The Declarative Imperative: Experiences and Conjectures in Distributed Logic


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.095), (2, 0.242), (10, 0.061), (17, 0.166), (30, 0.038), (56, 0.01), (61, 0.08), (77, 0.023), (79, 0.087), (85, 0.084), (94, 0.034)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96724242 956 high scalability-2010-12-08-How To Get Experience Working With Large Datasets

Introduction: I think I have been lucky that several of the projects I have worked on have exposed me to having to manage large volumes of data. The largest dataset was probably at MailChannels, though Livedoor.com also had some sizeable data for their books store and department store. Most of the pain with Livedoor’s data was from it being in Japanese. Other than that, it was pretty static. This was similar to the data I worked with at the BBC. You would be surprised at how much data can be involved with a single episode of a TV show. With any in-house generated data the update size and frequency is much less dramatic, even if the data is being regularly pumped in from 3rd parties. Those Humans: The real fun comes when the public (that’s you guys) are generating data to be pumped into the system. MailChannels works with email, which is human generated (lies! 95% is actually from spambots). Humans are unpredictable. They suddenly all get excited about the same thing at the same time, they are

2 0.96559417 1467 high scalability-2013-05-30-Google Finds NUMA Up to 20% Slower for Gmail and Websearch

Introduction: When you have a large population of servers you have both the opportunity and the incentive to perform interesting studies. Authors from Google and the University of California in  Optimizing Google’s Warehouse Scale Computers: The NUMA Experience  conducted such a study, taking a look at how jobs run on clusters of machines using a  NUMA  architecture. Since NUMA is common on server class machines it's a topic of general interest for those looking to maximize machine utilization across clusters. Some of the results are surprising: The methodology of how to attribute such fine performance variations to NUMA effects within such a complex system is perhaps more interesting than the results themselves. Well worth reading just for that story. The performance swing due to NUMA is up to 15% on AMD Barcelona for Gmail backend and 20% on Intel Westmere for Web-search frontend.  Memory locality is not always King. Because of the interaction between NUMA and cache sharing/contention it

3 0.93945813 506 high scalability-2009-02-03-10 More Rules for Even Faster Websites

Introduction: Update: How-To Minimize Load Time for Fast User Experiences. Shows how to analyze the bottlenecks preventing websites and blogs from loading quickly and how to resolve them. 80-90% of the end-user response time is spent on the frontend, so it makes sense to concentrate efforts there before heroically rewriting the backend. Take a shower before buying a Porsche, if you know what I mean. Steve Souders, author of High Performance Websites and Yslow, has ten more best practices to speed up your website: Split the initial payload; Load scripts without blocking; Don't scatter scripts; Split dominant content domains; Make static content cookie-free; Reduce cookie weight; Minify CSS; Optimize images; Use iframes sparingly; To www or not to www. Sadly, according to String Theory, there are only 26.7 rules left, so get them while they're still in our dimension. Here are slides on the first few rules. Love the speeding dog slide. That's exactly what my dog looks like trav

4 0.93733042 1584 high scalability-2014-01-22-How would you build the next Internet? Loons, Drones, Copters, Satellites, or Something Else?

Introduction: If you were going to design a next-generation Internet at the physical layer that routes around the current Internet, what would it look like? What should it do? How should it work? Who should own it? How should it be paid for? How would you access it? It has long been said the Internet routes around obstacles. Snowden has revealed some major obstacles. The beauty of the current app and web system is that the physical network doesn't matter. We can just replace it with something else. Something that doesn't flow through choke points like backhaul networks, undersea cables, and cell towers. What might that something else look like? Google's Loon Project: Project Loon was so named because the idea was thought to be loony. Maybe not. The idea is to float high-altitude balloons 20 miles in the air to create an aerial wireless network with up to 3G-like speeds. Signals travel through the balloon network from balloon to balloon, then to a ground-based station conne

5 0.93545336 1225 high scalability-2012-04-09-Why My Slime Mold is Better than Your Hadoop Cluster

Introduction: Update: Organism without a brain creates external memories for navigation shows slime mold is even cooler than originally thought, storing a record of where it's been using slime: The authors conclude that the slime isn't just the mold's calling card. Instead, it's a way of marking the environment so that the organism can sense where it's been, and not expend effort on searches that won't pay off. Although the situation isn't an exact parallel, the authors make a comparison to the pheromone trails used by ants. In After Life: The Strange Science Of Decay there’s a truly incredible sequence of gorgeously shot video showing how creeping slime mold solves mazes and performs other amazing feats of computation. Take a look at what simple one-celled organisms can do: The whole video is really well done and shockingly revelatory. It’s the story of decay, how atoms created during the Big Bang and through countless supernova explosions are continually rearranged an

same-blog 6 0.92832774 507 high scalability-2009-02-03-Paper: Optimistic Replication

7 0.92408687 869 high scalability-2010-07-30-Hot Scalability Links for July 30, 2010

8 0.92115265 631 high scalability-2009-06-15-Large-scale Graph Computing at Google

9 0.89992505 1333 high scalability-2012-10-04-LinkedIn Moved from Rails to Node: 27 Servers Cut and Up to 20x Faster

10 0.88883561 765 high scalability-2010-01-25-Let's Welcome our Neo-Feudal Overlords

11 0.86820209 849 high scalability-2010-06-28-VoltDB Decapitates Six SQL Urban Myths and Delivers Internet Scale OLTP in the Process

12 0.865987 1339 high scalability-2012-10-12-Stuff The Internet Says On Scalability For October 12, 2012

13 0.86502075 936 high scalability-2010-11-09-Facebook Uses Non-Stored Procedures to Update Social Graphs

14 0.86302716 1327 high scalability-2012-09-21-Stuff The Internet Says On Scalability For September 21, 2012

15 0.86260194 1080 high scalability-2011-07-15-Stuff The Internet Says On Scalability For July 15, 2011

16 0.86118841 306 high scalability-2008-04-21-The Search for the Source of Data - How SimpleDB Differs from a RDBMS

17 0.85822898 1602 high scalability-2014-02-26-The WhatsApp Architecture Facebook Bought For $19 Billion

18 0.85801971 1637 high scalability-2014-04-25-Stuff The Internet Says On Scalability For April 25th, 2014

19 0.85761619 1475 high scalability-2013-06-13-Busting 4 Modern Hardware Myths - Are Memory, HDDs, and SSDs Really Random Access?

20 0.85747725 1564 high scalability-2013-12-13-Stuff The Internet Says On Scalability For December 13th, 2013