high_scalability high_scalability-2012 high_scalability-2012-1319 knowledge-graph by maker-knowledge-mining

1319 high scalability-2012-09-10-Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware


meta info for this blog

Source: html

Introduction: My name is Russell Sullivan, and I am the author of AlchemyDB: a highly flexible NoSQL/SQL/DocumentStore/GraphDB datastore built on top of redis. I have spent the last several years trying to find a way to sanely house multiple datastore genres under one roof while (almost paradoxically) pushing performance to its limits. I recently joined the NoSQL company Aerospike (formerly Citrusleaf) with the goal of incrementally grafting AlchemyDB’s flexible data-modeling capabilities onto Aerospike’s high-velocity, horizontally scalable key-value data fabric. We recently completed a peak-performance TPS optimization project: starting at 200K TPS, pushing to the recent community edition launch at 500K TPS, and finally arriving at our 2012 goal: 1M TPS on $5K hardware. Getting to one million over-the-wire client-server database requests per second on a single machine costing $5K is a balance between trimming overhead on many axes and using a shared nothing architecture to i


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Get the packet from the event loop (event-driven); parse the action; look up the data in memory (this is fast enough to happen in-thread); form the response packet; send the packet back via a non-blocking call (see the event-loop sketch after this summary). [sent-24, score-0.844]

2 You can determine which core on which machine a piece of data will be written to and served from; the client can map a TCP port to that core, so all lookups go straight to the data (see the core-to-port mapping sketch after this summary). [sent-27, score-0.49]

3 Event-loop threads reading and writing TCP packets should each be pinned to their own core, and no other threads should be allowed on these cores (see the thread-pinning sketch after this summary). [sent-46, score-0.652]

4 These threads are so critical to performance that any context switching on their designated cores will significantly degrade peak performance. [sent-47, score-0.595]

5 There are different methodologies depending on the number of cores you have. For quad-core CPUs: round-robin spread the IRQ affinity of the NIC’s queues across the network-facing event-loop threads [sent-51, score-0.373]

6 (e.g. with 8 queues, map 2 queues to each core). On hexacore (and greater) CPUs: reserve 1+ cores to do nothing but IRQ processing (i.e. [sent-53, score-0.314]

7 send IRQs to these cores and don’t let any other thread run on them), and use ALL other cores for network-facing event-loop threads (similarly running w/o competition on their own designated core); see the IRQ-affinity sketch after this summary. [sent-55, score-0.753]

8 The core receiving the IRQ will then signal the recipient core and the packet has a near 100% chance of being in L3 cache, so the transport of the packet from core to core is near optimal. [sent-56, score-1.362]

9 Avoid inter-CPU communication; it is dog-slow when compared to communication between cores on the same CPU die. [sent-60, score-0.289]

10 The proof is always in the pudding: any 10-step recipe is best illustrated via an example. The client knows (via multiple hashings) that dataX is presently on core8 of ipY, which has a predefined mapping to ipY:portZ. [sent-65, score-0.266]

11 NIC2 sends all of its IRQs to CPU2, where the packet gets to core8 w/ minimal hardware/OS overhead. [sent-67, score-0.272]

12 The packet creates an event, which triggers a dedicated thread that runs w/o competition on core8. [sent-68, score-0.287]

13 The packet is parsed; the operation is to look up dataX, which will be in its local NUMA memory pool. [sent-69, score-0.281]

14 The thread then replies with a non-blocking packet that goes back thru only cores on the local CPU2, which sends ALL of its IRQs to NIC2. [sent-71, score-0.642]

15 IRQ affinity ensures software interrupts don’t bottleneck on a single core and that they travel to and from their designated NIC. [sent-76, score-0.664]

16 TCP packets are processed as events by a single dedicated thread running on its own core. [sent-79, score-0.242]

17 At Aerospike, I knew I had it right when I watched the output of the “top” command (viewing all cores) and saw near-zero idle CPU and a very uniform balance across cores. [sent-82, score-0.252]

18 Which is to say: software interrupts from TCP packets were using 22% of the core, context switches passing TCP packets back and forth with the operating system were taking 35%, and our software was taking 39% to do the database transaction. [sent-84, score-0.397]

19 When the perfect balance across cores was achieved, optimal performance was achieved from an architectural standpoint. [sent-85, score-0.384]

20 We can still streamline our software, but at least the flow of packets to and from Aerospike is near optimal. [sent-86, score-0.288]
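The request path in sentence 1 is easiest to see in code. Below is a minimal, single-threaded sketch in C of an event-driven loop built on Linux epoll: take a packet off the event loop, parse it and look up the data in memory (stubbed out here as a simple echo), form a response, and send it back with a non-blocking call. The handle_request() helper, the buffer sizes, and port 3000 are illustrative assumptions, not Aerospike’s actual code; in the real system one such loop runs per reserved core.

/* Sketch of sentence 1's per-request path: event loop -> parse -> in-memory
 * lookup -> response -> non-blocking send. Linux-only (epoll). */
#include <sys/types.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>
#include <stddef.h>

#define MAX_EVENTS 64

/* Hypothetical stand-in for "parse action + look up data + form response";
 * here it just echoes the request back. */
static size_t handle_request(const char *req, size_t len, char *resp, size_t cap) {
    size_t n = len < cap ? len : cap;
    memcpy(resp, req, n);
    return n;
}

static void run_event_loop(int listen_fd) {
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);   /* get packets from the event loop */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {                          /* new client connection */
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
            } else {                                        /* request packet is ready */
                char req[1024], resp[1024];
                ssize_t len = recv(fd, req, sizeof(req), 0);
                if (len <= 0) { close(fd); continue; }
                /* parse + lookup are fast enough to run in-thread */
                size_t rlen = handle_request(req, (size_t)len, resp, sizeof(resp));
                send(fd, resp, rlen, MSG_DONTWAIT);         /* non-blocking reply */
            }
        }
    }
}

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_addr.s_addr = htonl(INADDR_ANY),
                                .sin_port = htons(3000) };  /* illustrative port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);
    run_event_loop(listen_fd);
    return 0;
}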
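Sentence 2 (and the dataX walk-through in sentences 10-14) rests on the client being able to compute, from the key alone, which core owns the data and which TCP port reaches that core. Here is a minimal sketch of that idea, assuming an FNV-1a hash, 8 cores per node, and a "base port + core index" port layout; the hash, the constants, and port_for_key() are illustrative assumptions, not the actual Aerospike partitioning scheme. A client would then keep one connection per (node, port) pair and send each request on the connection owned by the data’s core.

/* Sketch of the client-side mapping: key -> owning core -> dedicated TCP port. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define CORES_PER_NODE 8
#define BASE_PORT      3000   /* assumed layout: core N is served on BASE_PORT + N */

/* FNV-1a hash, used here only as a stand-in for "multiple hashings". */
static uint64_t fnv1a(const void *buf, size_t len) {
    const unsigned char *p = buf;
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

/* Return the TCP port that owns the given key on its node. */
static uint16_t port_for_key(const char *key, size_t key_len) {
    uint64_t h = fnv1a(key, key_len);
    unsigned core = (unsigned)(h % CORES_PER_NODE);   /* core that owns the data */
    return (uint16_t)(BASE_PORT + core);              /* port pinned to that core */
}

int main(void) {
    printf("dataX -> port %u\n", port_for_key("dataX", 5));
    return 0;
}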
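Sentences 3 and 4 say each network-facing event-loop thread should own a core outright. Below is a small sketch of the pinning step using Linux’s pthread_setaffinity_np(); the helper name and the demonstration of pinning the calling thread to core 0 are assumptions. Keeping all other threads off those cores (e.g. with cpusets or the isolcpus boot parameter) is what avoids the context switches sentence 4 warns about.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Restrict a thread to exactly one core so nothing else the process owns
 * competes with it there. Returns 0 on success. Build with -pthread. */
static int pin_thread_to_core(pthread_t thread, int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(thread, sizeof(set), &set);
}

int main(void) {
    /* In the real layout you would create one event-loop thread per reserved
     * core and pin each one; pinning the calling thread is just a demo. */
    return pin_thread_to_core(pthread_self(), 0);
}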
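Sentences 5-7 and 15 are about steering the NIC queues’ interrupts. On Linux the usual way to do this is to write a CPU bitmask into /proc/irq/<irq>/smp_affinity (the same thing the classic "echo mask > smp_affinity" recipe does). The sketch below applies the quad-core, round-robin variant: 8 NIC queue IRQs, two per event-loop core. The IRQ numbers 40-47 are placeholders for whatever /proc/interrupts reports for the NIC, and the program must run as root. For the hexacore variant, the same helper would instead point every NIC IRQ at the reserved IRQ-only cores and leave the event-loop cores untouched.

#include <stdio.h>

/* Route one IRQ to one core by writing a single-bit CPU mask. Returns 0 on success. */
static int set_irq_affinity(int irq, int core) {
    char path[64];
    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fprintf(f, "%x\n", 1u << core);   /* hex bitmask: bit N selects core N */
    return fclose(f);
}

int main(void) {
    /* Placeholder IRQ numbers for 8 NIC queues; map queues 0-1 to core 0,
     * 2-3 to core 1, and so on (the quad-core recipe in sentence 5). */
    int nic_irqs[8] = { 40, 41, 42, 43, 44, 45, 46, 47 };
    for (int q = 0; q < 8; q++)
        set_irq_affinity(nic_irqs[q], q / 2);
    return 0;
}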


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('nic', 0.252), ('ipy', 0.239), ('designated', 0.226), ('irq', 0.226), ('cores', 0.226), ('packet', 0.212), ('ingredients', 0.187), ('tps', 0.182), ('core', 0.174), ('packets', 0.167), ('affinity', 0.147), ('near', 0.121), ('citrusleaf', 0.119), ('datax', 0.119), ('irqs', 0.119), ('interrupts', 0.117), ('alchemydb', 0.108), ('port', 0.095), ('tcp', 0.095), ('attain', 0.093), ('loop', 0.09), ('aerospike', 0.089), ('achieved', 0.089), ('nothing', 0.088), ('context', 0.08), ('client', 0.079), ('thread', 0.075), ('recipe', 0.075), ('cpus', 0.072), ('peak', 0.07), ('dollar', 0.07), ('local', 0.069), ('balance', 0.069), ('path', 0.068), ('dram', 0.066), ('threads', 0.063), ('communication', 0.063), ('lookups', 0.063), ('cpu', 0.062), ('paths', 0.062), ('event', 0.06), ('sends', 0.06), ('queue', 0.058), ('via', 0.058), ('goal', 0.056), ('switches', 0.055), ('interface', 0.055), ('presently', 0.054), ('grafting', 0.054), ('lockless', 0.054)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 1319 high scalability-2012-09-10-Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware

Introduction: My name is Russell Sullivan, and I am the author of AlchemyDB: a highly flexible NoSQL/SQL/DocumentStore/GraphDB datastore built on top of redis. I have spent the last several years trying to find a way to sanely house multiple datastore genres under one roof while (almost paradoxically) pushing performance to its limits. I recently joined the NoSQL company Aerospike (formerly Citrusleaf) with the goal of incrementally grafting AlchemyDB’s flexible data-modeling capabilities onto Aerospike’s high-velocity, horizontally scalable key-value data fabric. We recently completed a peak-performance TPS optimization project: starting at 200K TPS, pushing to the recent community edition launch at 500K TPS, and finally arriving at our 2012 goal: 1M TPS on $5K hardware. Getting to one million over-the-wire client-server database requests per second on a single machine costing $5K is a balance between trimming overhead on many axes and using a shared nothing architecture to i

2 0.28698671 1643 high scalability-2014-05-06-The Quest for Database Scale: the 1 M TPS challenge - Three Design Points and Five common Bottlenecks to avoid

Introduction: This is a guest post by Rajkumar Iyer, a Member of Technical Staff at Aerospike. About a year ago, Aerospike embarked upon a quest to increase in-memory database performance - 1 Million TPS on a single inexpensive commodity server. NoSQL has the reputation of speed, and we saw great benefit from improving latency and throughput of cacheless architectures. At that time, we took a version of Aerospike delivering about 200K TPS, improved a few things - performance went to 500K TPS - and published the Aerospike 2.0 Community Edition. We then used kernel tuning techniques and published the recipe for how we achieved 1M TPS on $5K of hardware. This year we continued the quest. Our goal was to achieve 1 million database transactions per second per server, more than doubling previous performance. This compares to Cassandra’s boast of 1M TPS on over 300 servers in Google Compute Engine - at a cost of $2 million per year. We achieved this without kernel tuning. This article d

3 0.25590947 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution

Introduction: Now that we have the C10K concurrent connection problem licked, how do we level up and support 10 million concurrent connections? Impossible you say. Nope, systems right now are delivering 10 million concurrent connections using techniques that are as radical as they may be unfamiliar. To learn how it’s done we turn to Robert Graham , CEO of Errata Security, and his absolutely fantastic talk at Shmoocon 2013 called C10M Defending The Internet At Scale . Robert has a brilliant way of framing the problem that I’ve never heard of before. He starts with a little bit of history, relating how Unix wasn’t originally designed to be a general server OS, it was designed to be a control system for a telephone network. It was the telephone network that actually transported the data so there was a clean separation between the control plane and the data plane. The problem is we now use Unix servers as part of the data plane , which we shouldn’t do at all. If we were des

4 0.24552773 1536 high scalability-2013-10-23-Strategy: Use Linux Taskset to Pin Processes or Let the OS Schedule It?

Introduction: This question comes from Ulysses on an interesting thread from the Mechanical Sympathy news group, especially given how multiple processors are now the norm: Ulysses: On an 8xCPU Linux instance, is it at all advantageous to use the Linux taskset command to pin an 8xJVM process set (co-ordinated as a www.infinispan.org distributed cache/data grid) to a specific CPU affinity set (i.e. pin JVM0 process to CPU 0, JVM1 process to CPU 1, ..., JVM7 process to CPU 7) vs. just letting the Linux OS use its default mechanism for provisioning the 8xJVM process set to the available CPUs? In effort to seek an optimal point (in the full event space), what are the conceptual trade-offs in considering "searching" each permutation of provisioning an 8xJVM process set to an 8xCPU set via taskset? Given taskset is the key to the question, it would help to have a definition: Used to set or retrieve the CPU affinity of a running process given its PID or to launch a new COMMAND with

5 0.16725048 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework

Introduction: Here's a really good article in the Communications of the ACM on reducing network packet processing overhead by redesigning the network stack:  Revisiting Network I/O APIs: The Netmap Framework  by  Luigi Rizzo . As commodity networking performance increases operating systems need to keep up or all those CPUs will go to waste. How do they make this happen?   Abstract: Today 10-gigabit interfaces are used more and more in datacenters and servers. On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks. The netmap framework is a promising step in this direction. Thanks to a careful design and the engineering of a new packet I/O API, netmap eliminates much unnecessary overhead and moves

6 0.1638201 1246 high scalability-2012-05-16-Big List of 20 Common Bottlenecks

7 0.15263426 636 high scalability-2009-06-23-Learn How to Exploit Multiple Cores for Better Performance and Scalability

8 0.12082934 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

9 0.12054147 1276 high scalability-2012-07-04-Top Features of a Scalable Database

10 0.11389808 1199 high scalability-2012-02-27-Zen and the Art of Scaling - A Koan and Epigram Approach

11 0.11297187 1206 high scalability-2012-03-09-Stuff The Internet Says On Scalability For March 9, 2012

12 0.10958546 1237 high scalability-2012-05-02-12 Ways to Increase Throughput by 32X and Reduce Latency by 20X

13 0.10931762 645 high scalability-2009-06-30-Hot New Trend: Linking Clouds Through Cheap IP VPNs Instead of Private Lines

14 0.10694204 1652 high scalability-2014-05-21-9 Principles of High Performance Programs

15 0.10437069 1218 high scalability-2012-03-29-Strategy: Exploit Processor Affinity for High and Predictable Performance

16 0.10375308 1362 high scalability-2012-11-26-BigData using Erlang, C and Lisp to Fight the Tsunami of Mobile Data

17 0.10263211 459 high scalability-2008-12-03-Java World Interview on Scalability and Other Java Scalability Secrets

18 0.10215349 464 high scalability-2008-12-13-Strategy: Facebook Tweaks to Handle 6 Times as Many Memcached Requests

19 0.10192014 1001 high scalability-2011-03-09-Google and Netflix Strategy: Use Partial Responses to Reduce Request Sizes

20 0.10169069 1425 high scalability-2013-03-18-Beyond Threads and Callbacks - Application Architecture Pros and Cons


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.182), (1, 0.095), (2, -0.025), (3, 0.002), (4, -0.023), (5, 0.017), (6, 0.088), (7, 0.087), (8, -0.143), (9, -0.017), (10, -0.004), (11, -0.02), (12, 0.07), (13, -0.002), (14, -0.054), (15, -0.026), (16, 0.039), (17, 0.029), (18, -0.054), (19, 0.005), (20, 0.012), (21, -0.026), (22, -0.056), (23, -0.01), (24, 0.082), (25, -0.013), (26, 0.011), (27, -0.059), (28, 0.067), (29, 0.052), (30, 0.044), (31, 0.01), (32, -0.002), (33, -0.053), (34, 0.036), (35, 0.004), (36, 0.018), (37, 0.069), (38, -0.045), (39, 0.063), (40, 0.001), (41, 0.018), (42, 0.017), (43, -0.037), (44, -0.0), (45, 0.053), (46, -0.062), (47, -0.037), (48, -0.026), (49, 0.027)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96532333 1319 high scalability-2012-09-10-Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware

Introduction: My name is Russell Sullivan, and I am the author of AlchemyDB: a highly flexible NoSQL/SQL/DocumentStore/GraphDB datastore built on top of redis. I have spent the last several years trying to find a way to sanely house multiple datastore genres under one roof while (almost paradoxically) pushing performance to its limits. I recently joined the NoSQL company Aerospike (formerly Citrusleaf) with the goal of incrementally grafting AlchemyDB’s flexible data-modeling capabilities onto Aerospike’s high-velocity, horizontally scalable key-value data fabric. We recently completed a peak-performance TPS optimization project: starting at 200K TPS, pushing to the recent community edition launch at 500K TPS, and finally arriving at our 2012 goal: 1M TPS on $5K hardware. Getting to one million over-the-wire client-server database requests per second on a single machine costing $5K is a balance between trimming overhead on many axes and using a shared nothing architecture to i

2 0.87934119 1536 high scalability-2013-10-23-Strategy: Use Linux Taskset to Pin Processes or Let the OS Schedule It?

Introduction: This question comes from Ulysses on an interesting thread from the Mechanical Sympathy news group, especially given how multiple processors are now the norm: Ulysses: On an 8xCPU Linux instance, is it at all advantageous to use the Linux taskset command to pin an 8xJVM process set (co-ordinated as a www.infinispan.org distributed cache/data grid) to a specific CPU affinity set (i.e. pin JVM0 process to CPU 0, JVM1 process to CPU 1, ..., JVM7 process to CPU 7) vs. just letting the Linux OS use its default mechanism for provisioning the 8xJVM process set to the available CPUs? In effort to seek an optimal point (in the full event space), what are the conceptual trade-offs in considering "searching" each permutation of provisioning an 8xJVM process set to an 8xCPU set via taskset? Given taskset is the key to the question, it would help to have a definition: Used to set or retrieve the CPU affinity of a running process given its PID or to launch a new COMMAND with

3 0.82999122 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution

Introduction: Now that we have the C10K concurrent connection problem licked, how do we level up and support 10 million concurrent connections? Impossible you say. Nope, systems right now are delivering 10 million concurrent connections using techniques that are as radical as they may be unfamiliar. To learn how it’s done we turn to Robert Graham , CEO of Errata Security, and his absolutely fantastic talk at Shmoocon 2013 called C10M Defending The Internet At Scale . Robert has a brilliant way of framing the problem that I’ve never heard of before. He starts with a little bit of history, relating how Unix wasn’t originally designed to be a general server OS, it was designed to be a control system for a telephone network. It was the telephone network that actually transported the data so there was a clean separation between the control plane and the data plane. The problem is we now use Unix servers as part of the data plane , which we shouldn’t do at all. If we were des

4 0.80747741 1478 high scalability-2013-06-19-Paper: MegaPipe: A New Programming Interface for Scalable Network I-O

Introduction: The paper  MegaPipe: A New Programming Interface for Scalable Network I/O  ( video , slides ) hits the common theme that if you want to go faster you need a better car design, not just a better driver. So that's why the authors started with a clean-slate and designed a network API from the ground up with support for concurrent I/O, a requirement for achieving high performance while scaling to large numbers of connections per thread, multiple cores, etc.  What they created is MegaPipe, "a new network programming API for message-oriented workloads to avoid the performance issues of BSD Socket API." The result: MegaPipe outperforms baseline Linux between  29% (for long connections)  and  582% (for short connections) . MegaPipe improves the performance of a modified version of  memcached between 15% and 320% . For a workload based on real-world HTTP traces, MegaPipe boosts the throughput of  nginx by 75% . What's this most excellent and interesting paper about? Message-oriented netwo

5 0.77993393 1218 high scalability-2012-03-29-Strategy: Exploit Processor Affinity for High and Predictable Performance

Introduction: Martin Thompson wrote a really interesting article on the beneficial performance impact of taking advantage of Processor Affinity: The interesting thing I've observed is that the unpinned test will follow a step function of unpredictable performance. Across many runs I've seen different patterns, but all similar in this step-function nature. For the pinned tests I get consistent throughput with no step pattern and always the greatest throughput. The idea is that by assigning a thread to a particular CPU, when the thread is rescheduled to run on the same CPU it can take advantage of the "accumulated state in the processor, including instructions and data in the cache." With multi-core chips now the norm, you may want to decide for yourself how to assign work to cores and not let the OS do it for you. The results are surprisingly strong.

6 0.7685414 505 high scalability-2009-02-01-More Chips Means Less Salsa

7 0.74257624 636 high scalability-2009-06-23-Learn How to Exploit Multiple Cores for Better Performance and Scalability

8 0.7415055 1644 high scalability-2014-05-07-Update on Disqus: It's Still About Realtime, But Go Demolishes Python

9 0.73442644 1237 high scalability-2012-05-02-12 Ways to Increase Throughput by 32X and Reduce Latency by 20X

10 0.70387787 1591 high scalability-2014-02-05-Little’s Law, Scalability and Fault Tolerance: The OS is your bottleneck. What you can do?

11 0.7012642 1572 high scalability-2014-01-03-Stuff The Internet Says On Scalability For January 3rd, 2014

12 0.69977474 1314 high scalability-2012-08-30-Dramatically Improving Performance by Debugging Brutally Complex Problems

13 0.69488126 914 high scalability-2010-10-04-Paper: An Analysis of Linux Scalability to Many Cores

14 0.69351381 1643 high scalability-2014-05-06-The Quest for Database Scale: the 1 M TPS challenge - Three Design Points and Five common Bottlenecks to avoid

15 0.68003327 1454 high scalability-2013-05-08-Typesafe Interview: Scala + Akka is an IaaS for Your Process Architecture

16 0.67610055 1652 high scalability-2014-05-21-9 Principles of High Performance Programs

17 0.67567539 1471 high scalability-2013-06-06-Paper: Memory Barriers: a Hardware View for Software Hackers

18 0.66902167 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework

19 0.66249478 1362 high scalability-2012-11-26-BigData using Erlang, C and Lisp to Fight the Tsunami of Mobile Data

20 0.65833336 1255 high scalability-2012-06-01-Stuff The Internet Says On Scalability For June 1, 2012


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.121), (2, 0.22), (10, 0.039), (30, 0.052), (40, 0.013), (59, 0.019), (61, 0.067), (63, 0.012), (77, 0.029), (79, 0.097), (82, 0.176), (85, 0.035), (94, 0.045)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.91758412 1319 high scalability-2012-09-10-Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware

Introduction: My name is Russell Sullivan, and I am the author of AlchemyDB: a highly flexible NoSQL/SQL/DocumentStore/GraphDB datastore built on top of redis. I have spent the last several years trying to find a way to sanely house multiple datastore genres under one roof while (almost paradoxically) pushing performance to its limits. I recently joined the NoSQL company Aerospike (formerly Citrusleaf) with the goal of incrementally grafting AlchemyDB’s flexible data-modeling capabilities onto Aerospike’s high-velocity, horizontally scalable key-value data fabric. We recently completed a peak-performance TPS optimization project: starting at 200K TPS, pushing to the recent community edition launch at 500K TPS, and finally arriving at our 2012 goal: 1M TPS on $5K hardware. Getting to one million over-the-wire client-server database requests per second on a single machine costing $5K is a balance between trimming overhead on many axes and using a shared nothing architecture to i

2 0.90271401 1045 high scalability-2011-05-20-Stuff The Internet Says On Scalability For May 20, 2011

Introduction: Submitted for your reading pleasure on this beautiful morning:    Group Decision Making in Honey Bee Swarms . In distributed computing systems nodes reach a quorum  when deciding what to do as a group. It turns out bees also use quorum logic when deciding on where to nest! Bees do it a bit differently of course:  A scout bee votes for a site by spending time at it, somehow the scouts act and interact so that their numbers rise faster at superior sites, and somehow the bees at each site monitor their numbers there so that they know whether they've reached the threshold number (quorum) and can proceed to initiating the swarm's move to this site. Ants use similar mechanisms to control foraging.   Distributed systems may share common mechanisms based on their nature as being a distributed system,  the components may not matter that much. Fire! Fire!  Brent Chapman shows how to put that IT fire out in  Incident Command for IT: What We Can Learn from the Fire Department .

3 0.89429212 326 high scalability-2008-05-25-Product: Condor - Compute Intensive Workload Management

Introduction: From their website: Condor is a specialized workload management system for compute-intensive jobs. Like other full-featured batch systems, Condor provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. Users submit their serial or parallel jobs to Condor, Condor places them into a queue, chooses when and where to run the jobs based upon a policy, carefully monitors their progress, and ultimately informs the user upon completion. While providing functionality similar to that of a more traditional batch queueing system, Condor's novel architecture allows it to succeed in areas where traditional scheduling systems fail. Condor can be used to manage a cluster of dedicated compute nodes (such as a "Beowulf" cluster). In addition, unique mechanisms enable Condor to effectively harness wasted CPU power from otherwise idle desktop workstations. For instance, Condor can be configured to only use desktop machines where the keyboard

4 0.89251572 1372 high scalability-2012-12-14-Stuff The Internet Says On Scalability For December 14, 2012

Introduction: In a hole in the Internet there lived HighScalability: $140 Billion : trivial cost of Google fiber everywhere;  5,200 GB : data for every person on Earth; 6 hours : time it takes for a 25-GPU cluster to crack all the passwords;  Quoteable Quotes: hnriot : Good architecture eliminates the need for prayer. @adrianco : we break AWS, they fix it. Stuff that's breaking now is mostly stuff other clouds haven't got to yet. Scalability Rules : Design for 20x capacity. • Implement for 3x capacity. • Deploy for ~1.5x capacity. Fast typing Aaron Delp with his  AWS re:Invent Werner Vogel Keynote Live Blog .  Some key points: Decompose into small loosely coupled, stateless building blocks; Automate your application and processes; Let Business levers control the system; Architect with cost in mind; Protecting your customer is the first priority; In production, deploy to at least two availability zones; Integrate security into your application from the ground up

5 0.88924521 1307 high scalability-2012-08-20-The Performance of Distributed Data-Structures Running on a "Cache-Coherent" In-Memory Data Grid

Introduction: This is a guest post by Ron Pressler, the founder and CEO of Parallel Universe , a Y Combinator company building advanced middleware for real-time applications. A little over a month ago, we open-sourced a new in-memory data grid called Galaxy . An in-memory data grid, or IMDG, is a clustered data storage and processing middleware that uses RAM as the authoritative and primary storage, and distributes data over a cluster for purposes of data and processing scalability and high-availability. A common feature of IMDGs is co-location of code and data, meaning that application code runs on all cluster nodes, each instance processing those data items residing in the local node's RAM. While quite a few commercial and open-source IMDGs are available (like Terracotta, Gigaspaces, Oracle Coherence, GemFire, Websphere eXtreme Scale, Infinispan and Hazelcast), Galaxy has adopted a completely different architecture from all other IMDGs, to service some usage scenarios ill-fitted to the othe

6 0.85894108 129 high scalability-2007-10-23-Hire Facebook, Ning, and Salesforce to Scale for You

7 0.84658897 1602 high scalability-2014-02-26-The WhatsApp Architecture Facebook Bought For $19 Billion

8 0.84588897 494 high scalability-2009-01-16-Reducing Your Website's Bandwidth Usage - how to

9 0.84319109 1634 high scalability-2014-04-18-Stuff The Internet Says On Scalability For April 18th, 2014

10 0.84315604 1171 high scalability-2012-01-09-The Etsy Saga: From Silos to Happy to Billions of Pageviews a Month

11 0.84315389 1637 high scalability-2014-04-25-Stuff The Internet Says On Scalability For April 25th, 2014

12 0.84262592 1596 high scalability-2014-02-14-Stuff The Internet Says On Scalability For February 14th, 2014

13 0.84236926 1600 high scalability-2014-02-21-Stuff The Internet Says On Scalability For February 21st, 2014

14 0.84185725 256 high scalability-2008-02-21-Tracking usage of public resources - throttling accesses per hour

15 0.84162295 674 high scalability-2009-08-07-The Canonical Cloud Architecture

16 0.84151596 1509 high scalability-2013-08-30-Stuff The Internet Says On Scalability For August 30, 2013

17 0.84138066 1586 high scalability-2014-01-28-How Next Big Sound Tracks Over a Trillion Song Plays, Likes, and More Using a Version Control System for Hadoop Data

18 0.84098649 1502 high scalability-2013-08-16-Stuff The Internet Says On Scalability For August 16, 2013

19 0.84064931 671 high scalability-2009-08-05-Stack Overflow Architecture

20 0.84052706 849 high scalability-2010-06-28-VoltDB Decapitates Six SQL Urban Myths and Delivers Internet Scale OLTP in the Process