high scalability-2012-07-09: Data Replication in NoSQL Databases
Source: html
Introduction: This is the third guest post (part 1, part 2) of a series by Greg Lindahl, CTO of blekko, the spam-free search engine. Previously, Greg was Founder and Distinguished Engineer at PathScale, where he was the architect of the InfiniPath low-latency InfiniBand HCA, used to build tightly-coupled supercomputing clusters. blekko's home-grown NoSQL database was designed from the start to support a web-scale search engine, with thousands of servers and petabytes of disk. Data replication is a very important part of keeping the database up and serving queries. Like many NoSQL database authors, we decided to keep R=3 copies of each piece of data in the database, and not to use RAID to improve reliability. The key goal we were shooting for was a database that degrades gracefully when there are many small failures over time, without needing human intervention. Why don't we like RAID for big NoSQL databases? Most big storage systems use RAID levels like 3, 4, 5, or 10 to improve reliability.
With R=3, my cluster is available as long as I can re-copy data on the failed disks before 3 failures happen.
A 5-disk, fully-used RAID-5 volume with one failed disk needs to read the 4 surviving disks completely and write one disk completely to return to full protection and performance. This can take quite a while, and it is getting worse over time as disk capacities increase faster than disk bandwidth.
In an R=3 cluster, a single disk contains chunks of data replicated to other disks elsewhere in the cluster. For rebuild purposes, it's nice to have at least 10 chunks on a disk, and it's also nice for the 2 replicas of these 10 chunks to be on as many servers as possible. A rebuild then involves copying 1/10 of a disk between 10 pairs of systems, and the entire rebuild can be done in 1/10 the time of reading an entire disk. The rebuild of all the data on a failed server with 10 disks would involve 100 pairs of servers.
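To make the rebuild-time contrast concrete, here is a back-of-the-envelope sketch in Python. The capacity and bandwidth figures are illustrative assumptions, not numbers from the post:

```python
# Illustrative numbers only: assume a 2 TB drive with ~120 MB/s sustained bandwidth.
disk_bytes = 2e12
bandwidth = 120e6          # bytes/second
chunks_per_disk = 10

# RAID-5 rebuild: every surviving disk is read end to end and the replacement
# disk is written end to end, so it is bounded by one full-disk pass at
# streaming speed (and is usually slower in practice).
raid5_hours = disk_bytes / bandwidth / 3600

# Chunked R=3 rebuild: 10 source/destination pairs each copy 1/10 of the
# disk in parallel, so the wall-clock time is roughly a tenth of a full pass.
chunked_hours = (disk_bytes / chunks_per_disk) / bandwidth / 3600

print(f"RAID-5 rebuild:  ~{raid5_hours:.1f} h")   # ~4.6 h
print(f"chunked rebuild: ~{chunked_hours:.1f} h") # ~0.5 h
```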
While we've chosen 10 chunks per disk at blekko, some other NoSQL databases have as many as a million chunks per disk. Having too few chunks per disk leads to slow rebuilds that concentrate a big performance impact on a small number of nodes, creating a performance hot-spot.
We call the R-level of a cluster the minimum replica count of any chunk in the cluster, but, just to confuse things, we also refer to the R-level goal of the cluster (usually 3) and to the R-level of an individual chunk of data. So if our R3 cluster (a goal of 3 copies of everything) has a single chunk that's R2 (2 copies of this chunk), we call the entire cluster R2 (the worst R-level of any chunk in the cluster).
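As a minimal sketch of that definition (the data structure and names here are hypothetical, not blekko's actual metadata layout):

```python
def cluster_r_level(replicas_by_chunk: dict[str, int]) -> int:
    """The cluster's R-level is the minimum replica count over all chunks."""
    return min(replicas_by_chunk.values())

# One under-replicated chunk drags the whole cluster's R-level down:
chunks = {"chunk-a": 3, "chunk-b": 3, "chunk-c": 2}
assert cluster_r_level(chunks) == 2   # an R3-goal cluster currently at R2
```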
If all chunks are at R1, for example, and we return to R3, a lot of data will need to be replicated for all the replica chunks to become current.
The information that something is about to fail can come from sources such as disk errors and ECC errors in syslog, SMART daemon complaints, and failed fans or high-temperature events reported by the BMC.
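A toy illustration of watching for those signals, in Python. A real repair daemon would consume smartd, mcelog, and BMC/IPMI events through proper interfaces; the patterns and log path below are made up for the sketch:

```python
import re

# Hypothetical early-warning patterns; tune to your kernel and smartd output.
WARNINGS = re.compile(
    r"I/O error|Medium Error|Uncorrectable|Prefailure|over.?temperature",
    re.IGNORECASE,
)

def suspect_events(log_path: str = "/var/log/syslog") -> list[str]:
    """Return log lines that look like precursors to a disk or server failure."""
    with open(log_path, errors="replace") as log:
        return [line.rstrip() for line in log if WARNINGS.search(line)]
```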
14 "draining" chunks off before the disk or server fails completely can reduce the time a cluster spends at R2, which makes it less likely that the cluster will go to R1 or R0 due to an unlucky rapid series of failures. [sent-66, score-1.123]
The repair daemon drains a disk or server by making a temporary 4th copy of the chunks involved, and then deleting the chunks on the drained disk or server. When all chunks are removed, the disk or server can be repaired. Failing disks happen often enough in our environment that we have fully automated the decision to drain a disk, or an entire server if the failing disk is the system disk.
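A minimal sketch of that drain procedure, using an in-memory chunk map with invented names (the real daemon works against the cluster's metadata store and actually copies the bytes):

```python
import random

def drain(disk_chunks: set[str], replicas: dict[str, set[str]],
          servers: list[str]) -> None:
    """Drain a failing disk: give each of its chunks a temporary 4th copy
    elsewhere, then drop the at-risk copy, so no chunk ever falls below R3."""
    for chunk in list(disk_chunks):
        # Place the extra copy on a server that doesn't already hold a replica.
        target = random.choice([s for s in servers if s not in replicas[chunk]])
        replicas[chunk].add(target)     # temporary 4th copy (data copy elided)
        disk_chunks.remove(chunk)       # delete the drained copy last
    # disk_chunks is now empty; the disk can be pulled and repaired.
```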
The same arguments about disk checksums apply to network checksums -- TCP and IP have weak checksums, but with modern systems, checksum offload means that you aren't protected against errors between the checksum offload engine and the CPU.
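One common way to close that gap is an application-level checksum carried inside the payload and verified by the CPU at both ends; the post doesn't say exactly what blekko uses, so this CRC32 framing is just an illustrative sketch:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Append a CRC32 computed by the CPU, so corruption introduced between
    the NIC's offload engine and memory is still detected end to end."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def unframe(message: bytes) -> bytes:
    payload, crc = message[:-4], int.from_bytes(message[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise IOError("payload corrupted in flight")
    return payload

assert unframe(frame(b"chunk data")) == b"chunk data"
```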
In an R=3 system, it's important that servers with failed disks -- disk failures often happen at boot or reboot -- come up with as many disks as possible, instead of waiting for admin intervention due to a single failed disk.
We avoid this issue (the performance hit from deleting a huge file all at once) by renaming files slated for deletion into a trash directory, where a dedicated trash daemon for each disk takes one file at a time and slowly truncates it down to nothing.
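A minimal sketch of that trash daemon's inner loop; the step size and pause are invented knobs, and a real daemon would also handle newly arriving files and crash recovery:

```python
import os
import time

def slow_delete(path: str, step: int = 64 * 2**20, pause: float = 1.0) -> None:
    """Shrink a trashed file in 64 MB steps with a pause between truncates,
    so freeing its blocks doesn't monopolize the disk all at once."""
    size = os.path.getsize(path)
    while size > 0:
        size = max(0, size - step)
        os.truncate(path, size)   # release blocks a slice at a time
        time.sleep(pause)
    os.remove(path)
```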