high_scalability high_scalability-2012 high_scalability-2012-1177 knowledge-graph by maker-knowledge-mining

1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?


meta info for this blog

Source: html

Introduction: You program in a dynamic language, that runs on a JVM, that runs on an OS designed 40 years ago for a completely different purpose, that runs on virtualized hardware. Does this make sense? We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch, where the vision is to treat cloud virtual hardware as a compiler target and convert high-level language source code directly into kernels that run on it. As new technologies evolve, the friction created by our old tool chains and architecture models becomes ever more obvious. Take, for example, what a team at UCSD is releasing: a phase-change memory prototype - a solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). However, PCM has access latencies several times slower than DRAM. This technology has obvious mind-blowing implications, but an


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 You program in a dynamic language, that runs on a JVM, that runs on an OS designed 40 years ago for a completely different purpose, that runs on virtualized hardware. [sent-1, score-0.399]

2 We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch, where the vision is to treat cloud virtual hardware as a compiler target and convert high-level language source code directly into kernels that run on it. [sent-3, score-0.168]

3 Take, for example, what a team at UCSD is releasing: a phase-change memory prototype - a solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). [sent-5, score-0.878]

4 However, PCM has access latencies several times slower than DRAM. [sent-6, score-0.212]

5 Moneta bypasses a number of functions in the operating system (OS) that typically slow the flow of data to and from storage. [sent-9, score-0.348] (see the O_DIRECT sketch after this summary)

6 These functions were developed years ago to organize data on disk and manage input and output (I/O). [sent-10, score-0.194]

7 But with new technologies such as PCM, which are expected to approach dynamic random-access memory (DRAM) in speed, the delays stand in the way of the technologies' reaching their full potential. [sent-12, score-0.165]

8 The I/O scheduler in Linux performs various functions, such as assuring fair access to resources. [sent-15, score-0.235]

9 5 times faster than a RAID array of conventional disks, 2. [sent-19, score-0.452]

10 8 times faster than a RAID array of flash-based solid-state drives (SSDs), and 2. [sent-20, score-0.317]

11 The next step in the evolution is to reduce latency by removing the standard I/O calls completely and: Address non-volatile storage directly from my application, just like DRAM. [sent-22, score-0.356] (see the mmap sketch after this summary)

12 That's the broader vision—a future in which the memory system and the storage system are integrated into one. [sent-23, score-0.197]

13 A great deal of the complexity in database management systems lies in the buffer management and query optimization to minimize I/O, and much of that might be eliminated. [sent-24, score-0.165]

14 On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. [sent-29, score-0.346] (see the netmap sketch after this summary)

15 We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks. [sent-30, score-0.587]

16 In current mainstream operating systems (Windows, Linux, BSD and its derivatives), the architecture of the networking code and device drivers is heavily influenced by design decisions made almost 30 years ago. [sent-31, score-0.534]

17 There's a whole "get rid of the layers" meme here based on the idea that we are still using monolithic operating systems from a completely different age of assumptions. [sent-33, score-0.282]

18 Operating systems aren't multi-user anymore, they aren't even generalized containers for running mixed workloads, they are specialized components in an overall distributed architecture running on VMs. [sent-34, score-0.224]

19 We create something specialized in order to achieve the performance and scale that we can't get from standard tools. [sent-37, score-0.215]

20 Exokernel  - The idea behind exokernels is to force as few abstractions as possible on developers, enabling them to make as many decisions as possible about hardware abstractions. [sent-54, score-0.257]
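Sentence 5 above describes Moneta bypassing OS functions that slow the path between the application and storage. As a rough user-space illustration of that bypass idea (this is not Moneta's actual mechanism, which rewrites the driver stack itself), Linux's O_DIRECT flag skips the kernel page cache so a read goes straight to the device. A minimal sketch in C, assuming a hypothetical device path:

/*
 * Illustrative only: O_DIRECT bypasses the kernel page cache, one of the
 * OS layers that sits between an application and storage. The device path
 * below is a placeholder.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT requires buffers, sizes, and offsets aligned to the block size. */
    const size_t block = 4096;
    void *buf;
    if (posix_memalign(&buf, block, block) != 0) {
        perror("posix_memalign");
        return 1;
    }

    int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);   /* hypothetical device */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = pread(fd, buf, block, 0);   /* one block, no page cache involved */
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes directly from the device\n", n);

    close(fd);
    free(buf);
    return 0;
}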
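Sentences 11 and 12 above call for addressing non-volatile storage directly from the application, like DRAM. The closest conventional tool is mmap: map a file and update it with ordinary loads and stores. On a DAX-capable persistent-memory filesystem the mapping can reach the media directly; on a regular filesystem the page cache is still involved, and durable persistent-memory code also needs cache flushes and fences (handled by libraries such as PMDK), which this sketch omits. The file path is a placeholder.

/*
 * Sketch of "storage addressed like memory": no read()/write() calls,
 * just stores into a mapped region, followed by msync for durability.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 4096;
    int fd = open("/mnt/pmem/example", O_RDWR | O_CREAT, 0644);   /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, len) < 0) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* The storage is updated with a plain store, not an I/O system call. */
    strcpy(p, "hello, storage-as-memory");

    msync(p, len, MS_SYNC);   /* ask the kernel to make the update durable */
    munmap(p, len);
    close(fd);
    return 0;
}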
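Sentences 14-16 above are about moving packets without the legacy driver and network-stack layers. The netmap framework discussed in the related paper below does this by mapping NIC rings into user space. A minimal receive-loop sketch using the helper API from netmap's net/netmap_user.h; the exact helper signatures can vary between netmap versions, and "netmap:eth0" is a placeholder interface name.

/*
 * Netmap-style user-space receive loop (helper API: nm_open, nm_nextpkt,
 * NETMAP_FD). Packets are read directly from rings shared with the NIC.
 */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);   /* placeholder interface */
    if (d == NULL) {
        fprintf(stderr, "nm_open failed (is the netmap module loaded?)\n");
        return 1;
    }

    struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
    struct nm_pkthdr hdr;
    unsigned char *pkt;

    for (;;) {
        poll(&pfd, 1, -1);                      /* block until packets arrive */
        while ((pkt = nm_nextpkt(d, &hdr)) != NULL) {
            /* pkt points into the shared ring; no per-packet copy was made. */
            printf("got packet, %u bytes\n", (unsigned)hdr.len);
        }
    }

    /* not reached */
    nm_close(d);
    return 0;
}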


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('pcm', 0.25), ('athens', 0.167), ('moneta', 0.167), ('specialized', 0.138), ('times', 0.137), ('bypasses', 0.135), ('gary', 0.135), ('conventional', 0.135), ('device', 0.127), ('storage', 0.124), ('operating', 0.123), ('faster', 0.109), ('layers', 0.105), ('ago', 0.104), ('decisions', 0.1), ('drivers', 0.098), ('rewriting', 0.094), ('linux', 0.092), ('technologies', 0.092), ('functions', 0.09), ('scheduler', 0.087), ('cloud', 0.086), ('systems', 0.086), ('removing', 0.082), ('driver', 0.082), ('hardware', 0.082), ('inherent', 0.08), ('buffer', 0.079), ('need', 0.079), ('ssds', 0.078), ('os', 0.078), ('performs', 0.077), ('standard', 0.077), ('vm', 0.077), ('asvirtual', 0.076), ('specialisation', 0.076), ('ucsd', 0.076), ('luigi', 0.076), ('revising', 0.076), ('spelling', 0.076), ('force', 0.075), ('latencies', 0.075), ('mobile', 0.074), ('runs', 0.074), ('memory', 0.073), ('completely', 0.073), ('layer', 0.072), ('computer', 0.071), ('array', 0.071), ('assuring', 0.071)]
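The weights above, and the simValue scores in the lists that follow, come from tf-idf term weighting combined with a vector similarity measure. The exact weighting and normalization used by this pipeline is not stated, so the following is only a generic sketch of the textbook scheme (weight = tf * log(N/df), then cosine similarity between weight vectors) over a toy term-document count matrix:

/*
 * Toy tf-idf weighting and cosine similarity. Rows are documents, columns
 * are terms; doc 0 plays the role of "this blog" and is compared with all
 * documents, which is roughly what the simValue columns below report.
 */
#include <math.h>
#include <stdio.h>

#define NDOCS 3
#define NTERMS 4

int main(void)
{
    double tf[NDOCS][NTERMS] = {
        {3, 0, 1, 2},
        {2, 0, 0, 2},
        {0, 4, 1, 0},
    };
    double w[NDOCS][NTERMS];

    /* idf per term: log(total docs / docs containing the term) */
    for (int t = 0; t < NTERMS; t++) {
        int df = 0;
        for (int d = 0; d < NDOCS; d++)
            if (tf[d][t] > 0)
                df++;
        double idf = (df > 0) ? log((double)NDOCS / df) : 0.0;
        for (int d = 0; d < NDOCS; d++)
            w[d][t] = tf[d][t] * idf;
    }

    /* cosine similarity of doc 0 against every document */
    for (int d = 0; d < NDOCS; d++) {
        double dot = 0, n0 = 0, nd = 0;
        for (int t = 0; t < NTERMS; t++) {
            dot += w[0][t] * w[d][t];
            n0  += w[0][t] * w[0][t];
            nd  += w[d][t] * w[d][t];
        }
        double sim = (n0 > 0 && nd > 0) ? dot / (sqrt(n0) * sqrt(nd)) : 0.0;
        printf("sim(doc0, doc%d) = %.3f\n", d, sim);
    }
    return 0;   /* compile with -lm */
}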

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?

Introduction: You program in a dynamic language, that runs on a JVM, that runs on an OS designed 40 years ago for a completely different purpose, that runs on virtualized hardware. Does this make sense? We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch, where the vision is to treat cloud virtual hardware as a compiler target and convert high-level language source code directly into kernels that run on it. As new technologies evolve, the friction created by our old tool chains and architecture models becomes ever more obvious. Take, for example, what a team at UCSD is releasing: a phase-change memory prototype - a solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). However, PCM has access latencies several times slower than DRAM. This technology has obvious mind-blowing implications, but an

2 0.23132776 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework

Introduction: Here's a really good article in the Communications of the ACM on reducing network packet processing overhead by redesigning the network stack: Revisiting Network I/O APIs: The Netmap Framework by Luigi Rizzo. As commodity networking performance increases, operating systems need to keep up or all those CPUs will go to waste. How do they make this happen? Abstract: Today 10-gigabit interfaces are used more and more in datacenters and servers. On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks. The netmap framework is a promising step in this direction. Thanks to a careful design and the engineering of a new packet I/O API, netmap eliminates much unnecessary overhead and moves

3 0.20140994 923 high scalability-2010-10-21-Machine VM + Cloud API - Rewriting the Cloud from Scratch

Introduction: Write a little "Hello World" program these days and it runs inside a bewildering Russian Doll of nested environments, each layer adding its own special performance and complexity tax. First, a language executes in its own environment of data structure libraries, memory management, and so on. That, more often than not, will run inside a language VM like the JVM, CLR, or V8. The language VM will in turn run inside a process that runs inside an OS. An application will run in one or more threads inside a process. And the whole thing will run inside a machine sharing VM layer like Xen. And across all of that are frameworks for monitoring, elasticity, storage, and so on. That's a lot of overhead for such a little program. What if we could remove all these taxes and run directly on the new bare metal, which some consider to be a combination of Machine VM + Cloud API? That's exactly what a system called Mirage, described in the paper Turning down the LAMP: Software Specialisation for

4 0.19211929 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

Introduction: Update 8: The Cost of Latency by James Hamilton. James summarizes some latency info from Steve Souder, Greg Linden, and Marissa Mayer. Speed [is] an undervalued and under-discussed asset on the web. Update 7: How do you know when you need more memcache servers? Dathan Pattishall talks about using memcache not to scale, but to reduce latency and reduce I/O spikes, and how to use stats to know when more servers are needed. Update 6: Stock Traders Find Speed Pays, in Milliseconds. Goldman Sachs is making record profits off a 500 millisecond trading advantage. Yes, latency matters. As an interesting aside, Libet found 500 msecs is about the time it takes the brain to weave together an experience of consciousness from all our sensor inputs. Update 5: Shopzilla's Site Redo - You Get What You Measure. At the Velocity conference Phil Dixon, from Shopzilla, presented data showing a 5 second speed up resulted in a 25% increase in page views, a 10% increas

5 0.18857636 1355 high scalability-2012-11-05-Gone Fishin': Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud

Introduction: All in all this is still my favorite post and I still think it's an accurate vision of a future. Not everyone agrees, but I guess we'll see... "But it is not complicated. [There's] just a lot of it." --Richard Feynman, on how the immense variety of the world arises from simple rules. Contents: Have We Reached the End of Scaling?; Applications Become Black Boxes Using Markets to Scale and Control Costs; Let's Welcome our Neo-Feudal Overlords; The Economic Argument for the Ambient Cloud; What Will Kill the Cloud?; The Amazing Collective Compute Power of the Ambient Cloud; Using the Ambient Cloud as an Application Runtime; Applications as Virtual States; Conclusion. We have not yet begun to scale. The world is still fundamentally disconnected and for all our wisdom we are still in the earliest days of learning how to build truly large planet-scaling applications. Today 350 million users on Facebook is a lot of users and five million followers on Twitter is a lot of followers. This may seem like a lot now, but c

6 0.18811475 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud

7 0.17172371 1240 high scalability-2012-05-07-Startups are Creating a New System of the World for IT

8 0.16787694 538 high scalability-2009-03-16-Are Cloud Based Memory Architectures the Next Big Thing?

9 0.15225105 1594 high scalability-2014-02-12-Paper: Network Stack Specialization for Performance

10 0.14283553 1116 high scalability-2011-09-15-Paper: It's Time for Low Latency - Inventing the 1 Microsecond Datacenter

11 0.14223854 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution

12 0.14215158 1291 high scalability-2012-07-25-Vertical Scaling Ascendant - How are SSDs Changing Architectures?

13 0.141761 1207 high scalability-2012-03-12-Google: Taming the Long Latency Tail - When More Machines Equals Worse Results

14 0.13427356 786 high scalability-2010-03-02-Using the Ambient Cloud as an Application Runtime

15 0.13322124 761 high scalability-2010-01-17-Applications Become Black Boxes Using Markets to Scale and Control Costs

16 0.13049991 1369 high scalability-2012-12-10-Switch your databases to Flash storage. Now. Or you're doing it wrong.

17 0.12884489 853 high scalability-2010-07-08-Cloud AWS Infrastructure vs. Physical Infrastructure

18 0.12779079 920 high scalability-2010-10-15-Troubles with Sharding - What can we learn from the Foursquare Incident?

19 0.12688972 823 high scalability-2010-05-05-How will memristors change everything?

20 0.12355581 1118 high scalability-2011-09-19-Big Iron Returns with BigMemory


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.262), (1, 0.105), (2, 0.023), (3, 0.112), (4, -0.074), (5, -0.003), (6, 0.069), (7, 0.059), (8, -0.103), (9, 0.035), (10, -0.015), (11, -0.033), (12, 0.017), (13, 0.052), (14, -0.026), (15, 0.006), (16, 0.012), (17, 0.044), (18, -0.046), (19, -0.008), (20, 0.002), (21, 0.054), (22, -0.045), (23, 0.039), (24, 0.017), (25, 0.041), (26, -0.067), (27, -0.103), (28, -0.081), (29, 0.001), (30, -0.042), (31, 0.041), (32, 0.017), (33, 0.008), (34, 0.007), (35, 0.051), (36, 0.007), (37, 0.042), (38, -0.038), (39, 0.023), (40, -0.064), (41, -0.011), (42, -0.029), (43, 0.026), (44, -0.011), (45, -0.038), (46, -0.047), (47, -0.036), (48, 0.033), (49, 0.059)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97359169 1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?

Introduction: You program in a dynamic language, that runs on a JVM, that runs on an OS designed 40 years ago for a completely different purpose, that runs on virtualized hardware. Does this make sense? We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch, where the vision is to treat cloud virtual hardware as a compiler target and convert high-level language source code directly into kernels that run on it. As new technologies evolve, the friction created by our old tool chains and architecture models becomes ever more obvious. Take, for example, what a team at UCSD is releasing: a phase-change memory prototype - a solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). However, PCM has access latencies several times slower than DRAM. This technology has obvious mind-blowing implications, but an

2 0.83302772 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework

Introduction: Here's a really good article in the Communications of the ACM on reducing network packet processing overhead by redesigning the network stack: Revisiting Network I/O APIs: The Netmap Framework by Luigi Rizzo. As commodity networking performance increases, operating systems need to keep up or all those CPUs will go to waste. How do they make this happen? Abstract: Today 10-gigabit interfaces are used more and more in datacenters and servers. On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks. The netmap framework is a promising step in this direction. Thanks to a careful design and the engineering of a new packet I/O API, netmap eliminates much unnecessary overhead and moves

3 0.77987826 1039 high scalability-2011-05-12-Paper: Mind the Gap: Reconnecting Architecture and OS Research

Introduction: Mind the Gap: Reconnecting Architecture and OS Research is a paper presented at HotOS XIII, the place where researchers talk about making potential futures happen. For a great overview of the conference take a look at this article by Matt Welsh: Conference report: HotOS 2011 in Napa. In the VM/cloud age I question the need for an OS at all; programs can compile directly against "raw" hardware, but the paper does a good job of trying to figure out the new role operating systems can play in the future. We've been in a long OS holding pattern, so long that we've seen the rise of PaaS vendors skipping the OS level abstraction completely, but there's room for a middle ground between legacy time sharing systems of the past and service level APIs that are but one possible future. Introduction: For too long, operating systems researchers and developers have pretty much taken whatever computer architects have dished out. With occasional exceptions (e.g., virtualization support)

4 0.77044773 1594 high scalability-2014-02-12-Paper: Network Stack Specialization for Performance

Introduction: In the scalability is specialization department, here is an interesting paper presented at HotNets '13 on high performance networking: Network Stack Specialization for Performance. The idea is that generalizing a service so it fits in the kernel comes at a high performance cost. So move TCP into user space. The result is a web server with ~3.5x the throughput of Nginx "while experiencing low CPU utilization, linear scaling on multicore systems, and saturating current NIC hardware." Here's a good description of the paper published on Layer 9: Traditionally, servers and OSes have been built to be general purpose. However now we have a high degree of specialization. In fact, in a big web service, you might have thousands of machines dedicated to one function. Therefore, there's scope for specialization. This paper looks at a specific opportunity in that space. Network stacks today are good for high throughput with large transfers, but not small files (which are common in web browsi

5 0.76857555 1572 high scalability-2014-01-03-Stuff The Internet Says On Scalability For January 3rd, 2014

Introduction: Hey, it's HighScalability time, can you handle the truth? Should software architectures include parasites? They increase diversity and complexity in the food web. 10 Million: classic hockey stick growth pattern for GitHub repositories. Quotable Quotes: Seymour Cray: A supercomputer is a device for turning compute-bound problems into IO-bound problems. Robert Sapolsky: And why is self-organization so beautiful to my atheistic self? Because if complex, adaptive systems don’t require a blueprint, they don’t require a blueprint maker. If they don’t require lightning bolts, they don’t require Someone hurtling lightning bolts. @swardley: Asked for a history of PaaS? From memory, public launch - Zimki ('06), BungeeLabs ('06), Heroku ('07), GAE ('08), CloudFoundry ('11) ... @neil_conway: If you're designing scalable systems, you should understand backpressure and build mechanisms to support it. Scott Aaronson: ...the

6 0.75935185 1316 high scalability-2012-09-04-Changing Architectures: New Datacenter Networks Will Set Your Code and Data Free

7 0.75924009 823 high scalability-2010-05-05-How will memristors change everything?

8 0.75594908 1234 high scalability-2012-04-26-Akaros - an open source operating system for manycore architectures

9 0.75389981 1545 high scalability-2013-11-08-Stuff The Internet Says On Scalability For November 8th, 2013

10 0.75056684 1581 high scalability-2014-01-17-Stuff The Internet Says On Scalability For January 17th, 2014

11 0.74925601 923 high scalability-2010-10-21-Machine VM + Cloud API - Rewriting the Cloud from Scratch

12 0.74788862 1214 high scalability-2012-03-23-Stuff The Internet Says On Scalability For March 23, 2012

13 0.74359518 1419 high scalability-2013-03-07-It's a VM Wasteland - A Near Optimal Packing of VMs to Machines Reduces TCO by 22%

14 0.74166352 826 high scalability-2010-05-12-The Rise of the Virtual Cellular Machines

15 0.73725331 147 high scalability-2007-11-09-Paper: Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors

16 0.72751158 1451 high scalability-2013-05-03-Stuff The Internet Says On Scalability For May 3, 2013

17 0.72144824 953 high scalability-2010-12-03-GPU vs CPU Smackdown : The Rise of Throughput-Oriented Architectures

18 0.71700966 1600 high scalability-2014-02-21-Stuff The Internet Says On Scalability For February 21st, 2014

19 0.71041197 1479 high scalability-2013-06-21-Stuff The Internet Says On Scalability For June 21, 2013

20 0.70776278 1207 high scalability-2012-03-12-Google: Taming the Long Latency Tail - When More Machines Equals Worse Results


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.121), (2, 0.211), (4, 0.025), (5, 0.013), (10, 0.074), (15, 0.015), (44, 0.103), (61, 0.097), (77, 0.028), (79, 0.121), (85, 0.07), (94, 0.035)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.95094025 660 high scalability-2009-07-21-Paper: Parallelizing the Web Browser

Introduction: There have been reports that software engineering is dead . Maybe, like the future, software engineering is simply not evenly distributed? When you read this paper I think you'll agree there is some real engineering going on, it's just that most of the things we need to build do not require real engineering. Much like my old childhood tree fort could be patched together and was "good enough." This brings to mind the old joke: If a software tree falls in the woods would anyone hear it fall? Only if it tweeted on the way down... What this paper really showed me is we need not only to change programming practices and constructs, but we also need to design solutions that allow for deep parallelism to begin with. Grafting parallelism on later is difficult. Parallel execution requires knowing precisely how components are dependent on each other and that level of precision tends to go far beyond the human attention span. In particular this paper deals with how to parallelize the browser on

same-blog 2 0.94847435 1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?

Introduction: You program in a dynamic language, that runs on a JVM, that runs on an OS designed 40 years ago for a completely different purpose, that runs on virtualized hardware. Does this make sense? We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch, where the vision is to treat cloud virtual hardware as a compiler target and convert high-level language source code directly into kernels that run on it. As new technologies evolve, the friction created by our old tool chains and architecture models becomes ever more obvious. Take, for example, what a team at UCSD is releasing: a phase-change memory prototype - a solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). However, PCM has access latencies several times slower than DRAM. This technology has obvious mind-blowing implications, but an

3 0.93895727 817 high scalability-2010-04-29-Product: SciDB - A Science-Oriented DBMS at 100 Petabytes

Introduction: Scientists are doing it for themselves. Doing what? Databases. The idea is that most databases are designed to meet the needs of businesses, not science, so scientists are banding together at scidb.org to create their own Domain Specific Database, for science. The goal is to be able to handle datasets in the 100PB range and larger. SciDB, Inc. is building an open source database technology product designed specifically to satisfy the demands of data-intensive scientific problems. With the advice of the world's leading scientists across a variety of disciplines including astronomy, biology, physics, oceanography, atmospheric sciences, and climatology, our computer scientists are currently designing and prototyping this technology. The scientists that are participating in our open source project believe that the SciDB database — when completed — will dramatically impact their ability to conduct their experiments faster and more efficiently and further improve the qual

4 0.93620652 271 high scalability-2008-03-08-Product: DRBD - Distributed Replicated Block Device

Introduction: From their website: DRBD is a block device which is designed to build high availability clusters. This is done by mirroring a whole block device via (a dedicated) network. You could see it as a network raid-1. DRBD takes over the data, writes it to the local disk and sends it to the other host. On the other host, it takes it to the disk there. The other components needed are a cluster membership service, which is supposed to be heartbeat, and some kind of application that works on top of a block device. Examples: A filesystem & fsck. A journaling FS. A database with recovery capabilities. Each device (DRBD provides more than one of these devices) has a state, which can be 'primary' or 'secondary'. On the node with the primary device the application is supposed to run and to access the device (/dev/drbdX). Every write is sent to the local 'lower level block device' and to the node with the device in 'secondary' state. The secondary device simply writes the data to its lowe

5 0.93182451 857 high scalability-2010-07-13-DbShards Part Deux - The Internals

Introduction: This is a follow up article by Cory Isaacson  to the first article on DbShards,  Product: dbShards - Share Nothing. Shard Everything , describing some of the details about how DbShards works on the inside. The dbShards architecture is a true “shared nothing” implementation of Database Sharding. The high-level view of dbShards is shown here: The above diagram shows how dbShards works for achieving massive database scalability across multiple database servers, using native DBMS engines and our dbShards components. The important components are: dbS/Client : A design goal of dbShards is to make database sharding as seamless as possible to an application, so that application developers can write the same type of code they always have. A key component to making this possible is the dbShards Client. The dbShards Client is our intelligent driver that is an exact API emulation of a given vendor’s database driver. For example, with MySQL we have full support for JDBC, and the the

6 0.9317776 498 high scalability-2009-01-20-Product: Amazon's SimpleDB

7 0.92768592 366 high scalability-2008-08-17-Many updates against MySQL

8 0.92730683 1600 high scalability-2014-02-21-Stuff The Internet Says On Scalability For February 21st, 2014

9 0.92675006 716 high scalability-2009-10-06-Building a Unique Data Warehouse

10 0.92570788 1382 high scalability-2013-01-07-Analyzing billions of credit card transactions and serving low-latency insights in the cloud

11 0.92441511 671 high scalability-2009-08-05-Stack Overflow Architecture

12 0.92387378 1626 high scalability-2014-04-04-Stuff The Internet Says On Scalability For April 4th, 2014

13 0.92373484 1389 high scalability-2013-01-18-Stuff The Internet Says On Scalability For January 18, 2013

14 0.92371458 1007 high scalability-2011-03-18-Stuff The Internet Says On Scalability For March 18, 2011

15 0.92297548 1112 high scalability-2011-09-07-What Google App Engine Price Changes Say About the Future of Web Architecture

16 0.92220724 1637 high scalability-2014-04-25-Stuff The Internet Says On Scalability For April 25th, 2014

17 0.92165202 1148 high scalability-2011-11-29-DataSift Architecture: Realtime Datamining at 120,000 Tweets Per Second

18 0.92129302 1020 high scalability-2011-04-12-Caching and Processing 2TB Mozilla Crash Reports in memory with Hazelcast

19 0.92129296 847 high scalability-2010-06-23-Product: dbShards - Share Nothing. Shard Everything.

20 0.92106342 1109 high scalability-2011-09-02-Stuff The Internet Says On Scalability For September 2, 2011