high_scalability high_scalability-2013 high_scalability-2013-1413 knowledge-graph by maker-knowledge-mining

1413 high scalability-2013-02-27-42 Monster Problems that Attack as Loads Increase


meta info for this blog

Source: html

Introduction: For solutions take a look at: 7 Life Saving Scalability Defenses Against Load Monster Attacks . This is a look at all the bad things that can happen to your carefully crafted program as loads increase: all hell breaks loose. Sure, you can scale out or scale up, but you can also choose to program better. Make your system handle larger loads. This saves money because fewer boxes are needed, and it will make the entire application more reliable and have better response times. And it can be quite satisfying as a programmer. Large Number Of Objects We usually get into scaling problems when the number of objects gets larger. Clearly resource usage of all types is stressed as the number of objects grows. Continuous Failures Make An Infinite Event Stream During large network failure scenarios there is never time for the system to recover. We are in a continual state of stress. Lots of High Priority Work For example, rerouting is a high priority activity. If there is a large amount


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Clearly resource usage of all types is stressed as the number of objects grows. [sent-8, score-0.553]

2 Lots of High Priority Work For example, rerouting is a high priority activity. [sent-11, score-0.485]

3 If there is a large amount of rerouting work that can't be shed or squelched then resources will be continuously consumed to support the high priority work. [sent-12, score-0.723]

4 Out of Memory The baseline memory usage increases and the spike memory usage increases. [sent-37, score-0.637]

5 This can be because the baseline amount of work is high or certain scenarios cause a lot of high priority work to be done. [sent-43, score-0.875]

6 If you are storing an object in two different lists then increasing the number of objects increases memory usage by two times. [sent-47, score-0.635]
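The two-lists cost depends on whether the second list holds copies or references. A minimal Python sketch of the difference (the `Order` class and its fields are hypothetical, not from the post):

```python
import copy

class Order:
    """Hypothetical payload object standing in for the post's 'objects'."""
    def __init__(self, order_id, items):
        self.order_id = order_id
        self.items = items

orders = [Order(i, list(range(10))) for i in range(1000)]

# Duplicating the objects into a second index really does double the
# payload held in memory as the object count grows.
by_copy = [copy.deepcopy(o) for o in orders]

# Holding references (or bare indices) in the second list adds only
# pointer-sized overhead; the underlying objects are shared.
by_ref = list(orders)

assert by_ref[0] is orders[0]       # shared object, no extra payload
assert by_copy[0] is not orders[0]  # independent copy, roughly 2x memory
```

In languages that store structs by value (C arrays, for instance) the second list doubles memory by default, so the reference-versus-copy decision has to be made explicitly.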

7 Isn't the real problem that we use CPU availability and task priority as a simple cognitive shorthand for architecting a system rather than having to understand our system's low level work streams and using that information to make specific scheduling decisions? [sent-66, score-0.663]

8 Task Priorities are Shown to be Wrong Task priority schemes that may have worked under a light load can cause problems under heavier load. [sent-72, score-0.732]

9 In particular, when there is a poor flow control mechanism, a high priority task feeding work to a lower priority task causes drops and spike memory usage because the lower priority task will get very little chance to run. [sent-73, score-2.034]
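One way to picture that flow-control failure is a bounded queue between a fast high-priority producer and a starved low-priority consumer. The sketch below is a generic mitigation (not the post's own code): a bound makes the drops explicit and countable instead of letting memory spike without limit.

```python
import queue

# Bounded queue between a fast high-priority producer and a consumer
# that gets very little chance to run. Without a bound, unread items
# pile up and memory spikes; with one, excess work is shed explicitly.
q = queue.Queue(maxsize=100)
dropped = 0

for msg in range(1000):          # producer runs far ahead of the consumer
    try:
        q.put_nowait(msg)
    except queue.Full:
        dropped += 1             # an explicit, countable drop instead of OOM

assert q.qsize() == 100
assert dropped == 900
```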

10 Queue Sizes Not Big Enough A larger number of objects implies more simultaneous operations can be made, which means queue sizes will probably need to increase. [sent-74, score-0.681]

11 Under scale replies will drop, ARP requests will drop, the file system may show certain errors, messages may drop, replies may drop, etc. [sent-82, score-0.687]

12 A protocol for exchanging data may occur quickly with a small data set, which means it has a smaller chance of seeing a reboot or timeout, but with larger scales the windows increase, which means new problems may be seen for the first time. [sent-84, score-0.586]

13 Priority Inheritance Locks that are at large scope are held for longer, which makes for a better chance of seeing priority inheritance problems. [sent-90, score-0.676]
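The lock-scope point can be sketched in Python (names are illustrative; the post shows no code): the broad version holds the lock across the expensive work, while the narrow version holds it only for the cheap shared-state mutation.

```python
import threading

lock = threading.Lock()
shared = []

def expensive_transform(item):
    # Stand-in for real work that takes a while.
    return item * 2

def update_broad(item):
    # Broad scope: the expensive transform runs while holding the lock,
    # so the lock is held longer and priority-inheritance chains grow.
    with lock:
        result = expensive_transform(item)
        shared.append(result)

def update_narrow(item):
    # Narrow scope: do the expensive work first, then hold the lock
    # only for the shared-state mutation.
    result = expensive_transform(item)
    with lock:
        shared.append(result)

update_narrow(21)
update_broad(2)
assert shared == [42, 4]
```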

14 Slow Memory Leaks Become Fast Leaks A memory leak that went unnoticed in a smaller scale may become significant at larger scales. [sent-98, score-0.628]
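The scale effect on a leak is simple arithmetic: time-to-exhaustion is inversely proportional to the operation rate. A back-of-envelope sketch with assumed numbers (64 bytes leaked per operation, 1 GiB of headroom — both illustrative):

```python
# Back-of-envelope: time to exhaust memory from a per-operation leak.
leak_bytes_per_op = 64                # assumed leak size
headroom_bytes = 1 * 1024**3          # assumed 1 GiB of free memory

def hours_to_oom(ops_per_second):
    seconds = headroom_bytes / (leak_bytes_per_op * ops_per_second)
    return seconds / 3600

# At 100 ops/s the leak takes roughly two days to bite; at 50,000 ops/s
# the same leak exhausts memory in under six minutes.
assert 40 < hours_to_oom(100) < 50
assert hours_to_oom(50_000) < 0.1
```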

15 Missed Locks Become Noticed A lock that should be in place, but is not, can go unnoticed in a smaller scale system because a thread may never give up the processor right before the instruction that will cause the problem. [sent-99, score-0.665]

16 In larger scale systems there will be more preemption which means there is more of a chance of seeing simultaneous access to data by different threads. [sent-100, score-0.489]
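A classic instance is an unprotected read-modify-write counter. The sketch below (a generic example, not from the post) shows the missing lock made explicit; without it, preemption between the read and the write silently loses increments.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1     # read-modify-write: preemption here loses updates

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:       # the "missed lock", put in place
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000  # deterministic only because the lock is held
```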

17 If socket descriptors are taken out of the file descriptor pool, then a design with a large number of connections (ftp, com, booting, clients, etc) will cause problems. [sent-111, score-0.603]

18 As scale increases, drops increase because the amount of buffer space to receive messages is not enough to keep up with the load. [sent-116, score-0.485]

19 This is also related to priority because a task may not have enough priority to read data out of the socket. [sent-117, score-0.941]

20 On the sender side a high priority task may overwhelm a lower priority task with messages. [sent-118, score-0.874]
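On the receive side, one standard knob is the socket receive buffer. A sketch using the portable `SO_RCVBUF` option — the kernel may round or clamp the requested size (Linux, for instance, doubles it and caps it at `net.core.rmem_max`), so reading the value back is the only reliable check:

```python
import socket

# Size the receive buffer up front so bursts from a faster sender are
# absorbed instead of dropped while a low-priority reader waits to run.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # ask for 1 MiB

# The effective size may differ from the request; always read it back.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()

assert effective > 0
```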


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('priority', 0.331), ('cpu', 0.209), ('cause', 0.177), ('objects', 0.161), ('number', 0.158), ('usage', 0.156), ('may', 0.15), ('task', 0.129), ('retries', 0.127), ('descriptor', 0.122), ('larger', 0.121), ('chance', 0.115), ('starvation', 0.114), ('drop', 0.104), ('scale', 0.102), ('clients', 0.102), ('arps', 0.1), ('longerthe', 0.1), ('amount', 0.093), ('poller', 0.091), ('sizes', 0.089), ('memory', 0.088), ('timeouts', 0.086), ('smaller', 0.086), ('booting', 0.085), ('rerouting', 0.085), ('logger', 0.081), ('unnoticed', 0.081), ('drops', 0.081), ('queues', 0.08), ('cards', 0.079), ('seeing', 0.079), ('resource', 0.078), ('large', 0.077), ('spike', 0.077), ('tiered', 0.076), ('longer', 0.074), ('load', 0.074), ('simultaneous', 0.072), ('increases', 0.072), ('leaks', 0.072), ('buffer', 0.071), ('high', 0.069), ('system', 0.069), ('wires', 0.069), ('descriptors', 0.069), ('work', 0.068), ('ftp', 0.068), ('messages', 0.066), ('streams', 0.066)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999958 1413 high scalability-2013-02-27-42 Monster Problems that Attack as Loads Increase


2 0.38550764 1425 high scalability-2013-03-18-Beyond Threads and Callbacks - Application Architecture Pros and Cons

Introduction: There's not a lot of talk about application architectures at the process level. You have your threads, pools of threads, and you have your callback models. That's about it. Languages/frameworks making a virtue out of simple models, like Go and Erlang, do so at the price of control. It's difficult to make a low latency, well conditioned application when a powerful tool, like work scheduling, is taken out of the hands of the programmer. But that's not all there is, my friend. We'll dive into different ways an application can be composed across threads of control. Your favorite language may not give you access to all the capabilities we are going to talk about, but lately there has been a sort of revival in considering performance important, especially for controlling latency variance , so I think it's time to talk about these kinds of issues. When everything was done in a web server thread pool thread, none of these issues really mattered. But now that developers are creating

3 0.3679876 1429 high scalability-2013-03-25-AppBackplane - A Framework for Supporting Multiple Application Architectures

Introduction: Hidden in every computer is a hardware backplane for moving signals around. Hidden in every application are ways of moving messages around and giving code CPU time to process them. Unhiding those capabilities and making them first class facilities for the programmer to control is the idea behind AppBackplane. This goes directly against the trend of hiding everything from the programmer and doing it all automagically. Which is great, until it doesn't work. Then it sucks. And the approach of giving the programmer all the power also sucks, until it's tuned to work together and performance is incredible even under increasing loads. Then it's great. These are two different curves going in opposite directions. You need to decide for your application which curve you need to be on. AppBackplane is an example framework supporting the multiple application architectures we talked about in Beyond Threads And Callbacks . It provides a scheduling system that supports continuous and high loa

4 0.30998394 1418 high scalability-2013-03-06-Low Level Scalability Solutions - The Aggregation Collection

Introduction: What good are problems without solutions? In 42 Monster Problems That Attack As Loads Increase we talked about problems. In this first post (OK, there was an earlier post, but I'm doing some reorganizing), we'll cover what I call aggregation strategies. Keep in mind these are low level architecture type suggestions of how to structure the components of your code and how they interact. We're not talking about massive scale-out clusters here, but of what your applications might look like internally, way below the service level interface level. There's a lot more to the world than evented architectures. Aggregation simply means we aren't using stupid queues. Our queues will be smart. We are deeply aware of queues as containers of work that eventually dictate how the entire system performs. As work containers we know intimately what requests and data sit in our queues and we can use that intelligence to our great advantage. Prioritize Work The key idea to it all is an almost mi

5 0.30132183 1421 high scalability-2013-03-11-Low Level Scalability Solutions - The Conditioning Collection

Introduction: We talked about  42 Monster Problems That Attack As Loads Increase . And in The Aggregation Collection  we talked about the value of prioritizing work and making smart queues as a way of absorbing and not reflecting traffic spikes. Now we move on to our next batch of strategies where the theme is conditioning , which is the idea of shaping and controlling flows of work within your application... Use Resources Proportional To a Fixed Limit This is probably the most important rule for achieving scalability within an application. What it means: Find the resource that has a fixed limit that you know you can support. For example, a guarantee to handle a certain number of objects in memory. So if we always use resources proportional to the number of objects it is likely we can prevent resource exhaustion. Devise ways of tying what you need to do to the individual resources. Some examples: Keep a list of purchase orders with line items over $20 (or whatever). Do not keep
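The purchase-order example from that excerpt can be sketched directly: memory is tied to the bounded resource (orders), and the unbounded one (line items) is never retained. Names and the cap value here are illustrative, not from the article:

```python
# Tie memory to the bounded resource (purchase orders), never the
# unbounded one (line items). MAX_TRACKED_ORDERS is the fixed limit
# we know we can support.
MAX_TRACKED_ORDERS = 1000

flagged_orders = []   # bounded: at most MAX_TRACKED_ORDERS entries

def record(order_id, line_item_prices):
    # Keep only the order id for orders containing an item over $20;
    # the line items themselves are never stored.
    if (any(price > 20 for price in line_item_prices)
            and len(flagged_orders) < MAX_TRACKED_ORDERS):
        flagged_orders.append(order_id)

record("po-1", [5, 25])
record("po-2", [3, 4])
assert flagged_orders == ["po-1"]
```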

6 0.22051235 920 high scalability-2010-10-15-Troubles with Sharding - What can we learn from the Foursquare Incident?

7 0.20626104 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

8 0.1915437 464 high scalability-2008-12-13-Strategy: Facebook Tweaks to Handle 6 Time as Many Memcached Requests

9 0.1850863 1568 high scalability-2013-12-23-What Happens While Your Brain Sleeps is Surprisingly Like How Computers Stay Sane

10 0.18179747 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud

11 0.18159889 1355 high scalability-2012-11-05-Gone Fishin': Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud

12 0.17616795 960 high scalability-2010-12-20-Netflix: Use Less Chatty Protocols in the Cloud - Plus 26 Fixes

13 0.17258638 406 high scalability-2008-10-08-Strategy: Flickr - Do the Essential Work Up-front and Queue the Rest

14 0.17100047 1038 high scalability-2011-05-11-Troubleshooting response time problems – why you cannot trust your system metrics

15 0.17063135 538 high scalability-2009-03-16-Are Cloud Based Memory Architectures the Next Big Thing?

16 0.17028663 954 high scalability-2010-12-06-What the heck are you actually using NoSQL for?

17 0.16841698 1622 high scalability-2014-03-31-How WhatsApp Grew to Nearly 500 Million Users, 11,000 cores, and 70 Million Messages a Second

18 0.16826116 1266 high scalability-2012-06-18-Google on Latency Tolerant Systems: Making a Predictable Whole Out of Unpredictable Parts

19 0.16620542 1112 high scalability-2011-09-07-What Google App Engine Price Changes Say About the Future of Web Architecture

20 0.16460346 761 high scalability-2010-01-17-Applications Become Black Boxes Using Markets to Scale and Control Costs


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.308), (1, 0.193), (2, -0.02), (3, -0.057), (4, -0.049), (5, 0.003), (6, 0.163), (7, 0.113), (8, -0.179), (9, -0.102), (10, 0.0), (11, 0.063), (12, 0.03), (13, -0.011), (14, 0.026), (15, -0.036), (16, 0.053), (17, 0.001), (18, -0.021), (19, 0.07), (20, -0.015), (21, -0.045), (22, 0.039), (23, -0.027), (24, 0.077), (25, -0.002), (26, 0.094), (27, 0.069), (28, 0.077), (29, -0.035), (30, 0.07), (31, -0.026), (32, 0.131), (33, 0.01), (34, 0.029), (35, 0.039), (36, -0.038), (37, -0.005), (38, -0.022), (39, -0.0), (40, 0.036), (41, 0.007), (42, 0.03), (43, 0.038), (44, -0.025), (45, -0.056), (46, -0.008), (47, 0.019), (48, -0.004), (49, 0.016)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97110146 1413 high scalability-2013-02-27-42 Monster Problems that Attack as Loads Increase


2 0.93839169 1421 high scalability-2013-03-11-Low Level Scalability Solutions - The Conditioning Collection


3 0.9065128 1429 high scalability-2013-03-25-AppBackplane - A Framework for Supporting Multiple Application Architectures


4 0.90000069 1418 high scalability-2013-03-06-Low Level Scalability Solutions - The Aggregation Collection


5 0.89599848 1415 high scalability-2013-03-04-7 Life Saving Scalability Defenses Against Load Monster Attacks

Introduction: We talked about 42 Monster Problems That Attack As Loads Increase . Here are a few ways you can defend yourself, secrets revealed by scaling masters across the ages. Note that these are low level programming moves, not large architecture type strategies. Use Resources Proportional To a Fixed Limit This is probably the most important rule for achieving scalability within an application. What it means: Find the resource that has a fixed limit that you know you can support. For example, a guarantee to handle a certain number of objects in memory. So if we always use resources proportional to the number of objects it is likely we can prevent resource exhaustion. Devise ways of tying what you need to do to the individual resources. Some examples: Keep a list of purchase orders with line items over $20 (or whatever). Do not keep a list of the line items because the number of items can be much larger than the number of purchase orders. You have kept the resource usage

6 0.89321685 1425 high scalability-2013-03-18-Beyond Threads and Callbacks - Application Architecture Pros and Cons

7 0.82952583 406 high scalability-2008-10-08-Strategy: Flickr - Do the Essential Work Up-front and Queue the Rest

8 0.82478011 1591 high scalability-2014-02-05-Little’s Law, Scalability and Fault Tolerance: The OS is your bottleneck. What you can do?

9 0.81059366 1454 high scalability-2013-05-08-Typesafe Interview: Scala + Akka is an IaaS for Your Process Architecture

10 0.80175519 1622 high scalability-2014-03-31-How WhatsApp Grew to Nearly 500 Million Users, 11,000 cores, and 70 Million Messages a Second

11 0.79409581 960 high scalability-2010-12-20-Netflix: Use Less Chatty Protocols in the Cloud - Plus 26 Fixes

12 0.79221231 1373 high scalability-2012-12-17-11 Uses For the Humble Presents Queue, er, Message Queue

13 0.78188384 1237 high scalability-2012-05-02-12 Ways to Increase Throughput by 32X and Reduce Latency by 20X

14 0.77972376 772 high scalability-2010-02-05-High Availability Principle : Concurrency Control

15 0.77044928 1314 high scalability-2012-08-30-Dramatically Improving Performance by Debugging Brutally Complex Prolems

16 0.76716673 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

17 0.76603824 1266 high scalability-2012-06-18-Google on Latency Tolerant Systems: Making a Predictable Whole Out of Unpredictable Parts

18 0.76362789 1568 high scalability-2013-12-23-What Happens While Your Brain Sleeps is Surprisingly Like How Computers Stay Sane

19 0.75993168 1038 high scalability-2011-05-11-Troubleshooting response time problems – why you cannot trust your system metrics

20 0.75807327 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.083), (2, 0.315), (10, 0.09), (26, 0.093), (30, 0.017), (40, 0.019), (47, 0.039), (61, 0.091), (77, 0.018), (79, 0.098), (85, 0.019), (94, 0.045)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97151202 1568 high scalability-2013-12-23-What Happens While Your Brain Sleeps is Surprisingly Like How Computers Stay Sane

Introduction: There's a deep similarity between how long running systems like our brains and computers accumulate errors and repair themselves.  Reboot it. Isn't that the common treatment for most computer ailments? And you may have noticed that, now that your iPhone supports background processing, it reboots a lot more often? Your DVR, phone, computer, router, car, and an untold number of long running computer systems all suffer from a nasty problem: over time they accumulate flaws and die or go crazy. Now think about your brain. It's a long running program running on very complex and error prone hardware. How does your brain keep itself sane over time? The answer may be found in something we spend a third of our lives doing. Sleep. There's new research out on how our brains are cleansed during sleep that has some interesting parallels to how we keep long running hardware-software systems up and running properly. This is a fun topic. Let's explore it a little more. One of the most frustrating

same-blog 2 0.96981573 1413 high scalability-2013-02-27-42 Monster Problems that Attack as Loads Increase


3 0.9579587 1207 high scalability-2012-03-12-Google: Taming the Long Latency Tail - When More Machines Equals Worse Results

Introduction: Likewise the current belief that, in the case of artificial machines the very large and the very small are equally feasible and lasting is a manifest error. Thus, for example, a small obelisk or column or other solid figure can certainly be laid down or set up without danger of breaking, while the large ones will go to pieces under the slightest provocation, and that purely on account of their own weight. -- Galileo. Galileo observed how things broke if they were naively scaled up. Interestingly, Google noticed a similar pattern when building larger software systems using the same techniques used to build smaller systems.  Luiz André Barroso , Distinguished Engineer at Google, talks about this fundamental property of scaling systems in his fascinating talk,  Warehouse-Scale Computing: Entering the Teenage Decade . Google found the larger the scale the greater the impact of latency variability. When a request is implemented by work done in parallel, as is common with today's service

4 0.95410323 1418 high scalability-2013-03-06-Low Level Scalability Solutions - The Aggregation Collection


5 0.95382851 960 high scalability-2010-12-20-Netflix: Use Less Chatty Protocols in the Cloud - Plus 26 Fixes

Introduction: In  5 Lessons We’ve Learned Using AWS , Netflix's John Ciancutti says one of the big lessons they've learned is to create less chatty protocols : In the Netflix data centers, we have a high capacity, super fast, highly reliable network. This has afforded us the luxury of designing around chatty APIs to remote systems. AWS networking has more variable latency. We’ve had to be much more structured about “over the wire” interactions, even as we’ve transitioned to a more highly distributed architecture. There's not a lot of advice out there on how to create protocols. Combine that with a rush to the cloud and you have a perfect storm for chatty applications crushing application performance. Netflix is far from the first to be surprised by the less than stellar networks inside AWS.  A chatty protocol is one where a client makes a series of requests to a server and the client must wait on each reply before sending the next request. On a LAN this can work great. LANs are typically
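The cost of chattiness is easy to quantify: with one round trip per request, waiting time grows linearly in the item count, while batching divides it by the batch size. A toy model with an assumed 10 ms round-trip time (numbers are illustrative, not Netflix's):

```python
# Chatty: one round trip per item; latency = n * rtt.
# Batched: one round trip per batch; latency = ceil(n / batch) * rtt.
RTT_MS = 10  # assumed round-trip time

def chatty_latency(n_items):
    return n_items * RTT_MS

def batched_latency(n_items, batch_size=50):
    round_trips = -(-n_items // batch_size)   # ceiling division
    return round_trips * RTT_MS

assert chatty_latency(500) == 5000    # 5 seconds of pure waiting
assert batched_latency(500) == 100    # 0.1 seconds for the same work
```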

6 0.95332497 221 high scalability-2008-01-24-Mailinator Architecture

7 0.95158011 1475 high scalability-2013-06-13-Busting 4 Modern Hardware Myths - Are Memory, HDDs, and SSDs Really Random Access?

8 0.95127839 1429 high scalability-2013-03-25-AppBackplane - A Framework for Supporting Multiple Application Architectures

9 0.95095682 1421 high scalability-2013-03-11-Low Level Scalability Solutions - The Conditioning Collection

10 0.9507488 533 high scalability-2009-03-11-The Implications of Punctuated Scalabilium for Website Architecture

11 0.95056045 306 high scalability-2008-04-21-The Search for the Source of Data - How SimpleDB Differs from a RDBMS

12 0.95002031 1010 high scalability-2011-03-24-Strategy: Disk Backup for Speed, Tape Backup to Save Your Bacon, Just Ask Google

13 0.94986331 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

14 0.94952101 1246 high scalability-2012-05-16-Big List of 20 Common Bottlenecks

15 0.94932145 1266 high scalability-2012-06-18-Google on Latency Tolerant Systems: Making a Predictable Whole Out of Unpredictable Parts

16 0.94920826 1425 high scalability-2013-03-18-Beyond Threads and Callbacks - Application Architecture Pros and Cons

17 0.94920623 76 high scalability-2007-08-29-Skype Failed the Boot Scalability Test: Is P2P fundamentally flawed?

18 0.94825757 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution

19 0.94763523 943 high scalability-2010-11-16-Facebook's New Real-time Messaging System: HBase to Store 135+ Billion Messages a Month

20 0.94761831 1204 high scalability-2012-03-06-Ask For Forgiveness Programming - Or How We'll Program 1000 Cores