high_scalability high_scalability-2009 high_scalability-2009-505 knowledge-graph by maker-knowledge-mining

505 high scalability-2009-02-01-More Chips Means Less Salsa


meta info for this blog

Source: html

Introduction: Yes, I just got through watching the Super Bowl so chips and salsa are on my mind and in my stomach. In recreational eating more chips requires downing more salsa. With multicore chips it turns out as cores go up salsa goes down, salsa obviously being a metaphor for speed. Sandia National Laboratories found in their simulations: a significant increase in speed going from two to four multicores, but an insignificant increase from four to eight multicores. Exceeding eight multicores causes a decrease in speed. Sixteen multicores perform barely as well as two, and after that, a steep decline is registered as more cores are added. The problem is the lack of memory bandwidth as well as contention between processors over the memory bus available to each processor. The implication for those following a diagonal scaling strategy is to work like heck to make your system fit within eight multicores. After that you'll need to consider some sort of partitioning strategy. What's interesting is the research on where the cutoff point will be.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Yes, I just got through watching the Super Bowl so chips and salsa are on my mind and in my stomach. [sent-1, score-0.944]

2 In recreational eating more chips requires downing more salsa. [sent-2, score-0.409]

3 With multicore chips it turns out as cores go up salsa goes down, salsa obviously being a metaphor for speed. [sent-3, score-1.664]

4 Sandia National Laboratories found in their simulations: a significant increase in speed going from two to four multicores, but an insignificant increase from four to eight multicores. [sent-4, score-0.961]

5 Sixteen multicores perform barely as well as two, and after that, a steep decline is registered as more cores are added. [sent-6, score-1.128]

6 The problem is the lack of memory bandwidth as well as contention between processors over the memory bus available to each processor. [sent-7, score-0.501]

7 The implication for those following a diagonal scaling strategy is to work like heck to make your system fit within eight multicores. [sent-8, score-0.779]

8 After that you'll need to consider some sort of partitioning strategy. [sent-9, score-0.162]

9 What's interesting is the research on where the cutoff point will be. [sent-10, score-0.058]
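The shape of the Sandia result, speedup rising to about eight cores and then falling away, can be sketched with a toy model: per-core compute time shrinks as 1/p while a shared memory-bus contention term grows with the core count. The constants below are illustrative only, not fitted to Sandia's simulation data.

```python
# Toy model of the Sandia finding: ideal 1/p compute scaling plus a
# bus-contention penalty that grows linearly with the number of cores
# sharing the memory bus. The contention constant is invented.

def speedup(cores, contention=0.01):
    """Speedup over a single core under a linear bus-contention model."""
    serial_time = 1.0
    parallel_time = serial_time / cores + contention * (cores - 1)
    return serial_time / parallel_time

for p in (1, 2, 4, 8, 16, 32):
    print(p, round(speedup(p), 2))
```

With these (made-up) constants the curve climbs steeply to four cores, gains little from four to eight, and declines beyond that, the same qualitative story the simulations tell.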


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('salsa', 0.498), ('multicores', 0.451), ('eight', 0.284), ('chips', 0.257), ('diagonal', 0.15), ('laboratories', 0.15), ('insignificant', 0.15), ('steep', 0.15), ('sixteen', 0.141), ('superbowl', 0.141), ('decline', 0.13), ('simulations', 0.13), ('four', 0.123), ('national', 0.122), ('exceeding', 0.119), ('cores', 0.114), ('metaphor', 0.108), ('eating', 0.107), ('barely', 0.098), ('heck', 0.096), ('implication', 0.096), ('bus', 0.088), ('registered', 0.086), ('decrease', 0.086), ('watching', 0.085), ('increase', 0.084), ('obviously', 0.077), ('contention', 0.073), ('causes', 0.071), ('yes', 0.068), ('turns', 0.068), ('lack', 0.066), ('processors', 0.066), ('significant', 0.063), ('partitioning', 0.06), ('memory', 0.058), ('research', 0.058), ('fit', 0.055), ('mind', 0.055), ('following', 0.052), ('sort', 0.052), ('well', 0.051), ('two', 0.05), ('consider', 0.05), ('got', 0.049), ('perform', 0.048), ('strategy', 0.046), ('requires', 0.045), ('goes', 0.044), ('bandwidth', 0.041)]
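Weights like the ones above come from scoring each word by its in-document frequency times its rarity across the corpus. A minimal tf-idf sketch over an invented three-document toy corpus (the real pipeline behind this table is not shown in the dump):

```python
import math
from collections import Counter

# Toy corpus: three tiny "documents" as token lists (invented for
# illustration, not the actual blog corpus).
docs = [
    "chips salsa salsa cores".split(),
    "cores memory bandwidth".split(),
    "memory bus cores".split(),
]

def tfidf(doc, corpus):
    """Score each word in doc: term frequency * log inverse doc frequency."""
    tf = Counter(doc)
    n = len(corpus)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in corpus if word in d)
        scores[word] = (count / len(doc)) * math.log(n / df)
    return scores

scores = tfidf(docs[0], docs)
top = max(scores, key=scores.get)
print(top, round(scores[top], 3))
```

"salsa" wins for the first toy document for the same reason it tops the table above: it is frequent in the document and rare everywhere else, while a corpus-wide word like "cores" scores near zero.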

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 505 high scalability-2009-02-01-More Chips Means Less Salsa



2 0.10368659 939 high scalability-2010-11-09-The Tera-Scale Effect

Introduction: In the past year, Intel issued a series of powerful chips under the new Nehalem microarchitecture, with large numbers of cores and extensive memory capacity. This new class of chips is part of a bigger Intel initiative referred to as Tera-Scale Computing. Cisco has released their Unified Computing System (UCS) equipped with a unique extended memory and high speed network within the box, which is specifically geared to take advantage of this type of CPU architecture. This new class of hardware has the potential to revolutionize the IT landscape as we know it. In this post, I want to focus primarily on the potential implications on application architecture, more specifically on the application platform landscape. more...

3 0.09288767 534 high scalability-2009-03-12-Google TechTalk: Amdahl's Law in the Multicore Era

Introduction: Over the last several decades computer architects have been phenomenally successful turning the transistor bounty provided by Moore's Law into chips with ever increasing single-threaded performance. During many of these successful years, however, many researchers paid scant attention to multiprocessor work. Now as vendors turn to multicore chips, researchers are reacting with more papers on multi-threaded systems. While this is good, we are concerned that further work on single-thread performance will be squashed. To help understand future high-level trade-offs, we develop a corollary to Amdahl's Law for multicore chips [Hill & Marty, IEEE Computer 2008]. It models fixed chip resources for alternative designs that use symmetric cores, asymmetric cores, or dynamic techniques that allow cores to work together on sequential execution. Our results encourage multicore designers to view performance of the entire chip rather than focus on core efficiencies. Moreover, we observe that obtai
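The Hill & Marty corollary mentioned above has a simple closed form for the symmetric-core design. A sketch under the paper's usual assumption that single-core performance scales as the square root of the resources spent on the core (assumed here; check the paper for the exact modeling choices):

```python
import math

# Hill & Marty symmetric-multicore corollary to Amdahl's Law:
# n base-core-equivalents of chip resources are spent on n/r cores
# of r resources each, with perf(r) modeled as sqrt(r).

def symmetric_speedup(f, n, r):
    """f: parallel fraction, n: total base-core resources,
    r: resources per core (so n/r cores run the parallel phase)."""
    perf = math.sqrt(r)
    return 1.0 / ((1 - f) / perf + f * r / (perf * n))

# Fatter cores speed the serial phase but leave fewer cores for the
# parallel phase, so there is an interior sweet spot for r.
for r in (1, 4, 16):
    print(r, round(symmetric_speedup(0.9, 64, r), 2))
```

This reproduces the paper's headline point: the best design depends on the whole chip, not on per-core efficiency alone.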

4 0.069961995 636 high scalability-2009-06-23-Learn How to Exploit Multiple Cores for Better Performance and Scalability

Introduction: InfoQueue has this excellent talk by Brian Goetz on the new features being added to Java SE 7 that will allow programmers to fully exploit our massively multi-processor future. While the talk is about Java it's really more general than that and there's a lot to learn here for everyone. Brian starts with a short, coherent, and compelling explanation of why programmers can't expect to be saved by ever faster CPUs and why we must learn to exploit the strengths of multiple core computers to make our software go faster. Some techniques for exploiting multiple cores are given in an equally short, coherent, and compelling explanation of why divide and conquer is the secret to multi-core bliss, fork-join, how the Java approach differs from map-reduce, and lots of other juicy topics. The multi-core "problem" is only going to get worse. Tilera founder Anant Agarwal estimates by 2017 embedded processors could have 4,096 cores, server CPUs might have 512 cores and desktop chips could use

5 0.062580608 1204 high scalability-2012-03-06-Ask For Forgiveness Programming - Or How We'll Program 1000 Cores

Introduction: The argument for a massively multicore future is now familiar: while clock speeds have leveled off, device density is increasing, so the future is cheap chips with hundreds and thousands of cores. That’s the inexorable logic behind our multicore future. The unsolved question that lurks deep in the dark part of a programmer’s mind is: how on earth are we to program these things? For problems that aren’t embarrassingly parallel, we really have no idea. IBM Research’s David Ungar has an idea. And it’s radical in the extreme... Grace Hopper once advised “It's easier to ask for forgiveness than it is to get permission.” I wonder if she had any idea that her strategy for dealing with human bureaucracy would be the same strategy David Ungar thinks will help us tame the technological bureaucracy of 1000+ core systems? You may recognize David as the co-creator of the Self programming language, inspiration for the HotSpot technology in the JVM and the prototype model u

6 0.06153113 1019 high scalability-2011-04-08-Stuff The Internet Says On Scalability For April 8, 2011

7 0.059514299 142 high scalability-2007-11-05-Strategy: Diagonal Scaling - Don't Forget to Scale Out AND Up

8 0.059306763 959 high scalability-2010-12-17-Stuff the Internet Says on Scalability For December 17th, 2010

9 0.054747507 768 high scalability-2010-02-01-What Will Kill the Cloud?

10 0.051889919 1218 high scalability-2012-03-29-Strategy: Exploit Processor Affinity for High and Predictable Performance

11 0.050856017 1652 high scalability-2014-05-21-9 Principles of High Performance Programs

12 0.05022449 1117 high scalability-2011-09-16-Stuff The Internet Says On Scalability For September 16, 2011

13 0.049506377 1186 high scalability-2012-02-02-The Data-Scope Project - 6PB storage, 500GBytes-sec sequential IO, 20M IOPS, 130TFlops

14 0.048683606 465 high scalability-2008-12-14-Scaling MySQL on a 256-way T5440 server using Solaris ZFS and Java 1.7

15 0.047418877 778 high scalability-2010-02-15-The Amazing Collective Compute Power of the Ambient Cloud

16 0.046244152 1276 high scalability-2012-07-04-Top Features of a Scalable Database

17 0.045908835 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

18 0.045405336 496 high scalability-2009-01-17-Scaling in Games & Virtual Worlds

19 0.044440996 1509 high scalability-2013-08-30-Stuff The Internet Says On Scalability For August 30, 2013

20 0.04349909 421 high scalability-2008-10-17-A High Performance Memory Database for Web Application Caches


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.052), (1, 0.041), (2, 0.007), (3, 0.016), (4, -0.017), (5, 0.012), (6, 0.004), (7, 0.032), (8, -0.047), (9, 0.001), (10, -0.002), (11, -0.029), (12, 0.02), (13, 0.033), (14, -0.003), (15, -0.004), (16, 0.016), (17, 0.006), (18, -0.031), (19, 0.025), (20, -0.018), (21, -0.012), (22, -0.015), (23, -0.002), (24, 0.011), (25, -0.013), (26, 0.014), (27, -0.018), (28, 0.036), (29, 0.018), (30, 0.017), (31, 0.004), (32, 0.005), (33, -0.008), (34, 0.025), (35, -0.019), (36, 0.006), (37, -0.011), (38, -0.005), (39, -0.013), (40, -0.002), (41, 0.013), (42, 0.042), (43, -0.015), (44, 0.012), (45, 0.006), (46, -0.013), (47, -0.01), (48, 0.002), (49, 0.022)]
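The similarity scores in the lists below are presumably the cosine between topic-weight vectors like the one above (the usual choice for lsi models, though this dump does not say so explicitly). A minimal sketch:

```python
import math

# Cosine similarity between two dense topic-weight vectors: the dot
# product normalized by both vector lengths, giving a score in [-1, 1].

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# First few topic weights from the vector above, as a demo.
v = [0.052, 0.041, 0.007, 0.016, -0.017]
print(round(cosine(v, v), 6))
```

A vector compared with itself scores 1.0, which is why the same-blog row always tops each list (here 0.95355451 rather than exactly 1.0, likely due to model truncation).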

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95355451 505 high scalability-2009-02-01-More Chips Means Less Salsa


2 0.73036331 534 high scalability-2009-03-12-Google TechTalk: Amdahl's Law in the Multicore Era

Introduction: Over the last several decades computer architects have been phenomenally successful turning the transistor bounty provided by Moore's Law into chips with ever increasing single-threaded performance. During many of these successful years, however, many researchers paid scant attention to multiprocessor work. Now as vendors turn to multicore chips, researchers are reacting with more papers on multi-threaded systems. While this is good, we are concerned that further work on single-thread performance will be squashed. To help understand future high-level trade-offs, we develop a corollary to Amdahl's Law for multicore chips [Hill & Marty, IEEE Computer 2008]. It models fixed chip resources for alternative designs that use symmetric cores, asymmetric cores, or dynamic techniques that allow cores to work together on sequential execution. Our results encourage multicore designers to view performance of the entire chip rather than focus on core efficiencies. Moreover, we observe that obtai

3 0.71165419 1319 high scalability-2012-09-10-Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware

Introduction: My name is Russell Sullivan , I am the author of AlchemyDB: a highly flexible NoSQL/SQL/DocumentStore/GraphDB-datastore built on top of redis. I have spent the last several years trying to find a way to sanely house multiple datastore-genres under one roof while (almost paradoxically) pushing performance to its limits. I recently joined the NoSQL company Aerospike (formerly Citrusleaf) with the goal of incrementally grafting AlchemyDB’s flexible data-modeling capabilities onto Aerospike’s high-velocity horizontally-scalable key-value data-fabric. We recently completed a peak-performance TPS optimization project : starting at 200K TPS, pushing to the recent community edition launch at 500K TPS, and finally arriving at our 2012 goal: 1M TPS on $5K hardware . Getting to one million over-the-wire client-server database-requests per-second on a single machine costing $5K is a balance between trimming overhead on many axes and using a shared nothing architecture to i

4 0.70462435 1218 high scalability-2012-03-29-Strategy: Exploit Processor Affinity for High and Predictable Performance

Introduction: Martin Thompson wrote a really interesting article on the beneficial performance impact of taking advantage of Processor Affinity: The interesting thing I've observed is that the unpinned test will follow a step function of unpredictable performance. Across many runs I've seen different patterns but all similar in this step function nature. For the pinned tests I get consistent throughput with no step pattern and always the greatest throughput. The idea is by assigning a thread to a particular CPU that when a thread is rescheduled to run on the same CPU, it can take advantage of the "accumulated state in the processor, including instructions and data in the cache." With multi-core chips the norm now, you may want to decide for yourself how to assign work to cores and not let the OS do it for you. The results are surprisingly strong.

5 0.70053154 1536 high scalability-2013-10-23-Strategy: Use Linux Taskset to Pin Processes or Let the OS Schedule It?

Introduction: This question comes from Ulysses on an interesting thread from the Mechanical Sympathy news group, especially given how multiple processors are now the norm: Ulysses: On an 8xCPU Linux instance, is it at all advantageous to use the Linux taskset command to pin an 8xJVM process set (co-ordinated as a www.infinispan.org distributed cache/data grid) to a specific CPU affinity set (i.e. pin JVM0 process to CPU 0, JVM1 process to CPU1, ...., JVM7 process to CPU 7) vs. just letting the Linux OS use its default mechanism for provisioning the 8xJVM process set to the available CPUs? In effort to seek an optimal point (in the full event space), what are the conceptual trade-offs in considering "searching" each permutation of provisioning an 8xJVM process set to an 8xCPU set via taskset? Given taskset is the key to the question, it would help to have a definition: Used to set or retrieve the CPU affinity of a running process given its PID or to launch a new COMMAND with
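The taskset command wraps the Linux sched_setaffinity(2) system call, which Python also exposes directly; a minimal sketch of pinning the current process from inside the program rather than from the shell (guarded, since the call only exists on Linux):

```python
import os

# Pin the current process (pid 0 = self) to CPU 0, the in-process
# equivalent of `taskset -c 0 <command>`. On platforms without
# sched_setaffinity (e.g. macOS) this is skipped entirely.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})
    print(os.sched_getaffinity(0))
else:
    print("CPU affinity not supported on this platform")
```

This is the same mechanism either way; the trade-off the thread debates is whether pinning beats the scheduler's default placement, not how the pinning is done.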

6 0.68435055 1471 high scalability-2013-06-06-Paper: Memory Barriers: a Hardware View for Software Hackers

7 0.67964429 636 high scalability-2009-06-23-Learn How to Exploit Multiple Cores for Better Performance and Scalability

8 0.67647785 939 high scalability-2010-11-09-The Tera-Scale Effect

9 0.63328022 1237 high scalability-2012-05-02-12 Ways to Increase Throughput by 32X and Reduce Latency by 20X

10 0.62672204 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution

11 0.62340635 1572 high scalability-2014-01-03-Stuff The Internet Says On Scalability For January 3rd, 2014

12 0.61379629 1588 high scalability-2014-01-31-Stuff The Internet Says On Scalability For January 31st, 2014

13 0.61335921 826 high scalability-2010-05-12-The Rise of the Virtual Cellular Machines

14 0.60378146 1475 high scalability-2013-06-13-Busting 4 Modern Hardware Myths - Are Memory, HDDs, and SSDs Really Random Access?

15 0.60373437 1652 high scalability-2014-05-21-9 Principles of High Performance Programs

16 0.59962147 1246 high scalability-2012-05-16-Big List of 20 Common Bottlenecks

17 0.59509277 914 high scalability-2010-10-04-Paper: An Analysis of Linux Scalability to Many Cores

18 0.59235972 953 high scalability-2010-12-03-GPU vs CPU Smackdown : The Rise of Throughput-Oriented Architectures

19 0.58597517 1127 high scalability-2011-09-28-Pursue robust indefinite scalability with the Movable Feast Machine

20 0.57840151 1195 high scalability-2012-02-17-Stuff The Internet Says On Scalability For February 17, 2012


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.091), (2, 0.105), (10, 0.098), (43, 0.509), (61, 0.045), (85, 0.014)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.81720626 505 high scalability-2009-02-01-More Chips Means Less Salsa


2 0.67647785 893 high scalability-2010-09-03-Hot Scalability Links For Sep 3, 2010

Introduction: With summer almost gone, it's time to fall into some good links... Hibari - distributed, fault tolerant, highly available key-value store  written in Erlang. In this video Scott Lystig Fritchie gives a very good overview of the newest key-value store.  Tweets of Gold lenidot : with 12 staff, @ tumblr  serves 1.5billion pageviews/month and 25,000 signups/day. Now that's scalability! jmtan24 : Funny that whenever a high scalability article comes out, it always mention the shared nothing approach mfeathers : When life gives you lemons, you can have decades-long conquest to convert lemons to oranges, or you can make lemonade. OyvindIsene : Met an old man with mustache today, he had no opinion on  #noSQL . Note to myself: Don't grow a mustache, now or later.  vlad003 : Isn't it interesting how P2P distributes data while Cloud Computing centralizes it? And they're both said to be the future. You may be interested in a new DevOps Meetup organized by Dave

3 0.57910866 1624 high scalability-2014-04-01-The Mullet Cloud Selection Pattern

Introduction: In a recent thread on Hacker News  one of the commenters mentioned that they use Digital Ocean for personal stuff, but use AWS for business. This DO for personal and AWS for business split has become popular enough that we can now give it a name: the  Mullet Cloud Selection Pattern -  business on the front and party on the back. Providers like DO are cheap and the lightweight composable container model has an aesthetic appeal to developers. Even though it seems like much of the VM infrastructure has to be reinvented for containers, the industry often follows the lead of developer preference. The mullet is dead. Long live the mullet! Developers are ever restless, always eager to move onto something new.

4 0.56078452 726 high scalability-2009-10-22-Paper: The Case for RAMClouds: Scalable High-Performance Storage Entirely in DRAM

Introduction: Stanford Info Lab is taking pains to document a direction we've been moving in for a while now, using RAM not just as a cache, but as the primary storage medium. Many quality products have built on this model. Even if the vision isn't radical, the paper does produce a lot of data backing up the transition, which is in itself helpful. From the Abstract: Disk-oriented approaches to online storage are becoming increasingly problematic: they do not scale gracefully to meet the needs of large-scale Web applications, and improvements in disk capacity have far outstripped improvements in access latency and bandwidth. This paper argues for a new approach to datacenter storage called RAMCloud, where information is kept entirely in DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers. We believe that RAMClouds can provide durable and available storage with 100-1000x the throughput of disk-based systems and 100-1000x lower access lat

5 0.54548025 470 high scalability-2008-12-18-Risk Analysis on the Cloud (Using Excel and GigaSpaces)

Introduction: Every day brings news of either more failures of the financial systems or out-right fraud, with the $50 billion Bernard Madoff Ponzi scheme being the latest, breaking all records. This post provides a technical overview of a solution that was implemented for one of the largest banks in China. The solution illustrates how one can use Excel as a front end client and at the same time leverage the cloud computing model and mapreduce as well as other patterns to scale out risk calculations. I'm hoping that this type of approach will reduce the chances of this type of fraud happening in the future.

6 0.53424495 1336 high scalability-2012-10-09-Batoo JPA - The new JPA Implementation that runs over 15 times faster...

7 0.53226292 1182 high scalability-2012-01-27-Stuff The Internet Says On Scalability For January 27, 2012

8 0.51991117 37 high scalability-2007-07-28-Product: Web Log Storming

9 0.50160807 54 high scalability-2007-08-02-Multilanguage Website

10 0.47253382 1088 high scalability-2011-07-27-Making Hadoop 1000x Faster for Graph Problems

11 0.46479669 937 high scalability-2010-11-09-Paper: Hyder - Scaling Out without Partitioning

12 0.40311736 1603 high scalability-2014-02-28-Stuff The Internet Says On Scalability For February 28th, 2014

13 0.39497492 342 high scalability-2008-06-08-Search fast in million rows

14 0.37746266 720 high scalability-2009-10-12-High Performance at Massive Scale – Lessons learned at Facebook

15 0.3722012 1123 high scalability-2011-09-23-The Real News is Not that Facebook Serves Up 1 Trillion Pages a Month…

16 0.3678109 798 high scalability-2010-03-22-7 Secrets to Successfully Scaling with Scalr (on Amazon) by Sebastian Stadil

17 0.36430955 1475 high scalability-2013-06-13-Busting 4 Modern Hardware Myths - Are Memory, HDDs, and SSDs Really Random Access?

18 0.34604323 1131 high scalability-2011-10-24-StackExchange Architecture Updates - Running Smoothly, Amazon 4x More Expensive

19 0.3436774 339 high scalability-2008-06-04-LinkedIn Architecture

20 0.34292844 36 high scalability-2007-07-28-Product: Web Log Expert