high_scalability high_scalability-2012 high_scalability-2012-1213 knowledge-graph by maker-knowledge-mining

1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework


meta info for this blog

Source: html

Introduction: Here's a really good article in the Communications of the ACM on reducing network packet processing overhead by redesigning the network stack: Revisiting Network I/O APIs: The Netmap Framework by Luigi Rizzo. As commodity networking performance increases, operating systems need to keep up or all those CPUs will go to waste. How do they make this happen? Abstract: Today, 10-gigabit interfaces are used more and more in datacenters and servers. On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks. The netmap framework is a promising step in this direction. Thanks to a careful design and the engineering of a new packet I/O API, netmap eliminates much unnecessary overhead and moves traffic up to 40 times faster than existing operating systems.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Here's a really good article in the Communications of the ACM on reducing network packet processing overhead by redesigning the network stack:  Revisiting Network I/O APIs: The Netmap Framework  by  Luigi Rizzo . [sent-1, score-0.708]

2 As commodity networking performance increases, operating systems need to keep up or all those CPUs will go to waste. [sent-2, score-0.109]

3 On these links, packets flow as fast as one every 67.2 nanoseconds, yet modern operating systems can take 10-20 times longer just to move one packet between the wire and the application. [sent-5, score-0.173] [sent-6, score-0.637]

5 We can do much better, not with more powerful hardware but by revising architectural decisions made long ago regarding the design of device drivers and network stacks. [sent-7, score-0.329]

6 The netmap framework is a promising step in this direction. [sent-8, score-0.675]

7 Thanks to a careful design and the engineering of a new packet I/O API, netmap eliminates much unnecessary overhead and moves traffic up to 40 times faster than existing operating systems. [sent-9, score-1.295]

8 Most importantly, netmap is largely compatible with existing applications, so it can be incrementally deployed. [sent-10, score-0.635]

9 The per-byte cost comes from data manipulation (copying, checksum computation, encryption) and is proportional to the amount of traffic processed. [sent-12, score-0.482]

10 The per-packet cost comes from the manipulation of descriptors (allocation and destruction, metadata management) and the execution of system calls, interrupts, and device-driver functions. [sent-13, score-0.4]

11 Per-packet cost depends on how the data stream is split into packets: the larger the packet, the smaller the per-packet component. [sent-14, score-0.096]
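
The per-byte/per-packet split described above can be sketched as a simple cost model. This is an illustrative sketch only: the two constants below are made-up placeholders, not measurements from the paper.

```python
# Toy model of packet-processing cost: a fixed per-packet term
# (descriptor management, system calls, interrupts) plus a
# per-byte term (copies, checksum computation, encryption).
# The constants are hypothetical, for illustration only.
COST_PER_PACKET_NS = 500.0   # assumed fixed overhead per packet
COST_PER_BYTE_NS = 0.5       # assumed cost per byte of payload

def cost_per_byte_of_stream(packet_size_bytes: float) -> float:
    """Total processing cost per byte when a stream is split into
    packets of the given size: the fixed per-packet term is
    amortized over more bytes as packets grow larger."""
    return COST_PER_PACKET_NS / packet_size_bytes + COST_PER_BYTE_NS

small = cost_per_byte_of_stream(64)     # minimum Ethernet frame
large = cost_per_byte_of_stream(1518)   # maximum Ethernet frame
assert small > large   # larger packets dilute the per-packet cost
```

With these (hypothetical) constants the per-packet term dominates at 64-byte packets and nearly vanishes at 1,518-byte packets, which is exactly the sensitivity the sentence above describes.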

12 The minimum packet size is 64 bytes, or 512 bits, surrounded on the wire by an additional 160 bits of inter-packet gap and preamble. [sent-16, score-0.805]

13 At 10 Gbit/s, this translates into one packet every 67.2 nanoseconds. [sent-17, score-0.459]
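
A quick sanity check of the 67.2-nanosecond figure, using only the sizes quoted above (512 bits of minimum frame plus 160 bits of gap and preamble on a 10 Gbit/s link):

```python
LINK_RATE_BPS = 10e9        # 10 Gbit/s
MIN_FRAME_BITS = 64 * 8     # 512-bit minimum Ethernet frame
OVERHEAD_BITS = 160         # inter-packet gap + preamble

bits_per_packet = MIN_FRAME_BITS + OVERHEAD_BITS         # 672 bits
packet_time_ns = bits_per_packet / LINK_RATE_BPS * 1e9   # 67.2 ns
peak_rate_mpps = LINK_RATE_BPS / bits_per_packet / 1e6   # ~14.88 Mpps
```

So the worst case is one minimum-sized packet every 67.2 ns, or roughly 14.88 million packets per second of peak packet rate.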

14 At the maximum Ethernet frame size (1,518 bytes plus framing), the transmission time becomes 1.23 microseconds, for a frame rate of about 812 Kpps. [sent-20, score-0.412] [sent-21, score-0.259]

16 This is about 20 times lower than the peak rate, but still quite challenging, and it is a regime that needs to be sustained if TCP is to saturate a 10-Gbit/s link. [sent-22, score-0.25]
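
The maximum-frame numbers check out the same way. One assumption here: 20 bytes of framing overhead per frame (7-byte preamble, 1-byte start delimiter, 12-byte inter-frame gap), the standard Ethernet figure.

```python
LINK_RATE_BPS = 10e9         # 10 Gbit/s
MAX_FRAME_BYTES = 1518       # maximum Ethernet frame
FRAMING_BYTES = 20           # assumed: preamble + SFD + inter-frame gap

bits_on_wire = (MAX_FRAME_BYTES + FRAMING_BYTES) * 8   # 12,304 bits
frame_time_us = bits_on_wire / LINK_RATE_BPS * 1e6     # ~1.23 us
frame_rate_kpps = LINK_RATE_BPS / bits_on_wire / 1e3   # ~812.7 Kpps

# Ratio against the 672-bit minimum-frame peak rate:
ratio = bits_on_wire / 672   # ~18.3x lower, i.e. "about 20 times"
```

The transmission time works out to 1.2304 microseconds and about 812.7 Kpps, matching the figures in the text, with the full-size frame rate roughly 18x below the minimum-frame peak.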


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('netmap', 0.501), ('packet', 0.382), ('nanoseconds', 0.173), ('packets', 0.173), ('manipulation', 0.152), ('frame', 0.147), ('ethernet', 0.133), ('bytes', 0.118), ('rate', 0.112), ('mpps', 0.112), ('luigi', 0.112), ('revising', 0.112), ('revisiting', 0.112), ('framework', 0.109), ('operating', 0.109), ('articlesvideo', 0.105), ('destruction', 0.105), ('bits', 0.104), ('framing', 0.096), ('saturate', 0.096), ('cost', 0.096), ('checksum', 0.093), ('overhead', 0.091), ('redesigning', 0.091), ('descriptors', 0.085), ('acm', 0.082), ('transmission', 0.082), ('interrupts', 0.081), ('microseconds', 0.081), ('sustained', 0.078), ('translates', 0.077), ('times', 0.076), ('proportional', 0.074), ('gap', 0.073), ('drivers', 0.073), ('network', 0.072), ('regarding', 0.072), ('copying', 0.071), ('encryption', 0.07), ('wire', 0.07), ('existing', 0.069), ('communications', 0.067), ('comes', 0.067), ('unnecessary', 0.067), ('size', 0.065), ('promising', 0.065), ('incrementally', 0.065), ('minimum', 0.063), ('allocation', 0.063), ('constraints', 0.062)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework


2 0.23169738 1456 high scalability-2013-05-13-The Secret to 10 Million Concurrent Connections -The Kernel is the Problem, Not the Solution

Introduction: Now that we have the C10K concurrent connection problem licked, how do we level up and support 10 million concurrent connections? Impossible you say. Nope, systems right now are delivering 10 million concurrent connections using techniques that are as radical as they may be unfamiliar. To learn how it’s done we turn to Robert Graham , CEO of Errata Security, and his absolutely fantastic talk at Shmoocon 2013 called C10M Defending The Internet At Scale . Robert has a brilliant way of framing the problem that I’ve never heard of before. He starts with a little bit of history, relating how Unix wasn’t originally designed to be a general server OS, it was designed to be a control system for a telephone network. It was the telephone network that actually transported the data so there was a clean separation between the control plane and the data plane. The problem is we now use Unix servers as part of the data plane , which we shouldn’t do at all. If we were des

3 0.23132776 1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?

Introduction: You program in a dynamic language, that runs on a JVM, that runs on an OS designed 40 years ago for a completely different purpose, that runs on virtualized hardware. Does this make sense? We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch , where the vision is to treat cloud virtual hardware as a compiler target, and to convert high-level language source code directly into kernels that run on it. As new technologies evolve, the friction created by our old tool chains and architecture models becomes ever more obvious. Take, for example, what a team at UCSD is releasing: a phase-change memory prototype - a solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). However, PCM has access latencies several times slower than DRAM. This technology has obvious mind blowing implications, but an

4 0.16725048 1319 high scalability-2012-09-10-Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware

Introduction: My name is Russell Sullivan , I am the author of AlchemyDB: a highly flexible NoSQL/SQL/DocumentStore/GraphDB-datastore built on top of redis. I have spent the last several years trying to find a way to sanely house multiple datastore-genres under one roof while (almost paradoxically) pushing performance to its limits. I recently joined the NoSQL company Aerospike (formerly Citrusleaf) with the goal of incrementally grafting AlchemyDB’s flexible data-modeling capabilities onto Aerospike’s high-velocity horizontally-scalable key-value data-fabric. We recently completed a peak-performance TPS optimization project : starting at 200K TPS, pushing to the recent community edition launch at 500K TPS, and finally arriving at our 2012 goal: 1M TPS on $5K hardware . Getting to one million over-the-wire client-server database-requests per-second on a single machine costing $5K is a balance between trimming overhead on many axes and using a shared nothing architecture to i

5 0.12757841 645 high scalability-2009-06-30-Hot New Trend: Linking Clouds Through Cheap IP VPNs Instead of Private Lines

Introduction: You might think major Internet companies have a latency, availability, and bandwidth advantage because they can afford expensive dedicated point-to-point private line networks between their data centers. And you would be right. It's a great advantage. Or at least it was a great advantage. Cost is the great equalizer and companies are now scrambling for ways to cut costs. Many of the most recognizable Internet companies are moving to IP VPNs (Virtual Private Networks) as a much cheaper alternative to private lines. This is a strategy you can effectively use too. This trend has historical precedent in the data center. In the same way leading edge companies moved early to virtualize their data centers, leading edge companies are now virtualizing their networks using IP VPNs to build inexpensive private networks over a shared public network. In kindergarten we learned sharing was polite; it turns out sharing can also save a lot of money in both the data center and on the network. The

6 0.11424159 1048 high scalability-2011-05-27-Stuff The Internet Says On Scalability For May 27, 2011

7 0.11329864 1051 high scalability-2011-06-01-Why is your network so slow? Your switch should tell you.

8 0.10898954 371 high scalability-2008-08-24-A Scalable, Commodity Data Center Network Architecture

9 0.1062152 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

10 0.10146786 1001 high scalability-2011-03-09-Google and Netflix Strategy: Use Partial Responses to Reduce Request Sizes

11 0.097545139 1460 high scalability-2013-05-17-Stuff The Internet Says On Scalability For May 17, 2013

12 0.094870538 1202 high scalability-2012-03-01-Grace Hopper to Programmers: Mind Your Nanoseconds!

13 0.091103166 1106 high scalability-2011-08-26-Stuff The Internet Says On Scalability For August 26, 2011

14 0.087896265 1362 high scalability-2012-11-26-BigData using Erlang, C and Lisp to Fight the Tsunami of Mobile Data

15 0.086684391 1560 high scalability-2013-12-09-In Memory: Grace Hopper to Programmers: Mind Your Nanoseconds!

16 0.083845913 603 high scalability-2009-05-19-Scaling Memcached: 500,000+ Operations-Second with a Single-Socket UltraSPARC T2

17 0.08361163 1027 high scalability-2011-04-20-Packet Pushers: How to Build a Low Cost Data Center

18 0.083040483 914 high scalability-2010-10-04-Paper: An Analysis of Linux Scalability to Many Cores

19 0.082674004 761 high scalability-2010-01-17-Applications Become Black Boxes Using Markets to Scale and Control Costs

20 0.082630895 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.121), (1, 0.07), (2, -0.003), (3, 0.035), (4, -0.057), (5, -0.012), (6, 0.048), (7, 0.074), (8, -0.062), (9, 0.007), (10, 0.003), (11, -0.028), (12, 0.015), (13, 0.022), (14, 0.004), (15, 0.03), (16, 0.017), (17, 0.029), (18, -0.04), (19, -0.024), (20, 0.021), (21, 0.043), (22, -0.013), (23, -0.002), (24, 0.039), (25, 0.03), (26, -0.054), (27, -0.039), (28, 0.004), (29, 0.023), (30, -0.004), (31, 0.024), (32, -0.008), (33, 0.039), (34, 0.001), (35, 0.064), (36, 0.033), (37, 0.05), (38, -0.028), (39, 0.037), (40, 0.006), (41, 0.035), (42, 0.008), (43, 0.011), (44, 0.017), (45, 0.021), (46, -0.042), (47, -0.001), (48, -0.037), (49, 0.015)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96054691 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework


2 0.77764928 645 high scalability-2009-06-30-Hot New Trend: Linking Clouds Through Cheap IP VPNs Instead of Private Lines

Introduction: You might think major Internet companies have a latency, availability, and bandwidth advantage because they can afford expensive dedicated point-to-point private line networks between their data centers. And you would be right. It's a great advantage. Or at least it was a great advantage. Cost is the great equalizer and companies are now scrambling for ways to cut costs. Many of the most recognizable Internet companies are moving to IP VPNs (Virtual Private Networks) as a much cheaper alternative to private lines. This is a strategy you can effectively use too. This trend has historical precedent in the data center. In the same way leading edge companies moved early to virtualize their data centers, leading edge companies are now virtualizing their networks using IP VPNs to build inexpensive private networks over a shared public network. In kindergarten we learned sharing was polite; it turns out sharing can also save a lot of money in both the data center and on the network. The

3 0.77464163 371 high scalability-2008-08-24-A Scalable, Commodity Data Center Network Architecture

Introduction: Looks interesting... Abstract: Today’s data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, resulting topologies may only support 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Nonuniform bandwidth among data center nodes complicates application design and limits overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodi

4 0.74373716 1177 high scalability-2012-01-19-Is it time to get rid of the Linux OS model in the cloud?

Introduction: You program in a dynamic language, that runs on a JVM, that runs on an OS designed 40 years ago for a completely different purpose, that runs on virtualized hardware. Does this make sense? We've talked about this idea before in Machine VM + Cloud API - Rewriting The Cloud From Scratch , where the vision is to treat cloud virtual hardware as a compiler target, and to convert high-level language source code directly into kernels that run on it. As new technologies evolve, the friction created by our old tool chains and architecture models becomes ever more obvious. Take, for example, what a team at UCSD is releasing: a phase-change memory prototype - a solid state storage device that provides performance thousands of times faster than a conventional hard drive and up to seven times faster than current state-of-the-art solid-state drives (SSDs). However, PCM has access latencies several times slower than DRAM. This technology has obvious mind blowing implications, but an

5 0.72930169 1594 high scalability-2014-02-12-Paper: Network Stack Specialization for Performance

Introduction: In the scalability is specialization department here is an interesting paper presented at HotNets '13 on high performance networking:  Network Stack Specialization for Performance . The idea is generalizing a service so it fits in the kernel comes at a high performance cost. So move TCP into user space.  The result is a web server with ~3.5x the throughput of Nginx "while experiencing low CPU utilization, linear scaling on multicore systems, and saturating current NIC hardware." Here's a good description of the paper published on Layer 9 : Traditionally, servers and OSes have been built to be general purpose. However now we have a high degree of specialization. In fact, in a big web service, you might have thousands of machines dedicated to one function. Therefore, there's scope for specialization. This paper looks at a specific opportunity in that space. Network stacks today are good for high throughput with large transfers, but not small files (which are common in web browsi

6 0.72866517 1316 high scalability-2012-09-04-Changing Architectures: New Datacenter Networks Will Set Your Code and Data Free

7 0.72289938 1572 high scalability-2014-01-03-Stuff The Internet Says On Scalability For January 3rd, 2014

8 0.72093308 374 high scalability-2008-08-30-Paper: GargantuanComputing—GRIDs and P2P

9 0.71711046 1256 high scalability-2012-06-04-OpenFlow-SDN is Not a Silver Bullet for Network Scalability

10 0.71240509 1051 high scalability-2011-06-01-Why is your network so slow? Your switch should tell you.

11 0.70096397 1140 high scalability-2011-11-10-Kill the Telcos Save the Internet - The Unsocial Network

12 0.69631451 1105 high scalability-2011-08-25-The Cloud and The Consumer: The Impact on Bandwidth and Broadband

13 0.69575667 1267 high scalability-2012-06-18-The Clever Ways Chrome Hides Latency by Anticipating Your Every Need

14 0.68272507 1039 high scalability-2011-05-12-Paper: Mind the Gap: Reconnecting Architecture and OS Research

15 0.67775267 1048 high scalability-2011-05-27-Stuff The Internet Says On Scalability For May 27, 2011

16 0.67774916 1319 high scalability-2012-09-10-Russ’ 10 Ingredient Recipe for Making 1 Million TPS on $5K Hardware

17 0.67710924 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

18 0.67098695 1362 high scalability-2012-11-26-BigData using Erlang, C and Lisp to Fight the Tsunami of Mobile Data

19 0.6698733 1116 high scalability-2011-09-15-Paper: It's Time for Low Latency - Inventing the 1 Microsecond Datacenter

20 0.66891539 1214 high scalability-2012-03-23-Stuff The Internet Says On Scalability For March 23, 2012


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.056), (2, 0.198), (4, 0.267), (10, 0.045), (61, 0.071), (77, 0.022), (79, 0.108), (85, 0.076), (94, 0.059)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.95028889 469 high scalability-2008-12-17-Scalability Strategies Primer: Database Sharding

Introduction: This article is a primer, intended to shine some much needed light on the logical, process oriented implementations of database scalability strategies in the form of a broad introduction. More specifically, the intent is to elaborate on the majority of these implementations by example.

same-blog 2 0.85172278 1213 high scalability-2012-03-22-Paper: Revisiting Network I-O APIs: The netmap Framework


3 0.8189193 1164 high scalability-2011-12-27-PlentyOfFish Update - 6 Billion Pageviews and 32 Billion Images a Month

Introduction: Markus has a short update on their PlentyOfFish Architecture . Impressive November statistics: 6 billion pageviews served; 32 billion images served; 6 million logins in one day; IM servers handle about 30 billion pageviews; 11 webservers (5 of which could be dropped). Hired first DBA in July. They currently have a handful of employees. All hosting/cdn costs combined are under $70k/month. Lesson: small organization, simple architecture, on raw hardware is still plenty profitable for PlentyOfFish. Related Articles On HackerNews 32 Billion images a month by Markus Frind.

4 0.79575354 1619 high scalability-2014-03-26-Oculus Causes a Rift, but the Facebook Deal Will Avoid a Scaling Crisis for Virtual Reality

Introduction: Facebook has been teasing us. While many of their recent acquisitions have been surprising, shocking is the only word adequately describing Facebook's 5 day whirlwind acquisition of Oculus , immersive virtual reality visionaries, for a now paltry sounding $2 billion. The backlash is a pandemic, jumping across social networks with the speed only a meme powered by the directly unaffected can generate. For more than 30 years VR has been the dream burning in the heart of every science fiction fan. Now that this future might finally be here, Facebook’s ownage makes it seem like a wonderful and hopeful timeline has been choked off, killing the Metaverse before it even had a chance to begin. For the many who voted for an open future with their Kickstarter dollars , there’s a deep and personal sense of betrayal, despite Facebook’s promise to leave Oculus alone. The intensity of the reaction is because Oculus matters to people. It's new, it's different, it create

5 0.76619154 12 high scalability-2007-07-15-Isilon Clustred Storage System

Introduction: The Isilon IQ family of clustered storage systems was designed from the ground up to meet the needs of data-intensive enterprises and high-performance computing environments. By combining Isilon's OneFS® operating system software with the latest advances in industry-standard hardware, Isilon delivers modular, pay-as-you-grow, enterprise-class clustered storage systems. OneFS, with TrueScale™ technology, powers the industry's first and only storage system that enables linear or independent scaling of performance and capacity. This new flexible and tunable system, featuring a robust suite of clustered storage software applications, provides customers with an "out of the box" solution that is fully optimized for the widest range of applications and workflow needs. * Scales from 4 TB ti 1 PB * Throughput of up to 10 GB per seond * Linear scaling * Easy to manage Related Articles   Inside Skinny On Isilon by StorageMojo

6 0.76594931 919 high scalability-2010-10-14-I, Cloud

7 0.75451678 670 high scalability-2009-08-05-Anti-RDBMS: A list of distributed key-value stores

8 0.75109464 916 high scalability-2010-10-07-Hot Scalability Links For Oct 8, 2010

9 0.7393167 282 high scalability-2008-03-18-Database War Stories #3: Flickr

10 0.73906678 1157 high scalability-2011-12-14-Virtualization and Cloud Computing is Changing the Network to East-West Routing

11 0.73034114 1436 high scalability-2013-04-05-Stuff The Internet Says On Scalability For April 5, 2013

12 0.7300058 1343 high scalability-2012-10-18-Save up to 30% by Selecting Better Performing Amazon Instances

13 0.7176435 1620 high scalability-2014-03-27-Strategy: Cache Stored Procedure Results

14 0.70522243 1589 high scalability-2014-02-03-How Google Backs Up the Internet Along With Exabytes of Other Data

15 0.69934559 309 high scalability-2008-04-23-Behind The Scenes of Google Scalability

16 0.6905399 79 high scalability-2007-09-01-On-Demand Infinitely Scalable Database Seed the Amazon EC2 Cloud

17 0.68980497 1538 high scalability-2013-10-28-Design Decisions for Scaling Your High Traffic Feeds

18 0.68741429 1010 high scalability-2011-03-24-Strategy: Disk Backup for Speed, Tape Backup to Save Your Bacon, Just Ask Google

19 0.68428391 1094 high scalability-2011-08-08-Tagged Architecture - Scaling to 100 Million Users, 1000 Servers, and 5 Billion Page Views

20 0.68261176 188 high scalability-2007-12-19-How can I learn to scale my project?