high_scalability high_scalability-2010 high_scalability-2010-839 knowledge-graph by maker-knowledge-mining

839 high scalability-2010-06-09-Paper: Propagation Networks: A Flexible and Expressive Substrate for Computation


meta info for this blog

Source: html

Introduction: Alexey Radul, in his fascinating 174-page dissertation Propagation Networks: A Flexible and Expressive Substrate for Computation, offers to help us break free of the tyranny of linear time by arranging computation as a network of autonomous but interconnected machines. We can do this by organizing computation as a network of interconnected machines of some kind, each of which is free to run when it pleases, propagating information around the network as proves possible. The consequence of this freedom is that the structure of the aggregate does not impose an order of time. The abstract from his thesis is: In this dissertation I propose a shift in the foundations of computation. Modern programming systems are not expressive enough. The traditional image of a single computer that has global effects on a large memory is too restrictive. The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells.
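The cell-accumulates-information idea can be sketched in a few lines. This is an illustrative toy, not Radul's actual implementation (his prototype is written in Scheme); the names here (Cell, adder, add_content) are invented for the example, and "partial information about a value" is modeled as an interval that only ever narrows:

```python
class Cell:
    """A stateful cell that accumulates an interval (lo, hi), not a single value."""
    def __init__(self):
        self.lo, self.hi = float("-inf"), float("inf")
        self.watchers = []          # propagators to re-fire on new information

    def add_content(self, lo, hi):
        new_lo, new_hi = max(self.lo, lo), min(self.hi, hi)
        if (new_lo, new_hi) != (self.lo, self.hi):   # information got stronger
            self.lo, self.hi = new_lo, new_hi
            for run in self.watchers:
                run()               # no global order of time: runs as it pleases

def adder(a, b, out):
    """A stateless propagator machine: out = a + b, re-run whenever inputs improve."""
    def run():
        out.add_content(a.lo + b.lo, a.hi + b.hi)
    for cell in (a, b):
        cell.watchers.append(run)
    run()

a, b, total = Cell(), Cell(), Cell()
adder(a, b, total)
a.add_content(1, 3)                 # partial information: a is somewhere in [1, 3]
b.add_content(2, 2)                 # b is known exactly
print((total.lo, total.hi))         # → (3, 5)
```

Note how no caller ever asks the adder to run; information arriving at a cell is what drives computation, which is the point of the paradigm.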


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Alexey Radul in his fascinating 174 page dissertation  Propagation Networks: A Flexible and Expressive Substrate for Computation , offers to help us  break free of the tyranny of linear time by arranging computation as a network of autonomous but interconnected machines . [sent-1, score-0.592]

2 We can do this by organizing computation as a network of interconnected machines of some kind, each of which is free to run when it pleases, propagating  information around the network as proves possible. [sent-2, score-0.441]

3 The consequence of this freedom is that the structure of the aggregate does not impose an order of time. [sent-3, score-0.24]

4 The abstract from his thesis is : In this dissertation I propose a shift in the foundations of computation. [sent-4, score-0.431]

5 The traditional image of a single computer that has global effects on a large memory is too restrictive. [sent-6, score-0.059]

6 The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells. [sent-7, score-0.985]

7 In so doing, it offers great flexibility and expressive power, and has therefore been much studied, but has not yet been tamed for general-purpose computation. [sent-8, score-0.314]

8 The novel insight that should finally permit computing with general-purpose propagation is that a cell should not be seen as storing a value, but as accumulating information about a value. [sent-9, score-0.937]

9 Various forms of the general idea of propagation have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. [sent-10, score-1.517]

10 This success is evidence both that traditional linear computation is not expressive enough, and that propagation is more expressive. [sent-11, score-0.773]

11 These special-purpose systems, however, are all complex and all different, and neither compose well, nor interoperate well, nor generalize well. [sent-12, score-0.345]

12 I present in this dissertation the design and implementation of a prototype general-purpose propagation system. [sent-14, score-0.849]

13 I argue that the structure of the prototype follows from the overarching principle of computing by propagation and of storage by accumulating information—there are no important arbitrary decisions. [sent-15, score-1.085]

14 I reflect on the new light the propagation perspective sheds on the deep nature of computation. [sent-17, score-0.608]

15 I really like the sound of all this, it seems like a good match for large scale distributed systems, but I have to admit in the end I didn't get it. [sent-18, score-0.064]

16 Does anyone know of a more basic primer on this area of study? [sent-19, score-0.074]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('propagation', 0.608), ('expressive', 0.314), ('dissertation', 0.241), ('interconnected', 0.195), ('accumulating', 0.189), ('computation', 0.183), ('interoperate', 0.143), ('compose', 0.128), ('constraint', 0.123), ('arbitrary', 0.094), ('offers', 0.093), ('prototype', 0.092), ('ect', 0.086), ('exibility', 0.086), ('linear', 0.084), ('sheds', 0.08), ('nally', 0.08), ('permit', 0.08), ('studied', 0.077), ('foundational', 0.077), ('success', 0.076), ('propagating', 0.074), ('overarching', 0.074), ('primer', 0.074), ('substrate', 0.074), ('foundations', 0.074), ('alexey', 0.074), ('tyranny', 0.074), ('generalize', 0.074), ('recovers', 0.072), ('bene', 0.072), ('ts', 0.07), ('replaces', 0.068), ('impose', 0.066), ('admit', 0.064), ('proves', 0.063), ('illustrate', 0.061), ('structure', 0.06), ('computing', 0.06), ('thesis', 0.06), ('traditional', 0.059), ('consequence', 0.058), ('evidence', 0.057), ('autonomous', 0.056), ('freedom', 0.056), ('propose', 0.056), ('various', 0.055), ('derived', 0.055), ('re', 0.054), ('stateful', 0.054)]
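The per-sentence scores in the summary above are plausibly produced by summing these per-word tfidf weights over each sentence's tokens. A hedged sketch (the weights dict is a small excerpt of the wordName/wordTfidf list above; the real model's tokenization and normalization are unknown):

```python
import re

# Excerpt of the wordName/wordTfidf list for this blog (illustrative subset).
weights = {"propagation": 0.608, "expressive": 0.314, "dissertation": 0.241,
           "interconnected": 0.195, "accumulating": 0.189, "computation": 0.183}

def score(sentence):
    """Score a sentence as the sum of tfidf weights of the words it contains."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return sum(weights.get(t, 0.0) for t in tokens)

s = ("The novel insight that should finally permit computing with "
     "general-purpose propagation is that a cell should not be seen as "
     "storing a value, but as accumulating information about a value.")
print(round(score(s), 3))   # propagation + accumulating → 0.797
```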

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9999997 839 high scalability-2010-06-09-Paper: Propagation Networks: A Flexible and Expressive Substrate for Computation


2 0.15084456 850 high scalability-2010-06-30-Paper: GraphLab: A New Framework For Parallel Machine Learning

Introduction: In the never ending quest to figure out how to do something useful with never ending streams of data,  GraphLab: A New Framework For Parallel Machine Learning  wants to go beyond low-level programming, MapReduce, and dataflow languages with  a new parallel framework for ML (machine learning) which exploits the sparse structure and common computational patterns of ML algorithms. GraphLab enables ML experts to easily design and implement efficient scalable parallel algorithms by composing problem specific computation, data-dependencies, and scheduling .   Our main contributions include:  A graph-based data model which simultaneously represents data and computational dependencies.  A set of concurrent access models which provide a range of sequential-consistency guarantees.  A sophisticated modular scheduling mechanism.  An aggregation framework to manage global state.  From the abstract: Designing and implementing efficient, provably correct parallel machine lear

3 0.11870165 1116 high scalability-2011-09-15-Paper: It's Time for Low Latency - Inventing the 1 Microsecond Datacenter

Introduction: In It's Time for Low Latency, Stephen Rumble et al. explore the idea that it's time to rearchitect our stack to live in the modern era of low-latency datacenters instead of high-latency WANs. The implications for program architectures will be revolutionary. Luiz André Barroso, Distinguished Engineer at Google, sees ultra-low latency as a way to make computer resources as fungible as possible, that is, interchangeable and location independent, effectively turning a datacenter into a single computer. Abstract from the paper: The operating systems community has ignored network latency for too long. In the past, speed-of-light delays in wide area networks and unoptimized network hardware have made sub-100µs round-trip times impossible. However, in the next few years datacenters will be deployed with low-latency Ethernet. Without the burden of propagation delays in the datacenter campus and network delays in the Ethernet devices, it will be up to us to finish

4 0.080908217 1435 high scalability-2013-04-04-Paper: A Web of Things Application Architecture - Integrating the Real-World into the Web

Introduction: How do you layer a programmable Internet of smart things on top of the web? That's the question addressed by Dominique Guinard in his ambitious dissertation: A Web of Things Application Architecture - Integrating the Real-World ( slides ). With the continued siloing of content, perhaps we can keep our things open and talking to each other? In the architecture things are modeled using REST, they will be findable via search, they will be social via a social access controller, and they will be mashupable. Here's a great graphical overview of the entire system: Abstract: A central concern in the area of pervasive computing has been the integration of digital artifacts with the physical world and vice-versa. Recent developments in the field of embedded devices have led to smart things increasingly populating our daily life. We define smart things as digitally enhanced physical objects and devices that have communication capabilities. Application domains are for i

5 0.080532387 852 high scalability-2010-07-07-Strategy: Recompute Instead of Remember Big Data

Introduction: Professor Lance Fortnow, in his blog post  Drowning in Data , says complexity has taught him this lesson: When storage is expensive, it is cheaper to recompute what you've already computed. And that's the world we now live in: Storage is pretty cheap but data acquisition and computation are even cheaper. Jouni, one of the commenters, thinks the opposite is true: storage is cheap, but computation is expensive. When you are dealing with massive data, the size of the data set is very often determined by the amount of computing power available for a certain price . With such data, a linear-time algorithm takes O(1) seconds to finish, while a quadratic-time algorithm requires O(n) seconds. But as computing power increases exponentially over time, the quadratic algorithm gets exponentially slower . For me it's not a matter of which is true, both positions can be true, but what's interesting is to think that storage and computation are in some cases fungible. Your architecture can dec

6 0.080091596 1599 high scalability-2014-02-19-Planetary-Scale Computing Architectures for Electronic Trading and How Algorithms Shape Our World

7 0.069971189 1611 high scalability-2014-03-12-Paper: Scalable Eventually Consistent Counters over Unreliable Networks

8 0.064427949 1355 high scalability-2012-11-05-Gone Fishin': Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud

9 0.06438905 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud

10 0.062379852 507 high scalability-2009-02-03-Paper: Optimistic Replication

11 0.061515309 984 high scalability-2011-02-04-Stuff The Internet Says On Scalability For February 4, 2011

12 0.060667399 1428 high scalability-2013-03-22-Stuff The Internet Says On Scalability For March 22, 2013

13 0.058414623 768 high scalability-2010-02-01-What Will Kill the Cloud?

14 0.056895372 661 high scalability-2009-07-25-Latency is Everywhere and it Costs You Sales - How to Crush it

15 0.056797158 1335 high scalability-2012-10-08-How UltraDNS Handles Hundreds of Thousands of Zones and Tens of Millions of Records

16 0.053579021 371 high scalability-2008-08-24-A Scalable, Commodity Data Center Network Architecture

17 0.051076334 1436 high scalability-2013-04-05-Stuff The Internet Says On Scalability For April 5, 2013

18 0.050437681 538 high scalability-2009-03-16-Are Cloud Based Memory Architectures the Next Big Thing?

19 0.050263699 1302 high scalability-2012-08-10-Stuff The Internet Says On Scalability For August 10, 2012

20 0.050233431 853 high scalability-2010-07-08-Cloud AWS Infrastructure vs. Physical Infrastructure


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.083), (1, 0.029), (2, 0.02), (3, 0.057), (4, -0.032), (5, 0.012), (6, 0.005), (7, 0.016), (8, -0.026), (9, 0.041), (10, 0.008), (11, 0.007), (12, -0.008), (13, 0.001), (14, 0.018), (15, 0.012), (16, 0.001), (17, -0.007), (18, 0.003), (19, -0.01), (20, -0.031), (21, 0.027), (22, -0.05), (23, 0.026), (24, -0.012), (25, -0.007), (26, -0.004), (27, -0.014), (28, -0.008), (29, 0.007), (30, -0.019), (31, 0.023), (32, -0.004), (33, 0.03), (34, -0.014), (35, -0.036), (36, 0.032), (37, -0.007), (38, 0.011), (39, 0.028), (40, 0.005), (41, -0.015), (42, -0.027), (43, -0.013), (44, 0.007), (45, 0.03), (46, -0.012), (47, -0.004), (48, 0.013), (49, 0.001)]
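The simValue figures in the similar-blog lists are plausibly cosine similarities between per-blog topic-weight vectors like the LSI vector above. An illustrative sketch, not the pipeline's actual code (the second vector is a hypothetical neighbour, and only the first five components of this blog's vector are used):

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

this_blog  = [0.083, 0.029, 0.020, 0.057, -0.032]   # truncated LSI vector above
other_blog = [0.070, 0.031, 0.015, 0.049, -0.020]   # hypothetical similar blog
print(round(cosine(this_blog, other_blog), 3))      # → 0.995
```

This also explains why the same-blog entry in each list scores at or near 1.0: a vector's cosine similarity with itself is exactly 1.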

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95113999 839 high scalability-2010-06-09-Paper: Propagation Networks: A Flexible and Expressive Substrate for Computation


2 0.80553013 826 high scalability-2010-05-12-The Rise of the Virtual Cellular Machines

Introduction: My apologies if you were looking for a post about cell phones. This post is about high density nanodevices. It's a follow up to How will memristors change everything?  for those wishing to pursue these revolutionary ideas in more depth. This is one of those areas where if you are in the space then there's a lot of available information and if you are on the outside then it doesn't even seem to exist. Fortunately, Ben Chandler from  The SyNAPSE Project , was kind enough to point me to a great set of presentations given at the 12th IEEE CNNA - International Workshop on Cellular Nanoscale Networks and their Applications - Towards Megaprocessor Computing. WARNING: these papers contain extreme technical content. If you are like me and you aren't an electrical engineer, much of it may make a sort of surface sense, but the deep and twisty details will fly over head. For the more software minded there are a couple more accessible presentations: Intelligent Machines built with Memristiv

3 0.74935144 1127 high scalability-2011-09-28-Pursue robust indefinite scalability with the Movable Feast Machine

Introduction: And now for something completely different, brought to you by David Ackley and Daniel Cannon in their playfully thought-provoking paper: Pursue robust indefinite scalability, wherein they try to take a fresh look at neural networks, starting from scratch. What is this strange thing called indefinite scalability? They sound like words that don't really go together: Indefinite scalability is the property that the design can support open-ended computational growth without substantial re-engineering, in as strict a sense as can be managed. By comparison, many computer, algorithm, and network designs -- even those that address scalability -- are only finitely scalable because their scalability occurs within some finite space. For example, an in-core sorting algorithm for a 32-bit machine can only scale to billions of numbers before address space is exhausted and then that algorithm must be re-engineered. Our idea is to expose indefinitely scalable computational power to program

4 0.73581272 850 high scalability-2010-06-30-Paper: GraphLab: A New Framework For Parallel Machine Learning

Introduction: In the never ending quest to figure out how to do something useful with never ending streams of data,  GraphLab: A New Framework For Parallel Machine Learning  wants to go beyond low-level programming, MapReduce, and dataflow languages with  a new parallel framework for ML (machine learning) which exploits the sparse structure and common computational patterns of ML algorithms. GraphLab enables ML experts to easily design and implement efficient scalable parallel algorithms by composing problem specific computation, data-dependencies, and scheduling .   Our main contributions include:  A graph-based data model which simultaneously represents data and computational dependencies.  A set of concurrent access models which provide a range of sequential-consistency guarantees.  A sophisticated modular scheduling mechanism.  An aggregation framework to manage global state.  From the abstract: Designing and implementing efficient, provably correct parallel machine lear

5 0.73321086 844 high scalability-2010-06-18-Paper: The Declarative Imperative: Experiences and Conjectures in Distributed Logic

Introduction: The Declarative Imperative: Experiences and Conjectures in Distributed Logic is written by UC Berkeley's Joseph Hellerstein for a keynote speech he gave at PODS. The video version of the talk is here. You may have heard about Mr. Hellerstein through the Berkeley Orders Of Magnitude project ( BOOM ), whose purpose is to help people build systems that are OOM (orders of magnitude) bigger than those being built today, with OOM less effort than traditional programming methodologies. A noble goal, which may be why BOOM was rated as a top 10 emerging technology for 2010 by MIT Technology Review. Quite an honor. The motivation for the talk is a familiar one: it's a dark period for computer programming and if we don't learn how to write parallel programs the children of Moore's law will destroy us all. We have more and more processors, yet we are stuck on figuring out how the average programmer can exploit them. The BOOM solution is the Bloom language which is based on Dedalus:

6 0.69658387 581 high scalability-2009-04-26-Map-Reduce for Machine Learning on Multicore

7 0.66658056 1641 high scalability-2014-05-01-Paper: Can Programming Be Liberated From The Von Neumann Style?

8 0.6646927 1234 high scalability-2012-04-26-Akaros - an open source operating system for manycore architectures

9 0.6503365 147 high scalability-2007-11-09-Paper: Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors

10 0.65008342 1509 high scalability-2013-08-30-Stuff The Internet Says On Scalability For August 30, 2013

11 0.64740831 1505 high scalability-2013-08-22-The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Second edition

12 0.64375621 1572 high scalability-2014-01-03-Stuff The Internet Says On Scalability For January 3rd, 2014

13 0.64314348 1039 high scalability-2011-05-12-Paper: Mind the Gap: Reconnecting Architecture and OS Research

14 0.64170808 882 high scalability-2010-08-18-Misco: A MapReduce Framework for Mobile Systems - Start of the Ambient Cloud?

15 0.63922882 823 high scalability-2010-05-05-How will memristors change everything?

16 0.63831478 1273 high scalability-2012-06-27-Paper: Logic and Lattices for Distributed Programming

17 0.63813245 1227 high scalability-2012-04-13-Stuff The Internet Says On Scalability For April 13, 2012

18 0.63593924 592 high scalability-2009-05-06-DyradLINQ

19 0.63533705 534 high scalability-2009-03-12-Google TechTalk: Amdahl's Law in the Multicore Era

20 0.63457221 400 high scalability-2008-10-01-The Pattern Bible for Distributed Computing


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.053), (2, 0.179), (30, 0.034), (61, 0.062), (77, 0.033), (79, 0.092), (85, 0.039), (92, 0.353), (94, 0.036)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.85002309 839 high scalability-2010-06-09-Paper: Propagation Networks: A Flexible and Expressive Substrate for Computation


2 0.81248295 1636 high scalability-2014-04-23-Here's a 1300 Year Old Solution to Resilience - Rebuild, Rebuild, Rebuild

Introduction: How is it possible that a wooden Shinto shrine built in the 7th century is still standing? The answer depends on how you answer this philosophical head scratcher: With nearly every cell in your body continually being replaced, are you still the same person? The  Ise Grand Shrine  has been in continuous existence for over 1300 years because every twenty years an exact replica has been rebuilt on an adjacent footprint. The former temple is then dismantled. Now that's resilience. If you want something to last make it a living part of a culture. It's not so much the building that is remade, what is rebuilt and passed down from generation to generation is the meme that the shrine is important and worth preserving. The rest is an unfolding of that imperative. You can see echoes of this same process in Open Source projects like Linux and the libraries and frameworks that get themselves reconstructed in each new environment. The patterns of recurrence in software are the result of Darw

3 0.76592469 352 high scalability-2008-07-18-Robert Scoble's Rules for Successfully Scaling Startups

Introduction: Robert Scoble, in an often poignant FriendFeed thread commiserating PodTech's unfortunate end, shared what he learned about creating a successful startup. Here's a summary of Robert's rules and why Machiavelli just may agree with them: Have a story. Have everyone on board with that story. If anyone goes off of that story, make sure they get on board immediately or fire them. Make sure people are judged by the revenues they bring in. Those that bring in revenues should get to run the place. People who don't bring in revenues should get fewer and fewer responsibilities, not more and more. Work ONLY for a leader who will make the tough decisions. Build a place where excellence is expected, allowed, and is enabled. Fire idiots quickly. If your engineering team can't give a media team good measurements, the entire company is in trouble. Only things that are measured ever get improved. When your stars aren't listened to the company is in trouble. Getting rid of t

4 0.71860206 1234 high scalability-2012-04-26-Akaros - an open source operating system for manycore architectures

Introduction: If you are interested in future-forward OS designs then you might find Akaros worth a look. It's an operating system designed for many-core architectures and large-scale SMP systems, with the goals of: Providing better support for parallel and high-performance applications Scaling the operating system to a large number of cores  A more in-depth explanation of the motivation behind Akaros can be found in Improving Per-Node Efficiency in the Datacenter with NewOS Abstractions by Barret Rhoden, Kevin Klues, David Zhu, and Eric Brewer. The abstract: We believe datacenters can benefit from more focus on per-node efficiency, performance, and predictability, versus the more common focus so far on scalability to a large number of nodes. Improving per-node efficiency decreases costs and fault recovery because fewer nodes are required for the same amount of work. We believe that the use of complex, general-purpose operating systems is a key contributing factor to these inefficiencies

5 0.67776722 532 high scalability-2009-03-11-Sharding and Connection Pools

Introduction: Hi we are looking at sharding our existing Java/Oracle based application. We are looking to make the app servers able to process requests for multiple (any?) shard. The concern that has come up is the amount of memory that would be consumed by having so many connection pools on one app server. Additionally there is concern about having so many physical connections to the database server coming from all the various app servers that may talk to that particular shard. I was wondering if anyone else has dealt with this issue and how you resolved it? Thanks, Scott

6 0.65481257 528 high scalability-2009-03-06-Product: Lightcloud - Key-Value Database

7 0.64398307 885 high scalability-2010-08-23-Building a Scalable Key-Value Database: Project Hydracus

8 0.61026895 988 high scalability-2011-02-11-Stuff The Internet Says On Scalability For February 11, 2011

9 0.59817338 157 high scalability-2007-11-16-Product: lbpool - Load Balancing JDBC Pool

10 0.58776134 357 high scalability-2008-07-26-Google's Paxos Made Live – An Engineering Perspective

11 0.58418536 850 high scalability-2010-06-30-Paper: GraphLab: A New Framework For Parallel Machine Learning

12 0.58152723 928 high scalability-2010-10-26-Scaling DISQUS to 75 Million Comments and 17,000 RPS

13 0.52956384 76 high scalability-2007-08-29-Skype Failed the Boot Scalability Test: Is P2P fundamentally flawed?

14 0.52808887 1561 high scalability-2013-12-09-Site Moves from PHP to Facebook's HipHop, Now Pages Load in .6 Seconds Instead of Five

15 0.5272761 1439 high scalability-2013-04-12-Stuff The Internet Says On Scalability For April 12, 2013

16 0.52726012 978 high scalability-2011-01-26-Google Pro Tip: Use Back-of-the-envelope-calculations to Choose the Best Design

17 0.52705282 1460 high scalability-2013-05-17-Stuff The Internet Says On Scalability For May 17, 2013

18 0.52675605 1180 high scalability-2012-01-24-The State of NoSQL in 2012

19 0.52672356 1020 high scalability-2011-04-12-Caching and Processing 2TB Mozilla Crash Reports in memory with Hazelcast

20 0.5255301 295 high scalability-2008-04-02-Product: Supervisor - Monitor and Control Your Processes