High Scalability: Hot Scalability Links for July 2, 2010
What says 4th of July like Nathan's ultimate scalable hot dog eating contest? This totally requires a scale-up strategy.

Facebook at 60,000 servers and counting.

Deepak Singh has collected some impressive massive data stats on extreme Hadoop usage. Facebook: 36 PB of uncompressed data, 2,250 machines, 23,000 cores, 32 GB of RAM per machine, processing 80-90 TB/day. Yahoo: 70 PB of data in HDFS, 170 PB spread across the globe, 34,000 servers, processing 3 PB per day, with 120 TB flowing through Hadoop every day. Twitter: 7 TB/day into HDFS. LinkedIn: 120 billion relationships, 82 Hadoop jobs daily (IIRC), 16 TB of intermediate data.

Who knew DevOps could be so funny? Adam Jacob, CTO of Opscode, gave a hilarious talk at the Velocity conference on the true nature of DevOps. Warning: your neck may get sore from nodding in agreement so much and your belly may ache from laughing so much.

Pig at LinkedIn. Not your average article: "For me, understanding my work over the last year by understanding Pig was profound. It gave it more meaning, because strangely enough Pig has become a big part of my life."

One benchmark in the mix: 16 cores sustaining over 2,000 writes per second for over 8 days.

One observer thinks NoSQL solutions should really be called NoJoin, because they are mostly defined by avoidance of the join operation.

And a pointed warning on distributed-database claims: "You can't just slap some basic consistent hashing on top of several single-machine data stores and claim to be in the same league as some of the real distributed data stores I've mentioned."

Rich Miller has a great summary of a capacity planning panel talk at Structure 2010, featuring folks from Zynga, Facebook, Yahoo, PayPal, and Engine Yard.

SQLite explains how it takes advantage of Write-Ahead Logging, which replaces its previous rollback journal for atomic commit and rollback.
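That sustained write-rate figure implies a hefty total. A quick back-of-the-envelope check (a lower bound, since both numbers are "over"):

```python
# Lower-bound totals for the "2,000 writes/sec for 8 days" item.
writes_per_sec = 2_000
duration_sec = 8 * 24 * 60 * 60   # 8 days = 691,200 seconds

total_writes = writes_per_sec * duration_sec
print(f"{total_writes:,}")  # -> 1,382,400,000, i.e. ~1.4 billion writes
```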
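The "NoJoin" quip is about trading read-time joins for denormalized storage. A toy illustration of the two shapes (the record layouts here are invented for the example):

```python
# Relational shape: two "tables", answered with a join at read time.
users = {1: {"name": "ada"}}
orders = [{"user_id": 1, "item": "book"}, {"user_id": 1, "item": "pen"}]
joined = [
    {"name": users[o["user_id"]]["name"], "item": o["item"]}
    for o in orders
]

# "NoJoin" shape: the same data denormalized into one document.
# Reads need no join; the cost moves to writes, which must keep
# every embedded copy up to date.
user_doc = {"name": "ada", "orders": [{"item": "book"}, {"item": "pen"}]}
items = [o["item"] for o in user_doc["orders"]]
```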
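The consistent-hashing jab lands better with the basic technique in front of you: keys and nodes hash onto the same ring, and a key belongs to the next node clockwise, so adding a node only remaps a fraction of keys. A minimal sketch (node names and replica count are illustrative, and this is exactly the "basic" version the quote says is not enough on its own):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Basic consistent hash ring with virtual nodes (replicas)."""

    def __init__(self, nodes=(), replicas=100):
        self.replicas = replicas
        self._keys = []   # sorted hashes, for bisect
        self._ring = []   # parallel list of (hash, node)
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Place `replicas` virtual points for this node on the ring.
        for i in range(self.replicas):
            h = self._hash(f"{node}:{i}")
            idx = bisect.bisect(self._keys, h)
            self._keys.insert(idx, h)
            self._ring.insert(idx, (h, node))

    def get_node(self, key):
        # Walk clockwise to the first virtual point at or after the key.
        if not self._keys:
            return None
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._keys)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.get_node("user:42"))  # deterministic: same key, same node
```

What this sketch omits is precisely the hard part the quote alludes to: replication, failure detection, rebalancing, and consistency across nodes.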
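SQLite's write-ahead-logging mode from that last item is a one-pragma switch. A quick sketch using Python's built-in sqlite3 module (WAL needs a file-backed database, so a temp file stands in here):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)

# Switch from the default rollback journal to write-ahead logging.
# In WAL mode, writes append to demo.db-wal while readers keep using
# the main file, so readers and writers no longer block each other.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # 'wal' on any file database with SQLite >= 3.7

conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO events (msg) VALUES ('hello')")
conn.commit()
```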