high_scalability high_scalability-2011 high_scalability-2011-1011 knowledge-graph by maker-knowledge-mining

1011 high scalability-2011-03-25-Did the Microsoft Stack Kill MySpace?


meta info for this blog

Source: html

Introduction: Robert Scoble wrote a fascinating case study, MySpace’s death spiral: insiders say it’s due to bets on Los Angeles and Microsoft, where he reports MySpace insiders blame the Microsoft stack for why they lost the great social network race to Facebook. Does anyone know if this is true? What's the real story? I was wondering because it doesn't seem to track with the MySpace Architecture post that I did in 2009, where they seem happy with their choices and had stats to back up their improvements. Why this matters is that it's a fascinating model for startups to learn from. What does it really take to succeed? Is it the people or the stack? Is it the organization or the technology? Is it the process or the competition? Is it the quality of the site or the love of the users? So much to consider and learn from. Some conjectures from the article: MySpace didn't have programming talent capable of scaling the site to compete with Facebook. Choosing the Microsoft stack made it difficul


Summary: the most important sentences generated by the tfidf model (a scoring sketch follows the list)

sentIndex sentText sentNum sentScore

1 Robert Scoble wrote a fascinating case study, MySpace’s death spiral: insiders say it’s due to bets on Los Angeles and Microsoft, where he reports MySpace insiders blame the Microsoft stack for why they lost the great social network race to Facebook. [sent-1, score-0.206]

2 Some conjectures from the article: MySpace didn't have programming talent capable of scaling the site to compete with Facebook. [sent-12, score-0.242]

3 Choosing the Microsoft stack made it difficult to hire people capable of competing with Facebook. [sent-13, score-0.242]

4 Because of their infrastructure MySpace can’t change their technology to make new features work or make dramatically new experiences. [sent-17, score-0.299]

5 (duh) Los Angeles doesn't have startup talent capable of producing a scalable social network system. [sent-19, score-0.195]

6 u_fail: All this sounds like is business people with little to no understanding of technology blaming technology, when all along it was lack of innovation and lack of technology people at the top making the decisions. [sent-42, score-0.837]

7 Robert Scoble: MySpace's architecture made it very difficult to ship new features and "pivot" once it was clear they were getting their ass kicked. [sent-45, score-0.218]

8 If the talent had stayed, such as Richard Rosenblatt and they hadn't sold themselves off for $670 mm, then many of these issues could have been the excuse. [sent-70, score-0.201]

9 Sean Scott: Even if technology and a lack of talent were part of the issue and MySpace's inability to respond to Facebook, ultimately the user experience is what cost them. [sent-73, score-0.376]

10 Lack of innovation, dismissiveness of the competition, limited product roll-outs, too much turmoil at the top, technical issues, but don't blame it on the lack of technical talent here in L.A. [sent-81, score-0.46]

11 I actually had no idea it was Microsoft technology and probably would have never guessed based on what they were doing in their architecture. [sent-112, score-0.197]

12 As engineers you and I both know that the right mindset along with strong leadership can take pretty much any technology to the highest levels out there. [sent-113, score-0.241]

13 There are plenty of high traffic Microsoft sites out there so I don't think the technology CHOICES are as much of an issue as Scoble makes it out to sound, and based on what I saw engineering leadership at MySpace was actually pretty bad ass. [sent-116, score-0.29]

14 Many startups we worked with, Silicon Valley or not Silicon Valley, could not imagine scaling their stuff to that load--many vendors of data systems required many patches to their stuff before we could use it (if at all). [sent-164, score-0.271]

15 One of the issues that stemmed from this was lack of respect for technology in the sense that no one at the higher levels saw the company as a technology company. [sent-190, score-0.546]

16 " sriramk : I have a bunch of friends from MySpace and one common theme I do hear is that they feel that a better architecture (not connected to the stack) would have let them ship stuff quicker. [sent-217, score-0.191]

17 As ideas struck the 'web' infrastructure folks they would immediately implement some sort of prototype or test case, then rapid fire come up with features the infrastructure needed to support to maximize this feature. [sent-227, score-0.277]

18 But sometimes they would go long periods, months, where nothing would change (on the infrastructure side; web pages, UX, etc., sure, but the same servers and storage assets). [sent-228, score-0.227]

19 What happens if you try to change faster than the infrastructure can change, is that you end up hacking around the limits, and that builds up technical scar tissue that over time slows your mobility still further. [sent-230, score-0.187]

20 Maybe being an "entertainment" company rather than a technology company fosters that sort of approach. [sent-259, score-0.401]
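The sentence list above is an extractive summary: each sentence is scored by the tf-idf weight of its terms and the highest-scoring sentences are kept. Below is a minimal sketch of that kind of scoring, assuming scikit-learn and a toy list of sentences; the tokenizer, weighting, and score scaling used by the actual maker-knowledge-mining pipeline are not documented here, so everything in the sketch is illustrative.

# Sketch: score each sentence by the mean tf-idf weight of its terms,
# then keep the top-k sentences as the extractive summary.
# Assumes scikit-learn; the real pipeline behind this page may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def summarize(sentences, k=3):
    # rows = sentences, columns = terms, values = tf-idf weights
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    # score a sentence by the mean tf-idf weight of the terms it contains
    term_counts = tfidf.getnnz(axis=1)
    scores = np.asarray(tfidf.sum(axis=1)).ravel() / np.maximum(term_counts, 1)
    ranked = np.argsort(scores)[::-1][:k]
    # (sentIndex, sentText, sentScore), highest score first, as in the list above
    return [(int(i), sentences[i], round(float(scores[i]), 3)) for i in ranked]

sentences = [
    "Robert Scoble wrote a fascinating case study about the MySpace death spiral.",
    "MySpace didn't have programming talent capable of scaling the site.",
    "Choosing the Microsoft stack made it difficult to hire people.",
    "There are plenty of high traffic Microsoft sites out there.",
]
for item in summarize(sentences):
    print(item)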


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('myspace', 0.656), ('talent', 0.149), ('technology', 0.139), ('company', 0.131), ('microsoft', 0.128), ('scoble', 0.103), ('leadership', 0.102), ('coast', 0.095), ('debt', 0.095), ('la', 0.09), ('lack', 0.088), ('facebook', 0.087), ('silicon', 0.085), ('entertainment', 0.083), ('spiral', 0.081), ('management', 0.08), ('stack', 0.078), ('ship', 0.077), ('technical', 0.076), ('people', 0.071), ('blame', 0.071), ('oss', 0.07), ('fox', 0.061), ('would', 0.058), ('music', 0.058), ('valley', 0.057), ('death', 0.057), ('innovation', 0.057), ('infrastructure', 0.056), ('stuff', 0.056), ('change', 0.055), ('infighting', 0.055), ('worked', 0.055), ('tier', 0.055), ('feature', 0.052), ('wanted', 0.052), ('could', 0.052), ('competencies', 0.05), ('cadence', 0.05), ('competence', 0.05), ('pivot', 0.05), ('features', 0.049), ('saw', 0.049), ('bet', 0.049), ('made', 0.047), ('loosing', 0.047), ('site', 0.047), ('capable', 0.046), ('ass', 0.045), ('top', 0.045)]
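The (wordName, wordTfidf) pairs above are the top-weighted terms of this post's tf-idf vector, and a simValue like those in the list that follows is typically the cosine similarity between two such vectors. A minimal sketch of both steps, assuming scikit-learn and a stand-in three-post corpus; the real corpus and preprocessing are not shown here.

# Sketch: build tf-idf vectors for a corpus of blog posts, print this post's
# top-weighted terms, and rank every post by cosine similarity to it.
# scikit-learn and the toy post texts are assumptions, not the actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

posts = {
    "1011": "robert scoble myspace death spiral microsoft stack facebook talent",
    "1014": "lessons we can learn from the myspace incident balance vision",
    "788":  "how myspace tested their live site with one million concurrent users",
}
ids = list(posts)
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts.values())          # one row per post

row = ids.index("1011")
terms = vectorizer.get_feature_names_out()
weights = tfidf[row].toarray().ravel()
top = np.argsort(weights)[::-1][:10]
print([(terms[i], round(float(weights[i]), 3)) for i in top if weights[i] > 0])

# simValue: cosine similarity between this post's vector and every post's vector
sims = cosine_similarity(tfidf[row], tfidf).ravel()
for i in np.argsort(sims)[::-1]:
    print(ids[i], round(float(sims[i]), 8))

Note that the 0.99999923 simValue for the same-blog entry below is just the post's vector compared with itself, i.e. a cosine of 1 up to floating point rounding.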

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999923 1011 high scalability-2011-03-25-Did the Microsoft Stack Kill MySpace?


2 0.5841471 1014 high scalability-2011-03-31-8 Lessons We Can Learn from the MySpace Incident - Balance, Vision, Fearlessness

Introduction: A surprising amount of heat and light was generated by the whole Microsoft vs MySpace discussion. Why people feel so passionate about this I'm not quite sure, but fortunately for us, in the best sense of the web, it generated an amazing number of insightful comments and observations. If we stand back and take a look at the whole incident, what can we take away that might help us in the future? All computer companies are technology companies first. A repeated theme was that you can't be an entertainment company first. You are a technology company providing entertainment using technology. The tech can inform the entertainment side, the entertainment side drives features, but they really can't be separated. An awesome stack that does nothing is useless. A great idea on a poor stack is just as useless. There's a difficult balance that must be achieved and both management and developers must be aware that there's something to balance. All pigs are equal. All business f

3 0.39851591 788 high scalability-2010-03-04-How MySpace Tested Their Live Site with 1 Million Concurrent Users

Introduction: This is a guest post by Dan Bartow, VP of SOASTA , talking about how they pelted MySpace with 1 million concurrent users using 800 EC2 instances. I thought this was an interesting story because: that's a lot of users, it takes big cajones to test your live site like that, and not everything worked out quite as expected. I'd like to thank Dan for taking the time to write and share this article. In December of 2009 MySpace launched a new wave of streaming music video offerings in New Zealand, building on the previous success of MySpace music.  These new features included the ability to watch music videos, search for artist’s videos, create lists of favorites, and more. The anticipated load increase from a feature like this on a popular site like MySpace is huge, and they wanted to test these features before making them live.   If you manage the infrastructure that sits behind a high traffic application you don’t want any surprises.  You want to understand your breakin

4 0.35559762 511 high scalability-2009-02-12-MySpace Architecture

Introduction: Update: Presentation: Behind the Scenes at MySpace.com. Dan Farino, Chief Systems Architect at MySpace, shares details of some of MySpace's cool internal operations tools. MySpace.com is one of the fastest growing sites on the Internet with 65 million subscribers and 260,000 new users registering each day. Often criticized for poor performance, MySpace has had to tackle scalability issues few other sites have faced. How did they do it? Site: http://myspace.com Information Sources Presentation: Behind the Scenes at MySpace.com Inside MySpace.com Platform ASP.NET 2.0 Windows IIS SQL Server What's Inside? 300 million users. Pushes 100 gigabits/second to the internet. 10Gb/sec is HTML content. 4,500+ web servers running Windows 2003/IIS 6.0/ASP.NET. 1,200+ cache servers running 64-bit Windows 2003. 16GB of objects cached in RAM. 500+ database servers running 64-bit Windows and SQL Server 2005. MySpace processes 1.5 Billion page views per day and

5 0.17666055 584 high scalability-2009-04-27-Some Questions from a newbie

Introduction: Hello highscalability world. I just discovered this site yesterday in a search for a scalability resource and was very pleased to find such useful information. I have some questions regarding distributed caching that I was hoping the scalability intelligentsia trafficking this forum could answer. I apologize for my lack of technical knowledge; I'm hoping this site will increase said knowledge! Feel free to answer all or as much as you want. Thank you in advance for your responses and thank you for a great resource! 1.) What are the standard benchmarks used to measure the performance of memcached or mySQL/memcached working together (from web 2.0 companies etc)? 2.) The little research I've conducted on this site suggests that most web 2.0 companies use a combination of mySQL and a hacked memcached (and potentially sharding). Does anyone know if any of these companies use an enterprise vendor for their distributed caching layer? (At this point in time I've only heard of Jive soft

6 0.17562033 1240 high scalability-2012-05-07-Startups are Creating a New System of the World for IT

7 0.16452834 1355 high scalability-2012-11-05-Gone Fishin': Building Super Scalable Systems: Blade Runner Meets Autonomic Computing In The Ambient Cloud

8 0.16435091 750 high scalability-2009-12-16-Building Super Scalable Systems: Blade Runner Meets Autonomic Computing in the Ambient Cloud

9 0.16181308 1508 high scalability-2013-08-28-Sean Hull's 20 Biggest Bottlenecks that Reduce and Slow Down Scalability

10 0.15815528 938 high scalability-2010-11-09-Sponsored Post: Imo, Membase, Playfish, Electronic Arts, Tagged, Undertone, Joyent, Appirio, Tuenti, CloudSigma, ManageEngine, Site24x7

11 0.15673737 96 high scalability-2007-09-18-Amazon Architecture

12 0.15385921 840 high scalability-2010-06-10-The Four Meta Secrets of Scaling at Facebook

13 0.14862587 721 high scalability-2009-10-13-Why are Facebook, Digg, and Twitter so hard to scale?

14 0.1420286 922 high scalability-2010-10-19-Sponsored Post: Playfish, Electronic Arts, Tagged, Undertone, Box.net, Wiredrive, Joyent, DeviantART, CloudSigma, ManageEngine, Site24x7

15 0.14136837 153 high scalability-2007-11-13-Friendster Lost Lead Because of a Failure to Scale

16 0.14108644 947 high scalability-2010-11-23-Sponsored Post: Imo, Undertone, Joyent, Appirio, Tuenti, CloudSigma, ManageEngine, Site24x7

17 0.13971816 1021 high scalability-2011-04-12-Sponsored Post: Gazillion, Edmunds, OPOWER, ClearStone, deviantART, ScaleOut, aiCache, WAPT, Karmasphere, Kabam, Newrelic, Cloudkick, Membase, Joyent, CloudSigma, ManageEngine, Site24x7

18 0.1382331 929 high scalability-2010-10-26-Sponsored Post: Membase, Playfish, Electronic Arts, Tagged, Undertone, Joyent, Appirio, Tuenti, CloudSigma, ManageEngine, Site24x7

19 0.13777123 932 high scalability-2010-10-28-Sponsored Post: Amazon, Membase, Playfish, Electronic Arts, Tagged, Undertone, Joyent, Appirio, Tuenti, CloudSigma, ManageEngine, Site24x7

20 0.13639215 1363 high scalability-2012-11-27-Sponsored Post: Akiban, Booking, Teradata Aster, Hadapt, Zoosk, Aerospike, Server Stack, Wiredrive, NY Times, CouchConf, FiftyThree, Percona, ScaleOut, New Relic, NetDNA, GigaSpaces, AiCache, Logic Monitor, AppDynamics


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.288), (1, 0.07), (2, 0.045), (3, -0.03), (4, 0.09), (5, -0.148), (6, -0.086), (7, 0.047), (8, 0.037), (9, -0.02), (10, -0.023), (11, 0.046), (12, -0.01), (13, 0.051), (14, 0.028), (15, -0.004), (16, 0.023), (17, -0.004), (18, -0.014), (19, 0.056), (20, 0.062), (21, 0.008), (22, 0.095), (23, -0.002), (24, -0.023), (25, -0.028), (26, 0.007), (27, -0.001), (28, -0.031), (29, -0.021), (30, -0.042), (31, 0.054), (32, -0.02), (33, -0.064), (34, 0.038), (35, 0.068), (36, -0.025), (37, 0.057), (38, 0.118), (39, 0.106), (40, -0.073), (41, -0.018), (42, 0.036), (43, -0.013), (44, 0.038), (45, 0.07), (46, 0.039), (47, -0.039), (48, 0.004), (49, 0.091)]
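The dense (topicId, topicWeight) vector above is what a latent semantic indexing model produces: every post is projected onto a fixed set of SVD topics, and similar posts are the ones whose projections are closest. A minimal sketch assuming a gensim-style LsiModel with about 50 topics; the library, topic count, and toy corpus are assumptions, not the documented pipeline.

# Sketch: project tf-idf vectors into a 50-topic LSI space and rank posts by
# cosine similarity there. Assumes gensim; the actual model behind this page
# (library, topic count, preprocessing) is not documented.
from gensim import corpora, models, similarities

texts = [doc.lower().split() for doc in [
    "robert scoble myspace death spiral microsoft stack facebook",
    "lessons we can learn from the myspace incident balance vision",
    "this is why microsoft won and why they lost msn messenger",
]]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

tfidf = models.TfidfModel(bow)                     # reweight raw counts
lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=50)

print(lsi[tfidf[bow[0]]])                          # [(topicId, topicWeight), ...]
index = similarities.MatrixSimilarity(lsi[tfidf[bow]], num_features=lsi.num_topics)
print(sorted(enumerate(index[lsi[tfidf[bow[0]]]]), key=lambda x: -x[1]))

Unlike plain tf-idf cosine similarity, the comparison happens in the reduced topic space, so two posts can rank as similar without sharing the exact same words.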

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9607029 1011 high scalability-2011-03-25-Did the Microsoft Stack Kill MySpace?


2 0.93724251 1014 high scalability-2011-03-31-8 Lessons We Can Learn from the MySpace Incident - Balance, Vision, Fearlessness


3 0.81031978 1635 high scalability-2014-04-21-This is why Microsoft won. And why they lost.

Introduction: My favorite kind of histories are those told from an insider's perspective. The story of Richard the Lionheart is full of great battles and dynastic intrigue. The story of one of his soldiers, not so much. Yet the soldiers' story, as someone who has experienced the real consequences of decisions made and actions taken, is more revealing. We get such a history in  Chat Wars , a wonderful article written by David Auerbach, who in 1998 worked at Microsoft on MSN Messenger Service, Microsoft’s instant messaging app (for a related story see  The Rise and Fall of AIM, the Breakthrough AOL Never Wanted ). It's as if Herodotus visited Microsoft and wrote down his experiences. It has that same sort of conversational tone, insightful on-the-ground observations, and facts no outsider might ever believe. Much of the article is a play-by-play account of the cat and mouse game David plays changing Messenger to track AOL's Instant Messenger protocol changes. AOL repeatedly tried to make it so M

4 0.80691826 1477 high scalability-2013-06-18-Scaling Mailbox - From 0 to One Million Users in 6 Weeks and 100 Million Messages Per Day

Introduction: You know your product is doing well when most of your early blog posts deal with the status of the waiting list of hundreds of thousands of users eagerly waiting to download your product. That's the enviable position Mailbox, a free mobile email management app, found themselves in early in their release cycle. Hasn't email been done already? Apparently not. Mailbox scaled to one million users in a paltry six weeks with a team of about 14 people. As of April they were delivering over 100 million messages per day. How did they do it? Mailbox engineering lead, Sean Beausoleil, gave an informative interview on readwrite.com on how Mailbox planned to scale... Gather signals early. A pre-release launch video helped generate interest, but it also allowed them to gauge early interest before even releasing. From the overwhelming response they knew they would need to scale and scale quickly. Have something unique. The average person might not think a mailbox app

5 0.78392339 153 high scalability-2007-11-13-Friendster Lost Lead Because of a Failure to Scale

Introduction: Hey, this scaling stuff might just be important. Jim Scheinman, former Bebo and Friendster exec, puts the blame squarely on Friendster's inability to scale as the reason they lost the social networking race: VB : Can you tell me a bit about what you learned in your time at Friendster?   JS : For me, it basically came down to failed execution on the technology side — we had millions of Friendster members begging us to get the site working faster so they could log in and spend hours social networking with their friends. I remember coming in to the office for months reading thousands of customer service emails telling us that if we didn’t get our site working better soon, they’d be ‘forced to join’ a new social networking site that had just launched called MySpace…the rest is history. To be fair to Friendster’s technology team at the time, they were on the forefront of many new scaling and database issues that web sites simply hadn’t had to deal with prior to Friendster. As is often

6 0.78024822 1171 high scalability-2012-01-09-The Etsy Saga: From Silos to Happy to Billions of Pageviews a Month

7 0.75703132 378 high scalability-2008-09-03-Some Facebook Secrets to Better Operations

8 0.75164843 870 high scalability-2010-08-02-7 Scaling Strategies Facebook Used to Grow to 500 Million Users

9 0.7394684 344 high scalability-2008-06-09-FaceStat's Rousing Tale of Scaling Woe and Wisdom Won

10 0.73800695 493 high scalability-2009-01-16-Just-In-Time Scalability: Agile Methods to Support Massive Growth (IMVU case study)

11 0.73775381 1269 high scalability-2012-06-20-iDoneThis - Scaling an Email-based App from Scratch

12 0.73634702 840 high scalability-2010-06-10-The Four Meta Secrets of Scaling at Facebook

13 0.7347883 1209 high scalability-2012-03-14-The Azure Outage: Time Is a SPOF, Leap Day Doubly So

14 0.73113388 352 high scalability-2008-07-18-Robert Scoble's Rules for Successfully Scaling Startups

15 0.72974324 1492 high scalability-2013-07-17-How do you create a 100th Monkey software development culture?

16 0.72297585 129 high scalability-2007-10-23-Hire Facebook, Ning, and Salesforce to Scale for You

17 0.72033209 1528 high scalability-2013-10-07-Ask HS: Is Microsoft the Right Technology for a Scalable Web-based System?

18 0.71878052 1500 high scalability-2013-08-12-100 Curse Free Lessons from Gordon Ramsay on Building Great Software

19 0.71834117 1444 high scalability-2013-04-23-Facebook Secrets of Web Performance

20 0.71833593 1012 high scalability-2011-03-28-Aztec Empire Strategy: Use Dual Pipes in Your Aqueduct for High Availability


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(1, 0.165), (2, 0.191), (10, 0.052), (26, 0.013), (30, 0.041), (40, 0.026), (54, 0.077), (56, 0.013), (61, 0.085), (76, 0.01), (77, 0.021), (79, 0.112), (85, 0.021), (94, 0.036), (98, 0.016)]
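The much sparser (topicId, topicWeight) vector above is characteristic of LDA: each post gets a probability distribution over latent topics, and only the non-negligible topics are reported. A minimal sketch assuming a gensim-style LdaModel; the topic count, training passes, and toy corpus are illustrative assumptions.

# Sketch: train an LDA topic model over bag-of-words vectors and rank posts
# by similarity of their topic distributions. Assumes gensim; the real model
# parameters behind this page are unknown.
from gensim import corpora, models, similarities

texts = [doc.lower().split() for doc in [
    "robert scoble myspace death spiral microsoft stack facebook",
    "the changing face of scale the downside of scaling in the contextual age",
    "big iron returns with bigmemory servers cpu memory capacity",
]]
dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(bow, id2word=dictionary, num_topics=100, passes=10)

print(lda[bow[0]])                                  # sparse [(topicId, topicWeight), ...]
index = similarities.MatrixSimilarity(lda[bow], num_features=lda.num_topics)
print(sorted(enumerate(index[lda[bow[0]]]), key=lambda x: -x[1]))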

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97483468 1430 high scalability-2013-03-27-The Changing Face of Scale - The Downside of Scaling in the Contextual Age

Introduction: Robert Scoble is a kind of Brothers Grimm for the digital age. Instead of inspired romantics walking around the country side collecting the folk tales of past ages, he is an inspired technologist documenting the current mythology of startups. One of the developments Robert is exploring is the rise of the contextual age . Where every bit of information about you is continually being prodded, pulled, and observed, shoveled into a great learning machine, and turned into a fully actionable knowledge graph of context. A digital identity more real to software than your physical body ever was.  Sinner or saviour, the Age of Context has interesting implications for startups. It raises the entrance bar to dizzying heights. Much of the reason companies are tearing down the Golden Age of the Web, one open protocol at a time, is to create a walled garden of monopolized information. To operate in this world you will have to somehow create a walled garden of your own. And it will be a damn

same-blog 2 0.96465236 1011 high scalability-2011-03-25-Did the Microsoft Stack Kill MySpace?


3 0.95797765 1118 high scalability-2011-09-19-Big Iron Returns with BigMemory

Introduction: This is a guest post by Greg Luck Founder and CTO, Ehcache Terracotta Inc. Note: this article contains a bit too much of a product pitch, but the points are still generally valid and useful. The legendary Moore’s Law, which states that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years, has held true since 1965. It follows that integrated circuits will continue to get smaller, with chip fabrication currently at a minuscule 22nm process (1). Users of big iron hardware, or servers that are dense in terms of CPU power and memory capacity, benefit from this trend as their hardware becomes cheaper and more powerful over time. At some point soon, however, density limits imposed by quantum mechanics will preclude further density increases. At the same time, low-cost commodity hardware influences enterprise architects to scale their applications horizontally, where processing is spread across clusters of l

4 0.9564172 195 high scalability-2007-12-28-Amazon's EC2: Pay as You Grow Could Cut Your Costs in Half

Introduction: Update 2: Summize Computes Computing Resources for a Startup . Lots of nice graphs showing Amazon is hard to beat for small machines and becomes less cost efficient for well used larger machines. Long term storage costs may eat your savings away. And out of cloud bandwidth costs are high. Update: via ProductionScale , a nice Digital Web article on how to set up S3 to store media files and how Blue Origin was able to handle 3.5 million requests and 758 GBs in bandwidth in a single day for very little $$$. Also a Right Scale article on Network performance within Amazon EC2 and to Amazon S3 . 75MB/s between EC2 instances, 10.2MB/s between EC2 and S3 for download, 6.9MB/s upload. Now that Amazon's S3 (storage service) is out of beta and EC2 (elastic compute cloud) has added new instance types (the class of machine you can rent) with more CPU and more RAM, I thought it would be interesting to take a look at how their pricing stacks up. The quick conclusion: the m

5 0.95546925 1302 high scalability-2012-08-10-Stuff The Internet Says On Scalability For August 10, 2012

Introduction: It's HighScalability Time: TNW : On an average day, out of 30 trillion URLs on the web, Google crawls 20B web pages and now serves 100B searches every month. Quotable Quotes: @tapbot_paul : The 2 computers on the Curiosity rover are RAD750 based, they are approximately 1/10th the speed of an iPhone 4s and “only” cost $200k each. @merv : #cassandra12 Why @adrianco loves what he's doing: "You are no longer IO-bound, you’re CPU bound, like you’re supposed to be." @maxtaco : Garbage collection solves a minuscule %age of bugs, that are non-critical (memleaks? big deal!) and easy to find and fix. At a HUGE expense. @merv : #cassandra12 @eddie_satterly describing $1M savings in first year migrating from MS SQL Server with SAN to Cassandra solution - w more data. @mattbrauchler : A slow node is worse than a down node #cassandra12 @practicingEA : "The math of predictive analytics has been around for years, its the computers t

6 0.95461917 918 high scalability-2010-10-12-The CIO’s Problem: Cloud “Mess” or Cloud “Mash”

7 0.95404077 106 high scalability-2007-10-02-Secrets to Fotolog's Scaling Success

8 0.95401931 1514 high scalability-2013-09-09-Need Help with Database Scalability? Understand I-O

9 0.95371145 576 high scalability-2009-04-21-What CDN would you recommend?

10 0.95344812 1028 high scalability-2011-04-22-Stuff The Internet Says On Scalability For April 22, 2011

11 0.95304841 1093 high scalability-2011-08-05-Stuff The Internet Says On Scalability For August 5, 2011

12 0.95299214 1109 high scalability-2011-09-02-Stuff The Internet Says On Scalability For September 2, 2011

13 0.95191187 853 high scalability-2010-07-08-Cloud AWS Infrastructure vs. Physical Infrastructure

14 0.95078737 1398 high scalability-2013-02-04-Is Provisioned IOPS Better? Yes, it Delivers More Consistent and Higher Performance IO

15 0.95071912 1502 high scalability-2013-08-16-Stuff The Internet Says On Scalability For August 16, 2013

16 0.95024562 1008 high scalability-2011-03-22-Facebook's New Realtime Analytics System: HBase to Process 20 Billion Events Per Day

17 0.95016575 1431 high scalability-2013-03-29-Stuff The Internet Says On Scalability For March 29, 2013

18 0.95005804 1499 high scalability-2013-08-09-Stuff The Internet Says On Scalability For August 9, 2013

19 0.94976646 935 high scalability-2010-11-05-Hot Scalability Links For November 5th, 2010

20 0.94969887 1586 high scalability-2014-01-28-How Next Big Sound Tracks Over a Trillion Song Plays, Likes, and More Using a Version Control System for Hadoop Data