blog-mining high_scalability knowledge-graph by maker-knowledge-mining

high_scalability knowledge graph


high_scalability-2014

high_scalability-2013

high_scalability-2012

high_scalability-2011

high_scalability-2010

high_scalability-2009

high_scalability-2008

high_scalability-2007




The latest blog posts:

1 high scalability-2014-06-05-Cloud Architecture Revolution

Introduction: The introduction of cloud technologies is not a simple evolution of existing ones, but a real revolution. Like all revolutions, it changes the points of view and redefines all the meanings. Nothing is as before. This post aims to analyze some key words and concepts usually used in traditional architectures, redefining them according to the standpoint of the cloud. Understanding the meaning of the new words is crucial to grasp the essence of a pure cloud architecture. « There is no greater impediment to the advancement of knowledge than the ambiguity of words. » THOMAS REID, Essays on the Intellectual Powers of Man. Nowadays it is necessary to challenge the limits of traditional architectures, going beyond the normal concepts of scalability to support millions of users (WhatsApp 500 Million), billions of transactions per day (Salesforce 1.3 billion), and five 9s of availability (99.999% AOL). I wish all of you the success of the examples cited above, but do not think that it is co

2 high scalability-2014-05-23-Gone Fishin' 2014

Introduction: Well, not exactly Fishin', but I'll be on a month-long vacation starting today. I won't be posting new content, so we'll all have a break. Disappointing, I know. If you've ever wanted to write an article for HighScalability this would be a great time :-) I'd be very interested in your experiences with containers vs VMs if you have some thoughts on the subject. So if the spirit moves you, please write something. See you on down the road...

3 high scalability-2014-05-21-9 Principles of High Performance Programs

Introduction: Arvid Norberg on the libtorrent blog has put together an excellent list of principles of high performance programs, obviously derived from hard-won experience programming on BitTorrent. Two fundamental causes of performance problems: Memory Latency. A big performance problem on modern computers is the latency of SDRAM. The CPU waits idle for a read from memory to come back. Context Switching. When a CPU switches context, "the memory it will access is most likely unrelated to the memory the previous context was accessing. This often results in significant eviction of the previous cache, and requires the switched-to context to load much of its data from RAM, which is slow." Rules to help balance the forces of evil: Batch work. Avoid context switching by batching work. For example, there are vector versions of system calls like writev() and readv() that operate on more than one item per call. An implication is that you want to merge as many writes as possible.
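To make the batching idea concrete, here is a minimal sketch (not from the original post) using Python's os.writev, a thin wrapper over the vectored writev(2) system call available on Unix, to flush several pending buffers in one syscall instead of one write() per buffer; the log path and messages are made up for illustration.

```python
import os

def flush_batched(fd, pending_buffers):
    """Write all pending buffers with a single vectored syscall instead of
    one write() per buffer, avoiding repeated user/kernel transitions."""
    return os.writev(fd, pending_buffers)   # one syscall for the whole batch

# Hypothetical usage: accumulate small messages, then flush them together.
fd = os.open("/tmp/batched.log", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
pending = [b"event-1\n", b"event-2\n", b"event-3\n"]
print(flush_batched(fd, pending), "bytes written in one writev() call")
os.close(fd)
```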

4 high scalability-2014-05-20-It's Networking. In Space! Or How E.T. Will Phone Home.

Introduction: What will the version of the Internet that follows us to the stars look like? Yes, people are really thinking seriously about this sort of thing. Specifically the InterPlanetary Networking Special Interest Group (IPNSIG). Ansible-like faster-than-light communication it isn't. There's no magical warp drive. Nor is it a network of telepaths acting as a 'verse-spanning telegraph system. It's more mundane than that. And in many ways more interesting, as it's sort of like the old Internet on steroids, the one that was based on UUCP and dial-up connections, but over vastly longer distances and with much longer delays: The Interplanetary Internet (based on IPN, also called InterPlaNet) is a conceived computer network in space, consisting of a set of network nodes which can communicate with each other.[1][2] Communication would be greatly delayed by the great interplanetary distances, so the IPN needs a new set of protocols and technology that are tolerant to large delays and

5 high scalability-2014-05-19-A Short On How the Wayback Machine Stores More Pages than Stars in the Milky Way

Introduction: How does the Wayback Machine work? Now with over 400 billion webpages indexed, allowing the Internet to be browsed all the way back to 1996, it's an even more compelling question. I've looked several times but I've never found a really good answer. Here's some information from a thread on Hacker News. It starts with mmagin, a former Archive employee: I can't speak to their current infrastructure (though more of it is open source now - http://archive-access.sourceforge.net/projects/wayback/ ), but as far as the wayback machine, there was no SQL database anywhere in it. For the purposes of making the wayback machine go: Archived data was in ARC file format (predecessor to http://en.wikipedia.org/wiki/Web_ARChive ), which is essentially a concatenation of separately gzipped records. That is, you can seek to a particular offset and start decompressing a record. Thus you could get at any archived web page with a triple (server, filename, file-offset). Thus it was spread
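As a rough sketch of the lookup scheme mmagin describes, assuming a local file of concatenated gzip records and a hypothetical filename and offset (not anything from the Archive's actual code), the record at a known offset can be decompressed on its own without touching the rest of the file:

```python
import zlib

def read_record(path, offset, chunk_size=1 << 20):
    """Decompress the single gzip member starting at `offset` in a file made
    of concatenated, separately gzipped records (the ARC-style layout above)."""
    decomp = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # expect gzip framing
    out = []
    with open(path, "rb") as f:
        f.seek(offset)                       # jump straight to the record
        while not decomp.eof:                # stop at the end of this one member
            chunk = f.read(chunk_size)
            if not chunk:
                break
            out.append(decomp.decompress(chunk))
    return b"".join(out)

# Hypothetical lookup triple: (server, filename, file-offset)
# record = read_record("crawl-0001.arc.gz", offset=123456789)
```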

6 high scalability-2014-05-16-Stuff The Internet Says On Scalability For May 16th, 2014

Introduction: Hey, it's HighScalability time: Cross Section of an Undersea Cable. It's industrial art. The parts. The story. 400,000,000,000: Wayback Machine pages indexed; 100 billion: Google searches per month; 10 million: Snapchat monthly user growth. Quotable Quotes: @duncanjw: The Great Rewrite - many apps will be rewritten not just replatformed over next 10 years says @cote #openstacksummit @RFFlores: The OpenStack conundrum. If you don't adopt it, you will regret it in the future. If you do adopt it, you will regret it now. elementai: I love Redis so much, it became like a superglue where "just enough" performance is needed to resolve a bottleneck problem, but you don't have resources to rewrite a whole thing in something fast. @antirez: "when software engineering is reduced to plumbing together generic systems, software engineers lose their sense of ownership" Tom Akehurst: Microservices vs. monolith is a false di

7 high scalability-2014-05-15-Paper: SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine

Introduction: So how do you knit multiple datacenters and many thousands of phones and other clients into a single cooperating system? Usually you don't. It's too hard. We see nascent attempts in services like Firebase and Parse. SwiftCloud, as described in SwiftCloud: Fault-Tolerant Geo-Replication Integrated all the Way to the Client Machine, goes two steps further by leveraging Conflict-free Replicated Data Types (CRDTs), which means "data can be replicated at multiple sites and be updated independently with the guarantee that all replicas converge to the same value. In a cloud environment, this allows a user to access the data center closer to the user, thus optimizing the latency for all users." While we don't see these kinds of systems just yet, they are a strong candidate for how things will work in the future, efficiently using resources at every level while supporting huge numbers of cooperating users. Abstract: Client-side logic and storage are increasingly used in web a
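As a rough illustration of the convergence property quoted above, here is the standard textbook grow-only counter, a minimal CRDT sketch that is not one of SwiftCloud's actual data types; replica names and the merge rule are made up for the example.

```python
class GCounter:
    """Grow-only counter CRDT: each replica increments only its own slot,
    and merge takes the element-wise maximum, so all replicas converge."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}                      # replica_id -> local count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)

    def value(self):
        return sum(self.counts.values())

# Two replicas (say, a data center and a phone) updated independently,
# then merged in either order:
a, b = GCounter("dc-1"), GCounter("phone-1")
a.increment(3)
b.increment(2)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5            # replicas converge
```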

8 high scalability-2014-05-14-Google Says Cloud Prices Will Follow Moore’s Law: Are We All Renters Now?

Introduction: After Google cut prices on their Google Cloud Platform, Amazon quickly followed with their own price cuts. Even more interesting is what the future holds for pricing. The near future looks great. After that? We'll see. Adrian Cockcroft highlights that Google thinks prices should follow Moore's law, which means we should expect prices to halve every 18-24 months. That's good news. Greater cost certainty means you can make much more aggressive build-out plans. With the savings you can hire more people, handle more customers, and add those media-rich features you thought you couldn't afford. Design is directly related to costs. Without Google competing with Amazon there's little doubt the price reduction curve would be much less favorable. As a late cloud entrant Google is now in a customer acquisition phase, so they are willing to pay for customers, which means lower prices are an acceptable cost of doing business. Profit and high margins are not the objective. Getting marke
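A quick back-of-the-envelope sketch of what "halve every 18-24 months" implies for a unit price over a few years; the $100 starting price and the time horizon are made-up numbers for illustration, not figures from the post.

```python
def projected_price(start_price, months, halving_period=18):
    """Price after `months` if it halves every `halving_period` months."""
    return start_price * 0.5 ** (months / halving_period)

# Hypothetical $100/month resource under 18- and 24-month halving periods:
for period in (18, 24):
    series = [round(projected_price(100, m, period), 2) for m in (0, 12, 24, 36)]
    print(f"halving every {period} months:", series)
```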

9 high scalability-2014-05-12-4 Architecture Issues When Scaling Web Applications: Bottlenecks, Database, CPU, IO

Introduction: This is a guest repost by Venkatesh CM at Architecture Issues Scaling Web Applications. In this post I will cover architecture issues that show up while scaling and performance tuning large-scale web applications. Let's start by defining a few terms to create a common understanding and vocabulary. Later on I will go through the different issues that pop up while scaling a web application, like architecture bottlenecks, scaling the database, CPU-bound applications, and IO-bound applications. Determining the optimal thread pool size of a web application will be covered in the next post. Performance: The term performance of a web application is used to mean several things. The ones most developers are primarily concerned with are response time and scalability. Response Time: The time taken by the web application to process a request and return a response. Applications should respond to requests (response time) within an acceptable duration. If the application is taking beyond the acceptable time, it is said to
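A toy sketch of measuring response time as defined above; the decorator, handler, and simulated work are hypothetical stand-ins, not code from the guest post.

```python
import time

def timed(handler):
    """Wrap a request handler and report how long it took to return a
    response, i.e. the response time as defined above."""
    def wrapper(request):
        start = time.perf_counter()
        response = handler(request)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"response time: {elapsed_ms:.1f} ms")
        return response
    return wrapper

@timed
def handle(request):
    time.sleep(0.05)   # stand-in for real application work
    return "ok"

handle({"path": "/"})
```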

10 high scalability-2014-05-09-Stuff The Internet Says On Scalability For May 9th, 2014

Introduction: Hey, it's HighScalability time: NASA captures Guatemala volcano erupting from space. 40,000 exabytes: from now until 2020, the digital universe will about double every two years; $650,000: amount raised by the MaydayPAC in one week. Quotable Quotes: @BenedictEvans: Masayoshi Son: $20m initial investment in Alibaba, current stake worth $58bn. @iamdevloper: I sneezed earlier and Siri compiled it to valid Perl. @cdixon: "There is not enough competition in the last mile market to allow a true market to function" @PatrickMcFadin: Get ready for some serious server density. AMD is working on K12, brand-new x86 and ARM cores. This plus 8T SSD? With age comes changing priorities. Facebook is now 10 and has grown up. They are no longer moving fast and breaking things. They are now into the stability thing. Letting developers know they are a stable platform. The play is to get all that beautiful data from developers by bei