high_scalability high_scalability-2007 high_scalability-2007-28 knowledge-graph by maker-knowledge-mining

28 high scalability-2007-07-25-Product: NetApp MetroCluster Software


meta info for this blog

Source: html

Introduction: NetApp MetroCluster Software is a cost-effective, integrated high-availability storage cluster and site failover capability. NetApp MetroCluster is an integrated high-availability and disaster recovery solution that can reduce system complexity and simplify management while ensuring a greater return on investment. MetroCluster uses clustered server technology to replicate data synchronously between sites located miles apart, eliminating data loss in case of a disruption. A simple and powerful recovery process minimizes downtime, with little or no user action required. At one company I worked at, they used the NetApp SnapMirror feature to replicate data across long distances to multiple datacenters. They had a very fast backbone and it worked well. The issue with NetApp is always one of cost, but if you can afford it, it's a good option.
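To make the synchronous-replication behavior described above concrete, here is a minimal, hypothetical Python sketch (not NetApp's implementation; the site names and the send_to_site() transport are invented): a write is acknowledged only after every mirror site has committed it, which is what allows an acknowledged write to survive the loss of an entire site.

    # Hedged sketch of mirror-before-ack (synchronous) replication between sites.
    # Not NetApp code; the transport and site names are placeholders.
    import concurrent.futures

    SITES = ["primary-site", "dr-site"]  # e.g. two datacenters a few miles apart

    def send_to_site(site, block_id, data):
        """Placeholder for the real inter-site link; True means a durable commit."""
        return True

    def synchronous_write(block_id, data):
        """Acknowledge the caller only after *every* site has committed the write."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(SITES)) as pool:
            committed = list(pool.map(lambda s: send_to_site(s, block_id, data), SITES))
        if not all(committed):
            raise IOError("write not committed at every site; no acknowledgement")
        return "ack"

The cost of this guarantee is that every acknowledged write pays the round trip to the farthest site, which is part of why asynchronous replication (as with the SnapMirror setup mentioned above) tends to be preferred across long distances.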


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 NetApp MetroCluster Software is a cost-effective, integrated high-availability storage cluster and site failover capability. [sent-1, score-0.329]

2 NetApp MetroCluster is an integrated high-availability and disaster recovery solution that can reduce system complexity and simplify management while ensuring greater return on investment. [sent-2, score-1.048]

3 MetroCluster uses clustered server technology to replicate data synchronously between sites located miles apart, eliminating data loss in case of a disruption. [sent-3, score-1.174]

4 A simple and powerful recovery process minimizes downtime, with little or no user action required. [sent-4, score-0.548]

5 At one company I worked at, they used the NetApp SnapMirror feature to replicate data across long distances to multiple datacenters. [sent-5, score-1.042]

6 They had a very fast backbone and it worked well. [sent-6, score-0.267]

7 The issue with NetApp is always one of cost, but if you can afford it, it's a good option. [sent-7, score-0.231]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('metrocluster', 0.588), ('netapp', 0.424), ('replicate', 0.2), ('recovery', 0.172), ('snap', 0.164), ('integrated', 0.156), ('mirror', 0.146), ('distances', 0.144), ('worked', 0.144), ('synchronously', 0.137), ('eliminating', 0.132), ('minimizes', 0.132), ('backbone', 0.123), ('ensuring', 0.123), ('miles', 0.122), ('apart', 0.121), ('simplify', 0.119), ('clustered', 0.111), ('afford', 0.109), ('downtime', 0.102), ('disaster', 0.097), ('loss', 0.096), ('action', 0.09), ('located', 0.089), ('failover', 0.085), ('greater', 0.084), ('issue', 0.081), ('return', 0.08), ('option', 0.079), ('complexity', 0.069), ('powerful', 0.067), ('company', 0.061), ('reduce', 0.06), ('sites', 0.06), ('feature', 0.059), ('case', 0.049), ('cluster', 0.047), ('little', 0.047), ('technology', 0.045), ('uses', 0.045), ('management', 0.045), ('data', 0.044), ('solution', 0.043), ('long', 0.043), ('cost', 0.042), ('always', 0.041), ('site', 0.041), ('process', 0.04), ('simple', 0.039), ('multiple', 0.037)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 28 high scalability-2007-07-25-Product: NetApp MetroCluster Software

Introduction: NetApp MetroCluster Software is a cost-effective, integrated high-availability storage cluster and site failover capability. NetApp MetroCluster is an integrated high-availability and disaster recovery solution that can reduce system complexity and simplify management while ensuring a greater return on investment. MetroCluster uses clustered server technology to replicate data synchronously between sites located miles apart, eliminating data loss in case of a disruption. A simple and powerful recovery process minimizes downtime, with little or no user action required. At one company I worked at, they used the NetApp SnapMirror feature to replicate data across long distances to multiple datacenters. They had a very fast backbone and it worked well. The issue with NetApp is always one of cost, but if you can afford it, it's a good option.

2 0.21039718 25 high scalability-2007-07-25-Paper: Designing Disaster Tolerant High Availability Clusters

Introduction: A very detailed (339 pages) paper on how to use HP products to create a highly available cluster. It's somewhat dated and obviously concentrates on HP products, but it is still good information. Table of contents: 1. Disaster Tolerance and Recovery in a Serviceguard Cluster 2. Building an Extended Distance Cluster Using ServiceGuard 3. Designing a Metropolitan Cluster 4. Designing a Continental Cluster 5. Building Disaster-Tolerant Serviceguard Solutions Using Metrocluster with Continuous Access XP 6. Building Disaster Tolerant Serviceguard Solutions Using Metrocluster with EMC SRDF 7. Cascading Failover in a Continental Cluster Evaluating the Need for Disaster Tolerance What is a Disaster Tolerant Architecture? Types of Disaster Tolerant Clusters Extended Distance Clusters Metropolitan Cluster Continental Cluster Continental Cluster With Cascading Failover Disaster Tolerant Architecture Guidelines Protecting Nodes through Geographic Dispersion Protecting Data th

3 0.14128883 27 high scalability-2007-07-25-Product: 3 PAR REMOTE COPY

Introduction: 3PAR Remote Copy is a uniquely simple and efficient replication technology that allows customers to protect and share any application data affordably. Built upon 3PAR Thin Copy technology, Remote Copy lowers the total cost of storage by addressing the cost and complexity of remote replication. Common Uses of 3PAR Remote Copy: Affordable Disaster Recovery: Mirror data cost-effectively across town or across the world. Centralized Archive: Replicate data from multiple 3PAR InServs located in multiple data centers to a centralized data archive location. Resilient Pod Architecture: Mutually replicate tier 1 or 2 data to tier 3 capacity between two InServs (application pods). Remote Data Access: Replicate data to a remote location for sharing of data with remote users.

4 0.1303753 163 high scalability-2007-11-21-n-phase commit for FS writes, reads stay local

Introduction: I am trying to find a Linux FS that will allow me to replicate all writes synchronously to n nodes in a web server cluster, while keeping all reads local. It should not require specialized hardware. (A minimal sketch of this write-fan-out, read-local pattern follows this list.)

5 0.11824169 159 high scalability-2007-11-18-Reverse Proxy

Introduction: Hi, I saw a year ago that NetApp sold NetCache to Blue Coat. My site is a heavy NetCache user and we cached 83% of our site. We tested with Blue Coat and F5 WA and we are not getting the same performance as NetCache. Do any of you guys have the same issue? Or does somebody know another product that can handle as much traffic? Thanks, Rodrigo

6 0.099986307 20 high scalability-2007-07-16-Paper: The Clustered Storage Revolution

7 0.08611396 1057 high scalability-2011-06-10-Stuff The Internet Says On Scalability For June 10, 2011

8 0.081383757 620 high scalability-2009-06-05-SSL RPC API Scalability

9 0.075876147 1174 high scalability-2012-01-13-Stuff The Internet Says On Scalability For January 13, 2012

10 0.062658437 1596 high scalability-2014-02-14-Stuff The Internet Says On Scalability For February 14th, 2014

11 0.060722478 822 high scalability-2010-05-04-Business continuity with real-time data integration

12 0.060191903 151 high scalability-2007-11-12-a8cjdbc - Database Clustering via JDBC

13 0.058368735 373 high scalability-2008-08-29-Product: ScaleOut StateServer is Memcached on Steroids

14 0.057734385 813 high scalability-2010-04-19-The cost of High Availability (HA) with Oracle

15 0.05733278 240 high scalability-2008-02-05-Handling of Session for a site running from more than 1 data center

16 0.055882033 1041 high scalability-2011-05-15-Building a Database remote availability site

17 0.054006144 1035 high scalability-2011-05-05-Paper: A Study of Practical Deduplication

18 0.052950971 430 high scalability-2008-10-26-Should you use a SAN to scale your architecture?

19 0.052211411 920 high scalability-2010-10-15-Troubles with Sharding - What can we learn from the Foursquare Incident?

20 0.051988117 617 high scalability-2009-06-04-New Book: Even Faster Web Sites: Performance Best Practices for Web Developers
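A side note on entry 4 above (the n-phase commit question): below is a minimal, hypothetical Python sketch of the pattern the poster describes, with invented node names and a placeholder remote_write() call. It only illustrates the write-fan-out/read-local idea and is not a stand-in for a real replicating filesystem.

    # Hedged sketch of "every write goes synchronously to n nodes, reads stay local".
    # REPLICAS and remote_write() are placeholders, not a real API.
    import os

    LOCAL_ROOT = "/var/www/shared"
    REPLICAS = ["node2", "node3", "node4"]   # the other web servers in the cluster

    def remote_write(node, relpath, data):
        """Placeholder for pushing the write to a peer node (RPC, NFS, rsync, ...)."""
        return True

    def write(relpath, data):
        with open(os.path.join(LOCAL_ROOT, relpath), "wb") as f:   # 1. local write
            f.write(data)
        # 2. block until every replica confirms -- the synchronous part
        if not all(remote_write(n, relpath, data) for n in REPLICAS):
            raise IOError("replication incomplete; treat the write as failed")

    def read(relpath):
        with open(os.path.join(LOCAL_ROOT, relpath), "rb") as f:   # reads stay local
            return f.read()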


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.073), (1, 0.036), (2, -0.006), (3, -0.019), (4, -0.017), (5, -0.0), (6, 0.024), (7, -0.022), (8, 0.009), (9, -0.0), (10, -0.017), (11, 0.008), (12, -0.015), (13, -0.008), (14, 0.039), (15, 0.029), (16, 0.009), (17, -0.004), (18, 0.025), (19, 0.022), (20, 0.011), (21, 0.023), (22, 0.006), (23, 0.021), (24, -0.031), (25, 0.0), (26, -0.018), (27, -0.02), (28, -0.026), (29, -0.022), (30, -0.008), (31, 0.023), (32, 0.0), (33, -0.022), (34, -0.019), (35, 0.044), (36, 0.008), (37, 0.011), (38, 0.009), (39, 0.036), (40, 0.015), (41, -0.022), (42, -0.034), (43, 0.044), (44, -0.005), (45, -0.031), (46, 0.006), (47, -0.02), (48, -0.036), (49, 0.017)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.92522866 28 high scalability-2007-07-25-Product: NetApp MetroCluster Software

Introduction: NetApp MetroCluster Software is a cost-effective, integrated high-availability storage cluster and site failover capability. NetApp MetroCluster is an integrated high-availability and disaster recovery solution that can reduce system complexity and simplify management while ensuring a greater return on investment. MetroCluster uses clustered server technology to replicate data synchronously between sites located miles apart, eliminating data loss in case of a disruption. A simple and powerful recovery process minimizes downtime, with little or no user action required. At one company I worked at, they used the NetApp SnapMirror feature to replicate data across long distances to multiple datacenters. They had a very fast backbone and it worked well. The issue with NetApp is always one of cost, but if you can afford it, it's a good option.

2 0.71700388 25 high scalability-2007-07-25-Paper: Designing Disaster Tolerant High Availability Clusters

Introduction: A very detailed (339 pages) paper on how to use HP products to create a highly available cluster. It's somewhat dated and obviously concentrates on HP products, but it is still good information. Table of contents: 1. Disaster Tolerance and Recovery in a Serviceguard Cluster 2. Building an Extended Distance Cluster Using ServiceGuard 3. Designing a Metropolitan Cluster 4. Designing a Continental Cluster 5. Building Disaster-Tolerant Serviceguard Solutions Using Metrocluster with Continuous Access XP 6. Building Disaster Tolerant Serviceguard Solutions Using Metrocluster with EMC SRDF 7. Cascading Failover in a Continental Cluster Evaluating the Need for Disaster Tolerance What is a Disaster Tolerant Architecture? Types of Disaster Tolerant Clusters Extended Distance Clusters Metropolitan Cluster Continental Cluster Continental Cluster With Cascading Failover Disaster Tolerant Architecture Guidelines Protecting Nodes through Geographic Dispersion Protecting Data th

3 0.69410288 27 high scalability-2007-07-25-Product: 3 PAR REMOTE COPY

Introduction: 3PAR Remote Copy is a uniquely simple and efficient replication technology that allows customers to protect and share any application data affordably. Built upon 3PAR Thin Copy technology, Remote Copy lowers the total cost of storage by addressing the cost and complexity of remote replication. Common Uses of 3PAR Remote Copy: Affordable Disaster Recovery: Mirror data cost-effectively across town or across the world. Centralized Archive: Replicate data from multiple 3PAR InServs located in multiple data centers to a centralized data archive location. Resilient Pod Architecture: Mutually replicate tier 1 or 2 data to tier 3 capacity between two InServs (application pods). Remote Data Access: Replicate data to a remote location for sharing of data with remote users.

4 0.68284494 240 high scalability-2008-02-05-Handling of Session for a site running from more than 1 data center

Introduction: If using a DB to store sessions (used by some app servers, e.g. WebSphere), how would an enterprise-class site that is housed in 2 different data centers (that are live/live) maintain the session between both data centers? The problem as I see it is that since each data center has its own session database, if I were to flip the users to only access Data Center 1 (by changing the DNS records for the site or some other load balancing technique), then that would cause all previous Data Center 2 users to lose their sessions. What would be some pure hardware-based solutions to this that are being used now? That way the applications supporting the web site can be abstracted from this. As I see it now, a solution is to possibly have the session databases in both centers somehow replicate the data to each other. I just don't see the best way to accomplish this, and you are not even guaranteed that the session IDs will be unique since it's 2 different application server tiers (again WebSphere). (A minimal sketch of prefixed session IDs with dual writes follows this list.)

5 0.67033374 430 high scalability-2008-10-26-Should you use a SAN to scale your architecture?

Introduction: This is a question everyone must struggle with when building out their datacenter. Storage choices are always the ones I have the least confidence in. David Marks, in his blog You Can Change It Later!, asks the question Should I get a SAN to scale my site architecture? and answers no. A better solution is to use commodity hardware, directly attach storage to servers, and partition across servers to scale and for greater availability. David's reasoning is interesting: A SAN creates a SPOF (single point of failure) that is dependent on a vendor to fly in and fix when there's a problem. This can lead to long downtimes, and during such an outage you have no access to your data at all. Using easily available commodity hardware minimizes risks to your company; it's not just about saving money. Zooming over to Fry's to buy emergency equipment provides the kind of agility startups need in order to respond quickly to ever-changing situations. It's hard to beat the power and flexibility (backup

6 0.66219401 1070 high scalability-2011-06-29-Second Hand Seizure : A New Cause of Site Death

7 0.66190743 1367 high scalability-2012-12-05-5 Ways to Make Cloud Failure Not an Option

8 0.65428835 1041 high scalability-2011-05-15-Building a Database remote availability site

9 0.65307212 809 high scalability-2010-04-13-Strategy: Saving Your Butt With Deferred Deletes

10 0.64565432 1589 high scalability-2014-02-03-How Google Backs Up the Internet Along With Exabytes of Other Data

11 0.63911158 23 high scalability-2007-07-24-Major Websites Down: Or Why You Want to Run in Two or More Data Centers.

12 0.63654697 1157 high scalability-2011-12-14-Virtualization and Cloud Computing is Changing the Network to East-West Routing

13 0.63643384 1046 high scalability-2011-05-23-Evernote Architecture - 9 Million Users and 150 Million Requests a Day

14 0.63257414 822 high scalability-2010-05-04-Business continuity with real-time data integration

15 0.63201803 1059 high scalability-2011-06-14-A TripAdvisor Short

16 0.6195842 1338 high scalability-2012-10-11-RAMCube: Exploiting Network Proximity for RAM-Based Key-Value Store

17 0.61561608 271 high scalability-2008-03-08-Product: DRBD - Distributed Replicated Block Device

18 0.61401749 742 high scalability-2009-11-17-10 eBay Secrets for Planet Wide Scaling

19 0.61364168 1597 high scalability-2014-02-17-How the AOL.com Architecture Evolved to 99.999% Availability, 8 Million Visitors Per Day, and 200,000 Requests Per Second

20 0.61283147 143 high scalability-2007-11-06-Product: ChironFS
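A side note on entry 4 above (sessions across two live data centers): below is a minimal, hypothetical Python sketch of the two fixes that question hints at, namely making session IDs globally unique by prefixing them with the data center that created them, and writing every session to both session databases. The data-center names and the store_in_dc() call are invented.

    # Hedged sketch: DC-prefixed session IDs plus dual writes, so a DNS flip to the
    # other data center still finds every session. store_in_dc() is a placeholder.
    import uuid

    LOCAL_DC = "dc1"
    ALL_DCS = ["dc1", "dc2"]

    def store_in_dc(dc, session_id, session_data):
        """Placeholder for a write to that data center's session database."""
        return True

    def create_session(session_data):
        # The DC prefix keeps IDs unique even though two independent
        # application-server tiers are generating them.
        session_id = "%s-%s" % (LOCAL_DC, uuid.uuid4())
        for dc in ALL_DCS:                    # replicate the session to both centers
            store_in_dc(dc, session_id, session_data)
        return session_id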


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.233), (10, 0.086), (27, 0.402), (61, 0.074), (79, 0.06)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.90873069 555 high scalability-2009-04-04-Performance Anti-Pattern

Introduction: Want your apps to run faster? Here’s what not to do. By: Bart Smaalders, Sun Microsystems. Performance Anti-Patterns: - Fixing Performance at the End of the Project - Measuring and Comparing the Wrong Things - Algorithmic Antipathy - Reusing Software - Iterating Because That’s What Computers Do Well - Premature Optimization - Focusing on What You Can See Rather Than on the Problem - Software Layering - Excessive Numbers of Threads - Asymmetric Hardware Utilization - Not Optimizing for the Common Case - Needless Swapping of Cache Lines Between CPUs For more detail go there

same-blog 2 0.79876047 28 high scalability-2007-07-25-Product: NetApp MetroCluster Software

Introduction: NetApp MetroCluster Software is a cost-effective, integrated high-availability storage cluster and site failover capability. NetApp MetroCluster is an integrated high-availability and disaster recovery solution that can reduce system complexity and simplify management while ensuring a greater return on investment. MetroCluster uses clustered server technology to replicate data synchronously between sites located miles apart, eliminating data loss in case of a disruption. A simple and powerful recovery process minimizes downtime, with little or no user action required. At one company I worked at, they used the NetApp SnapMirror feature to replicate data across long distances to multiple datacenters. They had a very fast backbone and it worked well. The issue with NetApp is always one of cost, but if you can afford it, it's a good option.

3 0.79407936 108 high scalability-2007-10-03-Paper: Brewer's Conjecture and the Feasibility of Consistent Available Partition-Tolerant Web Services

Introduction: Abstract: When designing distributed web services, there are three properties that are commonly desired: consistency, availability, and partition tolerance. It is impossible to achieve all three. In this note, we prove this conjecture in the asynchronous network model, and then discuss solutions to this dilemma in the partially synchronous model.

4 0.74289662 1483 high scalability-2013-06-27-Paper: XORing Elephants: Novel Erasure Codes for Big Data

Introduction: Erasure codes are one of those seemingly magical mathematical creations that with the developments described in the paper  XORing Elephants: Novel Erasure Codes for Big Data , are set to replace triple replication as the data storage protection mechanism of choice. The result says Robin Harris (StorageMojo) in an excellent article,  Facebook’s advanced erasure codes : "WebCos will be able to store massive amounts of data more efficiently than ever before. Bad news: so will anyone else." Robin says with cheap disks triple replication made sense and was economical. With ever bigger BigData the overhead has become costly. But erasure codes have always suffered from unacceptably long time to repair times. This paper describes new Locally Repairable Codes (LRCs) that are efficiently repairable in disk I/O and bandwidth requirements: These systems are now designed to survive the loss of up to four storage elements – disks, servers, nodes or even entire data centers – without losing

5 0.68904525 1097 high scalability-2011-08-12-Stuff The Internet Says On Scalability For August 12, 2011

Introduction: Submitted for your scaling pleasure, you may not  scale often, but when you scale, please drink us: Quotably quotable quotes: @mardix : There is no single point of truth in #NoSQL . #Consistency is no longer global, it's relative to the one accessing it. #Scalability @kekline : RT @CurtMonash: "...from industry figures, Basho/Riak is our third-biggest competitor." How often do you encounter them? "Never have" #nosql @dave_jacobs : Love being in a city where I can overhear a convo about Heroku scalability while doing deadlifts. #ahsanfrancisco @satheeshilu : Doctor at #hospital in india says #ge #healthcare software is slow to handle 100K X-rays an year.Scalability is critical 4 Indian #software @sufw : How can it be possible that Tagged has 80m users and I have *never* heard of it!?! @EventCloudPro : One of my vacation realizations? Whole #bigdata thing has turned into a lotta #bighype - many distinct issues & nothing to do w/ #bigdata No

6 0.640176 1141 high scalability-2011-11-11-Stuff The Internet Says On Scalability For November 11, 2011

7 0.63263834 544 high scalability-2009-03-18-QCon London 2009: Upgrading Twitter without service disruptions

8 0.60893339 883 high scalability-2010-08-20-Hot Scalability Links For Aug 20, 2010

9 0.59876835 717 high scalability-2009-10-07-How to Avoid the Top 5 Scale-Out Pitfalls

10 0.5887078 705 high scalability-2009-09-16-Paper: A practical scalable distributed B-tree

11 0.58787102 1230 high scalability-2012-04-18-Ansible - A Simple Model-Driven Configuration Management and Command Execution Framework

12 0.57879585 1622 high scalability-2014-03-31-How WhatsApp Grew to Nearly 500 Million Users, 11,000 cores, and 70 Million Messages a Second

13 0.57585388 900 high scalability-2010-09-11-Google's Colossus Makes Search Real-time by Dumping MapReduce

14 0.57525152 666 high scalability-2009-07-30-Learn How to Think at Scale

15 0.54245204 835 high scalability-2010-06-03-Hot Scalability Links for June 3, 2010

16 0.53636229 265 high scalability-2008-03-03-Two data streams for a happy website

17 0.52501845 468 high scalability-2008-12-17-Ringo - Distributed key-value storage for immutable data

18 0.52438444 312 high scalability-2008-04-30-Rather small site architecture.

19 0.5231995 109 high scalability-2007-10-03-Save on a Load Balancer By Using Client Side Load Balancing

20 0.52221578 1425 high scalability-2013-03-18-Beyond Threads and Callbacks - Application Architecture Pros and Cons