Sunday, October 28, 2012

Cloud Computing - High Availability (HA) and Disaster Recovery

The goal of a traditional High Availability (HA) architecture is to prevent, or at least minimize, application downtime caused by application errors or infrastructure failures. Disaster Recovery (DR) primarily deals with failing over to a secondary site when the primary site fails.


HA focuses on overcoming technology failures such as network and storage failures, while DR focuses on surviving the loss of a physical data center or its infrastructure. Together, HA and DR aim to keep applications available 24x7, with zero or minimal downtime from planned or unplanned outages.


Recovery Time Objective (RTO): how quickly an application must be back in operation following a failure.


Recovery Point Objective (RPO): how much data loss is acceptable, i.e., the point in time to which data must be recoverable after a failure.


Simple ways to achieve HA/DR include backup and restore operations, cluster architectures, and fault-tolerant hardware. Needless to say, not every organization can justify fault-tolerant hardware, and backup/restore is a slow process that can take hours or even days.
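To make these objectives concrete, here is a small illustrative sketch (not from any specific product) that computes the worst-case RPO and RTO for a pure backup/restore strategy; the backup interval and restore duration are assumed placeholder values.

from datetime import timedelta

# Assumed placeholder figures for a backup/restore-only strategy.
backup_interval = timedelta(hours=24)   # a full backup is taken once a day
restore_duration = timedelta(hours=8)   # time needed to restore from the last backup

# Worst case: the failure happens just before the next backup would have run,
# so everything written since the previous backup is lost.
worst_case_rpo = backup_interval

# With no standby systems, the RTO is dominated by the restore itself
# (detection and decision time are ignored here for simplicity).
worst_case_rto = restore_duration

print("Worst-case RPO:", worst_case_rpo)   # the data-loss window
print("Worst-case RTO:", worst_case_rto)   # time until the application is back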


==
Data replication and HA clustering can be combined to build what is called a "shared-nothing cluster." The clustering technology ensures that applications and servers are operational and can fail over from one server to another if any problems are detected. The data replication software mirrors the data the application needs between the servers, so that whichever server the application runs on, its data is available. The replication can occur across either a LAN or a WAN, depending on where the servers in the cluster are located.
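As a rough sketch of the clustering half of that picture, the Python snippet below polls a health endpoint on the active node and promotes the standby after a few consecutive misses. The URL, thresholds, and promote_standby hook are hypothetical; real cluster managers (and the replication layer itself) are considerably more sophisticated.

import time
import urllib.request

HEALTH_URL = "http://primary.example.com/health"  # hypothetical health endpoint
CHECK_INTERVAL_S = 5
FAILURE_THRESHOLD = 3  # consecutive misses before declaring the primary dead

def primary_is_healthy() -> bool:
    """Return True if the primary answers its health check."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote_standby() -> None:
    """Placeholder for the real failover: mount the replicated data,
    start the application on the standby, and repoint clients (e.g. via DNS or a VIP)."""
    print("Primary unreachable -- promoting standby node")

def monitor() -> None:
    misses = 0
    while True:
        if primary_is_healthy():
            misses = 0
        else:
            misses += 1
            if misses >= FAILURE_THRESHOLD:
                promote_standby()
                break
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    monitor()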


==

Sunday, October 14, 2012

Cloud Computing - Storage


Besides required features, performance, and cost, the criteria that typically drive customer choices are the reliability, availability, and scalability of a given storage solution. SANs are specifically designed to meet these additional criteria and satisfy the requirements of mission-critical business applications. The data center infrastructures built to run these applications typically handle large volumes of important information and data. As such, they must operate reliably and continually and be able to grow to meet increasing business volume, peak traffic, and an ever-expanding number of applications and users. The key capabilities that SANs provide to meet these requirements include:

Storage clustering
Ease of connectivity
Storage consolidation
LAN-free backup
Server-less backup
Ease of scalability
Storage and server expansion
Bandwidth on demand
Load balancing


===

LV 1871 succeeded in virtualizing its servers and moving its application code base, but it felt it could take virtualization even further to benefit the business. “Our storage was principally network-attached at this point,” said Triebs. “We believed that we would improve both IT cost ratios and our computing performance for the company and its agents if we virtualized storage, and that this would also improve our capabilities of failover and data mirroring.”
One of the data center agility concerns was the time and effort it took to re-provision storage with LV 1871’s network-attached storage orientation. “With the SCSI-attached storage, we did not have a dynamic infrastructure that would allow us to quickly re-provision storage when we needed to,” said Triebs. “Instead, we had to concern ourselves with the model of the hardware, the space required, a buy decision and finally implementation. This end-to-end process could take as long as six to twelve weeks, and it was an expensive use of internal IT resources.”
LV 1871 made the decision to invest in an initially more expensive SAN (storage area network) solution that would pave the way for a virtualized storage framework, where different storage media could easily be tiered into faster-access, more expensive disk and slower-access, cheaper disk, with deployments and provisioning being accomplished in a matter of minutes, not weeks. “The virtual storage backbone not only reduced our internal costs and our speed of response, but it also allowed us to improve our failover and backup mechanisms for our two separate data centers,” said Triebs. “Data mirroring between the two data centers now takes minutes.” The virtual storage backbone has dramatically improved performance. Virtualization has given LV 1871 vendor independence as well, which lends more flexibility to buying decisions. Additional return on investment (ROI) is being seen in the new tiered storage strategy with its reduction of wasted storage space. “It costs us roughly $20/GB for faster access, tier one storage, while tier two, slower access storage costs around $8/GB,” said Triebs. “In our new tiered storage structure, we find that only one-third of our data is constantly accessed and needs to be on tier one, and we have organized our data this way. This is a primary area where we are realizing data center savings.”
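Using the per-gigabyte figures quoted above and the observation that only about one-third of the data needs tier-one storage, a quick back-of-the-envelope calculation shows where the savings come from (the total capacity below is an arbitrary assumption):

TIER1_COST_PER_GB = 20.0   # faster-access storage, from the figures quoted above
TIER2_COST_PER_GB = 8.0    # slower-access storage
TIER1_FRACTION = 1 / 3     # share of data that is constantly accessed

total_gb = 10_000          # arbitrary example capacity

all_tier1 = total_gb * TIER1_COST_PER_GB
tiered = (total_gb * TIER1_FRACTION * TIER1_COST_PER_GB
          + total_gb * (1 - TIER1_FRACTION) * TIER2_COST_PER_GB)

print(f"Everything on tier one: ${all_tier1:,.0f}")          # $200,000
print(f"Tiered (1/3 on tier one): ${tiered:,.0f}")           # $120,000
print(f"Savings: {100 * (all_tier1 - tiered) / all_tier1:.0f}%")  # 40%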
Along the way, Triebs and his staff learned valuable lessons about working with virtualized server and storage infrastructures. “One was a simple practice to remember to delete virtual machines when they were no longer needed,” said Triebs. “On the data side, it is also imperative to think about technologies like data deduplication before doing backups, so you do not store extraneous data. Finally, when you consider going to a LAN/SAN virtual infrastructure as your backbone, you need to consider architectural concepts, such as a split fabric with the use of virtual LANs (VLANs)—and when it comes to security you want your DMZ and LAN to be separate from each other, and hosted on separate hardware.”
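The deduplication point can be illustrated with a toy content-hash scheme: identical chunks are stored once, and later occurrences only add a reference. Fixed-size chunking and the in-memory store are simplifications for illustration, not how a production backup product works.

import hashlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunks; real products often use variable-size chunking

store = {}      # hash -> chunk bytes (stands in for the backup repository)
manifest = []   # ordered list of chunk hashes needed to rebuild the data

def backup(data: bytes) -> None:
    """Split data into chunks and store each unique chunk only once."""
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # new content -> store it
            store[digest] = chunk
        manifest.append(digest)          # duplicate content -> just a reference

# Two "files" that share most of their content.
backup(b"A" * 8192 + b"unique tail one")
backup(b"A" * 8192 + b"unique tail two")

raw_size = 2 * (8192 + 15)
stored_size = sum(len(c) for c in store.values())
print(f"Raw bytes backed up: {raw_size}, bytes actually stored: {stored_size}")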


===





Tiered storage
In a tiered strategy, Tier 1 storage is reserved for demanding applications, such as databases and e-mail, that require the highest performance and can justify the cost of serial-attached SCSI, Fibre Channel SANs, high-performance RAID levels, and the fastest available spindles--or even SSD drives.



Direct attached storage (DAS) is still the top choice of our survey participants for Tier 1 storage of applications such as databases and e-mail. Fibre Channel came in a close second, with 45% of respondents reporting use of Fibre Channel SANs, mainly for Tier 1 storage. Fibre Channel remains strong despite countless predictions of its rapid demise by most storage pundits--and the downright offensive "dead technology walking" label attached by a few.
One survey finding that's not completely unexpected--we were tipped off by similar findings in previous InformationWeek Analytics State of Storage reports--but is nonetheless puzzling is the poor showing by iSCSI SANs, which are the main Tier 1 storage platform for just 16% of respondents. That's less than half the number who report using NAS, our third-place response. Seems most IT pros didn't get the memo that iSCSI would force Fibre Channel into early retirement.
In all seriousness, the continued dearth of interest in iSCSI is mystifying given the current economic backdrop, the widespread availability of iSCSI initiators in recent versions of Windows (desktop and server) and Linux, and the declining cost of 1-Gbps and 10-Gbps connectivity options. We think the slower-than-predicted rate of iSCSI adoption--and the continued success of Fibre Channel--is attributable to a few factors. First, the declining cost of Fibre Channel switches and host bus adapters improves the economic case for the technology. Second, we're seeing slower-than-expected enterprise adoption of 10-Gbps Ethernet, leaving iSCSI at a performance disadvantage against 4-Gbps Fibre Channel.
However, iSCSI's performance future does look bright thanks to emerging Ethernet standards such as 40 Gbps and 100 Gbps that will not only increase the speed limit, but also accelerate adoption of 10-Gbps Ethernet in the short term. In our practice, we also see a reluctance among CIOs to mess with a tried-and-true technology such as Fibre Channel, particularly for critical applications like ERP, e-mail, and enterprise databases. Sometimes, peace of mind is worth a price premium.

Tier 2
Tier 2 comprises the less expensive storage, such as SATA drives, NAS, and low-cost SANs, suitable for apps like archives and backups, where high capacity and low cost are more important than blazing speed. Our survey shows that NAS is the Tier 2 architecture of choice, used by 41% of respondents. DAS is the main Tier 2 storage of 34% of respondents. Once again, iSCSI SAN finished last, with a mere 17% of respondents using the technology primarily for Tier 2 storage. This is an even more surprising result than for Tier 1--we expected iSCSI's low cost relative to Fibre Channel SANs to result in a healthy showing here.


Tier 3 storage typically consists of the lowest-cost media, such as recordable optical or WORM (write once, read many) disks, and is well suited for historical archival and long-term backups.
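Putting the three tiers together, a placement policy can be as simple as mapping each dataset to a tier by how recently it was accessed. The thresholds below are invented for illustration and would depend entirely on the workload:

from datetime import datetime, timedelta
from typing import Optional

# Illustrative thresholds, not taken from the surveys above.
TIER1_WINDOW = timedelta(days=30)    # "hot" data that is constantly accessed
TIER2_WINDOW = timedelta(days=365)   # warm data: archives, recent backups

def assign_tier(last_accessed: datetime, now: Optional[datetime] = None) -> int:
    """Map a dataset to a storage tier based on its last access time."""
    age = (now or datetime.now()) - last_accessed
    if age <= TIER1_WINDOW:
        return 1   # SAS / Fibre Channel / SSD
    if age <= TIER2_WINDOW:
        return 2   # SATA, NAS, low-cost SAN
    return 3       # optical / WORM, long-term archive

print(assign_tier(datetime.now() - timedelta(days=3)))     # -> 1
print(assign_tier(datetime.now() - timedelta(days=90)))    # -> 2
print(assign_tier(datetime.now() - timedelta(days=900)))   # -> 3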


== Growing importance of data deduplication... but more on that some other time.