
Saturday, December 10, 2011

Cloud Computing

OK. I made up my mind. Let's cover Cloud Computing before we talk about SaaS and Multi-Tenancy.


Maybe explaining the terms below will work better than trying to come up with a technical definition of Cloud Computing.


IaaS (Infrastructure as a Service)
IaaS provides data center, infrastructure hardware, and software services over the web. One example is Amazon EC2. This is probably the most dominant form of Cloud Computing that we see today.
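
To make "infrastructure over the web" a little more concrete, here is a minimal sketch of requesting a virtual server from EC2 programmatically, using the AWS boto3 SDK. The image ID and instance type are placeholders, and AWS credentials are assumed to be configured already; treat it as an illustration rather than a recipe.

    # Minimal IaaS sketch: programmatically requesting a virtual server from EC2.
    # The AMI ID and instance type below are placeholders, not recommendations.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-12345678",   # placeholder machine image
        InstanceType="t2.micro",  # placeholder instance size
        MinCount=1,
        MaxCount=1,
    )
    print("Provisioned instance:", response["Instances"][0]["InstanceId"])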


PaaS (Platform as a Service)
PaaS is the next level of abstraction and provides the platform on which to build software services or products. For instance, it may provide a database, a web server, etc. An example is Salesforce.com's Force.com.


SaaS (Software as a Service)
In SaaS, applications are provided as hosted services over the web. This is probably the most widely used model of Cloud Computing.


So you could define Cloud Computing as "some service provided over the internet". It could be a computer server, a virtual server, a pre-configured OS, a hosted environment (middleware), web-based apps (Google Apps), web services (Gmail), etc. The three characteristics of a cloud service are:

  • Elasticity (a small sketch follows this list)
  • Self-service access
  • Quick response
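
As promised above, here is a toy sketch of elasticity: a naive scale-out/scale-in rule driven by average utilization. Real providers expose this through auto-scaling policies rather than code you write yourself, and the thresholds here are made up.

    # Toy elasticity sketch: a naive scaling rule based on average utilization.
    # The thresholds and the notion of "instances" are illustrative only.
    def desired_instance_count(current_instances: int, avg_utilization: float) -> int:
        if avg_utilization > 0.80:  # overloaded: scale out
            return current_instances + 1
        if avg_utilization < 0.30 and current_instances > 1:  # mostly idle: scale in
            return current_instances - 1
        return current_instances

    print(desired_instance_count(current_instances=4, avg_utilization=0.92))  # -> 5
    print(desired_instance_count(current_instances=4, avg_utilization=0.10))  # -> 3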

Cloud Computing versus Grid Computing
Grid Computing typically relies on a batch scheduling mechanism to fan out a task to multiple nodes and then accumulate the results. So Grid Computing doesn't necessarily deal with getting processing capacity right now; instead, it relies on a predefined mechanism that schedules batch jobs across the grid nodes.
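
As a toy, single-machine illustration of that fan-out/accumulate pattern, here is a sketch using Python's multiprocessing pool: a batch of work items is scheduled across worker processes and the partial results are gathered back, loosely mimicking what a grid scheduler does across nodes.

    # Toy fan-out/accumulate sketch: schedule a batch of tasks across worker
    # processes and gather the results, loosely mimicking a grid batch job.
    from multiprocessing import Pool

    def process_chunk(chunk):
        # Stand-in for the real work done on a grid node.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [range(i, i + 1000) for i in range(0, 10000, 1000)]  # the "batch"
        with Pool(processes=4) as pool:
            partial_results = pool.map(process_chunk, chunks)  # fan out
        print("Accumulated result:", sum(partial_results))     # accumulate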


One could argue that Cloud Computing is a more evolved version of Grid Computing. Virtualization is important for Cloud Computing because it enables on-demand provisioning, and that is a key difference versus Grid Computing. However, there are ways of achieving this without virtualization, for example through Multi-Tenancy (Salesforce.com?), and we need to talk about those reference models.


If I haven't confused you already, then let me try some more. What about Grid Computing that is available in the Cloud? :-)


Public, Private, Hybrid and Community Clouds
Location is not important here, but the ownership is. Who "owns" the facility? If the facility is shared by multiple public clients, then it's a public cloud. If the facility is dedicated to a single enterprise (it could still be managed by a third-party provider), then it's a private cloud. Hybrid clouds, of course, combine elements from both public and private clouds. Community clouds have multiple owners and are shared across those communities.



Cloud Computing benefits
  • Faster processing, by making use of better/faster infrastructure available at much cheaper rates.
  • Fewer infrastructure bottlenecks, by delegating scalability and on-demand handling to the provider.
  • A low barrier to entry, allowing SMBs to participate in providing solutions irrespective of the size of their data center.


In spite of the benefits, we do have to consider the network bandwidth requirements imposed by servicing clients. Is the bandwidth sufficient to meet the clients' demands? What is the latency?



So, how do you use the Cloud?
OCCI is working towards standardizing the "API" used to access the cloud but, unfortunately, it has not been fully implemented by vendors yet. I am not sure whether the major vendors like Amazon and Salesforce.com are even part of it, so customers hoping for interoperability between cloud providers will be disappointed for now. However, it is still a useful resource to keep track of.
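
To give a flavor of what a standardized cloud "API" could look like, here is a rough sketch of an OCCI-style request to create a compute resource, written with Python's requests library. The endpoint URL is a placeholder and the header rendering is only illustrative; the exact format depends on the OCCI version and the provider.

    # Rough sketch of an OCCI-style "create compute resource" call.
    # The endpoint and header details are illustrative only; real deployments
    # differ by OCCI version, rendering, and provider-specific extensions.
    import requests

    headers = {
        "Content-Type": "text/occi",
        "Category": 'compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"',
        "X-OCCI-Attribute": "occi.compute.cores=2, occi.compute.memory=4.0",
    }

    response = requests.post("https://example-cloud.invalid/compute/", headers=headers)
    print(response.status_code, response.headers.get("Location"))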


Architecture specific focus


Application architectures now have to consider a few extra things in addition to traditional concerns such as loose coupling and distributed deployment. They have to focus on delivering the entire application architecture as a set of composable services. If an application can be virtualized, composed, and assembled programmatically and quickly, then it is a perfect candidate for the cloud.


The following are key for successful cloud applications:
Horizontal scaling is the key to a successful cloud application. If we can deploy the application components in a distributed fashion and provision additional deployments as demand increases, we can serve additional requests. Surge computing can be used as well, procuring computing capacity from a public cloud when we are running in a private one. Horizontal scaling assumes we have parallelization at some level; without parallelization, the nodes would depend on each other or on some other common service, which would become the bottleneck.
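
Here is a minimal sketch of why statelessness matters for horizontal scaling: because the handler below keeps no shared mutable state, any replica can serve any request, so adding replicas adds capacity. The dispatcher and handler are hypothetical stand-ins for a load balancer and an application instance, not a real implementation.

    # Sketch: stateless handlers behind a round-robin dispatcher. Because
    # handle_request() keeps no shared mutable state, we can add more "replicas"
    # (here, single-worker thread pools; in the cloud, instances) to add capacity.
    import itertools
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(request_id: int) -> str:
        # Stateless: the response depends only on the request itself.
        return f"response-to-{request_id}"

    class RoundRobinDispatcher:
        def __init__(self, replica_count: int):
            self.replicas = [ThreadPoolExecutor(max_workers=1) for _ in range(replica_count)]
            self._cycle = itertools.cycle(self.replicas)

        def dispatch(self, request_id: int):
            return next(self._cycle).submit(handle_request, request_id)

    dispatcher = RoundRobinDispatcher(replica_count=3)  # "scale out" by raising this
    futures = [dispatcher.dispatch(i) for i in range(10)]
    print([f.result() for f in futures])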


Security and compliance are important topics that we can't cover in detail in this post, but they should be handled in any cloud architecture. Concerns or doubts regarding these two are perhaps the main reason why the cloud has not been adopted in large corporations. They deserve a separate, detailed post.



IBM cloud reference architecture
I am not sure how much detail I will be able to go into, but the majority of the "fluff" around the IBM cloud reference architecture can be ignored. Most enterprises don't embark on providing IaaS, PaaS, and SaaS in the same breath; it's mostly a business decision about what makes economic sense. However, here is what you can take away from it:


Common Cloud Management Platform (CCMP) consists of Operational Support Services (OSS) and Business Support Services (BSS)

  • Business Support Services represent the business-related services such as pricing, metering, billing, accounting, etc. (a toy metering/billing sketch follows this list)
  • Operational Support Services represent the technical services such as provisioning, ticket management, virtualization management, etc.
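
To make the BSS side a little less abstract, here is the toy metering-to-billing sketch mentioned above: usage records are aggregated per account and priced with a simple per-unit rate. The record format and the rates are made up for illustration; a real BSS would sit on top of the provider's metering service.

    # Toy metering/billing sketch for the BSS layer: aggregate usage records per
    # account and apply a simple per-unit price. Rates and fields are made up.
    from collections import defaultdict

    PRICE_PER_UNIT = {"compute_hours": 0.10, "storage_gb": 0.05}  # hypothetical rates

    usage_records = [
        {"account": "acme", "metric": "compute_hours", "quantity": 120},
        {"account": "acme", "metric": "storage_gb", "quantity": 500},
        {"account": "globex", "metric": "compute_hours", "quantity": 40},
    ]

    bills = defaultdict(float)
    for record in usage_records:
        bills[record["account"]] += record["quantity"] * PRICE_PER_UNIT[record["metric"]]

    for account, amount in bills.items():
        print(f"{account}: ${amount:.2f}")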

QoS (Quality of Service) in the CCRA (Cloud Computing Reference Architecture)
The non-functional aspects like security, resiliency, performance, and consumability are cross-cutting aspects of QoS that span the hardware infrastructure and the cloud services. They must be viewed from an end-to-end perspective that covers the structure of the CCRA itself, the way the hardware infrastructure is set up (e.g., in terms of isolation, disaster recovery, etc.), and how the cloud services are implemented. The major aspects of QoS are:
  • Governance and Policy.
  • Threat and Vulnerability Management
  • Data Protection
  • Availability & Continuity Management
  • Ease of doing business
  • Simplified Operations

Summary

  • It's a starting point. If Cloud Architecture seems daunting, then this is a good place to start.
  • It's a good "best practices" document. It captures IBM's collective experience across various cloud solutions. There's got to be something useful here. :-)
  • It defines four architectural principles (referred to as ELEG), of which at least three seem to be of value.


    • Efficiency. Basically means we need to increase utilization of cloud services.
    • Lightweightness. Basically use some form of Virtualization or other technique to avoid "heavy" management of IT. 
    • Economies of Scale. The idea is to have common management services that can be shared across cloud flavors.
    • Genericity. No clue what the message here was! Please read the document and help me! :-) 

Three and four seem similar, and I am not sure what the differentiating factors are. Maybe we need some context around the IBM document to fully appreciate the message, because it will come up in discussions and it will be important to explain its best points and avoid the unnecessary details. I think I will get back to this someday...


There are other Reference models out there. NIST has one and so does DMTF. Again, I will get to them someday... :-)

Saturday, November 19, 2011

Grid Computing and Cloud Computing

Grid Computing
According to Wikipedia: "Grid computing is a term referring to the combination of computer resources from multiple administrative domains to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. What distinguishes grid computing from conventional high performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed."

Grid Computing allows us to harness the power of distributed, heterogeneous, and loosely coupled computing resources as if they belonged to one large infrastructure. Utility Computing adds the concept of paying for what you use on the Grid: you provision what you need in advance, and that lets you harness the power of those resources as if they were part of your own infrastructure.


Cloud Computing
Cloud Computing, at a conceptual level, is similar to Grid Computing: computing resources are consumed the way electric power is consumed from a power grid.

So the way I see it, Grid Computing was a precursor to Cloud Computing.

Further details will be the subject of another blog post but not sure if I should write about Multi-Tenancy and SaaS first...

The opinions and statements in this communication are my own and do not necessarily reflect the opinions or policies of CA.

Monday, November 7, 2011

Enterprise Architecture Frameworks: Zachman, TOGAF and other Methodologies

I guess this happens to most of us: we forget the theoretical aspects of Enterprise Architecture (EA) and instead just recall the "highlights" of a methodology or framework. Well, this is what happened to me when a friend once asked me about the Zachman Framework and whether it's actually useful. What about TOGAF?
I am trying to summarize the discussion we had and maybe someday I might actually go back and refresh the entire details.

Zachman Framework.
I agree with the general perception that this is not really a framework; it doesn't define an actual process for building an Enterprise Architecture. It's simply a taxonomy for organizing architectural artifacts.
These artifacts are important elements of the enterprise architecture, and they define different representations of the enterprise from different stakeholders' perspectives.

There are basically two dimensions in the Zachman Framework. One dimension is the various actors in the scheme. Using the building industry as an example, the actors could include the owner, the builder, etc. Different artifacts are actually needed for each of these actors. Quoting Zachman, "These architectural representations differ from the others in essence, not merely in the level of detail". There are six actor perspectives: Contextual (Planner), Conceptual (Owner), Logical (Designer), Physical (Builder), Detailed (Subcontractor), and the Functioning Enterprise.

The second dimension is the descriptive focus of the artifact. Zachman proposed six descriptive foci: What (Data), Where (Network), When (Time), Why (Motivation), Who (People), and How (Function).


Here is an image from Wikipedia:

Here is a helpful hint for row Descriptions ( = means is equal to :-) ):
Scope=Contextual = Planner
Business Model= Conceptual = Owner
System Model = Logical = Designer
Technology Model = Physical = Builder
Detailed Representations = Detailed = Sub Contractor

Zachman framework usage

  • Every architectural artifact created in the Enterprise Architecture practice should belong to one and only one cell. So, for instance, if we create a Data Architecture, it should belong in the System Model (Designer) ==> Data cell.
  • Every cell needs to be complete, which means we need some artifact that applies to that perspective.
  • The key thing to remember is that the information in a column should be related, or have a strong correspondence, across its cells. For instance, if we are creating artifacts related to Data, then these artifacts should talk about the same data from different perspectives. Let's say the data represents Customers for a Business Owner and is captured in the business data model diagram. These same Customers should be represented in the Physical (Technology Model) data diagram as well, describing how Customers are actually represented in the technology being used.

So, to summarize, the Zachman Framework gives us a way to classify architectural artifacts and to ensure that all perspectives are covered and that they actually address the same problem. It does not, however, give a step-by-step process for creating an Enterprise Architecture. It doesn't say how the artifacts have to be created, what should be in them, or how they should be consumed.
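
As a small illustration of the classification idea, here is a sketch that models the Zachman grid as a (perspective, focus) mapping and files artifacts into cells. The artifact names are made up, and the sixth "Functioning Enterprise" row is left out for brevity.

    # Sketch: the Zachman grid as a (perspective, focus) -> artifacts mapping.
    # Artifact names are made up; the point is that each artifact lands in
    # exactly one cell, and empty cells show where coverage is still missing.
    PERSPECTIVES = ["Contextual", "Conceptual", "Logical", "Physical", "Detailed"]
    FOCI = ["What", "How", "Where", "Who", "When", "Why"]

    grid = {(p, f): [] for p in PERSPECTIVES for f in FOCI}

    def classify(artifact: str, perspective: str, focus: str) -> None:
        grid[(perspective, focus)].append(artifact)

    classify("Business data model (Customers)", "Conceptual", "What")
    classify("Physical data model (CUSTOMER table)", "Physical", "What")

    missing = [cell for cell, artifacts in grid.items() if not artifacts]
    print(f"{len(missing)} of {len(grid)} cells still have no artifact")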


TOGAF® (The Open Group Architecture Framework)
I hear a lot about TOGAF, especially from IBM folks. :-) I am not sure of the exact reasons, but maybe because it's owned by The Open Group and also because of a heavy SOA-practitioner bias.

To Quote the description from Wikipedia:
"The Open Group Architecture Framework (TOGAF®) is a framework for enterprise architecture which provides a comprehensive approach for designing, planning, implementation, and governance of an enterprise information architecture. TOGAF is a registered trademark of The Open Group in the United States and Other countries [2]. TOGAF is a high level and holistic approach to design, which is typically modeled at four levels: Business, Application, Data, and Technology. It tries to give a well-tested overall starting model to information architects, which can then be built upon. It relies heavily on modularization, standardization and already existing, proven technologies and products".


To put it simply, TOGAF is a step by step process for creating an Enterprise Architecture. So if one wanted to combine Zachman and TOGAF, then they could create the artifacts with TOGAF and then categorize them with Zachman.


An explanation of a few key terms used in TOGAF:

Enterprise Continuum
TOGAF views the enterprise architecture as a continuum of architectures.

Foundation Architectures
These are generic architectures that can be used by any IT organization.

Common System Architectures
These are architectures for systems common to many, but not all, enterprises.

Industry Architectures
These are specific to enterprises that belong to the same industry domain, for instance Healthcare.

Organizational Architectures
These are very specific to a given enterprise.

ADM

At the heart of TOGAF is the ADM (Architecture Development Method). The ADM describes a method for creating an Enterprise Architecture. The basic structure of the ADM is shown as an Architectural Development Cycle (ADC). Here is an image from Wikipedia:

ADM is iterative over the whole process, between phases and within phases as well. ADM has been specifically designed with flexibility and adaptability in mind. The phases in ADC could be adapted and their order changed to suit an organization. Also ADM could be integrated with other frameworks or methodologies such as the Zachman Framework and this might result in a modified ADC.

A brief description of the phases.


Phase A
This phase defines the scope of the project, identifies constraints, documents requirements, and establishes high-level definitions for the current and target architectures. The output is the Statement of Architecture Work.

Phase B
This could be very time consuming depending on the scope and the depth of the Business Modelling effort. Output will be the current and target business objectives.

Phase C
Again, this phase is detailed as well, and there are multiple steps to follow. It basically boils down to developing the current data-architecture description, architectural models, logical data models, data-management process models, and relationship models that map business functions to CRUD operations on the data.
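
One typical artifact from this phase is a CRUD matrix that relates business functions to the data entities they create, read, update, or delete. Here is a minimal sketch of one in code; the function and entity names are hypothetical.

    # Minimal CRUD-matrix sketch: which business functions Create, Read,
    # Update, or Delete which data entities. Names are hypothetical.
    crud_matrix = {
        ("Order Management", "Order"):    "CRU",
        ("Order Management", "Customer"): "R",
        ("Customer Service", "Customer"): "CRUD",
        ("Reporting",        "Order"):    "R",
    }

    def functions_touching(entity: str):
        return sorted({fn for (fn, ent) in crud_matrix if ent == entity})

    print(functions_touching("Customer"))  # ['Customer Service', 'Order Management']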

Phase D
This involves completing the technical architecture i.e. the infrastructure to support the new enterprise.

Phase E and F
Identifies the implementation projects, with a focus on easy wins or projects that could have the most visible impact with the least risk.

Phase G
Involves creating the architectural specifications and acceptance criteria for the implementation projects.

Phase H
The final phase (architecture change management) involves managing the architecture artifacts created and handling subsequent changes to them.

..And the cycle could repeat itself....


Federal Enterprise Architecture (FEA)
I am not familiar with FEA at all, but the resource below makes me unenthusiastic about it.
FEA reference models Tragically Misnamed

However, just glancing through it, FEA seems to have five reference models:

  • Business Reference Model (BRM)

      Business view of the various functions of the federal government.

  • Components Reference Model (CRM)

      IT view of systems.

  • Technical Reference Model (TRM)

      The technologies and standards that can be used in building IT systems.

  • Data Reference Model (DRM)

      A standard way of describing data.

  • Performance Reference Model (PRM)

      Describes the value delivered by EA.

FEA seems to be a very well defined framework but I don't want to explore it at this point. Maybe if something piques my interest.

Gartner
Never had a chance to use this, but it might be useful in certain contexts. It might be interesting to find out whether it applies in non-Gartner engagements as well or is only a Gartner-specific thing.

Conclusion
Since these frameworks are highly theoretical, verbose, and in some cases very confusing, it's best for an organization to pick one or two EA frameworks and combine the best elements from them into an enterprise- or industry-specific framework: one that recognizes the business, technology, and people constraints of the enterprise, keeps EA in sync with business needs, and doesn't overwhelm the process or the people (technologists or stakeholders) involved.
The key is to remember that not all organizations have well-defined processes, and not all organizations have a mature IT or EA practice. Such organizations need a scaled-down version of EA that excites them and provides visibility into the solutions being addressed by EA.

This brings us to the end of this post and leads to our next main topic: Grid and Cloud Computing. That topic is on almost everyone's mind and comes up so often in discussions that it's fair to say it's currently the hottest one.

Resources
Wikipedia

The opinions and statements in this communication are my own and do not necessarily reflect the opinions or policies of CA.