Saturday, December 31, 2011

SaaS (Software as a Service) and Multi-Tenancy

SaaS
SaaS can be defined as:
"software deployed as a hosted service and provided over the internet". 


   It is becoming an increasingly popular term, one that software vendors can ill afford to ignore. It opens up markets and segments that were previously inaccessible to vendors, but it also throws up challenges like never before.
    SaaS allows an application to scale to a (theoretically) unlimited number of customers. This is typically achieved by on-demand horizontal and vertical scaling behind the scenes, providing a seamless experience to the user.
    The practice of application architecture has to adjust to account for SaaS and to architect and design applications for it. Support for Multi-Tenancy is the approach some SaaS vendors follow to effectively address the SaaS challenges, and this article will attempt to address the different aspects of Multi-Tenancy. So...


What is Multi-Tenancy
Multitenancy refers to a principle in software architecture where a single instance of the software runs on a server, serving multiple client organizations (tenants).


Why Multi-tenancy?
  • Greatly simplified deployment: there is a single system to manage.
  • Cost savings associated with less hardware (or virtualized hardware).
  • Fewer software licenses to buy.

Multi-Tenancy Issues
  • Security testing has to be done extensively.
  • Expensive hardware to buy; increased cost associated with big rack hardware.
  • Not easy to convert an existing application to Multi-Tenancy.
  • Requires schema changes to existing apps.
  • Pure Multi-Tenancy requires applications and infrastructure to scale up to address demand.
  • Single point of failure issues. Which is the weakest link?
  • Metadata for multiple tenants is difficult to manage.
  • Downtime issues. What happens if the database has to be brought down for maintenance?
  • Access control. This has to be managed and configured differently for each customer, and the vendor has less control over the issues surrounding it.
  • Extensions to the data model are tricky and sometimes impact all customers.
  • What about architectures that rely on communication between components in the SaaS world and components behind the firewall?

Multi-Tenancy versus Virtualization
Virtualization, to a limited extent, seems to be a good alternative to Multi-Tenancy. However, there are drawbacks. Would virtualization be profitable with 1,000 customer deployments? How would you deploy and manage such an environment? And what if the customers are "on-demand", meaning they may want to use the system for brief periods of time and then be gone for days, weeks or months? How do you keep a virtualization solution profitable?


But this is not to say that it just can't be done profitably. There are solutions out there that use Virtualization to achieve Multi-Tenancy, especially if the number of customer deployments is in the single or very low double digits. Virtualization might (better) provide the following benefits:

  • Data Isolation.
  • Security. Both at the PaaS layer and at the Virtualized layer.
  • Performance ( one client can't directly impact the other's performance).

Designing for Multi-tenancy


Key points of Multi-Tenancy
  • Flexibility
  • Share-ability
  • Maintainability
  • Customizability

Architectural Constraints
  • Maintain a single code base to ease deployments and upgrades.
  • Share the data resources to have a consistent view of the Schema.
  • Components must be customizable at every possible level.
  • The Application Tier must be as stateless as possible to allow Scalability.

Trade offs
  • Complexity versus Time to market. What do the customers want and when?
  • Resource sharing vs Security/Availability. Who is my customer? Legal or SLA considerations?
  • Customize-ability vs Maintainability. A myriad of customizable options. Which one was chosen when this issue occurred? How do we fix this without affecting everyone else?

Interfaces
Multi-Tenant applications should expose (and consume) standards-based interfaces like
  • REST
  • WS-*
Configurable
The application has to be configurable. In a traditional MVC-style application, the following would have to be extensively configurable:
Model: Allow schema extensions.
Controller: Allow new business logic to be plugged in or existing logic to be enhanced. Allow modification/customization of the security policy.
View: Allow look-and-feel changes, and changing of display items, screen order, messages, etc. This assumes that there is metadata somewhere that holds the configuration options and allows the particular view to be "instantiated" based on that metadata.
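As a minimal sketch of this metadata-driven view idea (the `ViewMetadata` class and its methods are hypothetical, not from any particular framework), per-tenant metadata overrides default labels and display order, and the view is "instantiated" from whatever metadata applies:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: per-tenant view metadata drives how a screen is rendered.
public class ViewMetadata {
    // LinkedHashMap preserves insertion order, so metadata also controls field order.
    private final Map<String, String> labels = new LinkedHashMap<>();

    // Tenant-specific overrides are layered on top of the defaults.
    public ViewMetadata withLabel(String field, String label) {
        labels.put(field, label);
        return this;
    }

    // "Instantiate" the view from metadata: render field labels in configured order.
    public String render() {
        return String.join(" | ", labels.values());
    }

    public static void main(String[] args) {
        ViewMetadata defaultView = new ViewMetadata()
                .withLabel("name", "Name").withLabel("email", "Email");
        ViewMetadata tenantView = new ViewMetadata()
                .withLabel("email", "E-Mail Address").withLabel("name", "Full Name");
        System.out.println(defaultView.render()); // Name | Email
        System.out.println(tenantView.render());  // E-Mail Address | Full Name
    }
}
```

A real system would load this metadata from a store keyed by tenant, but the principle is the same: the view code stays shared while its appearance is data.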



Security
Securing the SaaS model as a whole involves both an application-level security architecture and a data-level security architecture. Data-level security is addressed in a later section. Application-level security could involve storing all user accounts with the SaaS provider, federating authentication to trusted STSs (Security Token Services) or trusted Identity Providers, or both.
The SaaS application can also provide configurable Identity Management modules that either perform authentication/authorization themselves or federate as noted above. If authentication is federated, then we need some kind of mapping service to map roles from the trusted servers to the roles/policies defined in the SaaS application.
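A tiny sketch of the role-mapping service mentioned above (all class and role names here are hypothetical): roles asserted by a trusted identity provider are translated into the roles the SaaS application defines, with a least-privilege fallback for anything unrecognized:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map roles asserted by a trusted identity provider
// onto the roles defined inside the SaaS application.
public class RoleMapper {
    private final Map<String, String> mapping = new HashMap<>();

    public void map(String externalRole, String internalRole) {
        mapping.put(externalRole, internalRole);
    }

    // Unknown external roles fall back to a least-privilege default.
    public String resolve(String externalRole) {
        return mapping.getOrDefault(externalRole, "guest");
    }

    public static void main(String[] args) {
        RoleMapper mapper = new RoleMapper();
        mapper.map("idp:admins", "tenant-admin");
        System.out.println(mapper.resolve("idp:admins"));  // tenant-admin
        System.out.println(mapper.resolve("idp:unknown")); // guest
    }
}
```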


Multi-Tenant Data Architecture
  • Separate Database 
    • Easy to Maintain
    • Customizable ( with probable issues later on)
    • Secure
    • Upgradable
    • Higher costs
  • Shared Database with separate Schema
    • Easier to Maintain
    • Customizable ( with probable issues later on)
    • More Secure
    • Upgradable but complex
    • lower costs
  • (Truly) Shared Database. This is the ideal scenario and the one most touted by purists. However, the risk is that data from different tenants could inadvertently be mixed, resulting in legal, compliance or contract issues. There are also issues around database size, and multiple strategies, including partitioning and segmentation, have to be used to manage the database. We won't go into details, as there is more than adequate literature around these topics.
    • Complex Upgrade process
    • Impacts Multiple customers
    • Data Security at Data Access Layer.
    • Low Cost ( not assuming cost of design, development etc).
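One way to see how these models differ in code is a tenant-to-datasource resolver (a hypothetical sketch, with made-up names, not a production pattern): tenants on the isolated model resolve to their own database, while everyone else shares one:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: resolve a tenant to a data source depending on the
// isolation model chosen (separate database vs. shared database).
public class TenantDataSourceResolver {
    private final Map<String, String> dedicatedUrls = new HashMap<>();
    private final String sharedUrl;

    public TenantDataSourceResolver(String sharedUrl) {
        this.sharedUrl = sharedUrl;
    }

    // Tenants on the isolated model get their own database.
    public void assignDedicated(String tenantId, String jdbcUrl) {
        dedicatedUrls.put(tenantId, jdbcUrl);
    }

    // Everyone else falls back to the shared database.
    public String resolve(String tenantId) {
        return dedicatedUrls.getOrDefault(tenantId, sharedUrl);
    }
}
```

The rest of the data access layer then only ever asks "which connection for this tenant?", which keeps the isolation decision in one place.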


Which approach to choose?
Choose the Shared Database option if the number of tenants is high enough to justify the initial investment. However, if the number of users per tenant, the database size per tenant or the per-tenant customizations are greater, then choose the isolated model.



Data-Security
Security strategies for a shared database could include views that filter the data visible to a tenant, access control on the database itself, and encryption at the data layer (decryption at the data access layer using tenant-specific keys).
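As an illustration of tenant filtering at the data access layer (a deliberately simplified, hypothetical sketch), every query is scoped by a `tenant_id` column so one tenant can never see another tenant's rows:

```java
// Hypothetical sketch: a data access layer that scopes every query by tenant.
public class TenantQueryBuilder {
    // In real code the tenant id would be bound as a prepared-statement
    // parameter, never concatenated; the string form here is illustration only.
    public static String scopedQuery(String table, String tenantId) {
        return "SELECT * FROM " + table + " WHERE tenant_id = '" + tenantId + "'";
    }
}
```

A database view per tenant achieves the same effect declaratively; the point is that the filter is enforced in one shared layer rather than trusted to every caller.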

Data-Extensibility
Multiple options are available, including using name-value pairs to store data. Traditional data-extensibility approaches include having predefined fields and allowing extensions through metadata tables.
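The name-value approach might be sketched like this (hypothetical class names; in practice the pairs would live in a metadata/extension table keyed by the core row): a fixed core schema plus free-form extension attributes, so tenants can add fields without changing the shared schema:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a fixed core record plus name-value extension fields.
public class ExtensibleRecord {
    private final String id;
    private final Map<String, String> extensions = new HashMap<>();

    public ExtensibleRecord(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    // Tenant-defined attributes live outside the core schema.
    public void setExtension(String name, String value) {
        extensions.put(name, value);
    }

    public String getExtension(String name) {
        return extensions.get(name); // null if the tenant never defined it
    }
}
```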

Data-Scalability
Data partitioning, as noted in the sections above, can aid in horizontal scaling of the database.
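For instance, a simple hash-based partitioner (a sketch with hypothetical names; real shard routing also has to handle rebalancing) maps each tenant deterministically to one of N shards:

```java
// Hypothetical sketch: hash-based partitioning of tenants across database
// shards, one simple way to scale a shared database horizontally.
public class TenantPartitioner {
    private final int shardCount;

    public TenantPartitioner(int shardCount) {
        this.shardCount = shardCount;
    }

    // floorMod keeps the result non-negative, so the same tenant
    // always maps to the same shard.
    public int shardFor(String tenantId) {
        return Math.floorMod(tenantId.hashCode(), shardCount);
    }
}
```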



Refer Multi-Tenant Data Architecture for further details.


Conclusion
Achieving the optimum "degree" of multi-tenancy is something that the organization has to strive for. It could start with Multi-Tenancy at the Infrastructure layer (IaaS) and the Platform layer (PaaS), move on to SaaS clusters that can provide some degree of Multi-Tenancy, and finally arrive at complete Multi-Tenancy at the Software layer (SaaS).


Resources
  1. Wikipedia- Multitenancy
  2. Many degrees of Multi-tenancy is an excellent blog post that outlines the current debate and approaches for Multi-tenancy. 
  3. Multi-Tenant Data Architecture is an excellent resource that talks about data design patterns for Multi-Tenancy.
The opinions and statements in this communication are my own and do not necessarily reflect the opinions or policies of CA.

Saturday, December 10, 2011

Cloud Computing

OK. I made up my mind. Let's cover Cloud Computing before we talk about SaaS and Multi-Tenancy.


Maybe explaining the below terms will be better than trying to come up with a technical definition of cloud Computing.


IaaS(Infrastructure as a Service)
IaaS provides data center, infrastructure hardware and software services over the web. One example is Amazon EC2. This is probably the most dominant form of Cloud Computing that we see today.


PaaS ( Platform as a Service)
PaaS is the next level of abstraction and provides the platform to build software services or products. For instance it may provide a Database, Web Server etc. Example is Salesforce.com's force.com.


SaaS (Software as a Service)
In SaaS, applications are provided as hosted services over the web. This is probably the most widely used model of Cloud Computing.


So you could define Cloud Computing as "some service provided over the internet". It could be a computer server, a virtual server, a pre-configured OS, a hosted environment(Middleware), web based apps ( Google Apps), Web Services (Gmail ) etc. The three characteristics of a cloud service are:

  • Elasticity
  • Self-Service access
  • Quick response.

Cloud Computing versus Grid Computing
Grid Computing typically relies on a batch-scheduling mechanism to fan out tasks to multiple nodes and then accumulate the results. So Grid Computing doesn't necessarily deal with getting processing capacity right now; instead, it focuses on a predefined mechanism that allows the "batch job" to be scheduled across the grid nodes.
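The fan-out/accumulate pattern described above can be sketched in plain Java, with worker threads standing in for grid nodes (the names are illustrative, not from any grid framework):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: a "scheduler" fans work chunks out to nodes
// (threads here) and accumulates the partial results.
public class FanOut {
    public static long fanOutSum(List<long[]> chunks) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Fan out: one task per chunk of work.
            List<Future<Long>> futures = new ArrayList<>();
            for (long[] chunk : chunks) {
                futures.add(pool.submit(() -> {
                    long sum = 0;
                    for (long v : chunk) sum += v;
                    return sum;
                }));
            }
            // Accumulate the partial results.
            long total = 0;
            for (Future<Long> f : futures) {
                total += f.get();
            }
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        List<long[]> chunks = List.of(new long[]{1, 2, 3}, new long[]{4, 5, 6});
        System.out.println(fanOutSum(chunks)); // 21
    }
}
```

A real grid does the same thing across machines and administrative domains, with the scheduler deciding placement ahead of time rather than on demand.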


One could argue that Cloud Computing is a more evolved version of Grid Computing. Virtualization is important for Cloud Computing because it enables the on-demand nature that is a key difference from Grid Computing. However, there are ways of achieving this through Multi-Tenancy (Salesforce.com?) without Virtualization, and we need to talk about those reference models.


If I haven't confused you already, then let me try some more. What about Grid Computing that is available in the Cloud? :-)


Public, Private, Hybrid and Community Clouds
Location is not important here, but ownership is. Who "owns" the facility? If the facility is shared by multiple public clients, then it's a public cloud. If the facility is co-located within the enterprise (it could be managed by a third-party provider), then it's a private cloud. Hybrid clouds, of course, combine elements from both public and private clouds. Community clouds have multiple owners and are shared across those communities.



Cloud Computing benefits
Faster processing by making use of better/faster infrastructure available at much cheaper rates.
Minimize infrastructure bottlenecks by delegating the scalability and on-demand handling to the provider.
Low barrier to entry, allowing SMBs to participate in providing solutions irrespective of the size of their data center.


In spite of the benefits, we do have to consider the network bandwidth requirements forced by servicing clients. Is it sufficient to meet the clients demands? What is the latency?



So, how do you use the Cloud?
OCCI is working towards standardizing the "API" to access the cloud but, unfortunately, it is not yet fully implemented by vendors. I am not sure whether the major vendors like Amazon and Salesforce.com are part of it, so customers hoping for interoperability between cloud providers will be disappointed. However, it is still a useful resource to keep track of.


Architecture specific focus


Application architectures now have to consider a few extra things in addition to traditional issues such as loose coupling and distributed deployment. They now have to focus on delivering the entire application architecture as a set of composable services. If it can be virtualized, composed and assembled programmatically and quickly, then it falls into the perfect category of cloud applications.


The following are key for successful cloud applications:
Horizontal scaling is the key to a successful cloud application. If we can deploy the application components in a distributed fashion and provision additional deployments as demand increases, we will be able to serve additional requests. Surge computing can be used as well, to procure computing capacity from public clouds in case we are running in a private one. Horizontal scaling assumes we have parallelization at some level; without it, the nodes would depend on each other or on some other common service, which would become the bottleneck.
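A stateless application tier makes this easy to sketch: since no node holds session state, a router can spread requests across however many nodes are currently provisioned, and adding a node adds capacity (class and node names here are hypothetical):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: because the application tier is stateless, any
// request can go to any node; growing the node list scales capacity.
public class RoundRobinRouter {
    private final List<String> nodes;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinRouter(List<String> nodes) {
        this.nodes = nodes;
    }

    // Spread requests evenly; floorMod keeps the index non-negative.
    public String route() {
        return nodes.get(Math.floorMod(next.getAndIncrement(), nodes.size()));
    }

    public static void main(String[] args) {
        RoundRobinRouter router = new RoundRobinRouter(List.of("node-1", "node-2"));
        System.out.println(router.route()); // node-1
        System.out.println(router.route()); // node-2
        System.out.println(router.route()); // node-1
    }
}
```

If the nodes held per-user state, this round-robin dispatch would break, which is exactly why statelessness (or externalized state) is a precondition for horizontal scaling.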


Security and compliance are important topics that we can't cover in detail in this post but that must be handled in any cloud architecture. Concerns or doubts regarding these two are perhaps the main reason why the cloud is not adopted in large corporations. They deserve a separate and detailed post.



IBM cloud reference architecture
Not sure how much detail I will be able to go into, but the majority of the "fluff" around the IBM cloud reference architecture can be ignored. Most enterprises don't embark on providing IaaS, PaaS and SaaS in the same breath; it's mostly a business decision and a question of what makes economic sense. However, here is what you can take away from it:


Common Cloud Management Platform (CCMP) consists of Operational Support Services (OSS) and Business Support Services (BSS)

  • Business Support Services represent the business-related services such as pricing, metering, billing, accounting, etc.
  • Operational Support Services represent the technical services such as provisioning, Ticket Management, Virtualization Management, etc.

QoS (Quality of Service) in CCRA
The non-functional aspects like security, resiliency, performance and consumability are cross-cutting aspects of QoS. They span the hardware infrastructure and the cloud services and must be viewed from an end-to-end perspective, including the structure of the CCRA itself, the way the hardware infrastructure is set up (e.g., in terms of isolation, disaster recovery, etc.) and how the cloud services are implemented. The major aspects of QoS are:
  • Governance and Policy.
  • Threat and Vulnerability Management
  • Data Protection
  • Availability & Continuity Management
  • Ease of doing business
  • Simplified Operations

Summary

  • It's a starting point. If cloud architecture seems daunting, then this is a good place to start.
  • It's a good "best practices" document. It captures IBM's collective experience across various cloud solutions. There's got to be something useful here. :-)
  • It defines four architectural principles (referred to as ELEG), of which at least three seem to be of value.


    • Efficiency. Basically means we need to increase utilization of cloud services.
    • Lightweightness. Basically use some form of Virtualization or other technique to avoid "heavy" management of IT. 
    • Economies of Scale. The idea is to have common management services that can be shared across cloud flavors.
    • Genericity. No clue what the message here was! Please read the document and help me! :-) 

Three and four seem similar, and I am not sure what the differentiating factors are. Maybe we need some context around the IBM doc to fully appreciate the message, because it will come up in discussions and it will be important to explain its best points and avoid the unnecessary details. I think I will get back to this someday...


There are other Reference models out there. NIST has one and so does DMTF. Again, I will get to them someday... :-)

Saturday, November 19, 2011

Grid Computing and Cloud Computing

Grid Computing
According to Wikipedia. "Grid computing is a term referring to the combination of computer resources from multiple administrative domains to reach a common goal. The grid can be thought of as a distributed system with non-interactive workloads that involve a large number of files. What distinguishes grid computing from conventional high performance computing systems such as cluster computing is that grids tend to be more loosely coupled, heterogeneous, and geographically dispersed."

Grid Computing allows us to harness the power of distributed, heterogeneous and loosely coupled computing resources as if they belonged to one large infrastructure. Utility Computing involves the concept of paying for what you use on the grid. So you basically provision what you need in advance, and that allows you to harness the power of those resources as if they were part of your own infrastructure.


Cloud Computing
Cloud Computing at a conceptual level is similar to Grid Computing: computing resources are consumed the way electric power is consumed from a power grid.

So the way I see it, Grid Computing was a precursor to Cloud Computing.

Further details will be the subject of another blog post but not sure if I should write about Multi-Tenancy and SaaS first...


Monday, November 7, 2011

Enterprise Architecture Frameworks: Zachman, TOGAF and other Methodologies

I guess this happens to most of us: we forget the theoretical aspects of Enterprise Architecture (EA) and instead just recall the "highlights" of a methodology or framework. Well, this is what happened to me when a friend once asked me about the Zachman Framework and whether it's actually useful. And what about TOGAF?
I am trying to summarize the discussion we had, and maybe someday I might actually go back and refresh the entire details.

Zachman Framework.
I agree with the general perception that this is not really a framework: it doesn't define an actual process for Enterprise Architecture. It's simply a taxonomy for organizing architectural artifacts.
These artifacts are important elements of the enterprise architecture, and they define the different representations of the enterprise from different stakeholders' perspectives.

There are basically two dimensions in the Zachman Framework. One dimension is the various actors in the scheme. Using the building industry as an example, the actors could include the owner, the builder, etc. Different artifacts are needed for each of these actors. Quoting Zachman, "These architectural representations differ from the others in essence, not merely in the level of detail". There are six perspectives: contextual (planner), conceptual (owner), logical (designer), physical (builder), detailed (sub-contractor) and the functioning enterprise.

The second dimension is the descriptive focus of the artifact. Zachman proposed six descriptive foci: the What (Data), Where (Network), When (Time), Why (Motivation), Who (People) and How (Function).


Here is an image from Wikipedia:







Here is a helpful hint for row Descriptions ( = means is equal to :-) ):
Scope=Contextual = Planner
Business Model= Conceptual = Owner
System Model = Logical = Designer
Technology Model = Physical = Builder
Detailed Representations = Detailed = Sub Contractor

Zachman framework usage

  • Every architectural artifact created in the Enterprise Architecture practice should belong to one and only one cell. So for instance, if we create a Data Architecture it should belong in the System Model(Designer) ==>Data cell. 
  • Every cell needs to be complete which means we need to have some artifact that applies to that perspective.
  • The key thing to remember is that the information in columns should be related to each other or have a strong correspondence with each other. For instance, if we are creating artifacts related to Data then these artifacts should talk about the same data from different perspectives. Let's say data represents Customers for a Business Owner and is represented in the business data model diagram. These same customers should be represented in the Physical(Technology model) data diagram as well and should describe how Customers are actually represented in terms of the technology being used. 
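The two dimensions above can be sketched as a pair of enums, with each artifact classified into exactly one (perspective, focus) cell. This is an illustrative toy, not part of the framework itself:

```java
// Illustrative sketch: the two Zachman dimensions as enums, with an
// artifact classified into exactly one (perspective, focus) cell.
public class ZachmanCell {
    enum Perspective { CONTEXTUAL, CONCEPTUAL, LOGICAL, PHYSICAL, DETAILED }
    enum Focus { WHAT, HOW, WHERE, WHO, WHEN, WHY }

    final Perspective perspective;
    final Focus focus;

    ZachmanCell(Perspective perspective, Focus focus) {
        this.perspective = perspective;
        this.focus = focus;
    }

    // e.g. a logical data architecture belongs in the Designer/Data cell.
    static ZachmanCell forDataArchitecture() {
        return new ZachmanCell(Perspective.LOGICAL, Focus.WHAT);
    }
}
```

The "one and only one cell" rule from the first bullet then becomes a simple invariant: every artifact carries exactly one such pair.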

So, to summarize, the Zachman Framework gives us a way to classify architectural artifacts and to ensure that all perspectives are covered and that they actually address the same problem. It does not, however, give a step-by-step process for creating an Enterprise Architecture. It doesn't say how the artifacts have to be created, what should be in them or how they should be consumed.


TOGAF®( The Open Group Architecture Framework)
I hear a lot about TOGAF, especially from IBM folks. :-) Not sure of the exact reasons, but maybe because it's owned by The Open Group and also because of a heavy SOA-practitioner bias.

To Quote the description from Wikipedia:
"The Open Group Architecture Framework (TOGAF®) is a framework for enterprise architecture which provides a comprehensive approach for designing, planning, implementation, and governance of an enterprise information architecture. TOGAF is a registered trademark of The Open Group in the United States and Other countries [2]. TOGAF is a high level and holistic approach to design, which is typically modeled at four levels: Business, Application, Data, and Technology. It tries to give a well-tested overall starting model to information architects, which can then be built upon. It relies heavily on modularization, standardization and already existing, proven technologies and products".


To put it simply, TOGAF is a step by step process for creating an Enterprise Architecture. So if one wanted to combine Zachman and TOGAF, then they could create the artifacts with TOGAF and then categorize them with Zachman.


An explanation of few key terms used in TOGAF:

Enterprise Continuum
TOGAF views the enterprise architecture as a continuum of architectures.

Foundation Architectures
These are generic architectures that can be used by any IT organization.

Common System Architectures
These are principles visible in many but not all enterprises.

Industry Architectures
These are shared across enterprises that belong to the same domain; for instance, Healthcare.

Organizational Architectures
These are very specific to a given enterprise.

ADM

At the heart of TOGAF is the ADM (Architecture Development Method). The ADM describes a method for creating an Enterprise Architecture. The basic structure of the ADM is shown in the Architecture Development Cycle (ADC). Here is an image from Wikipedia:

The ADM is iterative: over the whole process, between phases and within phases. It has been specifically designed with flexibility and adaptability in mind. The phases in the ADC can be adapted and their order changed to suit an organization. The ADM can also be integrated with other frameworks or methodologies, such as the Zachman Framework, which might result in a modified ADC.

A brief description of the phases.


Phase A
This will define the scope of the project, identify constraints, document requirements and establish high level definitions for current and target architectures. Output is Statement of architecture work.

Phase B
This could be very time consuming depending on the scope and the depth of the Business Modelling effort. Output will be the current and target business objectives.

Phase C
Again, this is detailed as well and there are multiple steps to be followed here. It basically boils down to developing current data-architecture description, architectural models, logical data models, data-management process models and relationship models that map business models to CRUD.

Phase D
This involves completing the technical architecture i.e. the infrastructure to support the new enterprise.

Phase E and F
Identifies the implementation projects with focus on easy-wins or projects that could have the most visible impact with least risk.

Phase G
Involves creating architectural specifications and acceptance criteria for the implementation projects.

Phase H
The final phase involves management of the artifacts created.

..And the cycle could repeat itself....


Federal Enterprise Architecture (FEA)
I am not familiar at all with FEA, but the resource below makes me unenthusiastic about it:
FEA Reference Models Tragically Misnamed

However just glancing through it, it seems to have five reference models:

  • Business Reference Model ( BRM)

      Business view of the various functions of the federal government.

  • Components Reference Model (CRM)

      IT view of systems

  • Technical Reference Model (TRM)

     The Technologies and Standards that can be used in building IT systems.

  • Data Reference Model ( DRM)

     Standard way of describing data.

  • Performance Reference Model ( PRM)

     Describes the value delivered by EA.

FEA seems to be a very well defined framework but I don't want to explore it at this point. Maybe if something piques my interest.

Gartner
Never had a chance to use this, but it might be useful in certain contexts. It might be interesting to find out whether it applies in non-Gartner engagements as well or is only a Gartner-specific thing.

Conclusion
Since these frameworks are highly theoretical, verbose and in some cases very confusing, it's best for an organization to pick one or two EA frameworks and combine the best elements from them into an enterprise- or industry-specific framework: one that recognizes the business, technology and people constraints of the enterprise and keeps EA in sync with business needs without overwhelming the process or the people (technologists or stakeholders) involved.
The key is to remember that not all organizations have well-defined processes, and not all organizations have a mature IT or EA practice. Such organizations need a scaled-down version of EA that excites them and provides visibility into the solutions being addressed by EA.

This brings us to the end of this post and leads to our next main topic: Grid and Cloud Computing. This one is on almost everyone's mind and comes up so many times in discussions that it's fair to say it's currently the hottest topic.

Resources
Wikipedia


Sunday, October 16, 2011

Agile is not a panacea.

Agile can be a powerful tool in software development, but trying to force Agile when the underlying enterprise architecture and design are not well thought out leads to a fragmented and disjointed system that constantly needs refactoring and re-architecting to handle major requirements.

Consider the following scenario:


Product A is based on standalone messaging modules that communicate with each other using CORBA or any proprietary messaging protocol. This leads to a situation where the product can't communicate across firewalls. The product management team wants development to re-architect this so that it can work across firewalls and while they are at it, allow the messaging modules to communicate using standard protocols so that new messaging modules can be dynamically added to the system.

Analysis:
These kinds of disruptive changes can be handled by Agile methodologies, but they will end up requiring an extensive planning session, which basically starts to look more like the waterfall approach. If the team is hurried, the planning may not happen correctly, leading to an incomplete architecture and high-level design.

The Agile methodology cannot choose to convert only a select few messaging modules, because the entire set has to work in a customer environment. This might result in creating multiple sprints (or iterations) that together will eventually cover the conversion process. During these "sprints" the development process will end up resembling an iterative-waterfall model.
QA, on the other hand, can't fully make use of the sprints because the nightly builds they get from Dev don't completely work and are not expected to work (yet). Some may argue that this violates Agile, that the build should work, but let's face it: this is not realistic. There are countless real-world situations where the builds don't work while project-wide refactoring is going on.

We also need to consider cases where software development is outsourced. You do need traditional requirements-management practices to ensure that the requirements are well understood and can be implemented in the time estimated. A lot of money and reputation rests on these assumptions, so trying to do away with proper requirements management by saying it is "not agile" is wrong. We need a requirements-management process that defines how to deal with changes in requirements (we can guarantee these will happen) and what the contingency plan is.


So, I think that Agile is not a panacea; it's not going to magically solve the software development problems we are having. In fact, no methodology is going to solve all the problems. Software teams have successfully modified the waterfall model to do iterative development and added flexibility in requirements management. Not that the waterfall model is perfect, but arguing that it simply fails or is always prone to disaster is wrong. Here are a few things that successful teams and projects have:
  • Smart(er) teams have always combined processes and methodologies from multiple SDLC models to achieve the best possible result.
  • Smart(er) teams always use common sense.
  • Smart(er) teams have management that adapts well to requirement changes and keeps things flexible.
  • Smart(er) teams have developers that get to focus on their work at hand and are accountable on a day by day basis.
  • Smart(er) teams have architects who define the overall architecture, lay down the high-level design for the entire release and are not forced to break the architecture down into "days". By no means does this imply that architecting is an endless, drawn-out task; on the contrary. But trying to box vision, thought and architecture into a "day's worth of work" is not going to help.
The ?(project budget) dollar questions are: 

Where does Agile apply? When exactly do I use it and how?


Agile is most applicable in situations where the architecture and design are more or less stable and where the systemic and disruptive changes have already been accounted for. There you go, it's out of my system now! Agile should be used as a "coding" methodology. That means the Enterprise Architecture has to exist (it could be minimal), the product's "main" requirements have been defined (you don't have to capture everything), and the data architecture and the application architecture have been at least minimally defined.


Here are a few examples where agile fits:

  • Where customer needs change and reordering of "small" features is necessary.
  • Small development Projects
  • Developing Small Features
  • Adding enhancements to existing features.
  • Bug Fixes
  • Situations where complete or proper documentation is not mandatory.
So to summarize, I advocate a hybrid approach:

  • Where we don't try to be 100% Agile compliant and criticize every little deviation as being "not agile". 
  • Where we finish the overall architectural model of the product or application and have a decent idea of the software development plan before we embark on strict agile practices. 
  • Where the requirements management is flexible and architectural requirements are not forced-in midway into the release. We use traditional architecture and design methodologies here sprinkled with heavy doses of common sense. 
  • Where the developer can be held accountable by following an Agile approach and making sure we have visibility into the day-to-day functioning. Too often development becomes a black box, and I think this is where Agile is great: it can make the process more transparent and allow management to peek into the progress on a day-to-day basis.
The hybrid approach I listed above is based on my own experiences, but below are a few links that talk about a hybrid approach to software development. These give me a reason to believe that I am not prejudiced or lethargic and that Agile doesn't have to be adopted "as-is". It can be modified and adapted to suit business goals. Phew...
Software Developers Are More Agile, But Not Entirely So
Mozilla Takes Hybrid Approach to Agile Software Development


    The opinions and statements in this communication are my own and do not necessarily reflect the opinions or policies of CA.

    Saturday, October 1, 2011

    SCA (Service Component Architecture)- Part 3

    This is part 3 in the series of articles covering SCA (Service Component Architecture). Refer to Part 1 and Part 2 before reading this.


    How do I use SCA?
    Even though SCA composites rely on a domain to handle intra-component communication, the components themselves can expose Web Services and allow consumers to interact with them through those Web Services.
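As a rough sketch (the component, class, and URI names here are made up for illustration), a composite file might expose a component to outside consumers over a web service binding like this:

```xml
<composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
           targetNamespace="http://example" name="example">
    <component name="GreeterComponent">
        <implementation.java class="example.GreeterImpl"/>
        <service name="Greeter">
            <!-- exposes the component to non-SCA consumers as a web service -->
            <binding.ws uri="http://localhost:8080/Greeter"/>
        </service>
    </component>
</composite>
```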


    Accessing services from SCA component
    SCA components can use other SCA components within the same domain by using references.
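As a sketch (OrderService, PricingService, and the method names are hypothetical), a component consumes another component in the same domain through an injected reference:

```java
package example;

import org.oasisopen.sca.annotation.Reference;

// Hypothetical component that consumes another SCA component in the same
// domain; the runtime injects the target based on the wiring in the SCDL.
public class OrderServiceImpl implements OrderService {

    // Injected by the SCA runtime from the corresponding <reference> element
    @Reference
    protected PricingService pricingService;

    public double priceOrder(String itemId, int quantity) {
        return pricingService.unitPrice(itemId) * quantity;
    }
}
```

The wiring itself (which component the reference points to) lives in the composite file, not in the Java code.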


    Accessing services from non-SCA component implementations
    Non-SCA components that are part of the SCA module get access to services through the ModuleContext. They use the ModuleContext in their implementations to locate services. A non-SCA component implementation would include a line like the following to get access to the module context:

    ModuleContext moduleContext = CurrentModuleContext.getContext();
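For instance (assuming the older org.osoa.sca package and its locateService lookup method; the Greeter service and class names are hypothetical), a plain helper class inside the module could look up a service like this:

```java
package example;

import org.osoa.sca.CurrentModuleContext;
import org.osoa.sca.ModuleContext;

// Hypothetical non-SCA helper class packaged inside the SCA module.
public class AuditHelper {

    public void audit(String name) {
        // Obtain the current module context and look up a service by name
        ModuleContext moduleContext = CurrentModuleContext.getContext();
        Greeter greeter = (Greeter) moduleContext.locateService("GreeterComponent");
        System.out.println("AUDIT: " + greeter.greet(name));
    }
}
```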


    Accessing services from non-SCA Client
    A client can also use the OASIS SCAClient API to invoke a service in a remote SCA domain. The client could be a plain Java SE class with a main method.


    For instance, assuming we have a helloworld service running in an SCA domain somewhere, we can use the client below. The client needs access to the Helloworld interface.


    Helloworld OASIS Client


    package abc;
    import java.net.URI;
    import org.oasisopen.sca.NoSuchDomainException;
    import org.oasisopen.sca.NoSuchServiceException;
    import org.oasisopen.sca.client.SCAClientFactory;


    public class HelloworldSCAClient {


        public static void main(String[] args) throws NoSuchDomainException, NoSuchServiceException {


            String domainURI = "uri:default"; // or e.g. "uri:default?wka=127.0.0.1:7654"


            System.out.println("HelloworldSCAClient, using domainURI " + domainURI);
            SCAClientFactory factory = SCAClientFactory.newInstance(URI.create(domainURI));


            String name = args.length < 1 ? "world" : args[0];
            System.out.println("Calling HelloworldComponent.sayHello(\"" + name + "\"):");
            Helloworld service = factory.getService(Helloworld.class, "HelloworldComponent");
            System.out.println(service.sayHello(name));
         }

    }




    SCA Policy Framework


    SCA defines a policy framework that has:


    • Interaction Policies
    Policies that apply to bindings, for instance security.
    • Implementation Policies
    Local policies that apply to how a component behaves within the SCA domain, for instance a requirement that there be a transaction for the component. The intersection of two components' policies determines how they communicate with each other. Typically, one domain has a single security policy.

    These policies can be defined using WS-Policy for communication outside the domain. Communication inside the domain is not defined by the spec, so the SCA runtime is free to choose the mechanism.


    Example:
    A PolicySet defines the list of policies, and the service/reference/binding indicates which policies to apply.



    <service name="MyService" promote="SomeComponent" requires="sca:myAuthPolicy">
       <interface.java interface="someInterface"/>
       <binding.ws port="http://host/Myservice"/>
    </service>
    <reference name="MyServiceRef" promote="SomeComponent/someService">
       <interface.java interface="someInterface2"/>
       <binding.ws port="http://host/Myservice2" requires="sca:myAuthPolicy"/>
    </reference>




    <policySet name="sca:MyUserNameToken" provides="sca:myAuthPolicy" appliesTo="sca:binding.ws">
       <wsp:Policy>
          <sp:SupportingToken>
             <wsp:Policy>
                <sp:UserNameToken>
                </sp:UserNameToken>
             </wsp:Policy>
          </sp:SupportingToken>
       </wsp:Policy>
    </policySet>


    Wednesday, September 14, 2011

    SCA (Service Component Architecture)- Part 2

    This is the second part in the series of articles covering SCA. See Part 1 for background.


    How do I create an SCA component/Composite?
    The sections below use the SCA Java programming model to describe the annotations that are available, followed by an example.


    Brief Overview of Annotations
    The following is a very brief overview of the annotations in SCA, meant only as a cursory reference. Please refer to the SCA spec for details.

     @Service
     A Java component class uses the @Service annotation to specify the interface of the service it implements. 


    @Remotable
    This annotation, placed on the Java interface of a service, defines a remotable service. Remotable services use by-value data exchange semantics. 



    Local service
    A local service can only be invoked by clients that are part of the same module. A Java local service interface is defined without a @Remotable annotation.


    @AllowPassByReference
    This annotation on the implementation of a remotable service declares whether it allows by-reference data exchange semantics on calls to it.
    Local services use by-reference data exchange semantics, so changes made to parameters are seen by both the client and the service provider.

    @Scope
    A @Scope annotation on either the service's interface definition or on the service class denotes a scoped service. The scope values can be:
    • stateless: This is the default; each request is handled separately.
    • request: In this mode, requests are directed to the same Java instance for all local service invocations that occur while servicing a remote service request.
    • session: In this mode, requests are directed to the same Java instance for all requests in the same "session".
    • module: In this mode, requests are directed to the same Java instance for all requests within the same "module".

    The SCA runtime prevents concurrent execution of methods on an instance of these implementations, except for module-scoped ones.

    Lifecycle methods on a scoped service are implemented using the @Init annotation (called only once, at the start of the scope, after the service's properties and references have been injected) and the @Destroy annotation (called when the scope ends).
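Putting scope and lifecycle together, here is a sketch of a scoped service (CounterService is a hypothetical interface; the "COMPOSITE" scope value is the one the OASIS annotation accepts, and it also appears in the shopping cart example later in this article):

```java
package example;

import org.oasisopen.sca.annotation.Destroy;
import org.oasisopen.sca.annotation.Init;
import org.oasisopen.sca.annotation.Scope;

// Hypothetical scoped service: a single instance serves all requests in
// the composite, with lifecycle callbacks at the start and end of the scope.
@Scope("COMPOSITE")
public class CounterServiceImpl implements CounterService {
    private int count;

    @Init
    public void start() {
        // called once, after properties and references have been injected
        count = 0;
    }

    public int increment() {
        return ++count;
    }

    @Destroy
    public void stop() {
        // called when the scope ends
        System.out.println("CounterService stopping, count=" + count);
    }
}
```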


    @Property

    This annotation on a field or method in the Java class defines a configuration property of a Java component implementation. It takes the name of the property and a boolean specifying whether injection is required.

    @Reference
    This annotation is used to acquire access to a service using reference injection. It takes the name of the reference and a boolean specifying whether it is required.

    @Context
    This annotation, on a field of type ModuleContext, is used to access services.

    @Oneway
    This annotation is used to mark a non-blocking method. The method returns immediately allowing the client to continue execution.
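A sketch (Notifier is a hypothetical interface; note the OASIS annotation class is spelled OneWay):

```java
package example;

import org.oasisopen.sca.annotation.OneWay;
import org.oasisopen.sca.annotation.Remotable;

// Hypothetical fire-and-forget notification service.
@Remotable
public interface Notifier {

    // Non-blocking: the caller does not wait for completion,
    // so a one-way method must return void.
    @OneWay
    void send(String message);
}
```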


    @Session and @SessionID
    These annotations, along with @Scope, are used to implement conversational services. A session is used to maintain information about a single conversation between a client and a remotable service. Thus, SCA simplifies the design of conversational services while leaving the details of ID generation, state management, and routing to the SCA container.

    @Callback
    This annotation can be used to provide callback services on a remotable service interface. It takes the Java Class object of the callback interface as a parameter. A callback service provides asynchronous communication from a service provider back to its client. 
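A sketch (OrderService and OrderCallback are hypothetical interfaces): the provider receives orders and later pushes confirmations back to the client through the callback interface:

```java
package example;

import org.oasisopen.sca.annotation.Callback;
import org.oasisopen.sca.annotation.Remotable;

// Hypothetical service interface; @Callback names the interface the
// provider will use to call back into the client asynchronously.
@Remotable
@Callback(OrderCallback.class)
public interface OrderService {
    void placeOrder(String itemId);
}
```

The client implements OrderCallback (for example, a method like orderConfirmed(String id)), and the SCA runtime routes the provider's callback invocations to it.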



    Examples  (Taken from Apache Tuscany)


    Catalog Interface
    package abc;
    import org.oasisopen.sca.annotation.Remotable;


    @Remotable
    public interface Catalog {
        Item[] get();
    }


    Cart Interface


    package abc;
    import org.apache.tuscany.sca.data.collection.Collection;
    import org.oasisopen.sca.annotation.Remotable;


    @Remotable
    public interface Cart extends Collection<String, Item> {
    }


    Total Interface


    package abc;
    import org.oasisopen.sca.annotation.Remotable;


    @Remotable
    public interface Total {
        String getTotal();
    }

    Catalog Interface Implementation


    package abc;
    import java.util.ArrayList;
    import java.util.List;
    import org.oasisopen.sca.annotation.Init;
    import org.oasisopen.sca.annotation.Property;
    import org.oasisopen.sca.annotation.Reference;


    public class FruitsCatalogImpl implements Catalog { 
        @Property
        public String currencyCode = "USD";

        @Reference
        public CurrencyConverter currencyConverter;

        private List<Item> catalog = new ArrayList<Item>();

        @Init
        public void init() {
            //populate the catalog
        }

        public Item[] get() {
            return catalog.toArray(new Item[catalog.size()]);
        }
    }


    Shopping Cart Implementation 


    package abc;
    import java.util.Map;
    import org.apache.tuscany.sca.data.collection.Entry;
    import org.apache.tuscany.sca.data.collection.NotFoundException;
    import org.oasisopen.sca.annotation.Init;
    import org.oasisopen.sca.annotation.Scope;


    @Scope("COMPOSITE")
    public class ShoppingCartImpl implements Cart, Total { 
        private Map<String, Item> cart; 
        @Init
        public void init() {}


        public Entry<String, Item>[] getAll() {
            //return all entries in the cart
        }


        public Item get(String key) throws NotFoundException {
           //return item
        }


        public String post(String key, Item item) {
           //put in cart and return key.
            return key;
        }


        public void put(String key, Item item) throws NotFoundException {
            //update item
        }

        public void delete(String key) throws NotFoundException {}


        public Entry<String, Item>[] query(String queryString) {
            //return entries matching the query
        }

        public String getTotal() {
            //return total
        }
    }


    SCDL File
    <composite xmlns="http://docs.oasis-open.org/ns/opencsa/sca/200912"
               xmlns:tuscany="http://tuscany.apache.org/xmlns/sca/1.1"
               targetNamespace="http://store"
               name="store">

        <component name="Store">
            <tuscany:implementation.widget location="uiservices/store.html"/>
            <service name="Widget">
                <tuscany:binding.http uri="/store"/>
            </service>
            <reference name="catalog" target="Catalog"/>
            <reference name="shoppingCart" target="ShoppingCart/Cart"/>
            <reference name="shoppingTotal" target="ShoppingCart/Total"/>
        </component>

        <component name="Catalog">
            <implementation.java class="services.FruitsCatalogImpl"/>
            <property name="currencyCode">USD</property>
            <service name="Catalog">
                <tuscany:binding.jsonrpc uri="/Catalog"/>
            </service>
            <reference name="currencyConverter" target="CurrencyConverter"/>
        </component>

        <component name="ShoppingCart">
            <implementation.java class="services.ShoppingCartImpl"/>
            <service name="Cart">
                <tuscany:binding.atom uri="/ShoppingCart/Cart"/>
            </service>
            <service name="Total">
                <tuscany:binding.jsonrpc uri="/ShoppingCart/Total"/>
            </service>
        </component>

        <component name="CurrencyConverter">
            <implementation.java class="services.CurrencyConverterImpl"/>
        </component>

        <service name="CompositeWidget" promote="Store/Widget">
            <tuscany:binding.http uri="/compositestore"/>
        </service>
    </composite>