
Tuesday, April 12, 2011

The Enterprise's Vision on IT Sourcing

Sourcing strategies are constrained by application-level architectures. Closed application landscapes with proprietary interfaces, intertwined structures and poor identity and access management raise barriers to secure and flexible sourcing initiatives, including the use of cloud computing, multiple-provider strategies and seamless on-premise interactions.

To gain the full benefit of today's sourcing offerings, applications need to adhere to contemporary standards and architectural principles.

The challenges

The end user is responsible for fulfilling business processes, or parts of them, and is facilitated by chains of applications. It is an IT responsibility to maintain an adequate user experience across those applications - including seamlessness, continuity, device independence and location independence - without distracting the user from the business process at hand.

At the same time it must be possible to outsource the applications flexibly, without being constrained by the supported business processes, so that services can be offered at any place, on any device, to any user, in a secured environment.

This leads to the following architectural challenges at the application layer:

• On premise accessibility, inbound and outbound interaction
• Cross-provider interfacing
• Seamless and quick workload transfer across multiple providers
• Universal access including single sign-on from any place on any device by anyone


The solutions

On premise accessibility

Legacy applications need to be wrapped in a standards-based interaction shell, and infrastructural middleware components need to be installed locally to enable smooth communication between on-premise applications and external applications in both directions.
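As an illustration, the wrapping idea can be sketched in a few lines of Python: a hypothetical legacy routine that expects fixed-width, position-based input is exposed through a JSON "shell", so external applications never see the proprietary format. All names and formats here are invented for the example.

```python
import json

def legacy_lookup(record: str) -> str:
    """Hypothetical legacy routine: parses a fixed-width record,
    as older batch applications often do."""
    customer_id = record[0:8].strip()
    field = record[8:16].strip()
    return f"{customer_id}:{field}:OK"

def handle_json_request(body: str) -> str:
    """Standards-based shell: accept a JSON request, translate it to
    the legacy fixed-width format, call the legacy routine, and
    translate the result back into a JSON response."""
    req = json.loads(body)
    fixed = f"{req['customer_id']:<8}{req['field']:<8}"
    cid, field, status = legacy_lookup(fixed).split(":")
    return json.dumps({"customer_id": cid, "field": field, "status": status})
```

External parties interact only with the JSON interface; the fixed-width convention stays hidden behind the shell.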

Cross-provider interfacing

Applications must adhere to commonly implemented interoperability standards to enable communication between applications running in environments of different providers.

Workload transfer across multiple providers

Outsourced application component images must be portable between platforms running in environments of different providers.

Outsourced platform images - including the supported application components - must be portable between infrastructures running in environments of different providers.

Outsourced virtual infrastructure images - including the supported platforms and application components - must be portable between environments of different providers.
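At every one of these three levels, the portability requirement boils down to both providers agreeing on a common image format. The sketch below (Python, with an invented manifest format - not a real OVF or OCI schema) shows the idea: the exporting provider emits a neutral manifest, and the importing provider rejects any format it does not support.

```python
import json

def export_manifest(name: str, layers: list, fmt: str = "neutral-v1") -> str:
    """Export an image as a provider-neutral manifest (illustrative
    format; real-world equivalents are OVF for VMs or OCI for containers)."""
    return json.dumps({"format": fmt, "name": name, "layers": layers})

def import_manifest(doc: str, supported_formats: set) -> dict:
    """Import side: only manifests in a mutually supported format are
    portable; anything else raises an error."""
    manifest = json.loads(doc)
    if manifest["format"] not in supported_formats:
        raise ValueError("image format not portable to this provider")
    return manifest
```

Workload transfer succeeds exactly when the intersection of supported formats is non-empty - which is why commonly implemented standards matter.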

Universal access

Web-based access is required to enable application accessibility from any place and from any device.

Federated-identity-based access mechanisms must be in place to securely enable a single sign-on experience across multiple providers, including on-premise access, for potentially anyone.
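A minimal sketch of the federated-identity idea, in Python: the identity provider signs a set of claims, and any relying provider that trusts the key can verify them without a separate login. Real deployments use SAML or OpenID Connect with asymmetric keys and richer claim sets; the HMAC-signed token below is a deliberate simplification.

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key"  # illustrative; real federations use the IdP's published keys

def issue_token(user: str, issuer: str, ttl: int = 3600) -> str:
    """Identity-provider side: sign a claim set (a stand-in for a
    SAML assertion or OIDC ID token)."""
    claims = base64.urlsafe_b64encode(json.dumps(
        {"sub": user, "iss": issuer, "exp": int(time.time()) + ttl}
    ).encode()).decode()
    sig = hmac.new(SHARED_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return claims + "." + sig

def verify_token(token: str):
    """Relying-provider side: check the signature and expiry; accept the
    user without any local password check - that is single sign-on."""
    claims_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, claims_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or signed by an untrusted party
    claims = json.loads(base64.urlsafe_b64decode(claims_b64))
    if claims["exp"] < time.time():
        return None  # expired
    return claims
```

The key point is that the application never stores credentials: it only needs to trust the issuer's signature, which is what makes the scheme work across multiple providers and on premise alike.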

Saturday, May 01, 2010

Cloud Computing Explained

We are heading toward Cloud Computing. About one year ago I published a posting about this trend. But what is Cloud Computing, exactly? Does it replace the SOA and EDA hypes? The answer to that last question: no! Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.

The National Institute of Standards and Technology - NIST - has defined Cloud Computing. This definition matches my own vision and understanding perfectly, and I think it is worthwhile to share. In this posting I'll add two of my own pictures to support the understanding.

First of all, to understand Cloud Computing it is very important to view IT services from a layered perspective. The picture below is a simplified version of the model I always have at hand in my daily practice, and which I published before on my blog.

IT-services stack


IT-delivery offerings in the market tend to concentrate on each of these layers. Each layer provides services to the next higher layer in the stack, adding abstraction and value to the layer below it. This is a move away from the stove pipes, where every application relies on dedicated solutions throughout the stack.

(Honesty demands mentioning appliances: hardware stove-pipe boxes built for very high performance requirements. The consumer of the services should, however, be unaware of these lower-level implementation strategies.)

When you understand this layered view, you will be able to understand Cloud Computing. NIST defines Cloud Computing as follows:
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics
On-demand self-service
A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access
Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling
The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
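Resource pooling can be illustrated with a toy allocator in Python: tenants request and release capacity units from a shared pool, and resources are assigned and reassigned purely by demand. The tenant never learns which physical resources back its allocation, which is exactly the "sense of location independence" NIST describes.

```python
class ResourcePool:
    """Simplified multi-tenant pool: a fixed capacity of anonymous
    units is dynamically assigned and reassigned to tenants."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.assigned = {}  # tenant -> number of units held

    def allocate(self, tenant: str, units: int) -> bool:
        """Grant the request only if the shared pool still has room."""
        if sum(self.assigned.values()) + units > self.capacity:
            return False
        self.assigned[tenant] = self.assigned.get(tenant, 0) + units
        return True

    def release(self, tenant: str, units: int) -> None:
        """Returned units immediately become available to other tenants."""
        self.assigned[tenant] = max(0, self.assigned.get(tenant, 0) - units)
```

One tenant's released capacity is instantly reusable by another - that reassignment across consumers is what distinguishes pooling from dedicated hosting.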

Rapid elasticity
Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
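A minimal autoscaling rule makes elasticity concrete: derive the number of instances from the observed load, scaling out when load rises and back in when it falls. The numbers are, of course, illustrative.

```python
import math

def target_instances(current_load: float, load_per_instance: float,
                     minimum: int = 1, maximum: int = 100) -> int:
    """Elasticity sketch: how many instances should be running for the
    observed load, bounded by a floor and a (provider-side) ceiling."""
    needed = math.ceil(current_load / load_per_instance)
    return max(minimum, min(maximum, needed))
```

To the consumer the ceiling is rarely visible, which is why provisionable capacity "appears to be unlimited"; to the provider, the same rule run in reverse is what releases capacity for rapid scale-in.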

Measured Service
Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.
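Metering can be sketched as usage records priced by a rate card; the rates and resource names below are invented for the example.

```python
from collections import defaultdict

class Meter:
    """Measured service: record usage per consumer and resource type,
    then report it - the basis for both billing and transparency."""

    def __init__(self, rates: dict):
        self.rates = rates              # e.g. {"cpu_hours": 0.10}
        self.usage = defaultdict(float)  # (consumer, resource) -> amount

    def record(self, consumer: str, resource: str, amount: float) -> None:
        self.usage[(consumer, resource)] += amount

    def bill(self, consumer: str) -> float:
        """Price the accumulated usage of one consumer."""
        return sum(amount * self.rates[resource]
                   for (c, resource), amount in self.usage.items()
                   if c == consumer)
```

The same usage data serves both sides: the provider optimizes resource use against it, and the consumer can audit exactly what was consumed.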

Service Models
Software as a Service (SaaS)
The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Example: Google Gmail

Platform as a Service (PaaS)
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Example: IBM Cloud Burst

Infrastructure as a Service (IaaS)
The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Example: Amazon EC2


My visualization
Deployment Models
Private cloud
The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud
The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud
The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud
The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
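The cloud-bursting example from the definition can be sketched as a placement rule: fill private capacity first and burst only the remainder to a public cloud. The cost figures are illustrative.

```python
def place_workload(units: int, private_free: int,
                   private_cost: float = 0.0,
                   public_cost: float = 0.05) -> dict:
    """Cloud bursting sketch: private capacity absorbs as much of the
    workload as it can; the overflow bursts to the public cloud, and
    the marginal cost of the burst is reported."""
    private = min(units, private_free)
    public = units - private
    return {"private": private,
            "public": public,
            "cost": private * private_cost + public * public_cost}
```

Note the precondition hidden in the NIST wording: this placement only works if the two clouds are "bound together" by technology that makes the workload portable between them.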

Thank you, Peter Mell and Tim Grance! In return feel free to reuse my pictures...

Thursday, July 02, 2009

The CIO's top 3 priorities

New waves of technological innovation lead to new businesses for IT delivery. These new businesses use very fast, ultra-large-scale models to deliver IT services to consumers: infrastructure such as high-volume processing, storage and network facilities, delivered within minutes at rates of a few cents per hour of usage. Consumers can access virtual PCs in virtual LANs of any size, for any period of time, on demand, using protocols like RDP (Remote Desktop Protocol), which gives the user a local experience of high capacity. On top of this infrastructure, other businesses deliver application functionality at the same ultra-large scale. Amortizations are spread over huge numbers of users worldwide, connected over the Internet.

In every enterprise, time-to-market as well as IT costs are continuously under pressure. As emerging new businesses promise - and are currently starting to prove - that they can dramatically cut both time-to-market and costs, enterprise IT departments must prepare for change. Although the change will be fundamental, it is not realistic to expect a big bang.

To deliver application functionality and platform services to the enterprise, policies need to be established with regard to:

A. In-house delivery
B. Outsourcing to partners
C. Consuming services from the cloud

During the next 5 years a hybrid situation will evolve, with the weight shifting from A to B to C. Many organizations already witness the change from A to B, starting with consuming housing services and evolving to consuming hosting services.

To guarantee flexibility and interoperability in a hybrid context - which will last for a long time, if not "forever" - extensive platform standardization is required. Three subjects will dominate the CIO's agenda for the next couple of years:


  • Platform standardization

  • Sourcing strategy

  • Commodity utilization



1. Platform standardization

Application platforms (a framework essentially consisting of portals, ESBs, DBMSs, application servers and web browsers) and infrastructure platforms (essentially offering OS, network, storage and underlying hardware) need to be highly standardized to allow easy interoperability, scalability and flexible deployments. These platforms need to be based on open architectures to allow seamless integration, internally and externally.

2. Sourcing strategy

Delivery will be outsourced to specialized parties whose core business is IT delivery. The enterprise can take advantage of the competences and economies of scale of specialized suppliers. Focus will shift from in-house delivery to orchestration of delivery by multiple sourcing partners.

3. Commodity utilization

Platform services and application functionality are emerging from the cloud. PaaS (Platform as a Service) and SaaS (Software as a Service) will become available instantly, on demand and on a pay-as-you-go basis, with automated fast-scale facilities. Global scaling benefits of tens of thousands of highly standardized virtualized resources lead to huge cost reductions with hardly any pre-investment for consumers. After a level of trust has been established with regard to performance, availability and security, enterprises will massively embrace these offerings. Small organizations and start-ups with little or no budget and hardly any legacy will be the first to adopt - indeed, they are already consuming these services today.
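The "hardly any pre-investment" argument can be made concrete with a break-even calculation: below a certain number of usage hours, paying per use is cheaper than owning. All figures in the example are illustrative.

```python
def breakeven_hours(capex: float, owned_cost_per_hour: float,
                    cloud_rate_per_hour: float) -> float:
    """Hours of use at which owning equals renting. Below this point
    the pay-as-you-go model wins; above it, ownership amortizes.
    If the cloud rate is not higher than the marginal cost of owned
    capacity, renting is always cheaper (break-even never occurs)."""
    if cloud_rate_per_hour <= owned_cost_per_hour:
        return float("inf")
    return capex / (cloud_rate_per_hour - owned_cost_per_hour)
```

For a hypothetical 10,000-unit capital outlay, a 0.01/hour running cost for owned capacity and a 0.11/hour cloud rate, the break-even point is around 100,000 hours of use - which illustrates why low-usage start-ups adopt first.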

Sunday, November 16, 2008

Enterprise Agility - An Integrated Approach

Earlier this year I published a posting on the purpose of Enterprise Architecture in your company. I explained what the environment dynamics are and which technologies form the layer of indirection between the business and its changing contexts.

Lars Hansen, a student at the IT University in Denmark, took the subject of enterprise agility to an academic level with a thesis called Enterprise Agility - An Integrated Approach (PDF).

His focal point is agility in relation to business processes and information systems. He analyzes the relationship between BPM, SOA and EA (Enterprise Architecture). He sees EA playing a very different role with regard to agility than BPM and SOA do: EA takes the long-term, enterprise-wide view of resource utilization in the enterprise. In some ways this long-term view is an antithesis to agility, but he sees huge synergies in using EA in combination with BPM and SOA. However, he also finds that something is missing from the equation: to integrate EA, BPM and SOA, there needs to be a shared language for understanding the architecture as a whole.

Worthwhile reading!