An object can be anything you decide to maintain data about: a human, a train, an order. When things relevant to these objects happen, the data about them may need to change to represent the new situation: a human gets ill, a train gets delayed, an order gets rejected. Things that happen are events.
Data about objects is maintained in databases. So events may trigger database updates. The database persists the state of objects. You may choose to persist the state-history (data warehouses), or not.
So far so good: our applications handle these updates based on simple or complex algorithms. But things may get complicated in highly active operational environments with near real-time processing requirements. Consider the following cases.
The state change of an object is derived by correlating multiple events occurring within a time-frame.
A train's new estimated time of arrival at B depends on (1) the train's departure time at A, AND (2) a speed limit between A and B due to heavy weather, AND (3) congestion approaching B (because of other trains that departed earlier from A and have not yet arrived at B).
The state change of an object is derived from patterns in multiple events occurring within a time-frame.
Two gate passages are detected with the same access token within a time-frame that is too short to travel between the two gates. This pattern detects - in real-time - an illegal copy of the access token (new state: blacklisted) and may alert authorized personnel on duty to arrest the passenger instantly.
The state of objects changes faster than a database can follow.
A huge number of stock quotes to be traded changes within milliseconds.
Actions are to be started instantly based on a specific state change of an object.
Inform passengers that their train will be delayed (mind: the action is the result of a real-time correlation of multiple event types within certain time-frames, see above).
In case of high volume state changes, real-time event correlation or real-time event pattern recognition, wouldn’t it be a good idea to deploy a dedicated service in your SOA to process events, hold states and publish new derived events?
An event processor is a service that pulls multiple streams of event data through its memory for comparison. Boolean logic detects correlations and/or the occurrence of predefined patterns across multiple event instances within certain time-frames. The event processor may also instantiate, likely in memory, the objects whose states you want to track (e.g. the delay states of all running trains). Boolean logic captures relevant state changes of the object instances and triggers instantiation of the new state (which can be queried). All in real-time, within a few clock cycles, by applying boolean algebra and truth tables. Based on the results, actions can be triggered: executing tasks, starting processes (BPM), publishing derived events and new object states, or driving business activity monitors (BAM).
If you are able to model the event data as idempotent, this service will be not only potentially very powerful, but very robust as well.
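The gate-passage pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration (names and the 300-second threshold are assumptions, not the author's implementation; a real event processor would use a dedicated CEP engine): it holds the last passage per access token in memory and raises a derived "blacklisted" event when two different gates see the same token within a window too short to travel between them.

```python
# Hypothetical sketch of the pattern above: two gate passages with the same
# access token, at different gates, within a window too short to travel.
MIN_TRAVEL_SECONDS = 300  # assumed minimum travel time between any two gates

class GatePassageDetector:
    def __init__(self, min_travel=MIN_TRAVEL_SECONDS):
        self.min_travel = min_travel
        self.last_seen = {}  # token -> (gate, timestamp) held in memory

    def on_event(self, token, gate, timestamp):
        """Return a derived 'blacklisted' event if the pattern matches, else None."""
        alert = None
        prev = self.last_seen.get(token)
        if prev is not None:
            prev_gate, prev_ts = prev
            if gate != prev_gate and (timestamp - prev_ts) < self.min_travel:
                # New derived state for the token: blacklisted
                alert = {"token": token, "state": "blacklisted",
                         "gates": (prev_gate, gate)}
        self.last_seen[token] = (gate, timestamp)
        return alert

detector = GatePassageDetector()
detector.on_event("T1", "gate-A", 1000)
print(detector.on_event("T1", "gate-B", 1060))  # 60 s apart -> alert
```

The derived event returned here is exactly the kind of published result the text mentions: it could start a process that alerts personnel on duty.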
Monday, March 31, 2008
Event Processor tracks State of Objects
Saturday, March 29, 2008
IT Services Stack: collaboration experiment
It is not always easy for an enterprise IT architect to keep scope and hold on to the complete picture. As we have several architects with different competences, I felt the need to develop an IT Services Stack. The IT Services Stack is a picture of a layered view on all aspects of IT from a component perspective.
I always have this picture at hand during meetings, and I use it to address subjects to the most competent architects.
The idea behind the view is the layering of services delivered by components. At every layer components are defined that play a role in delivering services. Components on one layer make use of services delivered by components on that same layer or by components on the next lower layer. Those are the constraints I applied to construct the model. Don't view the layers as a logical top-down flow, but as a way of grouping and encapsulating cohesive components.
The top layer is the business layer. The next lower layer is the process layer. These two business oriented layers do not exclusively imply externally visible business and processes (like transportation of people by trains), but also internal business and processes. E.g. the IT department delivers services to other departments. This is the IT-scoped business defined at the top level layer. And the processes of the IT business are e.g. software development processes that require development tools (IT-business applications).
Call for collaboration
I would like to make this preliminary IT Services Stack more consistent and supply an extended view on every component mentioned in the picture. The model should be defined one level deeper, with the following attributes:
- Function of the component
- Relationship with other components
- Sub-level components and models
- Related open standards
- Innovative products in the market
Don't hesitate; even the smallest bit of input is more than welcome. If you maintain your own blog, you could help by giving the initiative some attention there.
I will maintain the model based on these inputs and keep all subsequent versions available in the public domain in PowerPoint and JPG format. Everybody is free to copy, use and republish the continuously maturing model for his or her own purpose.
Download the PowerPoint 97-2003 document of the current version of the model.
Download the PowerPoint 2007 document of the current version of the model.
Reactions may be supplied by email or by adding a comment to this posting. If appropriate feel free to use hyperlinks to your own blog or relevant web sites.
Wednesday, March 26, 2008
Transforming Canonical Message: answer to readers comment
A reader commented on my posting: Canonical Data Model is the incarnation of Loose Coupling. Let me walk through the comment:
I hope I understand you: A data provider sends its data in its own format.
Yes, that is correct.
A data consumer receives this message, converts it to a canonical data model, possibly based on the message type, and then transforms it to its own format.
No, that is not correct. The message is converted to a canonical format by a generic transformation service. This service queries the canonical data model to get the transformation rules. The canonical message is published for consumption by any interested endpoint. Before consumption by an endpoint, another generic service converts the message from the canonical format to the endpoint's format. So the endpoint consumes the message in its own format.
All of this is happening within the "global data space" layer.
Yes.
(You probably would merge the transformation rules, instead of performing two transformations)
No. The messages are converted near the endpoints; there will always be an intermediate canonical instance of the message traveling across the global data space. This simplifies the mechanism. If there are multiple data providers and/or multiple data consumers, merged transformation rules would lead to an exponentially increasing number of transformations, and multiple instances (different formats) of the message would travel across the global data space. See the picture below.
The picture shows one message type that is provided by two different sources and consumed by four targets. The left-hand side shows direct transformations, whereas the right-hand side shows an intermediate canonical message instance.
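The advantage of the intermediate canonical instance can be made concrete with a small sketch. All field names and endpoint names below are hypothetical, assumed for illustration only: each provider registers one mapping to the canonical format and each consumer one mapping from it, so N providers and M consumers need N + M maps instead of N x M pairwise transformations.

```python
# Provider-specific -> canonical mappings (assumed formats, for illustration)
to_canonical = {
    "source_a": lambda m: {"order_id": m["id"], "qty": m["amount"]},
    "source_b": lambda m: {"order_id": m["orderNo"], "qty": m["quantity"]},
}

# Canonical -> consumer-specific mappings
from_canonical = {
    "target_1": lambda c: {"ref": c["order_id"], "units": c["qty"]},
    "target_2": lambda c: {"OrderId": c["order_id"], "Qty": c["qty"]},
}

def publish(source, message):
    """Generic transformation service near the provider: emit the canonical instance."""
    return to_canonical[source](message)

def consume(target, canonical_message):
    """Generic transformation service near the consumer: emit the endpoint format."""
    return from_canonical[target](canonical_message)

canonical = publish("source_a", {"id": 42, "amount": 3})
print(consume("target_1", canonical))  # {'ref': 42, 'units': 3}
```

Adding a fifth consumer means adding one entry to `from_canonical`; no existing provider or consumer mapping is touched, which is the decoupling the post argues for.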
If I am correct so far - please interrupt at any time ;-) - then both endpoints are completely decoupled.
Yes.
Let's assume that a new data consumer needs an additional piece of information, a piece of data which can be provided by the data provider. Wouldn't that mean that I have to change the transformation rules for both endpoints, because the canonical data model gets an additional field?
Yes, if the new data was not foreseen at design time of the canonical message, you will have to extend the transformation rules in the canonical data model AND have the provider deliver the new data. But if the data had been available, it would have been wise to model it into the canonical message, even if it was not required at that moment.
If the data is not available you might add a new service that enriches the original message. This pattern is known as the VETO pattern.
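An enrichment service of the kind just mentioned can be sketched minimally. This is a hypothetical illustration of the Enrich step (the "E" in VETO); the lookup function, field names and reference data are all assumed:

```python
# Hypothetical reference-data lookup; a stub standing in for a real service.
def lookup_customer_segment(customer_id):
    return {"C-1": "gold"}.get(customer_id, "standard")

def enrich(message):
    """Return a copy of the message with the extra field added (original untouched)."""
    enriched = dict(message)
    enriched["segment"] = lookup_customer_segment(message["customer_id"])
    return enriched

print(enrich({"customer_id": "C-1", "order_id": 42}))
# {'customer_id': 'C-1', 'order_id': 42, 'segment': 'gold'}
```

Placed between the provider and the canonical message, such a service supplies data the original message lacks without changing the provider.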
By modeling the canonical messages from an event-driven perspective - messages representing relevant business events - and not from a "currently required data" perspective you might decrease the need for change.
From a deployment view the whole "global data space" layer would become an atomic unit: a piece that can only be deployed in one piece. Is that a good idea when talking about a major backbone in the corporate's IT environment?
No, not quite. You should think of federated infrastructures for the global data space as well as for the canonical data model. Domains need only know their own formats and semantics plus the canonical formats and semantics, not those of other domains. Relevant canonical formats and semantic definitions could be pushed to the domains in a federated model.
If you don't have a federated bus infrastructure, messages can still be propagated across multiple bus implementations as depicted below.
A service subscribes to a published message in Bus 1 and calls (synchronously) a service in Bus 2 to pass the message reliably. The called service republishes the message in Bus 2. This is a simple method to pass published messages across multiple independent service bus infrastructures that are unaware of each other and yet part of one global data space.
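The bridging mechanism just described can be sketched with a toy pub/sub bus. The `Bus` class, topic names and messages are all hypothetical stand-ins for real ESB infrastructure; the point is only the shape of the bridge: a plain subscriber on Bus 1 that synchronously calls a service which republishes on Bus 2.

```python
# Toy in-process publish/subscribe bus (hypothetical, for illustration only).
class Bus:
    def __init__(self, name):
        self.name = name
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

bus1, bus2 = Bus("Bus 1"), Bus("Bus 2")

def republish_service(message):
    # Called synchronously from Bus 1's side; republishes the message in Bus 2.
    bus2.publish("orders", message)

# The bridge: an ordinary subscriber on Bus 1 that calls the Bus 2 service.
bus1.subscribe("orders", republish_service)

received = []
bus2.subscribe("orders", received.append)
bus1.publish("orders", {"order_id": 7})
print(received)  # [{'order_id': 7}]
```

Neither bus knows of the other; only the bridge service does, which keeps the two infrastructures independent yet part of one global data space.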
See also a nice article I referred to in this blog about a distributed implementation of the global data space.
Tuesday, March 25, 2008
SOA Governance in a nutshell
SOA governance is about policies with regard to building as well as running SOA-based applications. This animation nicely explains SOA governance in a nutshell.
Saturday, March 22, 2008
Canonical Data Model is the incarnation of Loose Coupling
Quote from a reader of my blog with regard to the Canonical Data Model:
The main issue I have is that someone has to come up with a data model that includes the information required by everyone - a superset - rather than a subset what a point-to-point connection requires. It seems to me that this is very difficult to achieve, from a design point of view - capture everything - to a governance point of view - who is going to own this and define what an object is - to a technical point of view - very complex objects, different versions etc.
The "superset" he is talking about is merely a metamodel of the data that point-to-point connections would require. The canonical data model is a federated collection of local metamodels including the definition of the common semantics and the format transformation rules. It need not be "more" than you need and it does not contain any stored application data.
To enable loose coupling a layer of indirection is defined in terms of a global data space, a canonical data model and canonical messages. This enables the mapping of semantics and transformation of formats between mutually unknown (decoupled) endpoints.
A good way to understand the mechanism is to view the canonical messages as the formally defined carriers of specific information throughout the enterprise. Data providers (sending endpoints) fill the appropriate canonical message using the metadata defined in the canonical data model. Data consumers (receiving endpoints) consume the data from this canonical message, also using the metadata defined in the canonical data model. In this way the endpoints don't need to have any knowledge of each other.
The endpoints don't even need to know the canonical data model. Services delivered by the infrastructure (global data space), which has knowledge of the canonical data model, will take care of loading the data delivered by an endpoint into the appropriate canonical message (carrier) and unload the data from the canonical message to be consumed by the receiving endpoint. The endpoints only use their own formats and are totally decoupled.
You might recognize that in fact - from a software architecture perspective - the canonical data model is the incarnation of loose coupling.
Indeed it is true that this addresses a governance aspect that nowadays is not represented very strongly in most IT organizations. If you want to reach the next level of IT maturity based on the ideas of SOA and EDA, it is a prerequisite to extend your governance with regard to formal semantics and format definitions as well.
Conclusion
The idea of the canonical data model is to define the semantics and formats from the local endpoint perspectives. To be able to map the endpoint interfaces in a loose coupling context (endpoints do not know each other), an intermediate mediation layer needs to be in place. The canonical data model is the underpinning facility that allows for the mapping of the distinct local semantics and the transformation of the distinct local formats between decoupled and independent endpoints.
So yes, it is right that maturing your software architectures requires maturing the required governance: loose coupling comes at the price of a tighter governance. On the other hand: evolving SOA governance tools are coming to help.
Monday, March 17, 2008
Isn't SOA about technology? You bet it is!
Joe McKendrick quoted Anne Thomas Manes:
"It has become clear to me that SOA is not working in most organizations."
Anne also says: "...this technology discussion is irrelevant."
Here we go again!!! If you want to travel from A to B, cars and asphalt ARE relevant. If you don't recognize the IT perspective you are missing 75% of your view of SOA. Why are we so strongly turning our backs on enabling technologies when we talk about SOA? From a technology perspective SOA is able to support even the lousiest business processes. It might be delightful to view SOA from that perspective as well.
SOA should not be sold to the business, but instead renovation and innovation of one of your most important business assets - IT - should be sold. Just to gain the biggest business benefit of all: SURVIVAL.
Friday, March 14, 2008
Getting SOA off the ground
More and more I come to the conclusion that a targeted innovation program - including funding - is the only way to seriously get SOA off the ground.
The top-down approach, starting with componentizing the business into services, is far too ambitious without strong backing from the highest level of management. I don't see any spin-off from the selling and convincing strategies of IT or business consultants. The people responsible for doing business just don't have the time and passion to play these "academic" games. Especially not if they find out that it may lead to changes in their responsibilities and roles.
The bottom-up approach of convincing IT projects to make use of a messaging infrastructure (not to mention breaking down silos into components) doesn't work very well either. Projects are focused on releasing on time. Introducing new concepts is a risk, a high risk, and will take much more time to deliver. Yes, development will be easier, faster and cheaper... in the future. But that is not what the project needs at the moment. By the way, will some of the developers lose their jobs if things go faster? Is it cheaper because you can do the job with fewer people? No way will a developer support his own dismissal.
This doesn't mean that you should stop motivating individual projects to move in the right direction. Some projects may really be fit to choose one of the entry levels to SOA while maintaining their primary project goals. These projects will be the quick wins you can put in the showcase. E.g. you might have luck with a green-field project staffed with highly motivated people who will make some of the SOA ideas come to life. And you may be lucky to have your ERP vendor bring in the SOA concepts instantiated in his products. But by no means will these local efforts get SOA off the ground globally in enterprises where legacy systems play a dominant role (most if not all big companies today).
To really get started with SOA, a renovation strategy is needed: a strategy that decrees a structural redesign of the application landscape. This strategy may start at a low entry level by "simply" introducing an explicit physical messaging infrastructure on the application landscape and requiring applications to make use of it. Or - in some cases - silo-oriented legacy applications are decreed to be redesigned, broken down into components and reconstructed in a service-oriented fashion. Higher entry levels, like replacing entire legacy applications and introducing canonical data model principles, may be too risky in the early phases, but are within the scope of interest. This also applies to ideas and initiatives for business-process redesign, the extensive introduction of BPM and required changes in IT governance. First focus on the introduction and standards-based use of a messaging infrastructure and the related operational management.
This renovation (or innovation) strategy must decree the definition of roadmaps and the execution of projects within one specially targeted program. The program is funded from an innovation budget. In this way the projects will have renovation and innovation as their primary goals, in contrast to current projects that must deliver functionality on a deadline.
From my own experience I believe this is the only way to succeed in getting structurally on the road with SOA and to get ready for the rapidly evolving globalized information age. In highly competitive industries this explicit approach may even be a matter of survival.
Monday, March 10, 2008
Sunday, March 09, 2008
Help expanding the WS-* list on Wikipedia
If you want to do a good charity job then you can help by expanding the Web Service specification list on Wikipedia, generally referred to as WS-*.
I think it's an honorable job as in my vision these standards are the technical basics for the global evolution of Service Oriented Architectures from an IT-perspective. They will last and evolve for decades from now. In the next century "we" will be talking about these specifications as the standards that moved the world into the Information Age. Together with the Internet these specifications will fundamentally change our world.
[Would someone read this prophecy in 2108 and conclude there lived some sort of lunatic blogger a century ago?]
SOA sounds like music
I don't say SOA is easy. Neither is it easy to compose music, being the architecture of notes, tunes and instruments... nor is it easy to play the tones in a way that makes good sounding music.
Where SOA is the product of the composer, BPM is the product of the conductor having the music sound in harmony by orchestrating the individual musicians.
Thursday, March 06, 2008
Guerilla SOA
Watch this amusing as well as instructive video-presentation of Jim Webber on "Guerilla SOA" where he presents some interesting conclusions about the future of messaging.
In a very entertaining presentation, Jim Webber debunks myths about the ESB concept and explains how a lightweight approach can yield real benefits without giving in to vendor pressure.
I doubt whether he is right on all aspects, but there is some of his guerilla vision I tend to agree with, as this previous post of mine testifies (pushing the ESB into the infrastructure and making extensive use of WS-*).
He has published a bunch of other presentations.
Wednesday, March 05, 2008
About layers and tiers
I came across an interesting article by Arnon Rotem-Gal-Oz about the (mis)use of the layered architecture style. I found it well worth reading, although I have an essentially different view.
Logical versus physical
I think the model of layers and tiers is a services model. As it is a services model to me, I view the model of layers and tiers as a logical model. The services are physically delivered by components; ultimately one service by one component. So the counterpart of the logical services model is a physical component model, which need not map one-to-one to the logical model.
Pragmatics like performance issues and availability may cause the physical model to diverge from the logical model.
E.g. one single application (component) often conceptually contains the three well-known tiers (services): UI, business logic and data persistency. And - in a bad case - whereas there are conceptually three tiers, the application code may look like a clumsy bunch of spaghetti not arranged in tiers at all, for performance reasons (grrr, the worst and most "not-done" example I ever used).
Layers versus tiers
In my architectural designs, I distinguish between layers and tiers.
I use layers to create abstraction by encapsulation. A service at a higher layer makes use of services at the next lower layer, repeatedly until the bottom layer is reached. The interaction of services between two layers is always unidirectional: the lower level delivers to the higher level. So the layers form a stack of abstraction. The OSI stack is an example of such a layered model. Another example is the distinction in SOA between business services, plumbing services and technical mapping services.
Communication between layers tends to be synchronous.
Tiers are another story. In my designs I use tiers to model services within a layer: tiers arrange services into chains on one single level of abstraction. E.g. the layer of business services may be arranged in the tiers front-office, mid-office and back-office. At the next lower layer, the application layer, services may be arranged in the tiers UI, business logic and data persistency. The interaction of services between two tiers may be bidirectional (but may also be constrained to unidirectional).
Not all interacting services within a layer need to be modeled in tiers. There may be services that do not interact with other services in a layer at all, but exclusively deliver to the layer above. On the other hand there may be services that only deliver within the boundaries of a layer.
Communication between tiers may be synchronous as well as asynchronous.
(interacting tiers are at the same level of abstraction)
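The tier constraints just described can be sketched in code. This is a hypothetical illustration (class and method names are assumed): three tiers within one layer, chained so that each tier only knows the next one, echoing the UI / business logic / persistence example above.

```python
# Hypothetical sketch: three tiers within one layer, arranged as a chain.
class PersistenceTier:
    def __init__(self):
        self.store = {}

    def save(self, key, value):
        self.store[key] = value

class BusinessLogicTier:
    def __init__(self, persistence):
        self.persistence = persistence  # knows only the next tier in the chain

    def register_delay(self, train, minutes):
        if minutes > 0:  # business rule: only positive delays are recorded
            self.persistence.save(train, minutes)

class UITier:
    def __init__(self, logic):
        self.logic = logic  # knows only the next tier in the chain

    def report(self, train, minutes):
        self.logic.register_delay(train, minutes)

# Wiring: the chain enforces that no tier is skipped or leaked across.
persistence = PersistenceTier()
ui = UITier(BusinessLogicTier(persistence))
ui.report("IC-123", 12)
print(persistence.store)  # {'IC-123': 12}
```

A layered model would apply the stricter rule on top of this: a whole layer may only call the next lower layer, never the other way around.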
Multiple views
Mind that layers and tiers - as they are a logical model - may be designed and viewed from different scope boundaries and perspectives. In an SOA you may limit your scope purely to business functions and design a layered and tiered model of business services. On the other hand you may focus broader on business services, plumbing services and technical mapping services, which is more of an implementation view. Or you might just focus on the technical mapping services, which leads to the Web Services view of SOA. Current SOA granularity discussions are often obscured by the lack of this insight that services modeling is multi-dimensional.
Conclusion
Simply said: layers are encapsulations and tiers are barriers. The use of layers and tiers is a way of enforcing architectural principles on a services model. For the sake of flexibility and manageability of complex structures, a well-designed model doesn't allow leakage between layers or between tiers. A well-designed model offers well-defined (standards-based) interfaces between well-defined tiers and well-defined layers. And, finally, a well-designed model is explicit about its overall scope and boundaries.
The next step is designing a physical component model of building blocks to implement the services model. Because of the non-leaking constraints and the well defined interfaces this should be a piece of cake - just joking... The component model will in turn guide the design of the deployment model (geographic distribution, topology, load balancing, clustering, dimensions, connectivity, etc).
Saturday, March 01, 2008
What is the purpose of Enterprise Architecture in your company?
Nick Malik challenged his readers by asking:
What is the purpose for EA in your company? How do you answer the question: "This is the measurement that we are paid to improve?"
First of all: what is architecture at all? As I posted before, I think architecture can be defined as "purposeful composition" or, in other words, "meaningful arrangement". No more and no less.
Like most architectures, Enterprise Architecture has more than one purpose. Those purposes may conflict, and it is the architect's job to balance them.
Back to the question of Nick: The most important purpose of EA - in my opinion - is to offer business continuity in an ever changing context from a holistic point of view. So:
The ability to smoothly follow change, measured in the rate of business continuity being agnostic to change.
Not the ability to change the internals of the distinct components, but the ability to follow changing contexts of the organization as a whole.
Strategic design-to-change is what EA is about, in contrast to the tactical design-to-release approach of solution architectures, where the purpose is deployment of "function". A strategic design-to-change cycle focuses on guidance; a tactical design-to-release cycle focuses on version deployment.
Recognized aspects - among others - of change in a business context are:
Functionality
Changing vision and business scenarios; marketing strategies and campaigns; propositions
Processes
Changing process chains and dataflows
Organization
Changing responsibilities; reorganizations; merging; splitting; out-sourcing; in-sourcing
Partners
B2B: connections with changing external environments and partners
Customers
B2C and C2B: Application access by ever changing intelligent user-devices
Suppliers
Contracts with changing facilitators and service providers
Risks
Compliance with changing regulations; improvements because of security incidents
Dimensions
Growth of volume, frequency, functionality and geography
Technology
Innovation; new generations of software products and devices
I see Enterprise Architecture as the layer of indirection between the business and changing contexts.
I depicted this idea in the "donut" below.
The importance of Enterprise Architecture from the perspective explained above is higher than ever before. The current increasing pace of IT-driven technology evolution changes the world more rapidly and more globally than ever, socially as well as technologically. A design-to-change strategy is key to guaranteeing business continuity - or even business survival - in the current era of exponentially accelerating, continuously changing contexts and ever stricter compliance regulations.
From an application and application infrastructure perspective the "donut" may be populated - as illustratively depicted below - with currently available technologies that all support ease of change.