One of my blog readers came up with the question:
What are the differences between Observer Pattern and Event-Driven Architecture?
The Observer Pattern is a technical listener solution, a kind of notification construction. Event-Driven Architecture, however, is a system design style. EDA puts events at the center of the design. It is about recognizing business events and how to design them in terms of data modeling, and about how to deal with transactions between unknown endpoints. So EDA is of a much higher magnitude than the Observer Pattern. The Observer Pattern is an implementation pattern that is useful as a listener/notification component when building event-driven systems. The Observer Pattern is not aware of any higher-level design style, such as the design of events.
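To make the distinction concrete, here is a minimal sketch of the Observer Pattern as such a listener/notification construction. All names (Subject, attach, notify) are illustrative, not taken from any particular framework:

```python
# Minimal Observer Pattern sketch: a Subject keeps a list of observers
# and pushes notifications to them. Note how nothing here knows anything
# about business events or system-wide design - it is purely a
# listener/notification mechanism.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        # An observer is simply any callable that accepts the notification.
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)


subject = Subject()
received = []
subject.attach(received.append)   # any callable can observe
subject.notify("train delayed")
print(received)
```

The pattern stops at delivering the notification; what the event means, how it is modeled, and who the endpoints are - the concerns of EDA - are out of its scope.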
Tuesday, December 21, 2010
Saturday, October 16, 2010
The world is tilting. The balance of power in the world is radically changing. Perhaps you are not fully aware of it, but look for instance at Piraeus, the harbor of Athens, which has recently come almost entirely into Chinese hands, as have the Argentine railways and the Cordoba subway.
But also in our own professional domains you may witness tilting:
- Companies are opening up data that has always been securely closed (the British newspaper The Guardian, for instance).
- Rich proprietary solutions are becoming less popular than open-standards-based solutions.
- IT demand is shifting from companies' internal IT suppliers to external suppliers, who are becoming cheaper, more secure, and more reliable than internal ones.
- Trust models are becoming more popular and reliable than the former contract-based models, because in competitive markets failure has far greater consequences for the providers than for the consumers of the services.
- Individuals' own IT equipment is becoming far more sophisticated than what companies provide to their employees.
- IT-supported connectivity between individuals has become far more pervasive than connectivity between companies.
- The market is shaping the enterprise, and not the other way around as it long used to be.
- Companies are changing focus from decision and planning cycles to adaptability, resilience and the ability to sense change.
- Management is changing from command-and-control to facilitation, as education and knowledge become commodities for individuals and crowds.
- Power is changing from authority-based to influence-based in a world that is becoming hyper-empowered.
Thursday, September 30, 2010
Does your company still want to build the user interfaces of its customer websites itself? Do you have headaches about all these new devices popping up? Android? iPad? New versions of HTML, Flash, Silverlight and so on, to be supported? All those browsers to be supported? Different screen formats? CSS complexities? Steep tooling learning curves? Cumbersome software version control? Does full support of all those (versions of) user interfaces cost you more than the content to be exposed?
Well, the answer is: don't build user interfaces anymore. From now on they are free: they arise from nothing, at no cost at all, within days or even hours after you unlock your data, and at a diversity you could never have dreamed of. This is no fairytale, but reality at this very moment.
Months before we - at Dutch Railways - published our mobile app to supply travel information, a full high quality equivalent was made available to the public domain by someone we didn't know and we didn't pay.
The world is changing rapidly. Witness this great momentum and be part of it. After watching the video below, your conception of user interfaces will never be the same again. This is only the beginning...
Saturday, September 18, 2010
Event-driven architecture (EDA) is an asynchronous pattern which can be implemented with a publish/subscribe (pub/sub) mechanism. Pub/sub is basically a decoupling mechanism in a landscape of communicating systems. Not only is the technology of the communicating systems decoupled, but so is their presence in time, and even their locations are decoupled, as there is no endpoint resolution involved.
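The decoupling can be sketched in a few lines. This is a toy in-process broker, not a real product; the topic name and message fields are invented for the example:

```python
from collections import defaultdict

# Toy publish/subscribe sketch. Publisher and subscribers share only a
# topic name: neither side knows the other's technology or location,
# which is exactly the decoupling pub/sub provides. (A real broker
# would also queue messages, decoupling the parties in time.)
class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)


broker = Broker()
log = []
broker.subscribe("departures", log.append)
broker.publish("departures", {"train": "IC 123", "delay_min": 5})
```

Contrast this with client/server, where the client must resolve and address a specific endpoint and both parties must be up at the same time.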
There are lots of products implementing the pub/sub mechanism. The video below is about OpenSplice DDS (Data Distribution Service), but that is of no importance. The reason I republish this video in this posting is that the people at OpenSplice did a splendid job of explaining pub/sub and its advantages compared to the client/server pattern.
Enjoy and get enlightened!
Friday, June 11, 2010
A respected colleague of mine explains how Dutch Railways (Nederlandse Spoorwegen) has been "greening" its IT by eliminating lots of physical servers through virtualization.
This move to virtualization has not only environmental and financial benefits; it also creates the possibility of offering self-service hosting facilities on demand to our consumers, reducing platform delivery from weeks or months to hours or even minutes, at hardly any effort.
[Click the little arrow]
My teenage son discovered blogging and the possibility of earning a (very) few cents with Google advertising banners. Would you please be so kind as to pay some attention to his blog (and the ads) to give him a little head start?
Wednesday, May 12, 2010
How great leaders inspire action.
Great talk by Simon Sinek:
All organizations and careers function on three levels: what you do, how you do it, and why you do it. The Why is your driving motivation for action. The Hows are the specific actions you take to realize your Why. The Whats are the tangible ways in which you bring your Why to life.
The problem is, most don't even know that the Why exists.
Saturday, May 01, 2010
We are heading toward Cloud Computing. About a year ago I published a posting about this trend. But what is Cloud Computing, exactly? Does it replace the SOA and EDA hypes? The answer to that last question: no! Cloud software takes full advantage of the cloud paradigm by being service-oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.
The National Institute of Standards and Technology (NIST) has defined Cloud Computing. This definition perfectly matches my own vision, and I think it is worthwhile to share it. In this posting I'll add two of my own pictures to support the understanding.
First of all, to understand Cloud Computing it is very important to view IT services from a layered perspective. The picture below is a simplified version of the model I always have at hand in my daily practice and which I published before on my blog.
IT-delivery offerings in the market tend to concentrate on each of these layers. Each layer provides services to the next higher layer in the stack, adding abstraction and value to the layer below it. This is a move away from the stovepipes where every application relies on dedicated solutions throughout the stack.
(Honesty demands mentioning appliances, which are hardware stovepipe boxes built for very high performance requirements. The consumer of the services should, however, be unaware of these lower-level implementation strategies.)
When you understand this layered view, you will be able to understand Cloud Computing. NIST defines Cloud Computing as follows:
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.
Essential Characteristics
On-demand self-service
A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.
Broad network access
Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Resource pooling
The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
Rapid elasticity
Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
Measured service
Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.
Service Models
Software as a Service (SaaS)
The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
Example: Google Gmail
Platform as a Service (PaaS)
The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
Example: IBM Cloud Burst
Infrastructure as a Service (IaaS)
The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Example: Amazon EC2
Deployment Models
Private cloud
The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.
Community cloud
The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.
Public cloud
The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Hybrid cloud
The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
Thank you, Peter Mell and Tim Grance! In return feel free to reuse my pictures...
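To get a feel for the rapid elasticity characteristic in the definition, here is a toy scaling rule. The threshold, capacity figure, and function name are invented for the sketch; real platforms make this decision with far richer signals:

```python
import math

# Toy illustration of rapid elasticity: derive the desired number of
# worker instances from current demand, scaling out as load rises and
# scaling back in (down to a minimum) as it falls.
def desired_instances(requests_per_sec, capacity_per_instance=100, minimum=1):
    # Each instance is assumed to handle capacity_per_instance req/s.
    return max(minimum, math.ceil(requests_per_sec / capacity_per_instance))


print(desired_instances(250))  # demand spike: scale out
print(desired_instances(0))    # idle: scale in to the minimum
```

To the consumer this appears as unlimited capacity; behind the scenes it is exactly this kind of automatic provisioning and releasing against the pooled resources described above.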