Innovative Software Architecture (Part I)
 

By:
Dr. Thomas J. Mowbray
Chairman, iCMG

There are many active and successful schools of thought in software architecture. Software architecture is a discipline unified by principles but divided by terminology. The various architecture schools can be viewed as different branches of an evolutionary progression. The Zachman Framework evolved from traditional non-OO approaches. ODP is an outgrowth of the object-oriented and distributed computing paradigms that has achieved stability, multi-industry acceptance, and formal standardization. Both the Zachman and ODP approaches have enjoyed significant success in production-quality software development. Domain analysis has demonstrated its worth in defining robust domain-specific software architectures for reuse. The 4+1 View Model is an approach under development in parallel with the Unified Process. All of the above can be described as innovative software architecture approaches; they are being applied in practice, with varying levels of proven experience behind them. Academic research in software architecture is defining a baseline of architecture knowledge that resembles a lowest common denominator of these approaches. Fortunately, the academic community has legitimized the role of the software architect, regardless of whether its guidance is useful to innovative architects.

In our opinion, software architects should have a working knowledge of the innovative approaches described above. In addition, architects should utilize one of the production-quality architecture frameworks in daily practice. Component architecture development is a challenging area, requiring the best of the stable conceptual frameworks to support sound architectural judgment.

The Architecture Paradigm Shift

The nature of information systems is changing from localized departmental applications to large-scale, global, dynamic systems. This trend follows the change in business environments toward globalization. The migration from relatively static, local environments to highly dynamic information technology environments presents substantial challenges to the software architect.

A majority of information technology approaches are based upon a set of traditional assumptions. Under these assumptions, the system comprises a homogeneous set of hardware and software that is known at design time. The configuration is relatively stable and is managed from a centralized point of system management. Communications in traditional systems are relatively predictable, synchronous, and local. The state of the system is well known at all times, and the concept of time is unified across all activities. Another key assumption is that failures are relatively infrequent and, when they do occur, they are monolithic. In other words, either the system is up or the system is down.

When building distributed application systems, most of these assumptions are reversed. In a distributed multi-organizational system, it is fair to assume that the hardware and software configuration is heterogeneous, because different elements of the system are purchased at different times by different organizations, with many of the decisions made independently. A typical configuration therefore contains a variety of information technology. Hardware and software configurations are also continually evolving. Within any organization there is turnover in employees and evolution of business processes, and the architecture of the organization impacts the architecture of the information technology. As time progresses, new systems are installed, systems are moved, new software is acquired, and so on. When multiple organizations are involved, these processes proceed relatively independently, and the architect must accommodate a diverse, evolving set of configurations.

In distributed systems, the assumption is that there is remote processing at multiple locations. Some of this remote processing occurs on systems that were developed independently and therefore have their own autonomous concept of control flow. This reverses the assumption of localized and unified processing resources. Distributed systems also have interesting implications for the concepts of state and time. The state of a distributed system is often distributed itself: state information may need to be replicated in order to provide efficient, reliable access at multiple locations. It is possible for the distributed state to become non-uniform and to enter error conditions in which the replicated state does not have the desired integrity and must be repaired.
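To make the replicated-state problem concrete, here is a minimal sketch assuming simple per-replica version counters; all class and method names are hypothetical, and production systems use richer mechanisms such as version vectors or quorum protocols.

    // Minimal sketch (hypothetical names): detecting divergence between two
    // replicas of distributed state, using per-replica version counters.
    import java.util.HashMap;
    import java.util.Map;

    class Replica {
        private final Map<String, String> data = new HashMap<>();
        private long version = 0;

        // Each local update bumps this replica's version counter.
        void put(String key, String value) {
            data.put(key, value);
            version++;
        }

        long version() { return version; }
        Map<String, String> snapshot() { return new HashMap<>(data); }

        // Repair: overwrite local state from the more up-to-date replica.
        void repairFrom(Replica other) {
            data.clear();
            data.putAll(other.snapshot());
            version = other.version();
        }
    }

    public class ReplicationDemo {
        public static void main(String[] args) {
            Replica siteA = new Replica();
            Replica siteB = new Replica();

            siteA.put("customer:42", "active"); // update applied at site A only
            // (imagine a network partition: the update never reaches site B)

            // The replicated state is now non-uniform and must be repaired.
            if (siteA.version() != siteB.version()) {
                Replica newer = siteA.version() > siteB.version() ? siteA : siteB;
                Replica stale = (newer == siteA) ? siteB : siteA;
                stale.repairFrom(newer);
            }
            System.out.println("replicas consistent at version " + siteA.version());
        }
    }

Here site B misses an update, the replicas diverge, and a comparison of version counters drives the repair.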

The concept of time in distributed systems is affected by the physics of relativity and chaos theory. Electrons travel near the speed of light in distributed communication systems. In any large system there is a disparity between local concepts of time, so the system can have an accurate representation of only a partial ordering of operations in the distributed environment. A total ordering of operations is not possible because of processing speeds and the distances between information processes. In addition, distributed communications can be quite variable and complex. In a distributed system, communications systems can provide various qualities of service: communications can vary by timeliness of delivery, throughput, level of security and vulnerability to attack, reliability of communications, and other factors. The communications architecture must be explicitly designed and planned in order to account for this variability in services.
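The partial ordering described above can be captured with a logical clock. The following is a minimal sketch of a Lamport clock (names are illustrative, not from any particular library): each node advances its counter on local events and merges the sender's timestamp on message receipt, so a send always orders before the corresponding receive, while unrelated events remain incomparable.

    // Minimal sketch of a Lamport logical clock (illustrative names only).
    class LamportClock {
        private long counter = 0;

        // A local event: advance the clock.
        synchronized long tick() { return ++counter; }

        // On message receipt, merge the sender's timestamp so that a send
        // always happens-before the corresponding receive.
        synchronized long onReceive(long senderTimestamp) {
            counter = Math.max(counter, senderTimestamp) + 1;
            return counter;
        }
    }

    public class ClockDemo {
        public static void main(String[] args) {
            LamportClock a = new LamportClock();
            LamportClock b = new LamportClock();

            long t1 = a.tick();        // event on node A
            long t2 = b.onReceive(t1); // A's message arrives at node B
            long t3 = b.tick();        // a later event on node B

            // t1 < t2 < t3 reflects the causal (partial) order; events with
            // no message path between them remain incomparable.
            System.out.printf("t1=%d t2=%d t3=%d%n", t1, t2, t3);
        }
    }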

Finally, the distributed system has a unique model of failure modes. In any large distributed system, components are failing all the time: messages are corrupted and lost, processes crash, and systems fail. These kinds of failures happen frequently, and the system must be architected to accommodate these error conditions. In summary, distributed processing reverses virtually all of the traditional system assumptions that are the basis for most software engineering methodologies, programming languages, and notations. Accommodating this new level of system complexity requires three things. The first need is for architects to be able to separate complex concerns; in particular, it is important to separate concerns about business application functionality from concerns about distributed-system complexity. Distributed computing is a challenging and complex architectural environment unto itself. If systems are built with traditional assumptions, architects and developers are likely to spend most of their time combating the distributed nature of real-world applications, whose problems and challenges fundamentally have nothing to do with the business application functionality.
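As one illustration of this separation of concerns, the sketch below (all names hypothetical) keeps the business contract free of distribution concerns and pushes retry handling for transient failures into a wrapper:

    // Hedged sketch (hypothetical names): the business interface knows
    // nothing about the network; a wrapper absorbs distribution concerns.
    interface AccountService { // pure business contract
        double balance(String accountId) throws Exception;
    }

    class RemoteAccountService implements AccountService {
        public double balance(String accountId) throws Exception {
            // Stand-in for a real remote call (RMI, CORBA, HTTP, ...).
            if (Math.random() < 0.5) {
                throw new Exception("transient network failure");
            }
            return 100.0;
        }
    }

    // Retry-on-transient-failure lives here, not in the business logic.
    class RetryingAccountService implements AccountService {
        private final AccountService delegate;
        private final int maxAttempts;

        RetryingAccountService(AccountService delegate, int maxAttempts) {
            this.delegate = delegate;
            this.maxAttempts = maxAttempts;
        }

        public double balance(String accountId) throws Exception {
            Exception last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return delegate.balance(accountId);
                } catch (Exception e) {
                    last = e; // assume transient; try again
                }
            }
            throw last; // all attempts failed; surface the error
        }
    }

    public class SeparationDemo {
        public static void main(String[] args) throws Exception {
            AccountService service =
                new RetryingAccountService(new RemoteAccountService(), 5);
            System.out.println("balance = " + service.balance("acct-7"));
        }
    }

The business code depends only on AccountService; the retry policy and other distribution concerns can evolve in the wrapper without touching application functionality.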

 