Sunday, November 30, 2008

Characteristics of professional transformation and optimisation practices

Professionals will seek to work in organisations that they can be proud of, that excel, that have integrity, that learn, and with people they can respect (and who respect them). And I don't mean this as some kind of platitude to be trotted out to impress customers, partners or the market, by some marketing person who could just as easily be working for a brewer or a manufacturer.

These practices are led and directed by visionary people who practise, i.e. do (not by people who spruik, and not accountants - unless of course it is an accounting practice). This is true of all great practices. These people genuinely believe in what they do, are evangelical about doing things better, smarter, etc., and lead from the front. They need these characteristics to overcome the inherent resistance that they will encounter from those who have built their reputations doing things in some outdated way. This is true in any real profession.

In all cases the business leader(s) would actively and continuously seek better ways of doing things (steps, tools, etc.). The leader would engage other professionals who specialised in different areas and oversee consistency, quality and learning for the organisation as a whole. This of course requires a real and genuine interest in the domain (in doing, not selling), talent, great self-discipline, consistency, honesty, bravery, etc. These practices would select people who have talent, intelligence and skill, and who are willing to learn and change. They would meet challenges, take responsibility, and be persistent. Those who were most highly rewarded would be the professionals leading the practice (not sales, marketing, general management, or accounting types). You can see this in the established professions: architecture, engineering, medicine, law, etc.

I am interested in strategic transformation and optimisation practice. In practices oriented this way there needs to be a focus on things such as:
- strategic planning and architecture: so that what is to be delivered is known
- project and design management: so that delivery can be managed
- technology and product selection: so that the best materials and components can be used (rather than being in some vendor's pocket)
- design and construction methods: as industry best practice is often less than ideal
- operations: both because industry best practice is often poor and because it ensures an understanding of operational realities (changes and improvements must be operationally effective).

Sadly many IT-oriented practices that claim to focus on transformation and optimisation are not led this way. The result is pitiful best practice, over-pricing, under-delivery and a lack of any real will to improve or change. I have heard every excuse in the book as to why IT is different. The sad truth is that IT has historically been a sales- and marketing-driven industry led by persuasive, charming, well connected people. They think business is best done over meals, and that drinks and social events are what build professionalism. Secretly they look down on the professionals in their ranks. Over time this means that the best and brightest professionals leave.

Wednesday, June 25, 2008

Why organisations struggle to improve how they manage requirements

Why do organisations struggle to improve their approach to managing requirements?

Essentially because the industry doesn't present a common view of best practice - and frequently doesn't distinguish between different approaches to requirements management, i.e. high level vs low level, or between the various approaches to low level requirements management.

Ideally, at a high level, enterprises should have a common way of defining requirements (and of assessing that they are met). At lower levels the methods should diverge depending on how the requirements are to be met - and the role the enterprise needs to play.

We could group the ways that requirements might be met under four broad headings: development (where major components of software are developed), integration (where a number of essentially existing components are integrated), package customisation (where essentially an existing solution is customised, configured or extended) and package off-the-shelf (OTS).

Development is where the enterprise builds a solution in-house, e.g. developing in Java, C#, etc. (the solution is bespoke, i.e. fits like a glove). This is usually done when no OTS package meets the requirements and/or the business seeks to achieve unique market differentiation based on the solution (software). This may or may not involve an adjustment to how the business operates.

Package customisation is where a package or service (such as SAP) is purchased and it is customised, configured or extended (as little as necessary). In this case there is also usually some recognition that some adjustment to how the business operates may be entailed (as presumably the package reflects industry best practice).

Package OTS is where a product or service is used as purchased (no changes are made), for example MS Word. This is usually done where an enterprise does not seek to achieve competitive advantage through the use of the tool, e.g. few businesses seek to differentiate themselves based on how well they use a word processor.

Integration may involve all of the above, and concerns how different components (including external services) from each area work in concert.

It should be clear that the ideal approach to requirements in each of these four areas differs. The failure in most requirements management approaches is that the lower level methods pollute the higher level approach to requirements management (by way of analogy: the way the plumber, electrician or bricklayer needs to define what they do, and how they notate their detailed designs, is foisted upon someone who simply wants to describe the new garage they want).

The approach to requirements management also needs to align with the approach to project management. This introduces another issue, i.e. project management in IT.

This failure is matched by a failure to recognise the difference between project management in IT vs project management for buildings. The reality is that in IT the requirements and specifications often evolve right to the end. This is because what people want to do with the technology is partially determined by what the technology can do, i.e. the technology changes behaviour. To pick a few generic consumer-oriented examples: a GPS navigation phone changes how one navigates, a video conferencing phone changes how one communicates (as does a texting/SMS phone, instant messaging, etc.), and Google et al. changed how one finds things. If we contrast this with buildings this is far less the case, i.e. we pretty well know exactly how we want to use, say, a toilet, a door, a bed, a bedroom or a lift.

In IT delivery a great deal of the time is focused on understanding precisely what the requirements are, and therefore what the design is (including engineering calculations), what construction is required, and how things should be tested. In buildings the requirements are far more precisely understood at the start, the design is well articulated and precise before construction starts, the testing is usually more obvious (or is covered by fairly well defined industry conventions), etc. In buildings the scope is very well defined and the focus of project management is far more on cost, time and variance, whereas in IT the scope is seldom well defined at the start and the focus of project management is on managing the refinement of scope (as well as cost and time).

Asking most organisations, for whom this is not their main focus, how they want to manage requirements is probably unrealistic, i.e. it is a bit like a building architect asking a home owner how the plans should be drawn up, or an electrician asking the home owner how the electrical diagrams should be drawn.

So what is needed is to show some leadership with clients in explaining how requirements can be managed - and how this relates to detailed SDLCs, approaches to project management, and aspects of enterprise architecture (e.g. standards compliance, skills management, etc.).

A starting point for requirements must be how the business operates or seeks to operate. This in turn needs to be grounded in the business drivers and constraints (which include technological, organisational and environmental considerations).

Wednesday, February 6, 2008

Process Component Models

(prompted by this item on PCM)

This is a selective summary.

BPEL is not great for BPM modelling
BPEL's association with BPM is problematic from the viewpoint of BPM modellers (the association owes more to good marketing than to substance). The only real association is that a BPEL process can be shown as a diagram and that the language has support for wait states.

The BA is supposed to be non-technical, so the chance that the activities in the model correspond to available web services is small. BPEL is block structured, and this is too limited for modelling purposes. Analysts need the freedom to draw boxes and arrows, which always leads to graph structures and arbitrary cycles. A BPEL process doesn't have a notion of transitions, so usually it's not possible to keep the analyst's analysis diagram intact when translating it into a BPEL executable process. This is exactly why mapping BPMN to BPEL is hard and has so many limitations.
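To make the structural mismatch concrete, here is a minimal Java sketch (class and method names are invented for illustration, not taken from any real BPM engine). The graph form expresses an arbitrary review/rework cycle directly; the block form has no transitions, so the cycle has to be restructured into a while loop - the kind of lossy rewrite that makes the BPMN-to-BPEL mapping hard:

```java
import java.util.ArrayList;
import java.util.List;

// A graph-structured process (BPMN-like): nodes with arbitrary
// outgoing transitions, so cycles such as review -> rework -> review
// are trivially expressible.
class GraphNode {
    final String name;
    final List<GraphNode> transitions = new ArrayList<>();
    GraphNode(String name) { this.name = name; }
    void transitionTo(GraphNode target) { transitions.add(target); }
}

// A block-structured process (BPEL-like): a strict tree of nested
// constructs with no notion of a transition, so an arbitrary cycle
// cannot be expressed directly.
abstract class Block {
    final List<Block> children = new ArrayList<>();
}
class Sequence extends Block { }
class WhileLoop extends Block { }   // the only sanctioned way to repeat
class Activity extends Block {
    final String name;
    Activity(String name) { this.name = name; }
}

public class StructureMismatch {
    public static void main(String[] args) {
        // Graph form: draw -> review -> (rework -> review)* -> done
        GraphNode draw = new GraphNode("draw");
        GraphNode review = new GraphNode("review");
        GraphNode rework = new GraphNode("rework");
        GraphNode done = new GraphNode("done");
        draw.transitionTo(review);
        review.transitionTo(rework);  // arbitrary cycle:
        rework.transitionTo(review);  // rework jumps back to review
        review.transitionTo(done);

        // Block form: the cycle must be restructured into a while loop.
        Sequence process = new Sequence();
        process.children.add(new Activity("draw"));
        WhileLoop loop = new WhileLoop(); // "while not approved"
        loop.children.add(new Activity("review"));
        loop.children.add(new Activity("rework"));
        process.children.add(loop);
    }
}
```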

Another problem is the limited data manipulation capabilities. Extracting pieces of content from an XML document is most of what you need in service orchestration. But for BPM, often a lot of data processing needs to be done in between steps of the process.

If you consider using BPEL for BPM, you should ask yourself whether you want your core business process data in the ESB. The domain data that the BPEL engine needs is usually stored in relational databases, and the information in your core business processes must be easily linked to that domain information. This information should be included in the domain model in a database, not inside the BPEL engine. BPEL doesn't prevent that kind of information from being stored in the domain model database, but it makes it harder: you might have to implement a special web service just to get at the customer reference stored in the domain model. So BPEL tends to leave your domain model information partitioned.
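A minimal sketch of the alternative, with invented names and an in-memory database standing in for the domain model (it assumes the H2 driver is on the classpath): the process keeps only a reference key, and a step reads the domain row on demand rather than duplicating business content inside the engine.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the process instance carries only a reference
// (a customer id); the business data itself lives in the domain
// model's relational database, where other applications can reach it.
public class DomainDataLookup {
    public static void main(String[] args) throws Exception {
        // Process variables: hold references, not business content.
        Map<String, Object> processVariables = new HashMap<>();
        processVariables.put("customerId", 42L);

        try (Connection con =
                 DriverManager.getConnection("jdbc:h2:mem:domaindb")) {
            // Stand-in for the pre-existing domain model schema.
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE customer("
                    + "id BIGINT, name VARCHAR(50), credit_limit DECIMAL)");
                st.execute("INSERT INTO customer VALUES"
                    + " (42, 'Acme Pty Ltd', 10000)");
            }

            // A process step reads the domain row on demand instead of
            // duplicating it inside the process engine.
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT name, credit_limit FROM customer WHERE id = ?")) {
                ps.setLong(1, (Long) processVariables.get("customerId"));
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println(rs.getString("name")
                            + " credit limit "
                            + rs.getBigDecimal("credit_limit"));
                    }
                }
            }
        }
    }
}
```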

BPEL is a good technology for scripting new services as a function of other services, but it doesn't deliver on its promises for BPM, for which it is currently best known. That doesn't mean that BPEL is bad - if you use it as an integration technology to build coarse-grained services out of finer-grained services.

BPMN for BPM modelling
As an analysis language to help with the documentation of a business process, BPMN is a very good option. A BPMN diagram improves communication between analyst and developer, but the developer still remains responsible for all the technical details that are part of the executable process.

A BA creates a description of the business process, including a diagram. Then a translation to the executable process language needs to occur. The impact of that translation will depend on the analyst's technical skills and the capabilities of the executable process language. In any case, the goal is to have a minimal impact on the diagram so that the BA recognises and understands the diagram of the executable process. Note that the diagram is only the tip of the iceberg, because a lot of technical details may be included in the executable process that are of no interest to the analyst. After the translation, the executable process is software, and therefore the non-technical BA is only allowed to see it in read-only mode.

The great advantage that BPM brings here is that analysts and developers are given a common language. The BPMN diagram helps to speed up the communication between BAs and developers. This indeed creates the 'agility' that is credited to BPM. But the illusion that BAs can just edit diagrams and press the 'Publish to production' button is optimistic and unrealistic.

Portability at the modelling level is at least as important as portability at the implementation (executable process) level, i.e. portability of process logic from one platform to another is important, but so is portability of people, allowing us to move our skills from one process design tool to another. BPEL may be able to help with the first goal, but it's not appropriate for the second. When BPMN becomes widely supported it will make people's process design skills portable.

If process modelling is bound to executable processes, the graphical representation of the process should not contain too much information. Trying to express too much detail in the graphical diagram requires everyone to understand those details, and means there is less chance they will match the executable language.

Process languages (executable and non-executable) differ too significantly to be unified in a single graphical process designer based on BPMN. For each process language, a BPMN subset can be defined that matches well with that language. The designer tool should support the specific structures of the language and the appropriate finer details directly.

Round tripping
The mapping approach of BPMN has led to the mistake of round tripping. The idea of round tripping is the continuous switching between the BPMN analysis model and the executable process: the BA works in a modelling language like BPMN, using the graphical notation and the BPMN properties, and the developer works in an executable language like BPEL. The problem with this approach is that in practice it turns out to be far too hard to maintain two sets of properties.

BPMN and BPEL mapping
BPMN is graph based, whereas BPEL has a composite structure without transitions that corresponds to a tree. Secondly, the concurrency models differ substantially.

Process component models
The idea is that activities in a process graph are linked to a component that implements the runtime behaviour of that activity in a general purpose programming language. Each activity type in a process language corresponds to one implementation component. With this approach a common base layer is extracted from the BPM and workflow technologies.

Because they can support multiple process languages, process component models reduce the importance of the individual process languages. Instead, they let developers select the most appropriate executable process language for each process. This improves the separation of concerns between analysts and developers over a situation where a BPM engine only supports one process language.

In this perspective, we can identify four levels of detail, which fits perfectly with a smooth transition from analysis model to executable process model: 1. process graph structure; 2. activity type selection (corresponding to the runtime implementation); 3. configuration of the runtime implementation; 4. plan B: custom coding of an activity.
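A minimal Java sketch of the idea follows (all names are invented; this mirrors the description above rather than any real engine's API). The process graph (level 1) delegates each node's runtime semantics to an attached behaviour component; a packaged, configurable activity type covers levels 2 and 3; and a hand-coded behaviour is the level-4 'plan B'.

```java
// Levels 2-4: each activity type in a process language maps to one
// component that implements its runtime behaviour.
interface ActivityBehaviour {
    void execute(Execution execution);
}

// Levels 2 and 3: a packaged, configurable activity type.
class JavaServiceActivity implements ActivityBehaviour {
    private final String beanName;   // configuration of the component
    JavaServiceActivity(String beanName) { this.beanName = beanName; }
    public void execute(Execution execution) {
        System.out.println("invoking " + beanName);
        execution.proceed();
    }
}

// Level 4 ("plan B"): custom-coded behaviour when no packaged
// activity type fits.
class CustomActivity implements ActivityBehaviour {
    public void execute(Execution execution) {
        // arbitrary application logic goes here
        execution.proceed();
    }
}

// Level 1: the process graph structure, independent of any one
// process language; a node delegates its runtime semantics to the
// attached behaviour component.
class Node {
    final String name;
    final ActivityBehaviour behaviour;
    Node next;  // simplified: a single outgoing transition
    Node(String name, ActivityBehaviour behaviour) {
        this.name = name; this.behaviour = behaviour;
    }
}

class Execution {
    private Node current;
    Execution(Node start) { this.current = start; }
    void start() { current.behaviour.execute(this); }
    void proceed() {
        current = current.next;
        if (current != null) current.behaviour.execute(this);
    }
}

public class ProcessComponentModelSketch {
    public static void main(String[] args) {
        Node a = new Node("charge", new JavaServiceActivity("billingService"));
        Node b = new Node("notify", new CustomActivity());
        a.next = b;
        new Execution(a).start();
    }
}
```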

Conclusion
BPEL is an executable process language, which is good for integration purposes, but it's not suited to supporting Business Process Management because of its tight coupling with technical service invocations. BPMN serves analysts in drawing analysis diagrams, but it's not executable.

We need to start by making a better distinction between analysis process models and executable process models. Once we abandon the idea that non-technical BAs can draw production-ready software in diagrams, we can come to a much more realistic and practical approach to business process management.

When linking an analysis process model with an executable process implementation, the trick is not to include too many of the sophisticated details of the analysis notation in the diagram. By using only the intersection of what the analysis language and the executable process language offer, a common language can be created for the BAs and the developers, based on one single diagram.

Different environments and different functional requirements require different executable process languages. The current idea that one process language could cover all forms of BPM, workflow and orchestration is just too ambitious. And if such an effort were to succeed, the resulting process language would be far too complex for practical use.

New technologies create a component model so that activity types can be built on top of a common foundation.

Tuesday, January 22, 2008

Services as a discipline

(based on an IBM Systems Journal paper)
The following are some notes of interest:

Economic statistics conclusively demonstrate that global economies are increasingly based on information and services, and that demand is growing and exceeding supply for people with the knowledge and skills to be effective workers in this new economy. A consensus is emerging that the cumulative and interconnected innovations in information and computing technology, industrial engineering, business strategy, economics, law, and elsewhere cannot be described and understood by a single academic discipline.

Many of the concepts and techniques for service design and operations originate in and emphasise person-to-person services. However, they do not fit well when person-to-person services are replaced or complemented by self-service, and hardly fit at all for automated IT services provided by one IT process to another. We might conclude that the uses of the word "service" in person-to-person services and in service architecture are homonyms, and not try to unify them conceptually and methodologically; but we will make little progress toward a service science if we do not find abstractions that unify them or establish clear boundaries between them.

Many seem to view it as unquestioned dogma that a customer-centric approach is inevitable and essential. However, while a focus on the customer and customer interactions (the front stage) has been shown to contribute to quality in person-to-person services, it is not straightforward to apply the same focus to the design of self-service and automated information-intensive services.

The questions that can be asked about a service science inquire about some activity in the life cycle of a service. We can ask, How is a service: designed, planned, forecasted, specified, provisioned, composed, integrated, deployed, delivered, managed, certified, used, reused, evaluated, optimized, archived, etc.

This list, while far from complete, illustrates that a very large number of activities or processes could be important parts of the life cycle of a service or set of services. Because services can be people-to-people, people-to-technology (self-service), or computer-to-computer (e.g., Web services), a variety of methodologies apply to the service life cycle. These methodologies partition the life cycle differently, use different words to talk about each activity, and make different design decisions and trade-offs.

When different disciplines and perspectives come together, the outcome is unpredictable. One discipline can become dominant and absorb parts of the others, or the overlapping pieces can break away and form a new field. But if the new field never becomes more than the sum of its parts, it can fade away over time. Occasionally, however, a new and important discipline emerges as a synthetic combination. The multidisciplinary or transdisciplinary character of the transition to a service-dominated economy makes it intrinsically difficult to define what a new, unifying discipline might look like. We might posit that a new and synthetic discipline of service science is desirable, but we should not assume that it is inevitable.


Product Management – Overview and concepts

Product Management – Concepts – circa 2000

Introduction 

My focus has always been on quality, continuous improvement and excellence of execution. Delivering services using well defined approaches[1] is critical to achieving these things. 

Approaches[3] deliver products and services[2]. Each approach has:

  • a set of artefacts (e.g. deliverables[4]) for which there should be templates and examples[5]. 
  • a process, i.e. a sequence of Steps (and sub-Steps); Steps are performed by Roles, who use Tools and apply Techniques.

This ensures services are delivered in a consistent and repeatable way, and assists in classifying artefacts and developing metrics e.g. for Steps, Artefacts, etc.

Highly adaptable approaches which can be easily instantiated[6] and adapted allow for continuous learning. Approaches that attempt to obviate the need for judgement and experience are doomed to failure[7]. Approaches can always be improved and need to be constantly assessed for gaps, areas of improvement, changes in underlying technologies or environment, etc.

Common tooling (and language, notation, syntax and format) is key to effectively getting people to use common approaches[8] and to managing complexity[9]. Basic modelling that allows typed elements and associations between those elements is fundamental. Models can be presented in many ways, e.g. tables, forms, diagrams, etc.
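As a toy illustration of what typed elements and associations might mean in practice (all names invented), the same small model can be rendered as a table, and could equally drive a diagram or a form:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal modelling core: typed elements and typed associations.
record Element(String type, String name) { }
record Association(String type, Element from, Element to) { }

public class BasicModel {
    public static void main(String[] args) {
        Element step = new Element("Step", "Gather requirements");
        Element role = new Element("Role", "Business analyst");
        Element artefact = new Element("Artefact", "Requirements catalogue");

        List<Association> model = new ArrayList<>();
        model.add(new Association("performed-by", step, role));
        model.add(new Association("produces", step, artefact));

        // One presentation of the model: a simple table. The same
        // associations could equally be drawn as a diagram.
        System.out.printf("%-15s %-22s %-25s%n", "ASSOCIATION", "FROM", "TO");
        for (Association a : model) {
            System.out.printf("%-15s %-22s %-25s%n",
                a.type(), a.from().name(), a.to().name());
        }
    }
}
```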

Approaches need to be led by owners who will act as a focal point for ongoing development of the Approach. They will also try to stay abreast of industry and academic developments and assist in knowledge transfer by understanding who is applying the approaches, what they know and have learnt, what they have produced and any problems encountered. Their knowledge will be useful during instantiation, and their reviews will highlight where approaches can be improved. 

Approach owners would ideally QA all work done with an Approach until they are confident a practitioner has developed enough experience in the use of the Approach to apply it successfully.

Universal access is critical so that teams of people can work together independent of location. So repositories need to be established that are accessible from anywhere (over the internet). All documents of record (e.g. contractual documents, deliverables and key supporting artefacts) should be in these repositories.

Critical to improving metrics is understanding the effort associated with each Step and/or Artefact[10]. For this reason we favour high level plans (or task lists) and recording effort against them. If we don't record how long a class of task or artefact has taken in the past, we have no basis for estimating how long it can be expected to take in future.
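As a toy illustration (the numbers and artefact classes are invented), the default metric can be as simple as the mean recorded effort per artefact class, applied to the artefact list of a new plan:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: derive a default metric (mean effort) per
// artefact class from recorded actuals, then use it to size a plan.
public class EffortEstimate {
    public static void main(String[] args) {
        // Recorded effort in hours, keyed by artefact class.
        Map<String, List<Double>> actuals = Map.of(
            "Process model", List.of(16.0, 20.0, 12.0),
            "Role description", List.of(3.0, 5.0));

        // A plan is just a list of artefact classes to be produced.
        List<String> plan = List.of(
            "Process model", "Role description", "Role description");

        double total = 0;
        for (String artefact : plan) {
            double mean = actuals.get(artefact).stream()
                .mapToDouble(Double::doubleValue).average().orElse(0);
            total += mean;
        }
        System.out.printf("Estimated effort: %.1f hours%n", total);
    }
}
```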

Approaches can be defined at different levels of detail. There has been much debate regarding the best format for representing the products. What we are trying to achieve is a consistent way of representing approaches. The current view is that products will be represented by:
  • Brochure – that summarises the benefits and broadly outlines the approach.
  • Presentation – that summarises the approach, its objectives, challenges to be overcome and the steps to be undertaken. 
  • Model(s) – process models that provide an overview of the process
  • Document(s) and/or Wiki – a narrative describing the overall process, sub-processes and discrete steps (often with reference sources, useful tips on techniques, etc.)
  • Artefact lists – derived from the approach model (and often presented with default metrics) as a spreadsheet. They may be elaborated in discrete documents.
  • Artefact templates – for each artefact/model
  • Artefact exemplars – for each artefact/model
  • Role descriptions – usually recorded in the approach model (may be elaborated in documents).
  • Work template – a simple set of steps and schedule (steps, key dependencies etc.).
Approaches should provide an overview[11] for experienced practitioners. The approaches may be terse and to the point[12]. It is not expected that anyone other than an experienced practitioner, or someone mentored by an experienced practitioner can successfully apply the approaches.

Approach use

When a project is established we will set up a discrete project repository. Copies of standard artefact templates will usually be put in this repository (its categories and document types will reflect the areas/steps of the approaches).
Approach model – will be either used as is or adapted as required.
Steps – will be recorded in a plan, and each step will be sized and resourced as appropriate.
Work and Artefacts – work will be undertaken and artefacts will be created.

Current technologies (i.e. what we should use)

Documents: spreadsheets, word-processing documents, presentations, diagrams
Models: sets of components that are related
Diagrams: based on models where possible. 
Plans and schedules
Document repositories
Generic communications: email, IM, Wiki, etc.



[1] Sometimes called methodologies or processes.
[2] Sometimes called offerings
[3] Typically delivered in the context of a project or assignment
[4] I take a Miesian view of deliverables (i.e. “less is more”) and distinguish work products from deliverables
[5] Examples should be evaluated to see if they represent best practice
[6] The generic roles, artefacts, steps, etc. can be adapted, amalgamated, or put aside.
[7] Based on empirical experience (e.g. with organisations touting methodologies as silver bullets)
[8] Experience shows that Word-document-based approaches fail, and systems-oriented approaches are too expensive and inflexible.
[9] See: The Challenges of Complex IT Projects [British computer society; Royal Academy of Engineering]
[10] Often this is considered impractical in a plan because many tasks relate to many deliverables.
[11] Check lists, handy templates, memory joggers etc.

Monday, January 21, 2008

BPMN processes and pools

n addition to describing the internal process orchestration (or control flow), BPMN can represent choreography (the message exchange between processes). In the real world, an end-to-end business process may be composed of multiple BPMN processes interacting through choreography. A single BPMN process (as opposed to multiple processes) is confined to a pool (i.e. a pool is a container for a BPMN process), is within a single domain of control, and has a start/end (where the process state is changed by a set of activities in the process). The most common reason for multi-pool processes is that an instance of one process does not have one-to-one correspondence with an instance of the other. Unlike traditional process modeling notations, BPMN puts events and exception handling right in the diagram itself, without requiring specification, or even knowledge, of the technical implementation. This business-friendly “abstract” representation combined with precise orchestration semantics lets BPMN process models serve as the foundation of executable process implementations (with implementation properties layered on top of the model, usually be IT).