“Kicking the Tyres”

Evaluating Enterprise Platforms

The phrase “Kicking the Tyres” has two origins;

  1. You might remember the phrase “kick the tyres and light the fires” from the movie Independence Day. It’s a reference to the habit of fighter pilots doing a pre-flight walk-around inspection of the aircraft before climbing into the cockpit and starting the engine.
  2. The second is from the automobile industry, where people would physically kick the tyres of a car to check their condition and durability before making a purchase.

In essence the act of “Kicking the Tyres” was a way to determine quality and ensure the vehicle was in good condition. Over time, the phrase has evolved to be used more broadly to mean evaluating or examining something before making a decision.

In my roles on both the Client and Vendor sides of the fence, I have been involved in organising, conducting and participating in platform evaluation & selection exercises we referred to as “Kicking the Tyres”.

These evaluation exercises were utilised in three main types of situation;

  1. Selecting new enterprise platforms
  2. Evaluating the technology of a prospective M&A target
  3. Evaluating the current state of an organisation’s technology landscape

The targeted platforms under assessment have included;

  1. Core Enterprise Business Platforms (products and homegrown)
  2. Generic Business Platforms
  3. Low Code Development Platforms
  4. Technology Enablement Platforms

These evaluation exercises have spanned numerous business verticals including Insurance, Pensions, Travel & Health, Relocation and Hospitality.

The degree of rigour we put into the evaluation process was extremely valuable in allowing all areas of the organisation to see what the result of a potential selection would be. 

The approach has evolved over the years and is geared towards the following objectives;

  • Preventing misperception around functional and technical capability of a platform
  • Building consensus within the business and technical communities on the final decision
  • Avoiding buyer’s remorse
  • Assessing how the RFP finalists perform in a near real-world situation
  • Qualifying the RFP responses of finalists
  • Building a solid foundation and relationship with the eventual winner
  • Assessing how the targeted company’s platforms perform in a near real-world situation
  • Qualifying the rationale of the deal from a technology perspective
  • In-depth Due Diligence from a capability and technology perspective

The “Kicking the Tyres” approach can be used for any platform whether business or technology focused across the three main scenarios I have mentioned above.  It has also proven highly adaptable and relevant as platform technologies have evolved. Obviously the details of the “things to be evaluated” will differ, but the fundamental structure and objectives of the approach remain the same.

I have used this approach many times on both business and technology platforms ranging from “old school (aka legacy)” to “modern (low-code)”.  I can attest to its success in achieving the intended results.

Engaging the “Kick the Tyres” assessment approach does take time and effort; however, in relation to the overall cost of the eventual project it is a very small percentage of the final spend and time.  I always advise management to think of it as a risk premium: the cost is far outweighed by the cost of making a mistake and suffering buyer’s remorse.

Going back to the car purchasing analogy, you would walk away from any car dealership that will not facilitate a test drive.  The same rule applies to the selection and purchase of critical platform components for your business, or to assessing M&A opportunities.  I personally believe reluctance to engage in this type of process should ring alarm bells.

If you are interested in exploring in further detail the Principles and Steps on how to use this approach please feel free to download the pdf below.

I would be interested to hear what you think of “Kicking the Tyres”; please feel free to leave comments on the LinkedIn post directing to this blog at Linked-In-Post


Thoughts on Transformation – Part #4 – Enablement Architecture

There is unfortunately a common complaint in practically every large organisation today that “Delivery (aka IT) is broken or not working efficiently”. All troubled initiatives, or initiatives that cannot be started, are laid squarely at the feet of the CIO, CTO and their teams. Admittedly there is an issue around how we deliver projects/programmes in today’s environment, but the situation is not helped by technology companies throwing out simplistic graphics such as the one below, showing a view of the Delivery Gap and implying core IT is not able to meet the current trends of the modern technology landscape. They then follow it up with “we must empower the business and enable it to be more independent of core IT”, buy our platform, methodology, etc. to address your problem.

As with most situations there is a degree of truth in the message, but also a high degree of misrepresentation, which leads to misunderstanding and false hopes that generally make the situation worse.

I do agree that the approach to IT needs an overhaul and that the old approaches of a centralised IT eco-system are not fit for purpose in today’s world. The IT organisation inside companies needs a rethink and a reset. The modern message of empowering & enabling the business and reducing reliance on core-IT is the “right approach” but it is getting the “approach right” that makes all the difference.

I prefer the following modified view of the above image;

To understand why a lot of organisations are in this position today we need to consider the following (note: as with the other articles in the series, a full understanding of where we are coming from and how we got here is key to moving forward);

Why has the Technology Delivery Capability Decreased? There are a number of main reasons, but generally they come down to a combination of a static or decreasing budget, an increase in the volume of technology and solutions to be managed, and an increase in the degree of technical debt incurred due to business and market pressures. In short, the IT team generally has more to look after with the same or fewer resources, and pressure for speed overriding quality is a common situation. In addition, technology departments have got stuck in the trap of justifying price instead of promoting value.

What is the Delivery Gap? This is the difference between what the business needs and what the people in IT are able to deliver due to the situation in which they find themselves.

What is the Opportunity Gap? The opportunity gap is even more critical than the Delivery Gap. It is the inability of the organisation to act fast and innovate to meet trends and opportunities.

Looking at the above images, a knee-jerk reaction would be to increase resource availability, usually in one of three ways;

  • Work harder – Only sustainable for a short period of time and really kills any morale left in the teams
  • Outsource to offshore teams – The dual challenges of distance and communication consistently cause problems in this area, and I have yet to see an efficient outsource that both IT and the business users can, hand on heart, agree works well
  • Delegate the new (interesting) work to a partner – This leads to three main issues: firstly, the in-house IT staff get even more disillusioned as the “interesting” work is being done by outsiders; secondly, you should never outsource innovation and IP; and thirdly, it will eventually cost exponentially more than transforming internally to address the problems

In an organisation that is working well (from an IT perspective) a good metric of IT spend would be 60% on BAU, 30% on new products and services and 10% on innovation. Most organisations are spending close to 80% if not more on BAU, dealing with the effects of technical debt and trying to integrate a technology landscape that has evolved through tactics rather than strategy.

Throwing people (or money) at the current situation will not solve the problem. Neither will bringing in new technologies and putting them in the hands of the business without a proper approach and mindset.

The new platforms that allow rapid development with little or no code may seem like the answer but in truth need to be implemented with the same due care and thought as any “solution”. A few simple examples I use when talking to people about this trend illustrate, I believe, the dangers of an ungoverned, rosy-eyed approach to the rollout of Rapid Solution platforms to deal with the delivery and opportunity gaps.

  • Most people can drive a car; however, put the average person in a Ferrari and they will go faster but have the potential to cause much more damage when they hit a situation they are not trained for. It’s the same with IT solution delivery.
  • With the advent of GDPR and its cousins around the world, the ability of a company to know where its data is, what is stored and how it is being managed, used and secured is of critical importance (GDPR fines can potentially bankrupt organisations).  Imagine a company getting a request from a client for all their data or to be forgotten.  Allowing the proliferation of mini DBs for single or unapproved use makes it difficult if not impossible for a CISO and CDO to be sure of a company’s data footprint and potential exposure.  Missing even one piece of information on a client is a breach of GDPR compliance.
  • The rise of the term “Citizen Developer”.  The message of replacing developers with Citizen Developers is quite frankly as ludicrous as introducing Citizen Accountants, Citizen Doctors and Citizen Airline Pilots.  The term undermines the role of good design and makes people believe anyone can develop a production-ready, fully secured and approved solution for real-world usage.

Some of the most rewarding experiences I have had in IT Delivery were in the early 90s, when I was working with a company that acted as a strategic IT partner for a major Dutch Insurer. We basically built insurance companies from the ground up across Central and Eastern Europe. What made the experience enjoyable was that;

  • We were all aiming for the same goals
  • The business teams were small and dynamic
  • There were no boundaries between the business and IT teams
  • There was a high degree of trust
  • People were willing to take calculated risks
  • We were all focused on building what would turn out to be highly successful enterprises

Roll forward 20 years and the same exercises with different organisations were highly bureaucratic, risk averse, many times more expensive and took double or triple the time to launch. This was, I believe, wholly down to a high degree of micro-management of IT by the business, a lack of vision (from everyone) and a lack of willingness to take chances.

We need a way of thinking and working where innovation and technology are the responsibility of everyone in the organisation, and where modern technologies & platforms enable and empower the business to be part of the delivery process while ensuring the Core IT Team retains oversight and governance of the areas it is accountable for.

To allow a business to address the Delivery and Opportunity gaps as part of a transformation process, it is critical that the core technology assets of an organisation, “Data and Capability (Functionality)”, are exposed in a controlled manner and made consumable by as wide an audience as possible. This is not about giving everyone access to databases and backend systems; it is about implementing a Platform, Product and Persona mindset that empowers the business and enables innovation.

A good analogy to use is to think of Core IT and Enterprise Architecture as the city planners. They determine where the highways go, how the water mains, power and sewage networks are laid out. They identify the locations for power stations, airports, etc. and oversee the building and operation of all of the vital infrastructure for the city’s operation. In this analogy the business would be the people building the office blocks, retail centres and apartment highrises. It is mostly their decision how these buildings operate and look, but they are subject to planning guidelines and city development visions. They cannot go and build a power plant for their own use or dump waste into the river; they must adhere to agreed principles.

Conceptually I would represent this model as follows;

In this new model, Core IT is responsible not only for the infrastructure, data, security and central business systems, but also for the provision of a key platform, referred to as an Enablement Platform in the diagram above. It is these platforms that allow the business to operate in an innovative and flexible manner. The platforms are a mix of technologies, approaches, governance and support. The business is free to build (or have built) whatever it needs using these platforms, provided it keeps the business safe by following the agreed guidelines and principles.

The enablement platforms should over time be the main conduit between Business IT and Core IT. Think of it as a relationship between Retailer and Wholesaler. Core IT are the wholesalers of the organisation’s Data and Capability (providing the user guides and warranties) while the Business and Business IT are the retailers, developing the presentation facades and consuming the standardised data and capability in a manner suited to their needs at a given point in time.

Core IT views the world through the concept of “enablement platforms & services”, while the business views the world through the wrapping and presentation of data and capabilities based on products and personas.

Core IT has a responsibility & mission to work with the business to define and deliver these Enablement Platforms, allowing the business to explore and build its own solutions as required through the use of pre-agreed approaches, technologies and services. An enablement platform is a combination of the following;

  • Core Integrations (APIs) that release Data and Capability (transactions) to the business in a manner that is understood and well managed (a simple sketch of this idea follows the list)
  • Technology platforms to allow the business to deliver to its needs in a low/no code environment
  • Freedom to engage with technology partners but in a manner that is understood and well managed
  • A set of enablement and technology principles that are adhered to and overseen by a group of individuals from IT and Business
  • A governance protocol to ensure the organisation is adhering to the data and data-use rules it is legally obliged to follow
  • Training and facilitation for the business to help them get best use out of the enablement platform
  • Shared responsibility for the end-to-end technology landscape. Core IT is responsible for the availability & resilience of the data & capability APIs, the operation of the enablement platform, data security and identity management. The business is responsible for engaging with the enablement platform based on agreed principles and guidelines and is responsible (in all areas) for all the solutions built and delivered.
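As a simple illustration of the first point above, the Python sketch below (hypothetical names throughout, not taken from any specific platform) shows the shape of the idea: a core capability released to the business as curated, governed data rather than direct access to backend systems.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Consumer:
        # A business-side consumer of the enablement platform (hypothetical)
        name: str
        approved_scopes: set

    def governed(scope: str):
        # Stand-in for the "understood and well managed" access principle;
        # a real platform would delegate this to its identity and governance layers.
        def wrap(capability: Callable):
            def inner(consumer: Consumer, *args, **kwargs):
                if scope not in consumer.approved_scopes:
                    raise PermissionError(f"{consumer.name} is not approved for '{scope}'")
                return capability(*args, **kwargs)
            return inner
        return wrap

    @governed(scope="policy:read")
    def get_policy_summary(policy_id: str) -> dict:
        # Core capability exposed as a curated view, not raw backend tables
        return {"policy_id": policy_id, "status": "active", "product": "household"}

    # Business IT consumes the capability under the agreed principles
    marketing = Consumer(name="marketing-apps", approved_scopes={"policy:read"})
    print(get_policy_summary(marketing, "P-1001"))

The point is not the mechanics but the shape: the business gets easy, consistent consumption of data and capability, while Core IT retains oversight of what is released and to whom.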


Thoughts on Transformation – Part #3 – Building a Blueprint

To be successful a transformation journey needs to be Business Outcome driven. The focus for building a blueprint for transformation should be the outcomes expected from business strategy, products/services and journeys/experiences. In this regard, at a high level, the blueprint for an organisation’s “what we want to be” vision focuses on the top three layers of the “everything is interconnected” model illustrated below, in particular the elements associated with the business model and capabilities.

Model is from David Clark at http://www.hivemind.com

Know your Starting Point

It is as important to understand where you are coming from as it is to envisage where you are headed on your transformation journey. The current state is your foundation for moving forward. Every transformation journey is different; the triggers / drivers for an organisation are wholly dependent on the current state of the organisation and also on the prioritisation of the elements that are driving the transformation.

  • It’s not possible to successfully change or modify a system you do not fully understand
  • Blueprints illustrating current and future state variances are a good basis for planning and executing your transformation journey
  • Current state is essential for impact assessments and “rate of change” tolerance identification
  • Blueprints facilitate transparency.
  • When organisation-level blueprints don’t exist, projects, programmes, business units and individuals need to conduct research every time they have a question about the current state. This represents a duplication of effort that can be expensive for your organisation.
  • It’s impossible for Business Outcome and KPI monitoring to function correctly without a current state blueprint.
  • Current state blueprints are an important input for business, organisational and technology strategic planning.

A clear business vision of why, how, what, who, where and when

These six words may seem like a no-brainer, yet they are missing from many organisations’ starting points for considering transformation. They are a common approach in most business architecture discussions and are valuable in aiding people to take a step back and really consider the organisation they are part of. When defining your organisation within each of these six dimensions you should do it from an “as-is” and a “target” perspective. Understanding where the organisation is “now” is vitally important in building both strategies and roadmaps for transforming to the target state.

There are many approaches that are used to model a business and build out a roadmap. You should pick the one that best suits your needs and environment. What I will describe below is a system I have evolved over the years, taking components of other approaches and melding them together to produce a system that I feel provides a highly visual guide to the organisation and also feeds down to the activities that make up the transformation journey.

Domain and Capability Blueprint

The blueprint will form a model (or map) of your organisation, its capabilities (current and future) and the technology components required to operate it. The blueprint should be at a medium level of abstraction and ideally fit onto a single slide and still be readable.

Below is an example of a blueprint for an insurance organisation

I use the term Domains as it suits the nature of the blueprint and the rules around Domain Driven Design fit well into the approach to constructing the blueprint. However they can easily be described as entities. I dislike using the term entity in the context of the blueprint as I wish to avoid people diving down to low-level entity-relationship modelling during the blueprint discussions.

When identifying capabilities be careful not to go to too low a level so that you end up describing business processes. A business capability is the ability to do something. Think of it as the “what”, whereas a business process represents the “how”. In the context of the blueprint a business capability is the ability of the organisation to perform a specific set of functional activities (i.e. business processes) to deliver a defined and expected outcome. For example, “new business acquisition” is a business capability that involves a number of business processes.
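As a small illustration of the “what” versus “how” distinction, a capability can be recorded as a named ability realised by several processes. The process names below are examples only, not a definitive decomposition.

    # One business capability (the "what") realised through several business processes (the "how")
    capability_model = {
        "New Business Acquisition": [
            "Quote & Illustration",
            "Underwriting & Risk Acceptance",
            "Policy Issuance",
            "Initial Premium Collection",
        ],
    }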

In the technology segments of the blueprint the components are conceptual. They will often relate back to single technology platforms but there is no requirement for it. Technology components can be addressed by a number of Technology Solutions, Systems or Platforms or conversely one solution could cover many Technology Components. The blueprint is about the identification and representation of Domains, Capabilities and Technology Components not about specific data stores, business processes or IT solutions.

Benefits of a Domain and Capability Blueprint

The benefits of using the Domain and Capability Blueprint around which to structure your transformation are;

SIMPLIFICATION – Models the problem/opportunity domain correctly. A good domain contains only the information that is relevant for solving the given problem.  Containing the right domains and capabilities is not enough; the associations between those domains and the interaction of the capabilities are critical.

CONSISTENCY – Speaks the right language. Because a domain model is a representation of a subject domain, it is essential that its elements have been named to reflect the business.  This ensures that business and technology teams are speaking the same language, which minimises the possibility of misunderstandings and increases quality and speed to market.

CONTROL –  A domain model controls the changes made to its information. It provides methods for manipulating the domain contents and prohibits all other changes to the information under its control.   Providing reduced access points to the domain information reduces duplication in processing and protects the integrity of the domain model.

FUTURE-PROOFING – A domain model allows the natural evolution of transformation & solution delivery in a controlled and future focused manner.

Components of the Blueprint

The model is built up from left to right and focuses on the domains and capabilities before any consideration is given to the technology components. The five blueprint areas are defined as follows;

Getting Started

The starting point is, as always, the business. The blueprint is built up via workshops with key business stakeholders that explore the business strategies, user journeys and desired activities of the organisation, identifying the Domains and Capabilities. It is important that this exercise is conducted with a business-driven focus and that the blueprint is perceived as being defined by the business.

The purpose of the blueprint is to allow a starting point to accurately define;

  • Why the Organisation Exists
  • What it produces or delivers
  • How the products and services are delivered
  • Who the key personas are
  • When the products/services are delivered
  • Where the organisation operates and executes its business

The steps I follow to define the blueprint are;

  1. Identify the important personas
  2. Identify the lines of business and/or departments (whichever makes more sense to your organisation)
  3. Using business strategies and user journeys, identify the domains and capabilities that are needed to execute the operation of the business
  4. Identify the technology components needed to fulfil the objectives of the domains and capabilities
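A minimal sketch of how the outputs of these steps could be captured as data once the workshops have identified them; the insurance names are illustrative only, and the structure rather than the content is the point.

    from dataclasses import dataclass, field

    @dataclass
    class Capability:
        name: str
        technology_components: list = field(default_factory=list)

    @dataclass
    class Domain:
        name: str
        capabilities: list = field(default_factory=list)

    blueprint = {
        "personas": ["Policyholder", "Broker", "Claims Handler"],   # step 1
        "lines_of_business": ["Household", "Motor"],                # step 2
        "domains": [                                                # steps 3 and 4
            Domain("Distribution", [
                Capability("New Business Acquisition", ["Quote Engine", "Document Generation"]),
            ]),
            Domain("Claims", [
                Capability("Claims Handling", ["Claims Workflow", "Payment Gateway"]),
            ]),
        ],
    }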

The buildup of the blueprint is an iterative process and no changes should be made to elements in the blueprint without explanation and agreement from the key stakeholders involved in the creation of the document. Over the course of the workshops it is more than likely that domains and capabilities will be added, removed, merged or sub-divided, either because they are perceived to be too granular/complex for representation in the model or, as is more often the case, because the bounded context of a domain / capability starts to encroach on other areas.

The blueprint is meant to be easily understood and agreed on by those involved in building it. Therefore, if you find a high level of disagreement on aspects of the domains or capabilities, it indicates disagreement within the organisation about the why, how, what, when, where and who as understood by those involved.

The blueprint should also be viewed as an evolving asset in that it will change as the direction and strategic needs of the organisation evolve.

Blueprint as a Heat Map

The blueprint will represent the organisation’s model based on the business strategies. It is the target state. In order to determine how far you are from this state, the concept of a Heat Map is used to show the journey to be taken. The heat map can be aligned to all three of the dimensions mentioned in the previous posts: Business, Organisation, Technology. A heat map over our insurance example could look like this for an organisation that has an abundance of legacy technologies and wishes to transform its offerings, organisation and technology landscapes.

This example uses four distinctions when looking at components in the blueprint (captured in a small sketch after the list below). You can use more, but be careful it does not get too granular for its intended purpose.

  • Green – Fit for purpose and does not need replacement
  • Orange – Transformation / Introduction in progress
  • Purple – Existing structures and technologies work for today but are in need of transformation / replacement as they are practically end-of-life
  • Red – Required for the future vision but not yet started
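One way to hold the four distinctions is as a simple status overlay on blueprint elements; the element names below are illustrative only.

    from enum import Enum

    class HeatStatus(Enum):
        GREEN = "Fit for purpose, no replacement needed"
        ORANGE = "Transformation / introduction in progress"
        PURPLE = "Works today but practically end-of-life; needs transformation / replacement"
        RED = "Required for the future vision but not yet started"

    heat_map = {
        "Claims Handling": HeatStatus.PURPLE,
        "Quote Engine": HeatStatus.ORANGE,
        "Customer Self-Service Portal": HeatStatus.RED,
        "Document Generation": HeatStatus.GREEN,
    }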

The real value of the heat map is that when you are looking at user journeys, new ideas & strategies, new capabilities or products, you walk your way through the domains, capabilities and technology components to determine what is currently available / usable and what is missing. It is much easier to have a conversation around why something is difficult or expensive to implement when you can visually lay the challenge over the current landscape. This should not prevent you from pursuing tactical initiatives, but it does highlight where these initiatives break the blueprint and need future strategic attention to avoid building a mountain of technical and organisational debt.

Just as the blueprint will evolve, the heat map will never be all Green. In fact, if it is all green you probably either have no forward-looking strategies or are not ready for the challenge of transformation because you think everything is OK.

The combination of the heat-map and business strategy priorities allows you to construct a roadmap of how to transform from “here” to “there”. You should build out heat-maps showing expected positions for 6, 12 and 18 months to visually illustrate the rate of transformative change the organisation is aiming at.

Build a Roadmap around the Blueprint

Transfer each element of the blueprint into a roadmap spreadsheet. Each line should represent one element of the blueprint (Domain, Capability, Technology). You can use any toolset you like for this. Below is an example of a roadmap from a real-world situation.

Each line on the roadmap is backed up by data points; I generally use the following (one such line is sketched after the list);

  • Level (View, Core Domain, Business Domain, Capability, Technology)
  • Unique Identifier
  • Short Name (Mnemonic)
  • Name
  • Start Quarter (used for sorting)
  • FY xx, xx+1, xx+2 Impact (Dark Blue means Main Project, lighter blues represent lower levels of work as the item matures)
  • Summary of Impact across Business Lines, Departments (one column for each)
  • Detail of Impact across Projects/Initiatives (one column for each)
  • Target Source for Acquisition / Development of component
  • Current State in the Business, Organisation and Technology Architecture Landscape
  • Notional % Complete as at date of Roadmap production
  • Solution / Approach / Platform in use today
  • Solution / Approach / Platform envisaged for end-state
  • Description of component
  • Technology notes (as deemed relevant)
  • Rough Order of Magnitude – Cost
  • Rough Order of Magnitude – Operational Impact
  • Rough Order of Magnitude – Complexity

A roadmap should not look forward more than two or three years. The degree of fluidity and granularity (and certainty) decreases exponentially as you progress from six months onwards. As with the blueprint, the roadmap is fluid and should be revisited every three months or so to solidify the following six months and extend it as required.

At this stage the Blueprint and Roadmap are at a conceptual level and are a direction rather than a set of directives. They form the foundation for the initiatives to be progressed on the Transformation journey.

Each item on the blueprint is then fleshed out to show its impact and dependency on other items. A summary snapshot could look as follows

By using this model any changes in business strategy or priority can be easily overlaid on the blueprint, and dependencies from/to highlighted when reprioritisations or radical changes in approach are proposed.

Summary

The blueprint and roadmap approach described above are valuable in that they force people to think conceptually about the business needs. The approach deliberately disconnects the “need” from the “solution” and ensures people understand the model of the organisation and its mission before trying to solve any problems.

The other main benefit of using this method is that it cannot be done in isolation by the business or technology communities. To be successful both areas must collaborate to define the model of the business and transpose it into the blueprint and roadmap.

Subsequent articles in this series will discuss;

  • Enterprise Architecture and the transformation journey
  • Overlaying a “digital mindset” on the transformation process
  • Defining a technology landscape to enable innovation from outside in
  • Using the Blueprint to map out your integration needs


Thoughts on Transformation – Part #2 – Integration as a Strategy

When people (particularly in IT circles) talk about monoliths they are primarily discussing the backend core systems that have been around for 10, 20, 30 years, cost millions to implement and probably have a TCO of multiple times the implementation price. These core systems are often seen as inhibitors to a company’s ability to transform, not just in technology but also in process, mindset and culture. In addition to the legacy monoliths, we have also evolved to a state of Data Sprawl in the technology landscape across the organisation, as outlier systems were built to try and cater for the needs of the business in optimising processes and delivering one-off solutions to those needs.

This of course leads to data duplication, integrity, reliability and security issues which in turn lead to a high degree of data movement, process complexity and integration initiatives to try and stitch it all back together. In essence the “monolith application” has evolved to a “monolith landscape” which is complex, brittle, consumes an inordinate amount of IT and Operations focus & spend and is perceived to be a blocker for transformation.

Go back 10 years and the focus was on “componentisation” to address the monolith problem. Componentisation was, in its simplest terms, the ability to identify key pieces of functionality and address them with purpose-built / bought software that allows a degree of modernisation and segregation of responsibilities within the overall architectural landscape. With componentisation came the vendor-promoted “silver bullets” of Domain Driven Design, Micro Services, Integration, Application Network, etc. While these areas are of critical importance to any modernisation and transformation approach, they are very much still focused on the Technology rather than the organisation and all of its constituent pieces.

Having been involved in modernisation and transformation across Insurance, Hospitality, Gaming and Travel Risk Management verticals for the last 20+ years I see a lot of organisations that want to “digitally” transform but struggle to achieve it as they focus on the solutions (i.e. technology aspects) before the questions “what do we want to achieve”, “why do we want to do it” and “how should we approach it” have been discussed.

In this context the term “Monolith” could equally be applied to organisations as much as their technology landscape.

Considering the above, I want to explore Integration as a Strategy and what it means to the organisation as a whole, not just as a piece of a technology solution, and then look at a number of the various “business and architecture patterns” that can be used. First let’s lay out what some of the terminology used above means to an organisation from a non-technology (or at least a technology-neutral) perspective.

Componentisation and Domain Driven Design evolved as a technology solutioning approach to help assess which pieces of the technology landscape could be either logically or physically separated out into more manageable blocks of functionality with defined boundaries and responsibilities. This approach allows a gradual path to modernisation, tackling the pieces of the technology landscape that are end-of-life, incurring high degrees of tech-debt and BAU spend, or would bring a high degree of business benefit by implementing more “user centric” capabilities/solutions. As examples think Product Management and Claims Handling for Insurance or Booking, Loyalty and Rating Engines for Hospitality.

From a non-technology perspective, Domains are how you model your business, and a good understanding of the model by both business and technology staff is critical to implementing solutions that meet the needs of the business and fulfil its strategies.

Integration is most often seen as a technology solution for connecting system A to system B in complex technology landscapes that have components/applications that need to pass data or events between themselves.

From a non-technology perspective, Integration should be viewed as a means of exposing and providing Data (information) and Capability (functionality) in a consistent manner, allowing easy consumption of these core pieces of value inside and outside your organisation.

The Application Network is a newer term used to describe an end state of loosely coupled “nodes of functionality” that can be viewed as building blocks for business processes and initiatives.

The Application Network is an end state that continually evolves as the transformation and modernisation initiatives progress. You should never set out to build an Application Network, it will evolve from a well managed strategy and set of principles.

Micro-services is one of the newer “silver bullets” that promotes advantages such as functional granularity, technology independence amongst components and separation of upgrade dependencies.

Micro Services are a technology approach that bring their own sets of benefits and complications. Outside of Solution Architecture they should be ignored when defining and addressing your transformation strategy.

The Composable Business

In 2020 Gartner gave a keynote on the topic of “The Composable Business”. Gartner’s understanding of a Composable Business means creating an organisation made from interchangeable building blocks. The modular setup enables a business to rearrange and reorient as needed depending on external (or internal) factors like a shift in customer values or a sudden change in supply chain or materials.

The four principles of a composable business are Modularity, Autonomy, Orchestration and Discovery.

The definitions of these four principles are;

  • Speed through discovery
  • Agility through modularity
  • Leadership through orchestration
  • Resilience through autonomy

When you focus on Integration as a Strategy, these four elements are key to success and map to the items discussed above.

Everything is Interconnected

The following graphic is, I believe, a great starting point when focusing on Transformation and leveraging Integration to drive the initiatives. You should always start from the top down and let the business needs drive the solutions. Too many organisations start from the bottom up and build solutions looking for problems and opportunities.

  1. Transformation should be measured by Business Outcomes and delivering to the needs and strategies of the organisation
  2. Transformation should Enable and Empower the Organisation to think differently and adapt to suit the needs of the business
  3. Transformation should not be hindered by the current technology landscape but should look at how to release core value (data and capabilities) through the optimum use of the relevant technologies focusing on end-user experience and end-user engagement

All too often an organisation will start at the bottom of the stack and work upwards, imposing technology solutions on itself and encumbering the realisation of its strategies. In most cases it will approach this by buying/building integration toolsets and exposing “what we need now” to add-on systems. In a sense this is a continuation of the “great grandfather’s house” software pattern and does little to transform, simplify or modernise.

This is not to say that tactical (or agile) approaches should not be considered; most initiatives start off as a PoC. However, a lot of initiatives are never refactored from the PoC into a stable component and in reality just add to the technical debt and complexity of the current technology landscape.

Business, Organisation, Technology

When thinking about transformation, consider it from the perspective of de-composing the business (needs & strategies), organisation (culture, structure and processes) and technology (data & functions) in a manner that allows you to compose the business needed for today via trusted integration patterns that allow the four principles of the Composable Business to be attained.

Speed through Discovery – Be able to react fast to market opportunities and threats, make it easy to understand and find the components (data, functions, processes, responsibilities) within the business

Agility through Modularity – By treating all areas as composable components you can plug together the necessary pieces to try new things and also pivot to meet the demands of employees, clients and partners

Leadership through Orchestration – Orchestration is a key component of Integration whether you are referring to technology, organisation or business outcomes. In a composable business model the key to succeeding is in orchestrating the integration of the components to deliver what is needed at the proper points in time.

Resilience through Autonomy – By focusing on components and building out a composable business you will achieve a high degree of resilience around technologies, processes and all areas that are critical to your business operations. Using a term from Domain Driven Design, consider the bounded context of each component of your business. What is it supposed to do, what does it need in order to be resilient and what do you do if it is temporarily overloaded or unavailable.

So what is Integration as a Strategy (INTaaS)?

To transform to a composable business I propose using the term Integration as a Strategy. Firstly, this is not about running out and buying an integration solution and then just building APIs on an as-required basis. By Integration as a Strategy I mean utilising patterns of integration to enable Modularity, Autonomy, Orchestration and Discovery at all levels of the organisation.

  • Apply the Business Architecture principles of Why, How, What, Who, Where and When, whether talking strategy or tactical approaches
  • Understand where you are now and, using the business strategies, where you want to be within two / three years – this end point should evolve as time progresses (you should never end the journey)
  • Incident-proof the roadmap to allow for external & internal shocks and opportunities. Remember decisions are based on what we know today and will change as new information is uncovered
  • Build a Domain and Capability model of the organisation using user journeys, use cases & personas
  • Utilise the Enterprise Architecture approach of Conceptual, Logical and Physical models for the current and future state across all of the dimensions – Business, Organisation and Technology.
  • Build stepped transition models for each of the above models to ensure;
    • the journey is achievable
    • everyone understands the steps involved
    • tactical (throwaway) components are highlighted
    • impacts to business, organisation and technology are called out
  • Identify the data, capabilities and external needs (out of bounded context) of each domain; this will assist in identifying the level of integration needed.
  • Understand the value of the data and capabilities in delivering the “experience” needed for all types of consumers.
  • Involve all areas of the business and establish cross-department centres of enablement to promote, own, govern and drive your transformation journey.
  • Define easy to understand and execute principles around delivering on strategies and initiatives. These should encompass UX/UI, data responsibilities, security, approach, technology, governance and be constructed over the EA Conceptual, Logical and Physical model process across the Business, Organisation and Technology dimensions
  • Treat all components of your technology eco-system as products and push ownership into the business organisation.
  • Identify relevant and measurable KPIs that make sense to all areas of the organisation and that can be positively impacted by personal actions.

Next Article

The next article will discuss Integration as a Strategy from a modernisation perspective with the Wrap-and-Replace pattern using the insurance vertical as an example.


Thoughts on Transformation – Part #1

The aim of this series of blogs is to present my thoughts on transformation, covering it from three perspectives: Business, Organisation and Technology. There are so many aspects and viewpoints to transformation that there will never be “one right way to execute it”; to be successful, the transformation approach and strategy should be tailored to the explicit needs and dynamics of an organisation. There is, however, a consistency in defining and articulating the approach and strategy that aids success.

I deliberately do not use the phrase Digital when talking about transformation, as I believe it obscures the need for transformation to encompass the whole eco-system of an organisation. The inevitable “digital landscape evolution”, introduction of new technology components and new technology ownership models occur as a result of transformation execution and should not be looked on as the driver.

————-

Technology has been and always will be an enabler. It allows organisations and individuals to realise the ideas and opportunities they come up with. In today’s environment we are shifting from full-contact to contact-less interactions, and people expect a consistent & seamless experience when dealing with all aspects of their life, regardless of the purpose of engagement, whether personal, work, health or government. A consumer’s interaction with any aspect of an organisation has the power to drive highly appreciated & trusted levels of experience or to completely annoy and frustrate.

Whether commercial, government or cultural, an organisation’s focus on the experience it delivers to its “consumers” should push towards ease of service procurement and empowerment within delivery or execution of the procured service or product. Think of companies such as Deliveroo, which make it easy to find and order from a diverse range of restaurants and then give the customer insight into the “state of the service” and “who is doing the work” as the order progresses from accepted through to delivery; an example of putting the customer not just first but also giving them a sense of ownership of the whole process.

Organisations, regardless of what they do, that practise this engagement model put their customers front and centre in all things they focus on. However I would suggest that organisations that excel in this new model also put their staff at an equal level of importance to their customers.

New(ish) organisations that are “Mobile or Web” first tend to gravitate to point (b); organisations such as health and government tend to end up at (a); legacy and highly bureaucratic organisations get stuck at (c). Point (d) is the ideal end-state and should be the target for any organisation.

To put it in simple terms, how many times have you been wowed by an organisation’s web or mobile experience and then, when things go wrong, been absolutely frustrated trying to work through the support services? Conversely, organisations that have great support services can have abysmal to no self-service capability. Organisations stuck at (c) tend to be;

  • Critical-in-nature in that you cannot do without them
  • Too big for customers to challenge effectively
  • Culturally unable to shift their mindset to transform to the new paradigm of experience driven expectations.

To transform an organisation towards (d) requires a shift in mindset, culture and strategy on how it is set up, how staff are empowered and how technology is used strategically to deliver the optimal experience to the end-point consumers. Note: I consider consumers to be anyone who interacts with an organisation, be it Employees, Prospects, Customers or Partners.

In today’s world you have access to a range of options that simplify the technology aspects. You also have a range of companies offering “silver bullet” Platforms and Digital Transformation Services & Methodologies that claim to make it easy to move to your desired area on the chart above.

External Advice, Methodologies and Technology approaches are important as they bring new perspectives and opportunities, however the most critical elements of successful transformation initiation and journeys are;

  • A clear business vision of why, how, what, who, where and when
  • A realistic understanding of your organisation’s cultural flexibility/rigidity
  • Honest assessment of the current state (Business, Organisation and Technology) and acceptable rate of change
  • A willingness to try new ideas
  • Flexible budgets and budget management focusing on KPIs and ROI, not just on end-state $ amounts
  • An understanding of which approaches can provide benefits to transformation for your organisation
  • Resist falling for the “silver bullet” methodology, buzzword or technology trap
  • Buy-in, engagement and ownership from all levels and areas of the organisation
  • Change your world in baby-steps
  • Engagement, Empowerment and Enablement of employees, customers and partners on the journey
  • Acceptance that the transformation roadmap is fluid
  • Not everything will work as expected, change of direction and priorities are an integral part of transformation
  • No-one knows all the answers up front, but initiating the transformation activities is important, as it is only possible to assess progress and, if necessary, change direction once you are moving

Jason Fried and David Heinemeier Hansson wrote a book titled Rework a number of years ago about their journey building the 37 Signals company (now trading as Basecamp). While the book is focused on their journey from startup, its messages are applicable to any company looking at a transformation, as they reinforce a number of core principles that every company on a transformation journey should keep in mind. If you have not read the book yet, it is worth picking up.


The messages from the book I particularly push when discussing transformation are;

  • Start changing (or making) something
  • You need less than you think
  • Embrace your constraints
  • Draw a line in the sand (small executable and measurable steps)
  • Ignore the details early on
  • Good enough is fine
  • Decisions are temporary
  • Planning is guessing
  • It is the stuff you leave out that matters, constantly look for things to remove, streamline and simplify
  • Make big decisions
  • Don’t copy – Innovate. Trying to copy another organisation’s successes means you are always playing catchup, never leading
  • Say no by default
  • Culture happens, it is not created

To make a transformation succeed it is critical to have a strategy that encompasses Business, Technology and Organisation. The strategy must be reflective of the business needs, culture & goals. It should also take into account where you are now and the constraints around change that need to be considered. A roadmap consisting of the current state, targeted state and step-states is vital so everyone can see how the transformation journey will be accomplished. The transformation roadmap should be business driven, modelling the organisation’s needs and leading to the technologies required, not technology driven.

Transformation of an organisation is about the delivery of enhanced experiences and is essentially putting the correct data and capability (products, processes & functionality) in the hands of the consumers in a manner that is most appropriate to their role and needs at the time of engagement.

On a final note for this blog entry, Transformation should not be treated as a program but rather as a journey with a desired end-state described via a set of principles and guidelines, which if followed will evolve the organisation and its delivery of services/products over a targeted period of time. The end-state will be fluid; it will evolve over time as business, technology, the marketplace, etc. present opportunities and obstacles that need to be incorporated into your vision and strategies.


Architecting a Technology Strategy

This is the first in a series of collaborative posts with my technology co-conspirator Grant Matthews.  

What if the only roles of the IT Department were IaaS and Solution Selection & Implementation oversight?

What if all user departments were responsible for selecting, implementing and supporting their own applications within a well-defined but “solution-flexible” architectural landscape?

Strange questions, you might think, but bear with me for a few more paragraphs.  Firstly, the IaaS question was placed in my head by ex-colleagues working in a highly complex and dynamic environment.  Would such an approach work (and how)?

Enterprise Architecture and IT Delivery departments in large organisations with highly independent (i.e. strong-minded) business departments tend to be viewed as obstacles by the business and, conversely, the business users tend to be viewed as indecisive, solution-focused, intransigent, etc. by IT.  Allowing each business department to “roll its own” solutions could solve a lot of internal issues but would have to be done in a manner that;

  • Allowed reasonable flexibility for the business departments
  • Did not endanger the operation of the organisation
  • Resulted in a self-selected but highly integrated set of solutions
  • Made IT the provider of infrastructure, core services and most importantly knowledge (i.e. consultancy)
  • Made the users 100% responsible for selection, implementation and operation of their own software.

I was mulling over this idea while out walking in my home town and stopped to admire the various yachts at the local marina.  All of the yachts were of different classes and sizes and continually change as the owners upgrade, downgrade or move on.  The only constants were the moorings, the power & facilities supply and the fencing around the marina.   Now translate this to our IT questions above;

  • Moorings = Infrastructure
  • Power & Facilities = Common Assets and Communications protocols
  • Fencing = Security, SSO, etc
  • Marina/Harbour Entrance = External Communication protocols
  • Yachts = Applications

Just for the fun of it, let’s call it the “Marina Architecture Pattern”, or MAP for short.  The MAP would be a layered construct with a minimal set of rigid architecture principles that all applications wanting to be “docked” into the MAP would have to adhere to.

The MAP layers, referred to as the “Backbone”, would be as follows (from foundational to operational);

  • Infrastructure – Mix of on-premises, off-premises DCs and cloud.
  • Security
  • Bulk Data Transfer
  • SOA / ESB
  • Rules and Process Engines

No business department would be allowed to purchase software that copied or duplicated any of the above layers, cloud solutions being an exception for the infra layer.  Any software purchased would have to utilise / integrate into the MAP Backbone.   The operational costs of each layer would be paid for on an as-used basis by each department and treated as a true “aaS” model.  However, the setup of the original framework would need to be a corporate-level budgetary item.

A rigid but minimal set of Architecture Principles would be required to “pilot” the selection of suitable applications for use in the MAP (a simple screening sketch follows the list);

  • Each Business Department (BD) is its own domain
  • Each domain is responsible for entities that are intrinsically linked to that domain
  • No BD may replicate data or duplicate functionality of another department or outside its core domain; access to non-domain entities must be via the SOA layer
  • Applications purchased or built must conform to the Security, Data Transfer, SOA and Rules/PE standards defined within the MAP.
  • Extensions and enhancements to the MAP Backbone are the responsibility of the IT Department (e.g. a new service or rules-pattern is required)
  • BD’s are technology agnostic in that they can choose the OS and languages they wish to use, however all costs related to these choices are borne directly by the BD.
  • Only applications that are considered “componentised” and “domain focused” may be considered for purchase/build.  No “Do-it-all” solutions are allowed
  • No DWs or Big Data solutions can be built within a business domain; this is a standalone BD and will be treated as such (think of it as the “tender or committee boat” in a marina).  Everyone has use of it, but it is owned and managed by the marina
  • There is no overall IT budget for applications (with the exception of the MAP and Data Domain).  Each department decides how much of its own budget it wants to spend on its own solutions.  It must also fund the extensions to the MAP Backbone needed to facilitate those solutions
  • The “you bought it, you own it” principle is enforced.  No centralised helpdesk or operations are available from IT.
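As a simple screening sketch (the structure and names are hypothetical, not a prescribed implementation), a proposed business-department application could be checked against a handful of the principles above before being brought forward for approval.

    from dataclasses import dataclass

    @dataclass
    class ApplicationProposal:
        department: str
        name: str
        duplicates_backbone_layer: bool    # e.g. brings its own ESB or security stack
        integrates_via_soa: bool
        domain_focused: bool               # componentised, not a "do-it-all" suite
        replicates_other_domain_data: bool

    def violations(p: ApplicationProposal) -> list:
        issues = []
        if p.duplicates_backbone_layer:
            issues.append("Duplicates a MAP Backbone layer")
        if not p.integrates_via_soa:
            issues.append("Does not access non-domain entities via the SOA layer")
        if not p.domain_focused:
            issues.append("Not componentised / domain focused")
        if p.replicates_other_domain_data:
            issues.append("Replicates data owned by another domain")
        return issues

    proposal = ApplicationProposal("Claims", "ClaimsWorks", False, True, True, False)
    print(violations(proposal) or "No objections - submit to the Technology Committee")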

So where does this leave the traditional IT department?  The IT department transforms into a true “As A Service” provider for the MAP Backbone and a Consultancy Group for providing oversight and consultancy services for application selection and implementation.  At no point is IT directly responsible for any of the applications purchased by the business units.

When I showed this idea to Grant his initial responses were;

  • Plug and Play operating models are a desired direction
  • The challenge isn’t what’s described above but the Data and Integration Services required to permit interoperability between the business applications & lines-of-business usage
  • Some corporations are aligned with the design outlined but do not realise it or understand how to utilise / enforce it
  • The biggest challenge for an organisation is to build an effective EA Roadmap around the approach and then allow it to be enforced so that “tactical decisions” do not wreck the vision
  • Also, in the analogy presented, boats and yachts are self-contained, but what if a boat suddenly relies on a second boat for fuel, or cannot refuel because all the other craft have used up a scarce resource?

The structure and focus of the IT Organisation would dramatically change and be simplified to provide two broad functions to the organisation;

The first would focus on;

  • Availability Management
  • Security Monitoring
  • MAP Backbone Support
  • Infrastructure Management

In other words, they are a commodity service that provides Keep the Lights On (KTLO) only, obviously in a secure model to protect the marina and the yachts from bad behaviours.  This reduces IT to a Utility, like electricity – to be consumed and paid for as needed.

The second function of the new IT organisation would be an Internal Consultancy and Business Change helper function, outside of IT, that specialises in;

  • Enterprise and Solution Architecture Advisory
  • Innovation and PoC Advisory

The future of IT as a commodity is here now; the cloud providers are already there. New ventures will rarely start with any on-premises IT.

The challenges with the MAP model come from data integrity, reconciliation and real business intelligence across disparate systems. This is a decades-old problem that IT has failed to find a good answer for: back in the 1970s, the move from single mainframes to multiple systems and client-server solutions gave birth to the data reconciliation and consolidation problem: MI -> BI -> Predictive Analytics; Database -> Data Mart -> EDW -> Data Lake; predictably the most expensive and most underwhelming solutions to the business problem. So a plug-in Yacht for Reporting across the Marina is needed. Again, it shouldn’t be IT managed; an enterprise analytics group should own this problem/solution domain.

This approach to IT by its very nature forces a change in the business operating model, freeing the business from the constraints imposed by the limited quality and skills breadth of a single technology function.  Embedding the skills in the line of business is crucial.

Project management of all projects should be rolled into an organisation-focused PMO.  All projects in the organisation, regardless of deliverable or function, should be driven by this PMO.

A Technology Committee should be formed and chaired by the CTO.  Members of this committee are the CTO, the Chief IT Architect (CA) and a senior representative of each business department.  All decisions to purchase a solution must be approved by this committee, meaning it is up to a business department to convince the other BDs that its decisions are correct and will not jeopardise the business.  The CTO and CA are not allowed to veto solution selections by BDs but can raise concerns and alternatives for consideration.

Grant came up with a cool name for the approach “Software Defined Enterprise” … more about that in later posts.

Our intention over the coming months is to flesh out this concept over a series of blogs co-published on each of our blog and LinkedIn feeds.  Each of the posts will deal with a specific aspect of implementing this type of architecture initiative for both business and technology perspectives.

Grant Matthews is an acknowledged thought leader in all things IT architecture focused; his profile on LinkedIn is at https://www.linkedin.com/in/grantdouglasmatthews

Posted in Architecture, IT Strategy | Leave a comment

rtom #4 – Building a Rules Engine

This is a follow-on from the "RealTime Offer Management" (RTOM) series of posts, the first of which can be found here.  A complete list of articles in this series is included at the bottom of this post.

 

 

When first visualising the RTOM Application I spent a lot of time thinking about how a user could define rules that would allow the RTOM processing to;

  • Filter inputs to Data-Nodes from Event Data
  • Allow conditions to turn Data-Node Switches on or off
  • Compute Data-Node values from other Data-Nodes
  • Evaluate whether an Offer should be made
  • Evaluate the rewards to be made within the offer based on patron’s profile and/or other information managed by RTOM
  • Allow the reward discounts/values to be calculated from RTOM data
  • Allow decision criteria to be flexible based on Events, Data-Nodes and/or Patron Attributes

I did not want to build a fully fledged rules language to rival a commercial product, but rather wanted to focus on a subset of rules that would merge seamlessly into a dedicated realtime offer management application.

In its simplest form the Rules Language required a very small number of statements that are divided into two classes, conditionals and calculators;

  • IF
  • AND-IF
  • SWITCH-ON-IF
  • SWITCH-OFF-IF
  • GET (Calculate)

In its simplest form a conditional statement would look like

if operand evaluation operand

and a calculation statement would look like

get operand calc_symbol operand

So far this seems simple enough; however, the beast known as complexity reared its head when I added in the "Operand Types" and "Operand Qualifiers" the Rules Language was required to evaluate.

Each “operand” can be one of the following types;

  • Value, Number, Percentage
  • Keyword (reserved word to denote a language function.  e.g. #mon, #tue, #str_month, etc)
  • Quick List (name of a list that contains a set of items to be evaluated)
  • Array
  • Objects (Event, Data-Node, Offer)
  • Meta Data (attached to Event, Data-Node, Offer)
  • Data (from Event, Node)
  • Function

Note – The Array and Quick List types can be a list of any of the other operand types with the exception of the Function type.

In addition to the range of operand types there was an added element of complexity with “scope of data” for Event, Data-Node and Offer types and Operand Qualifiers to allow manipulation of results.

My goal was to build a rules language that met the above requirements and do it in a way that;

  • Allowed dynamic, configuration-data-driven processing to manage the simplest through to the most complex statement types within the scope defined above
  • Kept the language and processing solution flexible, so that new features could be added easily with minimal to no coding changes to the syntax checker and parser

To put this goal in context, an example of a complex conditional statement would be;

if count { f&b_payment 
           where-log e.~business = @fine_dining 
                 and e.total_bill > 100
                 and e.~date > #str_year()
                 and e.~date > o.~date
            group-by [~business]
            group-drop count < 2
         }
   => match p.loyalty_tier 
      with "platinum" = 3 | "gold" = 5 default 7
and-if o.~date < #today(-120)

The syntax breakdown is

if count of 
   { read through the f&b_payment events
     select events (e) based on the following criteria 
        e.~business - is in list of fine dining establishments
        e.total_bill - was over 100
        e.~date - since the start of the year
        e.~date - since the last time this offer was made
     group the results by business
     drop any groups with less than 2 entries }
is greater than or equal 
   3 for "platinum" members
   5 for "golds members"
   7 for everyone else
and if this offer has not already been made in the last 120 days

As you can see, this is a cross between logic statements and SQL-style statements.

The biggest concession I made to my original vision was to accept that a Rules Language needs some structure and will not be in prose style English.  I am sure with enough time I could get to a version that would be close to the breakdown above but I have decided to forego this level of language verbosity and stick to a midway point in order to get the solution completed.  At some later point I will evaluate how much further the syntax needs to evolve to be prose based rather than condition/function based.

So on to how to build a Rules language.  What follows is my approach to building what I needed for the RTOM application.  It is by no means a definitive statement of "this is how it is done"; it is just one approach that worked for me.

Early on I discovered that trying to define a full syntax of a language up front was extremely difficult to achieve.  I therefore adopted an iterative Test Driven approach where the Rules Language Syntax evolved from the Test Scripts;


As well as using the iterative test driven approach I also adopted a “full-on refactoring” mindset to the code development.  No addition/modification to working code was too big or small in order to make the code-base more flexible and deliver a smaller footprint.  The coding principles I used were in summary;

  • Test Driven Development
  • Data Driven Engine
  • Single Function Design
  • Iterative approach
  • Refactor as a Principle
  • Configuration over Coding
  • No hardcoding

There were three main pieces of code to be built for the rules language:  a Syntax Checker to validate the input, a Parser to transform the raw input into an executable structure and an Execution Engine to process the rules language.

THE SYNTAX CHECKER

This was actually the most challenging piece of code to write.  It would have been easy to have just built a massive set of ‘if’ based coding conditions but that would have detracted from the goals mentioned earlier in this post and the coding approach I wanted to use.  The syntax checker evolved to perform four distinct functions, each one layered on top of the other as the rules passed through their area of focus.

The first step was to "standardise" the input from the user.  In order to build repeatable, consistent syntax checking, the easiest way to drive the functionality is to ensure consistency in the input.  Rather than push strict formatting rules back to the user, the system first tries to standardise the inputs.  I decided on an approach of formatting whereby each distinct piece of the rule would be separated by a single space and all unnecessary whitespace would be removed.

// Step 1 - Standardise the Input

if event.~business   = ["abc",  "def",  "xyz"]

--> if event.~business = ["abc","def","xyz"]

if sum{event.value where-log e.~business =  @fine_dine}   >  200

--> if sum { event.value where-log e.~business = @fine_dine } > 200
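
To make this step concrete, a minimal PHP-style sketch of this kind of normalisation might look as follows (the function name and the exact token handling are illustrative assumptions, not the actual RTOM code);

// Minimal sketch of the "standardise" step (illustrative only)
function standardiseRule(string $raw): string
{
    $s = preg_replace('/([{}])/', ' $1 ', $raw);  // pad function braces so they become separate pieces
    $s = preg_replace('/\s*,\s*/', ',', $s);      // tighten comma separated lists: "abc",  "def" -> "abc","def"
    return trim(preg_replace('/\s+/', ' ', $s));  // collapse remaining whitespace to single spaces
}

echo standardiseRule('if sum{event.value where-log e.~business =  @fine_dine}   >  200');
// -> if sum { event.value where-log e.~business = @fine_dine } > 200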

The second step was to “categorise” each piece of the rule within one of the standard allowed categories.

// Step 2 - Categorisation of function pieces

if event.~business = ["abc","def","xyz"]

--> if                        statement
    event.~business           event-meta
    =                         eval
    ["abc","def","xyz"]       array

if sum { event.value where-log e.~business = @fine_dine } > 200

--> if                        statement
    sum                       function
    {                         func-start
    event.value               event-data
    where-log                 qualifier
    e.~business               event-meta
    =                         eval
    @fine_dine                qlist
    }                         func-end
    >                         eval
    200                       number
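
A minimal PHP-style sketch of this categorisation step might look like the following (the matching rules are simplified and illustrative only; the category names follow the examples above);

// Minimal sketch of the "categorise" step: map each standardised piece to an allowed category
function categorisePiece(string $piece): string
{
    static $statements = ['if', 'and-if', 'switch-on-if', 'switch-off-if', 'get'];
    static $functions  = ['sum', 'count', 'maximum', 'minimum', 'average', 'match'];
    static $evals      = ['=', '>', '<', '>=', '<=', '=>', '><'];
    static $qualifiers = ['where-log', 'group-by', 'group-drop'];

    if (in_array($piece, $statements, true)) return 'statement';
    if (in_array($piece, $functions, true))  return 'function';
    if (in_array($piece, $evals, true))      return 'eval';
    if (in_array($piece, $qualifiers, true)) return 'qualifier';
    if ($piece === '{')                      return 'func-start';
    if ($piece === '}')                      return 'func-end';
    if ($piece[0] === '@')                   return 'qlist';
    if ($piece[0] === '#')                   return 'keyword';
    if ($piece[0] === '[')                   return 'array';
    if (is_numeric($piece))                  return 'number';
    if (preg_match('/^(e|event|[a-z_&]+)\.~/', $piece)) return 'event-meta';
    if (preg_match('/^(e|event|[a-z_&]+)\./', $piece))  return 'event-data';
    return 'unknown';
}

// Categorise a standardised rule piece by piece
$pieces = explode(' ', 'if sum { event.value where-log e.~business = @fine_dine } > 200');
foreach ($pieces as $piece) {
    printf("%-25s %s\n", $piece, categorisePiece($piece));
}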

The third step involves "pattern matching" to check that the structure of the rule was correct.  This was looking to be one of the trickiest parts of the syntax checking until I remembered a design pattern I used back in 2003 to construct an automated XSLT generator.  It involves breaking the allowed structure into a set of simple definitions defined by configuration data that drives a recursive OO based process for pattern checking and verification.  The actual code to do the pattern checking is less than 80 lines and relies on a pattern checking data structure to tell it what is valid.  The premise of the processing is to check through all viable patterns until a match is found.  As this processing happens during data input by the user, a small sub-second time lag is of no issue.

The pattern checking definition looks as follows (note sample only);

$parse_patterns = [

'start' => [
 '#1' => ['statement', '__cond', '__and-if'],
 '#2' => ['statement.get', '__get'],
 '#3' => ['statement', '__cond'],
 '#6' => ...
 ],

'__and-if' => [
 '#1' => ['statement.and-if', '__cond', '__and-if'],
 '#2' => ['statement.and-if', '__cond']
 ],

'__cond' => [
 '#1' => ['__lhs', 'eval', '__rhs'],
 '#2' => ['__function', 'eval', '__rhs'],
 '#3' => ...
 ],

As you can see the patterns are described using the "categories" from step 2.  The pattern checking object takes the "start" input pattern and tries to find a match within its valid pattern model, stepping through each option until it either runs out of options or finds a valid pattern.  The "__" prefixed names are used to break the pattern model into "sub-patterns" which can be put together in different ways.  By doing this there is no need to try and define each unique instance of all possible patterns (an impossible task).

When a "sub-pattern" identifier is found the pattern checking object instantiates a new version of itself and passes the child object the "formula categories", "starting point" and "sub-pattern" it is to focus on.  Child objects can also instantiate other child objects.  Hence we have a recursive approach that requires no coding to specifically refer to formula categories and valid patterns.  The code's only job is to compare the formula categories with the valid data patterns it has.  This allows the rules language to be extended without significant coding effort.  Also, the sub-patterns can be recursively called from within themselves (e.g. "and-if").
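
The following is a minimal PHP-style sketch of this configuration-driven, recursive pattern check.  It is illustrative only: it recurses on the same object rather than instantiating child objects, it only handles the simple dot-qualified form (e.g. "statement.and-if"), and the pattern set shown is a cut-down example rather than the real $parse_patterns;

// Minimal sketch of the recursive, data-driven pattern check
class PatternChecker
{
    public function __construct(private array $patterns) {}

    // Try to match the categorised pieces from position $pos against sub-pattern $name.
    // Returns the position after a successful match, or null if no option fits.
    private function match(array $pieces, int $pos, string $name): ?int
    {
        foreach ($this->patterns[$name] as $option) {
            $p = $pos;
            foreach ($option as $element) {
                if (str_starts_with($element, '__')) {
                    $p = $this->match($pieces, $p, $element);   // recurse into the sub-pattern
                    if ($p === null) { continue 2; }            // this option failed, try the next one
                } else {
                    // "statement.and-if" style qualifiers check the piece value as well as its category
                    // (the bracketed list form, e.g. "function.[sum,count,...]", is not handled here)
                    [$category, $value] = array_pad(explode('.', $element, 2), 2, null);
                    $piece = $pieces[$p] ?? null;
                    if ($piece === null || $piece['cat'] !== $category) { continue 2; }
                    if ($value !== null && $piece['text'] !== $value)   { continue 2; }
                    $p++;
                }
            }
            return $p;   // every element of this option matched
        }
        return null;
    }

    public function isValid(array $pieces): bool
    {
        return $this->match($pieces, 0, 'start') === count($pieces);   // the whole rule must be consumed
    }
}

$patterns = [
    'start'     => [['statement', '__cond', '__and-if'], ['statement', '__cond']],
    '__and-if'  => [['statement.and-if', '__cond', '__and-if'], ['statement.and-if', '__cond']],
    '__cond'    => [['__operand', 'eval', '__operand']],
    '__operand' => [['event-meta'], ['event-data'], ['node'], ['array'], ['qlist'], ['number'], ['keyword']],
];

$checker = new PatternChecker($patterns);
$rule = [
    ['text' => 'if',                  'cat' => 'statement'],
    ['text' => 'event.~business',     'cat' => 'event-meta'],
    ['text' => '=',                   'cat' => 'eval'],
    ['text' => '["abc","def","xyz"]', 'cat' => 'array'],
];
var_dump($checker->isValid($rule));   // bool(true)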

I also built in the capability for the categories to be qualified using a dot “.” notation to provide flexibility for checking for specific category element types in particular locations in the rule syntax.

 '__function' => [
 '#1' => ...
 '#2' => ...
 '#3' => ['function.[sum,count,maximum,minimum,average]', 
          'func-start', ['event', 'event-data', 'node'], 
          'qualifier.where-log', '__log', 'func-end'],
 '#4' => ...
 ],

The fourth step is "context checking".  In this step the processing iterates through the categories and ensures the contents of each category are suitable for the processing to be performed.  For example: all elements in an array are of the same type, all elements in an array are of the type required, a number is compared to a number, etc.
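
A small PHP-style sketch of the kind of checks performed in this step (the specific checks and the token layout are illustrative assumptions, not the full RTOM rule set);

// Minimal sketch of the "context check" step
function contextErrors(array $pieces): array
{
    $errors = [];
    foreach ($pieces as $i => $piece) {
        // All elements of an array operand must share one type (all strings or all numbers)
        if ($piece['cat'] === 'array') {
            $elements = str_getcsv(trim($piece['text'], '[]'));
            $numeric  = array_filter($elements, 'is_numeric');
            if (count($numeric) > 0 && count($numeric) !== count($elements)) {
                $errors[] = "array at piece $i mixes numbers and strings";
            }
        }
        // Ordered comparisons need something numeric (or date based) on the right-hand side
        if ($piece['cat'] === 'eval' && in_array($piece['text'], ['>', '<', '=>'], true)) {
            $next = $pieces[$i + 1] ?? null;
            if ($next !== null && !in_array($next['cat'], ['number', 'keyword', 'event-data', 'node'], true)) {
                $errors[] = "comparison '{$piece['text']}' at piece $i is not against a numeric operand";
            }
        }
    }
    return $errors;
}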

THE PARSER

Once you have a valid input rule format the next step is to convert it to an “executable format”.  The purpose of this step is to build a structured representation of the rule which can be easily interpreted and executed by the “rule execution engine”.  Think of it as pseudo “code compilation”.  Within the RTOM the rules input by the user are always stored in the exact format they were keyed in (think source) and a second version in execution format.  As the execution format is not actually “executable code” but rather formalised rules to drive a “generic execution engine”, I opted to store the “code” in JSON format for ease of processing.  Taking our earlier examples the executable formats would look as follows;

if event.~business = ["abc","def","xyz"]

--> { statement: "if",
      lhs: { type: "event-meta",
             operand: "event.~business", }
      eval: "=",
      rhs: { type: "array",
             operand: [ "abc", "def", "xyz" ] }
    }

if sum { event.value where-log e.~business = @fine_dine } > 200 

--> { statement: "if",
      lhs: { type: "function",
             function: { name: "sum",
                         input-type: "event-data",
                         where-log: [
                              { lhs: { type: "event-meta",
                                       operand: "e.~business" }
                                eval: "=",
                                rhs: { type: "qlist",
                                       operand: "@fine_dine" }
                              } ]
                       } 
            },
      eval: ">",
      rhs: { type: "number",
             operand: 200 }
    }
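
To illustrate the parse step, here is a minimal PHP-style sketch that handles only the flat "if <lhs> <eval> <rhs>" shape and produces the same sort of structure as the first example above (the real parser also handles functions, qualifiers and and-if chains; names and layout here are illustrative);

// Minimal sketch of parsing a simple conditional into its executable format
function parseSimpleConditional(array $pieces): array
{
    [$stmt, $lhs, $eval, $rhs] = $pieces;

    $operand = function (array $piece): array {
        if ($piece['cat'] === 'array') {
            // store array operands as a list of values rather than raw text
            return ['type' => 'array', 'operand' => str_getcsv(trim($piece['text'], '[]'))];
        }
        return ['type' => $piece['cat'], 'operand' => $piece['text']];
    };

    return [
        'statement' => $stmt['text'],
        'lhs'       => $operand($lhs),
        'eval'      => $eval['text'],
        'rhs'       => $operand($rhs),
    ];
}

$pieces = [
    ['text' => 'if',                  'cat' => 'statement'],
    ['text' => 'event.~business',     'cat' => 'event-meta'],
    ['text' => '=',                   'cat' => 'eval'],
    ['text' => '["abc","def","xyz"]', 'cat' => 'array'],
];
echo json_encode(parseSimpleConditional($pieces), JSON_PRETTY_PRINT);
// -> the same shape as the first executable-format example above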

THE RULES EXECUTION ENGINE

The rules execution engine (REE) is in itself a simple object that consists of a discrete set of functions (methods) that each do one and only one thing.  This "single function" approach allowed me to build fairly bullet proof code that can be "welded together" in various ways based on an input that is variable in structure (i.e. the Rule).  The REE takes that JSON structure and reads through it looking for specific element structures, handing them off to the aforementioned functions.  An object containing the relevant formatting functions and value is created for each operand.  The execution engine operates as follows (a minimal sketch follows the list);

  • Determine the rule style to be processed (Condition or Calculation) from the statement element
  • Resolve the value of each operand based on its type
  • For quick list types get the array of items references
  • Pre-process specific #keyword instructions
  • Resolve all #keyword values (e.g. #mon = 1, #thu = 4, #str_month() = 2016-12-01, etc)
  • Perform statement processing using functions on the operand objects
  • Return the result.  True/False for conditions and Value for calculations
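
As a minimal PHP-style sketch of this flow, the following evaluates a parsed conditional against an in-memory event.  Only two operand types are resolved and only a handful of evaluation symbols are handled; everything here is illustrative rather than the actual REE code;

// Minimal sketch of evaluating a parsed conditional
function evaluateConditional(array $rule, array $event): bool
{
    $resolve = function (array $operand) use ($event) {
        switch ($operand['type']) {
            case 'event-meta':                         // "event.~business" -> $event['meta']['business']
                $key = substr(strrchr($operand['operand'], '~'), 1);
                return $event['meta'][$key] ?? null;
            case 'number':
                return (float) $operand['operand'];
            default:                                   // arrays and plain values pass straight through
                return $operand['operand'];
        }
    };

    $lhs = $resolve($rule['lhs']);
    $rhs = $resolve($rule['rhs']);

    switch ($rule['eval']) {
        case '=':  return is_array($rhs) ? in_array($lhs, $rhs) : $lhs == $rhs;
        case '>':  return $lhs > $rhs;
        case '<':  return $lhs < $rhs;
        case '=>': return $lhs >= $rhs;                // "greater than or equal" in the rules language
        default:   return false;
    }
}

$rule = [
    'statement' => 'if',
    'lhs'  => ['type' => 'event-meta', 'operand' => 'event.~business'],
    'eval' => '=',
    'rhs'  => ['type' => 'array', 'operand' => ['abc', 'def', 'xyz']],
];
$event = ['meta' => ['business' => 'def'], 'data' => []];
var_dump(evaluateConditional($rule, $event));   // bool(true)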

——–

So in summary, the rules language and its associated processing have been quite a challenge to complete; the version of the code I have now is v3.  While there is room for expanding the range of statement types and enhancing the existing facilities, I think the way I have approached the build has set a reasonably good foundation for further growth.  The language format I have settled on (for now) is a bit more "codified" in structure than my original ambition but is generally as readable as SQL.  As the RTOM application matures I will focus on taking the language to a more descriptive form and losing the "code" feel to it.  In total the three parts of the solution (Syntax Checker, Parser, Execution Engine) have a footprint of just under 2000 lines of code, which is (I think) pretty good for the level of flexibility it provides in its current state.

One of the fun aspects about designing and building this type of solution, and approaching it in the manner described above, is that the true range of flexibility only became apparent when I started showing it to people who asked "can it do this …", and realising it is flexible enough to accommodate the request even though I had not specifically designed for it.

In the next articles I will describe how the Offers are structured and the facilities the RTOM provides in this area.

Complete list of articles in series;

 

Posted in Architecture, Gaming, Hospitality, IT Strategy, Product Development, Realtime Offers, Software, Software Development | 1 Comment

rtom #3 – Approach to Data

This is a follow-on from “RealTime Offer Management” (RTOM) post published on 19th November 2016 which can be found here.

The incentive for building the RTOM application came from looking at how to implement an Event Enabled Enterprise (E3) in a manner that is flexible, rule driven and easy to implement.  There are two what I will call "traditional" ways to deliver such a solution.  The first is "services" driven, where all events must deliver "event triggers" when executed, which requires an ESB/SOA driven approach; the second is a "data driven" approach where all data is gathered in one place (data warehouse, big data) and evaluated for required conditions.  Both of these approaches require significant degrees of architecture & development work plus the implementation of coded rules or a generic rule engine which then feeds back into a campaign or offer management system.

Within the world of Data we traditionally look at entity models, objects and attributes using a "real world" view, but then in most (not all) cases end up storing the data in rigid tabular or document structures and trying to mimic real world scenarios using complex code and queries to interrogate the data.  The NoSQL evolution has helped to soften this approach to a degree, but the way we think about data and how to store it is still very "tabular" and "relational" in nature.  This in turn creates its own issues, as received data needs to be processed, stored and accumulated, with scheduled processes running on a near-continual basis seeking changes over a complete dataset to try and mimic realtime processing for triggered events.


In addition, it is usually near impossible to provide users with a solution that allows them to implement rules referring to objects/data with real-world context.  You almost always end up referring to columns in tables, building additional data-marts and playing with the data in Excel just to answer sometimes pretty simple questions, and thus lose the capability to "offer in realtime".

For RTOM the approach to data has been to look at a model where the data is a cloud of easily referenceable;

  • Events
  • Data Attributes
  • Meta Data
  • Higher Level “Data Nodes”

attached to a patron.


NOTE – For the remainder of this post I will use the term Patron to refer to Patrons, Guests, Customers of your business.

Events are basically any transaction or data set generated by an operational application when the patron interacts with your property/business.  These events can be active or passive.  Active events are transactions such as check-in, pay a bill, card-in, turn on slots, hand of cards, etc – essentially anything that is deliberately initiated by the patron.  Passive events are activities captured by virtue of the Patron's interactional context with your business/property; these are mostly, although not wholly, related to location and environment aware data streams from a mobile device, or interactions such as parking a vehicle, etc.

Data Attributes are the individual data items generated that describe the business context of an event.  (e.g. arrival-date, room-rate, F&B total, win/loss amount, etc)

Meta Data is the descriptive data around the context of an Event or the type of a Data-Node, together with the various filter, computational & offer rules to be evaluated on creation or change of an Event/Data-Node.

Data Nodes are higher level data elements sourced from Event Creation, Event Data Attributes and/or other Data Nodes (e.g. a switch to indicate "on-property" or "gaming-in-progress", or accumulators to store counts, sums, etc.) that provide persistent instances of Events/Data-Attributes at various levels of granularity.

The RTOM data design is based on an event driven model which uses “Data Change” agents to recognise events and send JSON formatted messages to the RTOM.  The data received by RTOM is considered “operational” and RTOM is not intended to be a Data Warehouse or mimic Big Data.  It is however an Analysis Engine that uses data changes to trigger rule processing for data nodes (switches, accumulators, etc) and for evaluating offer conditions based on events, data nodes, patron attributes, etc.
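
As an illustration of what a "Data Change" agent might do, the following PHP-style sketch posts an event to RTOM in the meta/data/analysis layout described later in this post.  The endpoint URL, field names and values are hypothetical;

// Minimal sketch of a Data Change agent posting an event to RTOM
function sendRtomEvent(string $endpoint, array $meta, array $data, array $analysis = []): bool
{
    $payload = json_encode(['meta' => $meta, 'data' => $data, 'analysis' => $analysis]);

    $ch = curl_init($endpoint);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $ok = curl_exec($ch) !== false && curl_getinfo($ch, CURLINFO_HTTP_CODE) < 300;
    curl_close($ch);
    return $ok;
}

// e.g. the property management system reports a check-in (hypothetical values)
sendRtomEvent('https://rtom.example.com/events',
    ['id' => 'evt-001', 'brand' => 'xyz', 'property' => 'budapest', 'business' => 'hotel'],
    ['arrival_date' => '2016-12-01', 'room_rate' => 180.00]);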

Data within RTOM is classified in two different layers;

The basic classification is foundational and applies to the Data Attributes of an Event;

  • Alphanumeric
  • Date
  • Time
  • Date-Time
  • Integer
  • Decimal
  • Money (Allows for automatic handling of exchange rates for monetary amounts)
  • Virtual (Allows for summary of raw data attributes)
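
To illustrate the foundational layer, a minimal PHP-style sketch of casting an incoming data attribute might look as follows.  The Money case assumes a hypothetical rate table (base-currency units per unit of the source currency), and the Virtual type is out of scope here;

// Minimal sketch of applying the foundational types to an incoming data attribute
function castAttribute(string $type, $raw, string $currency = 'USD', array $rates = ['USD' => 1.0])
{
    switch ($type) {
        case 'Integer':   return (int) $raw;
        case 'Decimal':   return (float) $raw;
        case 'Date':      return date('Y-m-d', strtotime($raw));
        case 'Time':      return date('H:i:s', strtotime($raw));
        case 'Date-Time': return date('Y-m-d H:i:s', strtotime($raw));
        case 'Money':     return round(((float) $raw) * ($rates[$currency] ?? 1.0), 2);
        default:          return (string) $raw;   // Alphanumeric
    }
}

castAttribute('Money', '150.00', 'EUR', ['USD' => 1.0, 'EUR' => 1.05]);   // -> 157.5 in the base currency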

The secondary layer of data classification is behavioural and applies to the Data-Nodes;

  • Static (new values always overwrite the old value)
  • Evolve (history of old values is maintained)
  • Switch (yes/no boolean based on rulesets)
  • Collection (list of values)
  • Sum (total of integer, decimal or monetary data-attributes)
  • Count (count of events)
  • Minimum (minimum data-attribute value received for data-node)
  • Maximum (maximum data-attribute value received for data-node)
  • Average (average data-attribute value received for data-node)
  • Points (special data-node type used for patron point management)
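
To show how the behavioural layer might be applied when a new value arrives for a Data-Node, here is a minimal PHP-style sketch.  Node storage is reduced to an in-memory array, and the rule-driven Switch and Points behaviours are omitted; it is illustrative only, not the RTOM implementation;

// Minimal sketch of applying a Data-Node's behavioural classification to a new value
function applyDataNodeBehaviour(array $node, $newValue): array
{
    switch ($node['behaviour']) {
        case 'Static':     $node['value'] = $newValue; break;              // always overwrite
        case 'Evolve':     $node['history'][] = $node['value'];            // keep the old values
                           $node['value'] = $newValue; break;
        case 'Collection': $node['value'][] = $newValue; break;            // list of values
        case 'Sum':        $node['value'] += $newValue; break;
        case 'Count':      $node['value'] += 1; break;                     // count of events received
        case 'Minimum':    $node['value'] = min($node['value'], $newValue); break;
        case 'Maximum':    $node['value'] = max($node['value'], $newValue); break;
        case 'Average':    $node['count'] = ($node['count'] ?? 0) + 1;     // running average
                           $node['value'] += ($newValue - $node['value']) / $node['count'];
                           break;
    }
    return $node;
}

// e.g. a "property_stays_year" accumulator configured as a Count node
$node = ['name' => 'property_stays_year', 'behaviour' => 'Count', 'value' => 4];
$node = applyDataNodeBehaviour($node, 1);   // a new stay event arrives -> value becomes 5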

RTOM deals with four main classes of data around which all of its rules and processing have been defined.  These are;

  • Events
  • Data-Nodes
  • Patron Preferences and Attributes
  • Offers

Events are received in a JSON format which has the following layout;

{ meta: { // Standard Information about the event
          id:
          brand:
          property:
          business:
          ...
        }
  data: { // The payload area
          ...
        }
  analysis: { // Lower level detail about the payload
              ...
            }
}

RTOM enhances the meta data elements to attach additional criteria for evaluation and selection rules.  The Event “data” and “analysis” structures are stored in original JSON format with indexes composed of the meta data items.

By treating Events as JSON documents the RTOM application can store and refer to any data attributes required to be processed for offer evaluation.

A Data-Node is essentially a key-value pairing with enhanced indexing and context rules.  A data node will hold a single value and be referenced by meta-data for evaluation and selection rules.  Data nodes are stored in a traditional relational DB table structure.

Using an extended key-value construct allows for the RTOM application to be easily configured to create, store and process any data-node that can be derived from Events and their data-attributes.

Patron Preferences and Attributes are a very simple key-value pairing allowing referenceable information about the Patron to be addressed in the rules.  This information is also stored in a traditional relational DB table structure.

Offers consist of two parts, the Offer Definition data and the Offers Made data.  For flexibility and processing efficiencies the Offer Definitions are stored in a combination of relational DB table structures and JSON documents.  Offers Made data is "operational" in nature and more rigid in format, so it is stored in relational DB table structures.

Using the combination of Events stored in JSON, with Data-Nodes, Patron Preferences and Offers Made stored in relational table structures, and a dot "." notation, it was possible to define a rules language that accesses data with the following object.data references;

  • e (current event in scope)
  • <event_name>
  • e.~meta_data_element / <event_name>.~meta_data_element
  • e.data_attribute / <event_name>.data_attribute
  • n (current data node in scope)
  • <data node> value
  • n.~meta_data_element / <data_node>.~meta_data_element
  • p.item for patron preferences and attributes
  • o (current offer in scope)
  • <offer_name>
  • o.~meta_data_element / <offer_name>.~meta_data_element

For example:

A simple rule rewarding people who are gold or platinum members when checking in;

if check_in and p.loyalty_tier = ["gold", "platinum"]

A simple rule targeting someone on every 5th hotel stay since the start of the year.  Assume a data-node accumulator has been configured to store stays;

if property_stays_year adjust-by-fulfill = 5

The adjust-by-fulfill "qualifier" adjusts the stored value by 
the value it was set to when the last offer was made

A simple rule looking to reward patrons who play between 9am and 5pm on Tuesdays and Wednesdays and execute at least 100 plays.  Assume a data-node counter has been configured to store plays by day;

if card_in.~dow = [#tue, #wed] 
   and check_in.~time >< ["09:00", "17:00"] 
   and day_plays => 100

~dow is a "day of week" meta data attribute

A more complex rule to make an offer based on someone

  1. Visiting a minimum of five fine-dining restaurants
  2. At least twice per venue
  3. Spending a minimum of $200 at each sitting
  4. Within the last 10 months

the rule would be;

if count 
   { f&b_bill_pay // Event is payment of bill
       where e.~date => #str_month(-10) // Last 10 months
             e.~business = @fine_dining // Fine Dining business list
             e.total_bill => 200 // Bill is 200 or more
       group-by [e.~business] 
       group-drop count < 2 // Only keep groups of 2 or more entries
   }
   => 5 // 5 or more restaurants

@fine_dining refers to a Quick-List element which can be
configured to hold any list of information items
#str_month(x) is a keyword producing a start month date X months ago

So in summary, by focusing purely on the domain model being addressed, and by taking into consideration how the data is used via a flexible, user-orientated rules syntax, the data model for RTOM has evolved to be;

  • A combination of JSON, enhanced key-value pair and relational DB models
  • Fully capable of storing any data provided by the operational systems
  • Fully capable of storing any data derived from the events/data-attributes
  • Is Patron Centric
  • Is driven by meta-data defining actions to be executed based on creation/update of events and data-nodes

In the next article I will delve more deeply into the rules language, how it has evolved and some compromises I needed to make on the starting vision.

Complete list of articles in series;

Posted in Architecture, Gaming, Hospitality, IT Strategy, Product Development, Realtime Offers, Software Development, Uncategorized | 2 Comments

rtom #2 – Design Overview

This is a follow-on from “RealTime Offer Management” (RTOM) post published on 19th November 2016 which can be found here.

Before diving into the details of what I ended up building, I want to document the overall “grand plan” and then focus on individual elements as these posts progress.  The approach I took to the RTOM Application was based on the set of Architecture Principles and Functional Design Guidelines outlined below.

ARCHITECTURE PRINCIPLES

  • Cloud based
  • Suitable for a SaaS pricing model
  • The database should be “Client Data Agnostic”
  • Definition of Events that drive the application must be 100% configurable
  • The application should be  “Data Reactive”
  • Mobile first for patron/client offer engagement
  • Integration with source and target systems in real-time
  • Offer evaluation, presentation and redemption should be “realtime”

Client Data Agnostic refers to a principle where the information stored by a user of RTOM has no impact on the structure of the RTOM database; the contents of the data, while available to the rules and analysis engines for assessment, are essentially "not of structural interest" to the RTOM application.

Data Reactive is a term I use to refer to a design principle where an update of an element of data causes the application to react and "do something".  Essentially the data structure has meta-data attached to each data element which describes and initiates the relevant processing should an item of this data-element type be created or updated.
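
As a minimal PHP-style sketch of the Data Reactive principle, the following attaches handler names to data element types and invokes them on create/update.  The handler names and registry layout are illustrative assumptions, not the RTOM implementation;

// Minimal sketch of meta-data driven "data reactive" processing
$metaData = [
    'check_in' => [
        'on_create' => ['evaluateDataNodes', 'evaluateOffers'],
    ],
    'property_stays_year' => [
        'on_update' => ['evaluateOffers'],
    ],
];

function reactToChange(array $metaData, string $elementType, string $change, array $payload): void
{
    foreach ($metaData[$elementType][$change] ?? [] as $handler) {
        $handler($payload);   // invoke each registered handler with the changed data element
    }
}

// Hypothetical handlers
function evaluateDataNodes(array $payload): void { /* update switches, accumulators, ... */ }
function evaluateOffers(array $payload): void    { /* run offer qualification rules      */ }

reactToChange($metaData, 'check_in', 'on_create', ['patron' => 'p-123', 'property' => 'budapest']);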

FUNCTIONAL DESIGN GUIDELINES

  • Ease of Integration
  • The “Rules Language” should be close to natural language expressions
  • The rules language should allow references to events, data, and meta-data
  • Offers should be presented in a “Closed Loop” to track uptake and prevent fraud
  • Offers should be managed via a mobile platform
  • Possible to attach realtime “budget management” and tollgates to both offers and patrons/clients (comping limits, etc)
  • Realtime redemption management
  • Offer Redemption Throttles limiting number of redemptions allowed by time periods
  • Offer lifetime, availability and expiry controls
  • Offers can be customised to patron/client profiles and preferences
  • Offer values/discounts can be data/event driven and derived
  • Offers can be “fixed rewards”, “choice of reward options” or “rewards based on result of a game on the patrons mobile device”

Rules Language refers to the scripting language the RTOM uses to allow its users to define conditions for filtered data accumulators (nodes & switches), qualifying offers, targeting reward sets to specific profiles, defining reward discount %’s & amounts, etc.  It is essentially the glue that binds the processing together.

Closed Loop processing basically means that all actions taken against offers feed back to the RTOM for evaluation, verification and redemption / rejection.

Conceptually the RTOM Application looks as follows;


RTOM – Events feed into the Evaluation and Offer Management Engine


RTOM – The offers are managed via a mobile app

So far I have managed to keep to the architecture principles and functional design guidelines without having to compromise to any great extent.  The two areas that have caused headaches and much rework have been the "rules language" and the "database design".

Coming up with a rules language was far more complex and challenging than I had originally anticipated.  I wanted to get to a point where the language would be close to natural language expressions rather than computer focused "if x = y then z" syntax.  This is easier to envisage than to actually implement.  In essence I ended up building a "Domain Specific Language" or DSL.  I am currently on the third iteration and think I have it cracked (famous last words, as they say).  The key lessons I learned to get to this point are;

  • Raw English (and I presume German, French, Hungarian, etc) is a crazy language to try and articulate processing requirements in.  Regardless of how "non-programmatic" you wish to make the approach, we always resort to structured sentences and conditions when describing how to evaluate a situation.
  • Work within the confines of the Domain in which the new language is going to be used.  In this respect the RTOM domain has a fixed set of object types (Event, Data Element, Offer, Patron, etc) and a defined set of meta-data elements about each type.
  • Define a set of verb based functions that allow greater flexibility.  For example use keywords like “match, sum, count, average” to hide complexity and aid readability
  • Do not over-design the solution
  • Build a structure that is extensible and flexible (within reason) before trying to make it pretty
  • Do not design first and test later.  I wrote hundreds of offer related conditions and expressions before and during the design process to visually and grammatically test out expression sequences.  In essence I wrote the examples and then built the code to fit them, rather than trying to build a fully generic piece of code and then seeing if the test worked – Test Driven Development principles really worked well here
  • Do not be afraid to start with a clean sheet rather than trying to rescue a lost-cause

I ended up writing a syntax validator, parser and processor.  While they are at their third iteration at present I know there is still room for improvement and will probably evolve them to v4 and v5 within six months.

I will discuss the approach to the data concepts & database approach and how they evolved in the next post.

Posted in Architecture, Gaming, Hospitality, Realtime Offers, Software, Software Development | 1 Comment

Real Time Offer Management

A year or so ago for family reasons I needed to head back to Europe from a fantastic job at Marina Bay Sands in Singapore.  The experience of working in an Integrated Resort (Hotel, Casino, Retail, Conference Centre) is one that is still the high point of 30 years of working in IT.  Heading back to Budapest and wondering what to do with my time, I decided to embark on a private project I called "Realtime Offer Management", hoping to get it into a startup.  While the startup part has yet to materialise, the experience of building out a PoC for the concept has been enlightening and highly instructive on how to approach these types of applications.  Version 0.1 of the PoC is finished and I am now building v0.2 to capture the feedback received over the year and also refine the rough edges of v0.1.

This is the first of a series of blogs about the application I am building and my thoughts on this area of focus for Hotels, Casinos and virtually any organisation that interacts with guests, patrons or customers and wants to provide rewards as an immediate response to those interactions.

So first, to clarify what "Realtime Offer Management" is: in essence it is the ability to act immediately on any interaction of a customer with your property/business.  Examples would be;

  • Checking in/out of a hotel
  • Paying for a meal
  • Carding in/out of a slot machine/table in a casino
  • A spin on the slots, hand of cards
  • Parking your car, or using the valet service
  • Walking into the property/business
  • Being in a specific place
  • etc

Each of these activities is in essence an "event" and has a set of information (actually quite small) that is of significant relevance when plotting customer engagement with your business.  By having the ability to capture these "events" and register the individual pieces of information against a known entity, you can build a set of rules to react to customer engagement and trigger relevant offers based on history, preferences and location.

To put it in context, simple examples would be;

  • Reward a person for the n-th visit to the Hotel, Restaurant
  • Reward a person for visiting X restaurants and spending an average of Y in Z days
  • Incentivise a person to remain on the property based on events such as good/bad luck in the casino
  • Interact with people while they are on property or even within their gaming environment
  • Incentivise a person to return to your property
  • +++

The Enterprise Architecture approach to this challenge has pretty much always been in one of the following formats;

  1. Build out custom code to monitor for specific events
  2. Use “big data” to massage the operational deluge of data and then write code to work off the results
  3. Use a combination of BPM, Event Management, ESB software (such as Tibco, etc) to build event management suites
  4. Use commercial rules engines and build out complex algorithms to meet individual requirements

Each of the above is not a wrong approach, but each involves "building code" and "analysing operational data", is pretty much "invasive" to the operational landscape, and is fairly expensive to implement.  They also have these drawbacks;

  1. Highly customised and requires coding to making it work for each individual event
  2. Needs significant investment to make it work fast and also falls prey to time lag due to ETL, analysis and other constructs of a data analysis environment
  3. Again, whilst touted as code free, this approach is highly IT dependent and rarely free of some (or lots of) coding for decision making nodes
  4. Same as 3.

I decided to take a step back and look at the problem/opportunity in a completely different manner.  The first step was to rethink how we view data about a person within the organisation; the second was to look at how to structure rules focused on the specific problem domain of "Hospitality and Gaming" within a "Realtime Environment".  The third, and most important, was to arrive at a position where marketing could configure Offers without resorting to pseudo-coding or IT assistance.

Over the next few articles, prior to the release of v0.2 of the PoC, I intend to lay out my thought process for the approach to this area and in a sense document why I approached the problem-domain in the manner I have.  If you are interested in what I am doing I would ask you to please engage with me via the comments section; all opinions are welcome and will help me in the launch of the next version of the solution.

As a teaser for the next instalment of this series, what would be your reaction if I said "in an integrated resort (Hotel, Casino, Retail, etc) you need to monitor less than 10 transactions, with a total combined set of less than 70 pieces of information, to drive an efficient realtime offer management solution, and you do not need a data warehouse or to spend $m on a solution?".

 

 

Posted in Architecture, Gaming, Hospitality, IT Strategy, Product Development, Realtime Offers, Software Development | 2 Comments