Architecting a Technology Strategy

This is the first in a series of collaborative posts with my technology co-conspirator Grant Matthews.  

What if the only role of the IT Department were IaaS and Solution Selection & Implementation oversight?

What if all user departments were responsible for selecting, implementing and supporting their own applications within a well-defined but "solution-flexible" architectural landscape?

Strange questions, you might think, but bear with me for a few more paragraphs.  Firstly, the IaaS question was placed in my head by ex-colleagues working in a highly complex and dynamic environment.  Would such an approach work (and how)?

Enterprise Architecture and IT Delivery departments in large organisations with highly independent (i.e. strong-minded) business departments tend to be viewed as obstacles by the business, and conversely the business users tend to be viewed by IT as indecisive, solution-focused, intransigent, etc.  Allowing each business department to "roll its own" solutions could solve a lot of internal issues, but it would have to be done in a manner that;

  • Allows reasonable flexibility for the business departments
  • Does not endanger the operation of the organisation
  • Results in a self-selected but highly integrated set of solutions
  • Makes IT the provider of infrastructure, core services and, most importantly, knowledge (i.e. consultancy)
  • Makes the users 100% responsible for the selection, implementation and operation of their own software.

I was mulling over this idea while out walking in my home town and stopped to admire the various yachts at the local marina.  The yachts were of different classes and sizes, and they change continually as the owners upgrade, downgrade or move on.  The only constants were the moorings, the power & facilities supply and the fencing around the marina.  Now translate this to our IT questions above;

  • Moorings = Infrastructure
  • Power & Facilities = Common Assets and Communications protocols
  • Fencing = Security, SSO, etc
  • Marina/Harbour Entrance = External Communication protocols
  • Yachts = Applications

Just for the fun of it let's call it the "Marina Architecture Pattern", or MAP for short.  The MAP would be a layered construct with a minimum set of rigid architecture principles that all applications wanting to be "docked" into the MAP would have to adhere to.

The MAP layers, referred to as the “Backbone” would be as follows (from foundational to operational);

  • Infrastructure – Mix of on-premises, off-premises DC’s and cloud.
  • Security
  • Bulk Data Transfer
  • SOA / ESB
  • Rules and Process Engines

No business department would be allowed to purchase software that copied or duplicated any of the above layers, cloud solutions being an exception for the infrastructure layer.  Any software purchased would have to utilise and integrate into the MAP Backbone.  The operational costs of each layer would be paid for on an as-used basis by each department and treated as a true "aaS" model.  However, the setup of the original framework would need to be a corporate-level budgetary item.
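
To make this concrete, the sketch below shows what a hypothetical "docking manifest" for a BD-owned application might look like, declaring how it plugs into each Backbone layer and how its usage is charged back.  It is purely illustrative; the field names and PHP format are my assumptions, not a defined MAP standard.

<?php
// Hypothetical MAP "docking manifest" for a department-owned application.
// It declares how the application uses each Backbone layer and confirms
// the as-used chargeback model; nothing here is owned or operated by IT.
$docking_manifest = [
    'application'   => 'crew-rostering',
    'business_dept' => 'operations',
    'backbone'      => [
        'infrastructure' => ['hosting' => 'cloud', 'region' => 'eu-west'],
        'security'       => ['sso' => true, 'roles_source' => 'corporate-idm'],
        'bulk_data'      => ['outbound_feeds' => ['roster-daily']],
        'soa_esb'        => ['services_consumed' => ['employee.lookup'],
                             'services_exposed'  => ['roster.query']],
        'rules_engines'  => ['patterns_used' => []],
    ],
    'chargeback'    => 'as-used',   // paid from the BD budget, not a central IT budget
];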

A rigid but minimal set of Architecture Principles would be required to "pilot" the selection of suitable applications for use in the MAP;

  • Each Business Department (BD) is its own domain
  • Each domain is responsible for entities that are intrinsically linked to that domain
  • No BD may replicate data or duplicate the functionality of another department or anything outside its core domain; access to non-domain entities must be via the SOA layer
  • Applications purchased or built must conform to the Security, Data Transfer, SOA and Rules/PE standards defined within the MAP.
  • Extensions and enhancements to the MAP Backbone are the responsibility of the IT Department (e.g. a new service or rules-pattern is required)
  • BD’s are technology agnostic in that they can choose the OS and languages they wish to use, however all costs related to these choices are borne directly by the BD.
  • Only applications that are considered “componentised” and “domain focused” may be considered for purchase/build.  No “Do-it-all” solutions are allowed
  • No DW’s or Big Data solutions can be built within a business domain, this is a standalone BD and will be treated as such (Think of this as the “tender or committee boat” in a marina).  Everyone has use of it but it is owned and managed by the marina
  • There is no overall IT budget for applications (with exception of the MAP and Data Domain).  Each department decides how much of its own budget it wants to spend on its own solutions.  They must also fund the extensions to the MAP Backbone to facilitate these solutions
  • The “you bought it” you “own it” principle is enforced.  No centralised helpdesk or  operations are available from IT.

So where does this leave the traditional IT department?  The IT department transforms into a true “As A Service” provider for the MAP Backbone and a Consultancy Group for providing oversight and consultancy services for application selection and implementation.  At no point is IT directly responsible for any of the applications purchased by the business units.

When I showed this idea to Grant his initial responses were;

  • Plug and Play operating models are a desired direction
  • The challenge isn’t whats described above but in the Data and Integration Services required that permit interoperability between the business application & lines of business usage
  • Some corporations are aligned with the design outlined but do not realise it or understand how to utilise / enforce it
  • The biggest challenge for an organisation is to build an effective EA Roadmap around the approach and then allow it to be enforced so that "tactical decisions" do not wreck the vision
  • Also, in the analogy presented, boats and yachts are self-contained, but what if a boat suddenly relies on a second boat for fuel, or cannot refuel because all the other craft have used up a scarce resource?

The structure and focus of the IT Organisation would dramatically change and be simplified to provide two broad functions to the organisation;

The first would focus on;

  • Availability Management
  • Security Monitoring
  • MAP Backbone Support
  • Infrastructure Management

In other words, they are a commodity service that provides Keep the Lights On (KTLO) only, obviously within a secure model that protects the marina and the yachts from bad behaviours.  This reduces IT to a utility, like electricity: to be consumed and paid for as needed.

The second function of the new IT organisation would be an Internal Consultancy and Business Change helper function, positioned outside of IT operations, that specialises in;

  • Enterprise and Solution Architecture Advisory
  • Innovation and PoC Advisory

The future of IT as a commodity is here now; the cloud providers are already there.  New ventures will rarely start with any on-premises IT.

The challenges with the MAP model come from data integrity, reconciliation and real business intelligence across disparate systems.  This is a decades-old problem that IT has failed to find a good answer for: when systems moved from the single mainframe to multiple systems and client-server solutions, the data reconciliation and consolidation problem was born: MI -> BI -> Predictive Analytics; Database -> Data Mart -> EDW -> Data Lake; predictably the most expensive and most underwhelming solutions to the business problem.  So a plug-in Yacht for Reporting across the Marina is needed.  Again, it shouldn't be IT managed; an enterprise analytics group should own this problem/solution domain.

This approach to IT, by its very nature, forces a change in the business operating model, freeing it from the constraints imposed by the limited quality and skills breadth of a single technology function.  Embedding the skills in the line of business is crucial.

Project management of all projects should be rolled into an organisation-focused PMO.  All projects in the organisation, regardless of deliverable or function, should be driven by this organisation.

A Technology Committee should be formed and chaired by the CTO.  Members of this committee are the CTO, the Chief IT Architect (CA) and a senior representative of each business department.  All decisions to purchase a solution must be approved by this committee, meaning it is up to a business department to convince the other BDs that its decisions are correct and will not jeopardise the business.  The CTO and CA are not allowed to veto solution selections by BDs but can raise concerns and alternatives for consideration.

Grant came up with a cool name for the approach “Software Defined Enterprise” … more about that in later posts.

Our intention over the coming months is to flesh out this concept over a series of blogs co-published on each of our blogs and LinkedIn feeds.  Each of the posts will deal with a specific aspect of implementing this type of architecture initiative from both business and technology perspectives.

Grant Matthews is an acknowledged thought leader in all things IT architecture; his profile on LinkedIn is at


rtom #4 – Building a Rules Engine

This is a follow-on from the "RealTime Offer Management" (RTOM) series of posts, the first of which can be found here.  A complete list of articles in this series is included at the bottom of this post.



When first visualising the RTOM Application I spent a lot of time thinking about how a user could define rules that would allow the RTOM processing to;

  • Filter inputs to Data-Nodes from Event Data
  • Allow conditions to turn Data-Node Switches on and off
  • Compute Data-Node values from other Data-Nodes
  • Evaluate whether an Offer should be made
  • Evaluate the rewards to be made within the offer based on the patron's profile and/or other information managed by RTOM
  • Allow the reward discounts/values to be calculated from RTOM data
  • Allow decision criteria to be flexible based on Events, Data-Nodes and/or Patron Attributes

I did not want to build a fully fledged rules language to rival a commercial product, but rather wanted to focus on a subset of rules that would merge seamlessly into a dedicated realtime offer management application.

In its simplest form the Rules Language required a very small number of statements that are divided into two classes, conditionals and calculators;

  • IF
  • AND-IF
  • GET (Calculate)

In its simplest form a conditional statement would look like

if operand evaluation operand

and a calculation statement would look like

get operand calc_symbol operand

So far it seems simple enough; however, the beast known as complexity reared its head when I added in the "Operand Types" and "Operand Qualifiers" the Rules Language was required to evaluate.

Each “operand” can be one of the following types;

  • Value, Number, Percentage
  • Keyword (a reserved word denoting a language function, e.g. #mon, #tue, #str_month, etc)
  • Quick List (name of a list that contains a set of items to be evaluated)
  • Array
  • Objects (Event, Data-Node, Offer)
  • Meta Data (attached to Event, Data-Node, Offer)
  • Data (from Event, Node)
  • Function

Note – The Array and Quick List types can be a list of any of the other operand types with the exception of the Function type.

In addition to the range of operand types there was an added element of complexity with “scope of data” for Event, Data-Node and Offer types and Operand Qualifiers to allow manipulation of results.

My goal was to build a rules language that met the above requirements and do it in a way that;

  • Allowed dynamic, configuration-data-driven processing to be used to manage the simplest through to the most complex statement types within the scope defined above
  • Kept the language and processing solution flexible, so that I could add new features easily with minimal to no coding changes to the syntax checkers and parsers

To put this goal in context, an example of a complex conditional statement would be;

if count { f&b_payment 
           where-log e.~business = @fine_dining 
                 and e.total_bill > 100
                 and e.~date > #str_year()
                 and e.~date > o.~date
            group-by [~business]
            group-drop count < 2 }
   => match p.loyalty_tier 
      with "platinum" = 3 | "gold" = 5 default 7
and-if o.~date < #today(-120)

The syntax breakdown is

if count of 
   { read through the f&b_payment events
     select events (e) based on the following criteria 
        e.~business - is in list of fine dining establishments
        e.total_bill - was over 100
        e.~date - since the start of the year
        e.~date - since the last time this offer was made
     group the results by business
     drop any groups with less than 2 entries }
is greater than or equal 
   3 for "platinum" members
   5 for "gold" members
   7 for everyone else
and if this offer has not already been made in the last 120 days

As you can see, this is a cross between logic statements and SQL-style statements.

The biggest concession I made to my original vision was to accept that a Rules Language needs some structure and will not be in prose style English.  I am sure with enough time I could get to a version that would be close to the breakdown above but I have decided to forego this level of language verbosity and stick to a midway point in order to get the solution completed.  At some later point I will evaluate how much further the syntax needs to evolve to be prose based rather than condition/function based.

So on to how to build a Rules Language.  What follows is my approach to building what I needed for the RTOM application.  It is by no means a definitive statement of "this is how it is done"; it is just one approach that worked for me.

Early on I discovered that trying to define a full syntax of a language up front was extremely difficult to achieve.  I therefore adopted an iterative Test Driven approach where the Rules Language Syntax evolved from the Test Scripts;


As well as using the iterative test-driven approach, I also adopted a "full-on refactoring" mindset for the code development.  No addition or modification to working code was too big or small if it made the code-base more flexible and delivered a smaller footprint.  The coding principles I used were, in summary;

  • Test Driven Development
  • Data Driven Engine
  • Single Function Design
  • Iterative approach
  • Refactor as a Principle
  • Configuration over Coding
  • No hardcoding

There were three main pieces of code to be built for the rules language: a Syntax Checker to validate the input, a Parser to transform the raw input into an executable structure, and an Execution Engine to process the rules language.


The Syntax Checker was actually the most challenging piece of code to write.  It would have been easy to just build a massive set of 'if'-based coding conditions, but that would have detracted from the goals mentioned earlier in this post and the coding approach I wanted to use.  The syntax checker evolved to perform four distinct functions, each one layered on top of the other as the rule passes through each area of focus.

The first step was to "standardise" the input from the user.  The easiest way to build repeatable, consistent syntax-checking functionality is to ensure consistency in the input.  Rather than push strict formatting rules back to the user, the system first tries to standardise the inputs.  I decided on a format whereby each distinct piece of the rule is separated by a single space and all unnecessary whitespace is removed.

// Step 1 - Standardise the Input

if event.~business   = ["abc",  "def",  "xyz"]

--> if event.~business = ["abc","def","xyz"]

if sum{event.value where-log e.~business =  @fine_dine}   >  200

--> if sum { event.value where-log e.~business = @fine_dine } > 200
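
A minimal sketch of such a standardisation pass is shown below.  The function name and exact rules are illustrative assumptions (the configuration snippets later in this post use PHP-style arrays, so PHP is used here); this is not the actual RTOM code.

<?php
// A possible standardisation pass: pad braces so they become distinct pieces,
// strip spaces around commas, then collapse all remaining whitespace runs.
function standardise_rule(string $raw): string
{
    $rule = preg_replace('/([{}])/', ' $1 ', $raw);   // "sum{x}" -> "sum { x }"
    $rule = preg_replace('/\s*,\s*/', ',', $rule);    // '"abc",  "def"' -> '"abc","def"'
    return trim(preg_replace('/\s+/', ' ', $rule));   // collapse multiple spaces
}

echo standardise_rule('if sum{event.value where-log e.~business =  @fine_dine}   >  200');
// --> if sum { event.value where-log e.~business = @fine_dine } > 200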

The second step was to “categorise” each piece of the rule within one of the standard allowed categories.

// Step 2 - Categorisation of function pieces

if event.~business = ["abc","def","xyz"]

--> if                        statement
    event.~business           event-meta
    =                         eval
    ["abc","def","xyz"]       array

if sum { event.value where-log e.~business = @fine_dine } > 200

--> if                        statement
    sum                       function
    {                         func-start
    event.value               event-data
    where-log                 qualifier
    e.~business               event-meta
    =                         eval
    @fine_dine                qlist
    }                         func-end
    >                         eval
    200                       number
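
One configuration-driven way to perform this categorisation is sketched below; the regex-to-category rules are assumptions derived from the examples above, not the actual RTOM category model.

<?php
// Illustrative categorisation driven by configuration rather than a chain of
// hard-coded 'if' conditions: each regex maps a rule piece to a category.
$category_rules = [
    '/^(if|and-if|get)$/'                     => 'statement',
    '/^(sum|count|average|minimum|maximum)$/' => 'function',
    '/^\{$/'                                  => 'func-start',
    '/^\}$/'                                  => 'func-end',
    '/^where-log$/'                           => 'qualifier',
    '/^(=|>|<|=>|><)$/'                       => 'eval',
    '/^@\w+$/'                                => 'qlist',
    '/^\[.*\]$/'                              => 'array',
    '/^e(vent)?\.~\w+$/'                      => 'event-meta',
    '/^e(vent)?\.\w+$/'                       => 'event-data',
    '/^\d+(\.\d+)?$/'                         => 'number',
];

function categorise(string $standardised_rule, array $rules): array
{
    $categorised = [];
    foreach (explode(' ', $standardised_rule) as $piece) {
        $category = 'unknown';                  // flagged later as a syntax error
        foreach ($rules as $pattern => $cat) {
            if (preg_match($pattern, $piece)) { $category = $cat; break; }
        }
        $categorised[] = [$piece, $category];
    }
    return $categorised;
}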

The third step involves "pattern matching" to check that the structure of the rule is correct.  This was looking to be one of the trickiest parts of the syntax checking until I remembered a design pattern I used back in 2003 to construct an automated XSLT generator.  It involves breaking the allowed structure into a set of simple definitions, defined by configuration data, that drive a recursive OO-based process for pattern checking and verification.  The actual code to do the pattern checking is less than 80 lines and relies on a pattern-checking data structure to tell it what is valid.  The premise of the processing is to check through all viable patterns until a match is found.  As this processing happens during data input by the user, a small sub-second time lag is of no issue.

The pattern checking definition looks as follows (note sample only);

$parse_patterns = [

'start' => [
 '#1' => ['statement', '__cond', '__and-if'],
 '#2' => ['statement.get', '__get'],
 '#3' => ['statement', '__cond'],
 '#6' => ...

'__and-if' => [
 '#1' => ['statement.and-if', '__cond', '__and-if'],
 '#2' => ['statement.and-if', '__cond']

'__cond' => [
 '#1' => ['__lhs', 'eval', '__rhs'],
 '#2' => ['__function', 'eval', '__rhs'],
 '#3' => ...

As you can see, the patterns are described using the "categories" from step 2.  The pattern-checking object takes the "start" input pattern and steps through each option in its valid pattern model until it either runs out of options or finds a valid pattern.  The "__" prefixed names are used to break the pattern model into "sub-patterns" which can be put together in different ways.  By doing this there is no need to try and define each unique instance of all possible patterns (an impossible task).

When a "sub-pattern" identifier is found, the pattern-checking object instantiates a new version of itself and passes the child object the "formula categories", "starting point" and "sub-pattern" it is to focus on.  Child objects can also instantiate other child objects.  Hence we have a recursive approach that requires no coding to specifically refer to formula categories and valid patterns.  The code's only job is to compare the formula categories with the valid data patterns it has.  This allows the rules language to be extended without significant coding effort.  Sub-patterns can also be called recursively from within themselves (e.g. "and-if").
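
A stripped-down sketch of the recursive checking idea is below (again illustrative PHP, not the RTOM code; the real checker also handles dot-qualified categories and backtracks across sub-pattern choices).

<?php
// Illustrative recursive pattern check: a rule is valid if one of the 'start'
// candidates consumes the full list of categories produced in step 2.
function rule_is_valid(array $categories, array $patterns): bool
{
    foreach ($patterns['start'] as $candidate) {
        if (try_candidate($categories, 0, $candidate, $patterns) === count($categories)) {
            return true;                      // every piece consumed -> structurally valid
        }
    }
    return false;
}

// Returns the position reached if the candidate matches from $pos, or null if not.
function try_candidate(array $categories, int $pos, array $candidate, array $patterns): ?int
{
    foreach ($candidate as $expected) {
        if (str_starts_with($expected, '__')) {
            // Sub-pattern: recurse into each of its candidates (first match wins here)
            $reached = null;
            foreach ($patterns[$expected] as $sub) {
                $reached = try_candidate($categories, $pos, $sub, $patterns);
                if ($reached !== null) break;
            }
            if ($reached === null) return null;
            $pos = $reached;
        } elseif (($categories[$pos] ?? null) === $expected) {
            $pos++;                           // literal category matched the next piece
        } else {
            return null;
        }
    }
    return $pos;
}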

I also built in the capability for the categories to be qualified using a dot “.” notation to provide flexibility for checking for specific category element types in particular locations in the rule syntax.

 '__function' => [
 '#1' => ...
 '#2' => ...
 '#3' => ['function.[sum,count,maximum,minimum,average]', 
          'func-start', ['event', 'event-data', 'node'], 
          'qualifier.where-log', '__log', 'func-end'],
 '#4' => ...

The fourth step is "context checking".  In this step the processing iterates through the categories and ensures the contents of each category are suitable for the processing to be performed.  For example: all elements in an array are the same type, all elements in an array are of the type required, a number is compared to a number, and so on.
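
A small illustration of this style of check, again driven by configuration data rather than code, might look as follows (the compatibility table is an assumption for illustration only):

<?php
// Illustrative context checks: array members must share a single type, and a
// left-hand operand category may only be compared with certain right-hand ones.
$comparable = [
    'event-meta' => ['array', 'qlist', 'keyword', 'value'],
    'event-data' => ['number', 'node', 'event-data'],
    'function'   => ['number', 'node'],
];

function array_is_homogeneous(array $members): bool
{
    return count(array_unique(array_map('gettype', $members))) <= 1;
}

function operands_comparable(string $lhs_category, string $rhs_category, array $comparable): bool
{
    return in_array($rhs_category, $comparable[$lhs_category] ?? [], true);
}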


Once you have a valid input rule format, the next step is to convert it to an "executable format".  The purpose of this step is to build a structured representation of the rule which can be easily interpreted and executed by the "rule execution engine".  Think of it as pseudo "code compilation".  Within the RTOM the rules input by the user are always stored in the exact format in which they were keyed (think source), together with a second version in execution format.  As the execution format is not actually "executable code" but rather formalised rules to drive a "generic execution engine", I opted to store the "code" in JSON format for ease of processing.  Taking our earlier examples, the executable formats would look as follows;

if event.~business = ["abc","def","xyz"]

--> { statement: "if",
      lhs: { type: "event-meta",
             operand: "event.~business" },
      eval: "=",
      rhs: { type: "array",
             operand: [ "abc", "def", "xyz" ] } }

if sum { event.value where-log e.~business = @fine_dine } > 200 

--> { statement: "if",
      lhs: { type: "function",
             function: { name: "sum",
                         input-type: "event-data",
                         where-log: [
                              { lhs: { type: "event-meta",
                                       operand: "e.~business" },
                                eval: "=",
                                rhs: { type: "qlist",
                                       operand: "@fine_dine" }
                              } ] } },
      eval: ">",
      rhs: { type: "number",
             operand: 200 } }
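
As a much-reduced illustration of this step, the sketch below parses the simple conditional case from the categorised pieces into the JSON-style structure above (illustrative PHP; the real parser also handles functions, qualifiers and nesting).

<?php
// Illustrative parse of a simple "if <lhs> <eval> <rhs>" rule from the
// [piece, category] pairs produced by the syntax checker.
function parse_simple_condition(array $categorised): array
{
    [[$stmt], [$lhs_piece, $lhs_cat], [$eval_piece], [$rhs_piece, $rhs_cat]] = $categorised;

    return [
        'statement' => $stmt,
        'lhs'  => ['type' => $lhs_cat, 'operand' => $lhs_piece],
        'eval' => $eval_piece,
        'rhs'  => ['type'    => $rhs_cat,
                   // array pieces are already JSON-like after standardisation
                   'operand' => $rhs_cat === 'array' ? json_decode($rhs_piece, true) : $rhs_piece],
    ];
}

$pieces = [['if', 'statement'], ['event.~business', 'event-meta'],
           ['=', 'eval'], ['["abc","def","xyz"]', 'array']];

// Stored as JSON alongside the original "source" text of the rule
echo json_encode(parse_simple_condition($pieces), JSON_PRETTY_PRINT);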


The rules execution engine (REE) is in itself a simple object that consists of a discrete set of functions (methods) that do one and only one thing.  This "single function" approach allowed me to build fairly bullet-proof code that can be "welded together" in various ways based on an input that is variable in structure (i.e. the Rule).  The REE takes the JSON structure and reads through it looking for specific element structures, handing them off to the aforementioned functions.  An object containing the relevant formatting functions and value is created for each operand.  The execution engine operates as follows (a minimal sketch of the dispatch idea follows the list);

  • Determine the rule style to be processed (Condition or Calculation) from the statement element
  • Resolve the value of each operand based on its type
  • For Quick List types, get the array of item references
  • Pre-process specific #keyword instructions
  • Resolve all #keyword values (e.g. #mon = 1, #thu = 4, #str_month() = 2016-12-01, etc)
  • Perform statement processing using functions on the operand objects
  • Return the result.  True/False for conditions and Value for calculations
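
A minimal sketch of the dispatch idea (illustrative PHP; the class, method and symbol handling are assumptions, and only the conditional path is shown):

<?php
// Illustrative single-function dispatch: each evaluation symbol maps to one
// small evaluator; the engine contains no large if/else decision tree.
class RuleExecutionEngine
{
    private array $evaluators;

    public function __construct()
    {
        $this->evaluators = [
            '='  => fn($l, $r) => is_array($r) ? in_array($l, $r, true) : $l == $r,
            '>'  => fn($l, $r) => $l > $r,
            '<'  => fn($l, $r) => $l < $r,
            '=>' => fn($l, $r) => $l >= $r,   // rules-language "greater than or equal"
        ];
    }

    // Conditions return true/false; a real engine also returns values for calculations.
    public function run(array $rule, array $context): bool
    {
        $lhs = $this->resolveOperand($rule['lhs'], $context);
        $rhs = $this->resolveOperand($rule['rhs'], $context);
        return ($this->evaluators[$rule['eval']])($lhs, $rhs);
    }

    private function resolveOperand(array $operand, array $context)
    {
        // The real engine resolves events, data-nodes, quick lists, #keywords, etc.
        return $operand['type'] === 'number'
            ? $operand['operand']
            : ($context[$operand['operand']] ?? $operand['operand']);
    }
}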


So in summary, the rules language and its associated processing have been quite a challenge to complete, and the version of the code I have now is v3.  While there is room for expanding the range of statement types and enhancing the existing facilities, I think the way I have approached the build has set a reasonably good foundation for further growth.  The language format I have settled on (for now) is a bit more "codified" in structure than my original ambition but is generally as readable as SQL.  As the RTOM application matures I will focus on taking the language to a more descriptive form and losing the "code" feel to it.  In total, the three parts of the solution (Syntax Checker, Parser, Execution Engine) have a footprint of just under 2000 lines of code, which is (I think) pretty good for the level of flexibility it provides in its current state.

One of the fun aspects of designing and building this type of solution, and approaching it in the manner described above, is that the true range of flexibility only became apparent when I started showing it to people who asked "can it do this…?", and realising it is flexible enough to accommodate the request even though I had not specifically designed for it.

In the next articles I will describe how the Offers are structured and the facilities the RTOM provides in this area.

Complete list of articles in series;



rtom #3 – Approach to Data

This is a follow-on from “RealTime Offer Management” (RTOM) post published on 19th November 2016 which can be found here.

The incentive for building the RTOM application came from looking at how to implement an Event Enabled Enterprise (E3) in a manner that is flexible, rule driven and easy to implement.  There are two what I will call "traditional" ways to deliver such a solution.  The first is "services" driven, which requires all events to deliver "event triggers" when executed and requires an ESB/SOA driven approach; the second is a "data driven" approach where all data is gathered in one place (data warehouse, big data) and evaluated for the required conditions.  Both of these approaches require significant degrees of architecture & development work, plus the implementation of coded rules or a generic rules engine which then feeds back into a campaign or offer management system.

Within the world of Data we traditionally look at entity models, objects and attributes using a "real world" view, but then in most (not all) cases end up storing the data in rigid tabular or document structures and trying to mimic real-world scenarios using complex code and queries to interrogate the data.  The NoSQL evolution has helped to soften this approach to a degree, but the way we think about data and how to store it is still very "tabular" and "relational" in nature.  This in turn creates its own issues, as the received data needs to be processed, stored and accumulated, with scheduled processes running on a near-continual basis seeking changes over a complete dataset to try and mimic realtime processing for triggered events.


In addition to this, it is usually near impossible to provide a solution to users which allows them to implement rules referring to objects/data with real-world context.  You almost always end up referring to columns in tables, building additional data-marts and playing with the data in Excel just to answer sometimes pretty simple questions, and thus lose the capability to "offer in realtime".

For RTOM the approach to data has been to look at a model where the data is a cloud of easily referenceable;

  • Events
  • Data Attributes
  • Meta Data
  • Higher Level “Data Nodes”

attached to a patron.


NOTE – For the remainder of this post I will use the term Patron to refer to Patrons, Guests, Customers of your business.

Events are basically any transaction or data set generated by an operational application when the patron interacts with your property/business.  These events can be active or passive.  Active events are transactions such as check-in, paying a bill, card-in, turning on a slot machine, a hand of cards, etc – essentially anything that is deliberately initiated by the patron.  Passive events are activities captured by virtue of the Patron's interactional context with your business/property; these are mostly, although not wholly, related to location- and environment-aware data streams from a mobile device, or interactions such as parking a vehicle, etc.

Data Attributes are the individual data items generated that describe the business context of an event.  (e.g. arrival-date, room-rate, F&B total, win/loss amount, etc)

Meta Data is the descriptive data around the context of an Event or type of Data-Node, plus the various filter, computational & offer rules to be evaluated on creation or change of an Event/Data-Node.

Data Nodes are higher-level data elements sourced from Event creation, Event Data Attributes and/or other Data Nodes (e.g. a switch to indicate "on-property" or "gaming-in-progress", accumulators to store counts, sums, etc) – persistent instances for Events/Data-Attributes at various levels of granularity.

The RTOM data design is based on an event driven model which uses “Data Change” agents to recognise events and send JSON formatted messages to the RTOM.  The data received by RTOM is considered “operational” and RTOM is not intended to be a Data Warehouse or mimic Big Data.  It is however an Analysis Engine that uses data changes to trigger rule processing for data nodes (switches, accumulators, etc) and for evaluating offer conditions based on events, data nodes, patron attributes, etc.

Data within RTOM is classified in two different layers;

The basic classification is foundational and applies to the Data Attributes of an Event;

  • Alphanumeric
  • Date
  • Time
  • Date-Time
  • Integer
  • Decimal
  • Money (Allows for automatic handling of exchange rates for monetary amounts)
  • Virtual (Allows for summary of raw data attributes)

The secondary layer of data classification is behavioural and applies to the Data-Nodes (a small sketch of how these behaviours might be applied follows the list);

  • Static (new values always overwrite the old value)
  • Evolve (history of old values is maintained)
  • Switch (yes/no boolean based on rulesets)
  • Collection (list of values)
  • Sum (total of integer, decimal or monetary data-attributes)
  • Count (count of events)
  • Minimum (minimum data-attribute value received for data-node)
  • Maximum (maximum data-attribute value received for data-node)
  • Average (average data-attribute value received for data-node)
  • Points (special data-node type used for patron point management)
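
As an illustration of how these behavioural classes might be applied when a value arrives for a Data-Node (illustrative PHP; the node structure is an assumption, not the RTOM schema):

<?php
// Illustrative Data-Node update: the node's behavioural class decides how a
// newly received value changes the stored value.
function apply_to_node(array $node, $incoming): array
{
    switch ($node['class']) {
        case 'Static':     $node['value'] = $incoming; break;                     // overwrite
        case 'Sum':        $node['value'] += $incoming; break;                    // accumulate
        case 'Count':      $node['value'] += 1; break;                            // count events
        case 'Minimum':    $node['value'] = min($node['value'], $incoming); break;
        case 'Maximum':    $node['value'] = max($node['value'], $incoming); break;
        case 'Switch':     $node['value'] = (bool) $incoming; break;              // ruleset result
        case 'Collection': $node['value'][] = $incoming; break;                   // list of values
    }
    return $node;
}

// e.g. a "property_stays_year" accumulator configured as a Count node
$node = ['name' => 'property_stays_year', 'class' => 'Count', 'value' => 4];
$node = apply_to_node($node, 1);   // a check_in event arrives -> value becomes 5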

RTOM deals with four main classes of data around which all of its rules and processing have been defined.  These are;

  • Events
  • Data-Nodes
  • Patron Preferences and Attributes
  • Offers

Events are received in a JSON format which has the following layout;

{ meta:     { ... },   // Standard information about the event
  data:     { ... },   // The payload area
  analysis: { ... } }  // Lower-level detail about the payload

RTOM enhances the meta data elements to attach additional criteria for evaluation and selection rules.  The Event “data” and “analysis” structures are stored in original JSON format with indexes composed of the meta data items.

By treating Events as JSON documents the RTOM application can store and refer to any data attributes required to be processed for offer evaluation.

A Data-Node is essentially a key-value pairing with enhanced indexing and context rules.  A data node holds a single value and is referenced by meta-data for evaluation and selection rules.  Data nodes are stored in a traditional relational DB table structure.

Using an extended key-value construct allows for the RTOM application to be easily configured to create, store and process any data-node that can be derived from Events and their data-attributes.

Patron Preferences and Attributes are a very simple key-value pairing allowing referenceable information about the Patron to be addressed in the rules.  This information is also stored in a traditional relational DB table structure.

Offers consist of two parts: the Offer Definition data and the Offers Made data.  For flexibility and processing efficiency the Offer Definitions are stored in a combination of relational DB table structures and JSON documents.  Offers Made data is "operational" in nature and more rigid in format, so it is stored in relational DB table structures.

Using the combination of Events stored in JSON, with Data-Nodes, Patron Preferences and Offers Made stored in relational table structures, and using a dot "." notation, it was possible to define a rules language that accesses data with the following references;

  • e (current event in scope)
  • <event_name>
  • e.~meta_data_element / <event_name>.~meta_data_element
  • e.data_attribute / <event_name>.data_attribute
  • n (current data node in scope)
  • <data node> value
  • n.~meta_data_element / <data_node>.~meta_data_element
  • p.item for patron preferences and attributes
  • o (current offer in scope)
  • <offer_name>
  • o.~meta_data_element / <offer_name>.~meta_data_element

For example:

A simple rule rewarding people who are gold or platinum members when checking in;

if check_in and p.loyalty_tier = ["gold", "platinum"]

A simple rule looking for someone who is staying at a hotel every 5th time since the start of the year.  Assume a data-node accumulator has been configured to store stays;

if property_stays_year adjust-by-fulfill = 5

The adjust-by-fulfill "qualifier" adjusts the stored value by 
the value it was set to when the last offer was made

A simple rule looking to reward patrons who play between 9am and 5pm on Tuesdays and Wednesdays and execute at least 100 plays.  Assume a data-node counter has been configured to store plays by day;

if card_in.~dow = [#tue, #wed] 
   and check_in.~time >< ["09:00", "17:00"] 
   and day_plays => 100

~dow is a "day of week" meta data attribute

A more complex rule to make an offer based on someone

  1. Visiting a minimum of five fine-dining restaurants
  2. At least twice per venue
  3. Spending a minimum of $200 at each sitting
  4. Within the last 10 months

the rule would be;

if count 
   { f&b_bill_pay // Event is payment of bill
       where e.~date => #str_month(-10) // Last 10 months
             e.~business = @fine_dining // Fine Dining business list
             e.total_bill => 200 // Bill is 200 or more
       group-by [e.~business] 
       group-drop count < 2 } // Only keep groups of 2 or more entries
   => 5 // 5 or more restaurants

@fine_dining refers to a Quick-List element which can be
configured to hold any list of information items
#str_month(x) is a keyword producing a start-of-month date X months ago

So in summary, by focusing purely on the domain model being addressed, and by taking into consideration how the data will be used with a flexible, user-orientated rules syntax, the data model for RTOM has evolved to be;

  • A combination of JSON, enhanced key-value pair and relational DB models
  • Fully capable of storing any data provided by the operational systems
  • Fully capable of storing any data derived from the events/data-attributes
  • Is Patron Centric
  • Is driven by meta-data defining actions to be executed based on creation/update of events and data-nodes

In the next article I will delve more deeply into the rules language, how it has evolved and some compromises I needed to make on the starting vision.

Complete list of articles in series;


rtom #2 – Design Overview

This is a follow-on from “RealTime Offer Management” (RTOM) post published on 19th November 2016 which can be found here.

Before diving into the details of what I ended up building, I want to document the overall “grand plan” and then focus on individual elements as these posts progress.  The approach I took to the RTOM Application was based on the set of Architecture Principles and Functional Design Guidelines outlined below.

The Architecture Principles were;
  • Cloud based
  • Suitable for a SaaS pricing model
  • The database should be “Client Data Agnostic”
  • Definition of Events that drive the application must be 100% configurable
  • The application should be  “Data Reactive”
  • Mobile first for patron/client offer engagement
  • Integration with source and target systems in real-time
  • Offer evaluation, presentation and redemption should be “realtime”

Client Data Agnostic refers to a principle whereby the information stored by a user of RTOM has no impact on the structure of the RTOM database; the contents of the data, while available to the rules and analysis engines for assessment, are essentially "not of structural interest" to the RTOM application.

Data Reactive is a term I use to refer to a design principle where an update to an element of data causes the application to react and "do something".  Essentially, the data structure has meta-data attached to each data element which describes and initiates the relevant processing should an item of this data-element type be created or updated.
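
As a small illustration of the idea (PHP sketch; the element names and handlers are assumptions, not part of the RTOM design):

<?php
// Illustrative "Data Reactive" wiring: meta-data attached to a data element
// describes the processing to run whenever that element is created or updated.
$element_meta = [
    'property_stays_year' => [
        'type'      => 'accumulator',
        'on_update' => [
            fn($value) => print("re-evaluate stay-based offers (stays = $value)\n"),
            fn($value) => print("refresh the 'frequent-guest' switch\n"),
        ],
    ],
];

function element_updated(string $name, $value, array $meta): void
{
    // The application reacts to the data change; nothing polls or runs on a schedule.
    foreach ($meta[$name]['on_update'] ?? [] as $react) {
        $react($value);
    }
}

element_updated('property_stays_year', 5, $element_meta);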

The Functional Design Guidelines were;
  • Ease of Integration
  • The “Rules Language” should be close to natural language expressions
  • The rules language should allow references to events, data, and meta-data
  • Offers should be presented in a “Closed Loop” to track uptake and prevent fraud
  • Offers should be managed via a mobile platform
  • Possible to attach realtime “budget management” and tollgates to both offers and patrons/clients (comping limits, etc)
  • Realtime redemption management
  • Offer Redemption Throttles limiting number of redemptions allowed by time periods
  • Offer lifetime, availability and expiry controls
  • Offers can be customised to patron/client profiles and preferences
  • Offer values/discounts can be data/event driven and derived
  • Offers can be "fixed rewards", "choice of reward options" or "rewards based on the result of a game on the patron's mobile device"

Rules Language refers to the scripting language the RTOM uses to allow its users to define conditions for filtered data accumulators (nodes & switches), qualifying offers, targeting reward sets to specific profiles, defining reward discount %’s & amounts, etc.  It is essentially the glue that binds the processing together.

Closed Loop processing basically means that all actions taken against offers feed back to the RTOM for evaluation, verification and redemption / rejection.

Conceptually the RTOM Application looks as follows;


RTOM – Events feed into the Evaluation and Offer Management Engine


RTOM – The offers are managed via a mobile app

So far I have managed to keep to the architecture principles and functional design guidelines without having to compromise to any great extent.  The two areas that have caused headaches and much rework have been the "rules language" and the "database design".

Coming up with a rules language is far more complex and challenging than I had originally anticipated.  I originally wanted to get to a point where the language would be close to natural language expressions rather than computer focused “if x = y then z” syntax.  This is easier to envisage than actually implement.  In essence I ended up building a “Domain Specific Language” or DSL.  I am currently on the third iteration and think I have it cracked (famous last words as they say).  The key lessons I learned to get to this point are;

  • Raw English (and I presume German, French, Hungarian, etc) is a crazy language in which to try to articulate processing requirements.  Regardless of how "non-programmatic" you wish to make an approach, we always resort to structured sentences and conditions when describing or evaluating a situation.
  • Work within the confines of the Domain in which the new language is going to be used.  In this respect the RTOM domain has a fixed set of object types (Event, Data Element, Offer, Patron, etc) and a defined set of meta-data elements about each type.
  • Define a set of verb based functions that allow greater flexibility.  For example use keywords like “match, sum, count, average” to hide complexity and aid readability
  • Do not over-design the solution
  • Build a structure that is extensible and flexible (within reason) before trying to make it pretty
  • Do not design first and test later.  I wrote hundreds of offer-related conditions and expressions before and during the design process to visually and grammatically test out expression sequences.  In essence, I wrote the examples and then built the code to fit them, rather than trying to build a fully generic piece of code and then seeing if the tests worked – Test Driven Development principles really worked well here
  • Do not be afraid to start with a clean sheet rather than trying to rescue a lost-cause

I ended up writing a syntax validator, parser and processor.  While they are at their third iteration at present I know there is still room for improvement and will probably evolve them to v4 and v5 within six months.

I will discuss the data concepts & database approach, and how they evolved, in the next post.


Real Time Offer Management

A year or so ago, for family reasons, I needed to head back to Europe from a fantastic job at Marina Bay Sands in Singapore.  The experience of working in an Integrated Resort (Hotel, Casino, Retail, Conference Centre) is one that is still the high point of 30 years of working in IT.  Heading back to Budapest and wondering what to do with my time, I decided to embark on a private project I called "Realtime Offer Management", hoping to turn it into a startup.  While the startup part has yet to materialise, the experience of building out a PoC for the concept has been enlightening and highly instructive on how to approach these types of applications.  Version 0.1 of the PoC is finished and I am now building v0.2 to capture the feedback received over the year and also refine the rough edges of v0.1.

This is the first of a series of blogs about the application I am building and my thoughts on this area of focus for Hotels, Casinos and virtually any organisation that interacts with guests, patrons or customers and wants to provide rewards as an immediate response to interactions with elements of their organisation.

So first, to clarify what "Realtime Offer Management" is: in essence it is the ability to act immediately on any interaction of a customer with your property/business.  Examples would be;

  • Checking in/out of a hotel
  • Paying for a meal
  • Carding in/out of a slot machine/table in a casino
  • A spin on the slots, hand of cards
  • Parking your car, or using the valet service
  • Walking into the property/business
  • Being in a specific place
  • etc

Each of these activities is in essence an "event" and carries a set of information (actually quite small) that is of significant relevance when plotting customer engagement with your business.  By having the ability to capture these "events" and register the individual pieces of information against a known entity, you can build a set of rules to react to customer engagement and trigger relevant offers based on history, preferences and location.

To put it in context, simple examples would be;

  • Reward a person for the n-th visit to the Hotel, Restaurant
  • Reward a person for visiting X restaurants and spending an average of Y in Z days
  • Incentivise a person to remain on the property based on events such as good/bad luck in the casino
  • Interact with people while they are on property or even within their gaming environment
  • Incentivise a person to return to your property
  • +++

The Enterprise Architecture approach to this challenge has pretty much always been in one of the following formats;

  1. Build out custom code to monitor for specific events
  2. Use “big data” to massage the operational deluge of data and then write code to work off the results
  3. Use a combination of BPM, Event Management, ESB software (such as Tibco, etc) to build event management suites
  4. Use commercial rules engines and build out complex algorithms to meet individual requirements

None of the above is a wrong approach, but each involves "building code" and "analysing operational data", is pretty much "invasive" to the operational landscape, and is fairly expensive to implement.  They also have these drawbacks;

  1. Highly customised and requires coding to make it work for each individual event
  2. Needs significant investment to make it work fast and also falls prey to time lag due to ETL, analysis and other constructs of a data analysis environment
  3. Again, whilst touted as code free, this approach is highly IT dependent and rarely free of some (or lots) of coding for decision-making nodes
  4. Same as 3.

I decided to take a step back and try to look at the problem/opportunity in a completely different manner.  The first step was to rethink how we view data about a person within the organisation; the second was to look at how to structure rules focused on the specific problem domain that is "Hospitality and Gaming" within a "Realtime Environment".  The third, and most important, was to try to arrive at a position where marketing could configure Offers without resorting to pseudo-coding or IT assistance.

Over the next few articles, prior to the release of v0.2 of the PoC, I intend to lay out my thought process for the approach to this area and, in a sense, document why I approached the problem domain in the manner I have.  If you are interested in what I am doing I would ask you to please engage with me via the comments section; all opinions are welcome and will help me in the launch of the next version of the solution.

As a teaser for the next instalment of this series, what would be your reaction if I said that in an integrated resort (Hotel, Casino, Retail, etc) you need to monitor fewer than 10 transactions, with a total combined set of less than 70 pieces of information, to drive an efficient realtime offer management solution, and you do not need a data warehouse or to spend $m on a solution?




Software Build v Buy

During a recent discussion I was asked if I was a supporter of a "buy" or "build" philosophy for software solutions.  Whilst I think this is an unusual question, as I feel you should evaluate each software component on its merits, it is still a valid question and is getting more relevant as technology progresses.  I actually think there are four main options, with nuances within the "buy" option, that need to be considered.

These are;

  • Re-use / Refactor existing solutions
  • Rent (SaaS)
  • Buy
  • Build

Every organisation should have a set of "Architecture Principles/Guidelines" that describe both the landscape into which the solution components must fit and the decision-making process behind what to consider when replacing (or adding) software components.

It is also important to note that the buy option can have three distinct flavours;

  1. Configuration only (no code changes allowed, pretty much the same as rent)
  2. Minimal code changes allowed, generally less than 20% of the code base and mostly additions rather than modifications
  3. Product is a base that you will heavily modify to be your own after purchase

Note – Option 3 actually fits more into the Build category as you will be going your own route and not taking any upgrades from the vendor as your version will be significantly different within a short timeframe.

Option 2 is interesting in that unless you work with the vendor to make your enhancements in a controlled and valid manner you can also make upgradability very difficult and expensive.

So when should you build?  There are many reasons put forward for building your own software but in reality only one should be used as the catalyst for embarking on this route.

If the software component is a Business Differentiator then a Build approach MAY be a valid option.  A "business differentiator" means that the software solution gives you a competitive edge in your marketplace; in essence, the solution is your IP and an accountable asset of the organisation.

If you decide to “build” then you should treat the software component as a real software “product” and make sure there is a PLM (product lifecycle management) process in place for it. Owning a piece of software that is critical to your business can turn out to be very costly if not approached in the right manner. It needs to have a multi-year budget, roadmap, comprehensive support & management program and most of all a “product owner” whose responsibility is to make sure it is not “over engineered” and does not become a black box or bottleneck that people are afraid to touch.  Also be pragmatic and initiate regular reviews of the product’s technologies and viable commercial alternatives that may come to market.

If you do decide to build then the technology choice is key to ensuring your Internal Product is built in a manner that best meets your organisation's needs.  Do not immediately decide to build on a particular technology base just because you currently have the skills in-house.  Conversely, do not build on the latest and greatest new fad because your techies want to try something new.  Build on a technology base that makes architectural sense for the needs of the organisation and its future growth, expansion and usage models.  By all means be cutting edge if necessary, but beware of being "bleeding edge".

It is also a good exercise to sit down and review your organisation's software applications/components on a regular basis and look at the lifecycle management of each piece.  As part of this process you should also engage in a theoretical exercise: how would you run the business using just the "rent" option, i.e. "cloud"?  As you discount cloud options for components, work back through the buy, re-use and build options until you get to a sensible architecture that meets the needs of the business and justifies the costs/benefits from all perspectives.  This will give you a realistic roadmap for a pragmatic approach as to which parts of your architecture landscape fit into which category of software ownership.


Modernization – It's actually not that difficult

Within today's environment, where companies have spent tens of millions of dollars encapsulating business logic and rules into code, it is becoming increasingly difficult to sanction a full systems replacement due to the risk and impact it would potentially have on an organisation.  To address this issue many commercial organisations are advocating "Transformation" (essentially code translation) led initiatives.  From practical experience this approach is both flawed and dangerous.

  • It is Flawed in that business rules encapsulated in language-A will simply be transformed into business rules encapsulated in language-B.  A far greater issue is that translating procedural languages to Java, C#, etc means pushing a procedural model into an OO model without any of the benefits, ending up with an OO model based on the program structure of the original system rather than a real-world entity structure based on the business model of what is being processed.  You also end up with a system that is incomprehensible to the original team and a complete mess to any new developers in the target language
  • It is Dangerous in that it presents a false sense of hope to the business. I have yet to see a solid use-case where Transformation delivered on the anticipated cost and maintenance savings promised at program inception.

Rather than selling "Transformation", the goal should be to deliver "Modernization".  Modernization is a path to;

  • Reducing Cost on implementing new initiatives (Products, Distribution channels, etc)
  • Releasing business rules from hard-coded logic models into configuration models
  • Protecting core value locked into the existing application stack
  • Decommissioning of some legacy functions and integration of modern components into the overall solution
  • Presenting a modern UI and UX to the user and client community

The architectural principles this is based on can be referred to as “Separation of Interests” and “Application Layers”.  The target is to provide a roadmap to clients which;

  • Delivers agility in meeting market needs
  • Mandates a rapid rollout of modernised UI and UX
  • Provides protection of “perceived value” from spend to date
  • Enables flexibility in choice of technology components that meet business needs
  • Results in a modern integrated solution stack
  • Protects the business from the hassles of technology change
  • Involves the business fully in the delivery of the business focused initiatives
  • Transitions to the desired state over a maximum 2-3 year timeframe

A solid modernisation offering must consist of the following components;

  • Demonstrated and proven knowledge of the business vertical being addressed
  • A fully functional and easily configurable UI / UX component that enables rapid transition to a modern experience for the user community
  • Deep knowledge of the incumbent legacy application(s)
  • A proven set of components to provide rapid configuration and management capabilities in key areas that will provide the best positive impact to the business
  • A solid roadmap that does not fundamentally change for each client
  • A core team of “experts” who truly understand the challenges from business, application, architecture and technology perspectives
  • A toolset that allows rapid legacy application analysis, rule extraction and code refactoring

