The art of using Process as a path to DevOps

I have spent the last week pulling together a Cookbook for our Architecture team.  It’s not that people don’t know what they need to do, but sometimes it helps to have our “process” laid out in clear, concise terms & pictures.

And to be honest, this got me thinking: as a delivery organisation we always use “process” as a whipping rod to drive what our users need to do, and at times we rail at them for not being willing to change how they approach their daily tasks to simplify the existing processes.

One of the things people commented on in my previous job was my ability to “process-ise” any task.  I do tend to sit back, look at how we do our job, and look for ways of introducing consistency and repeatability into our design & delivery mechanisms: to ensure people know what to expect from us as a delivery team, that we know what to deliver to meet those expectations, and most importantly that we are willing to adapt our processes to suit the business.

One of the key wishes of our global CIO is to move to a DevOps delivery model, and this got me thinking about how.  DevOps is the new Agile (or so I have been told), and to be honest an organisation that can react fast to issues and opportunities is in a much better position to win market share than one which cannot.  DevOps is the result of a consistent and reliable set of processes which ensure that all the necessary investigation, analysis, design, development, testing and deployment activities have taken place, leading to a no-fail upgrade to production.

My personal belief is that DevOps is not something you should focus on achieving; it is a situation that is delivered through mastering a consistent and repeatable set of delivery processes.  Instead of focusing on a DevOps result, focus on ensuring there is a reliable, repeatable, trusted and, most importantly, efficient set of processes in place at your organisation to convert business needs and fixes into production-ready solutions.  Once these are in place, focus on refining them to get to the cycle time and cost model that best suits your organisation.

Posted in Agile, Delivery, DevOps | 6 Comments

The “App” Driven Enterprise

If there is anything that the iOS and Android Smart Phone era has taught us, it is that people like easy-to-use, single-purpose, well-thought-out apps.  The ability to use a clean, fresh interface to manage one particular area of our social, work or recreation activities is a major transition from how software has been developed in the past.  This ability to go and find the one function you need to solve a specific problem has shifted perception away from complex all-purpose pieces of software towards a “Do one thing – but do it well” model.

We are currently in the middle of an Application Asset study within our organisation.  Recently a colleague from a vendor asked me how many pieces of software we had; when I told him the number he was aghast, and he followed up with the standard question of how we planned to reduce the number of systems.

I have to be honest and confess that traditionally I would have agreed with this assessment and looked for ways to reduce our software footprint, but in today’s world I feel this is the wrong way of thinking.  My current mindset is focused on the rule of “simplifying the technology base”, not specifically “reducing the number of applications”.  This may at first appear self-defeating.  However, if I have one technology platform or framework on which I can build out the application landscape to suit our business needs, with a mix of reliable single-function commercial applications and in-house developed apps using a consistent technology, interaction, data and communication base, then I have a means of being very agile in addressing problems & opportunities within the organisation.

The Smart Phone world, combined with the emergence of API-driven SaaS applications and deployment platforms (e.g. Salesforce, Azure, Google App Engine and newcomers such as GraphiteGTC), has shown how to build an eco-system of individual parts that can be easily joined up into applications that meet people’s needs in an efficient manner.

Most enterprise-focused applications, or “Monoliths”, are not just deemed legacy from a technology standpoint; they are also legacy from a design and deployment perspective.  Even today, companies that build enterprise-class software still focus on building a single system.  Yes, SAP and others build components, but within those components there is still a high degree of functionality, which leads to higher levels of complexity and cost of ownership.

Moving forward, I believe we need a software development model where simplicity, communication and interaction are at the core of any application being developed, even in the enterprise world.  Just as OO was a paradigm shift in that it encapsulated behaviour within objects, applications should be viewed as a means of encapsulating “Function or Process”.

By approaching software development in this manner and building out an eco-system that conforms to the application map below, we can deliver a significant range of benefits to the business and drastically reduce the complexity of our IT landscape; and by complexity I am not referring to the “Number of Applications” but to the complexity within the applications themselves.

[Figure: Application map]

It will be a while before the old approaches disappear (and they may never go fully), but our Smart Phone apps are showing the way, and as these are the ones we use every day by choice, business software developers will have no option but to follow.

Posted in Delivery, Software Development | Leave a comment

The Architecture Cube

[Figure: The Architecture Cube]

One of the challenges we face in providing IT direction and solutions is ensuring that we have considered all (or at least all of the important) aspects of a request in order to provide a valid and solid recommended solution.  By simplifying down the “Areas of Consideration” when building out solutions, we can work in a consistent manner across large teams.  I think there are six main areas we need to address;

  • Core Entity/Object Model (What are we dealing with)
  • Process Model (What needs to happen)
  • Service & Integration Model (What communication channels can be used)
  • Data Model (What is the information to be managed)
  • Application & Functional Model (What tools can be used)
  • Event Model (What gets generated that is valuable)

Rather than building checklists for people to follow, I decided to try something more visual and tactile, and as we had six Areas of Consideration it seemed logical to build them around a cube.  With the help of a couple of the graphics people in our HQ we produced the Architecture Cube.

The concept of the six faces is as follows;

  • Base of the Cube – Core Entity/Object Model
  • Opposing sides – Process & Data Models
  • Opposing sides – Service & Application Models
  • Top of the Cube – Event Model

The rules for use;

  • All solutions should start at the Base – Focus on the Real World Problem Domain
  • Transition through evaluation of Process, Service, Data, Applications in that Order – Figure out the What, before the How and the Where
  • Finally look at which Events are produced – Determine the value produced by the execution of the solution

It’s a simple idea and may not work for all situations, but hopefully it will produce some good results for us in 2015.  The template for the cube is below;

[Figure: Architecture Cube template]

Posted in Uncategorized | 3 Comments

Writing Documents – Part 1

This is the first of a series of posts focusing on creating documents and presentations.  It was triggered by reviewing several documents of late that needed a fair amount of remediation, and by various people asking me how I approach building documents & presentations.

There is no secret or magic formula to creating a good document; the most basic thing you need is “Practice”, combined with constructive feedback from your peers, managers and document recipients.  Over the years I have been fortunate enough to produce probably more documents and presentations than is considered healthy :).  More importantly, I have had a number of excellent peers, managers and customers who have been willing to offer advice and teach me the tricks of creating good documents.  Over this time I have developed my own style, along with a few tricks of my own to help in creating documents.

I deliberately use the word “creating” rather than “writing”.  Whether it is a status report, solution design, business requirement or novel, the process of producing a document is a creative one.  Regardless of the content & purpose, you are essentially telling a story, and like all stories it needs to be a good one in order to keep your audience’s attention.  More importantly, in the business world you are being paid to write the document, so the “story” must justify and reflect the cost of production.

So how do you start creating a document?

Forget about templates and styling for the moment (we will deal with these in a later post) and think about how to begin.  Everyone has their own way of approaching a document or presentation.  I use the following graphic as my basis for approaching every document I create.

[Figure: Document creation approach]

Coming from a technical background, I have always found the following techniques work best for me.  Everyone has their own approach and style, but most people will use one or more of the following;

Output Driven

When people think of outputs in relation to documents or presentations, they often think only of the document itself as the output.  Depending on the purpose of the document, the real outputs will be contained within the content; the document is only the container.  So, for example;

  • A Solution Design would have outputs such as “the design”, “assumptions”, “required resources”, “next steps”, etc.
  • A feasibility study would have outputs such as “options”, “cost benefit analysis”, “recommendations”, etc.
  • A review document would have outputs such as “current issues”, “conclusions”, etc.

Knowing what the end-state needs to be helps you focus on the structure and message to be conveyed by the document using the available inputs.

Story Boards

As I mentioned earlier, every document must tell a story, and the concept of story-boarding is not only for film producers.  Take time to sit in front of a whiteboard and draw out the “flow” of the document.  This is a particularly good technique for creating presentations.  Before you even sit at a screen and start up Word, PowerPoint, Pages or Keynote, stand back and envisage the flow of the story.  Get a feel for how it will flow and how many chapters / slides are necessary to convey the message in a precise, unambiguous manner.

Sketch out the flow and “talk yourself through it”.  Working at this high level allows you to focus on the big picture and visualize how the final product will look and how the message will be conveyed.

Top Down Approach

This is an approach normally associated with computer programming; I have adapted the “Jackson Structured Programming” methodology (showing my age here) to work for me in creating documents, presentations & plans.  Virtually anything that needs to be organized can be handled using this approach.

The value of this approach is that it allows you to start at a high level and work down to the details without any one detail assuming a higher degree of importance than the others.  It lets you ignore areas you are not sure of at that point in time and focus on the bigger picture and story flow.  Using it in combination with a story board is a great way to get the story worked out and to build out the chapters, sections, slides, etc.
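
Coming at it from the code angle, here is a rough sketch (in Python, purely illustrative) of what a top-down skeleton might look like before any prose exists; the placeholders let you defer the details you are unsure of without losing the big picture;

```python
# A document skeleton laid down top-down: structure first, details later.
# Section names are invented for illustration.
outline = {
    "Solution Design": {
        "Executive Summary": "TBD",
        "The Design": {
            "Architecture Overview": "TBD",
            "Component Detail": "TBD",  # unsure yet; detail comes later
        },
        "Assumptions": "TBD",
        "Next Steps": "TBD",
    }
}

def print_outline(node: dict, depth: int = 0) -> None:
    """Render the skeleton so the story flow can be reviewed before writing."""
    for title, content in node.items():
        print("  " * depth + title)
        if isinstance(content, dict):
            print_outline(content, depth + 1)

print_outline(outline)
```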

In the next post I will examine influences, structure and content.

Posted in Documentation | 1 Comment

Delivery, Delivery, Delivery

What is it about a significant number of organisations that impairs their ability to deliver software to production in a timely manner and with a high degree of quality?

Most delivery issues can be traced back to a lack of decision-making capability, too many changes being released at once, and failures in existing (or poor) organization & processes.

The best means of building a reliable software delivery organization is to put in place the pieces necessary to deliver one change to production in a timely, secure and organized manner.  If you can figure out how to do it for one simple change, and the process used ticks all the necessary boxes for decision making, control and speed, then you have a roadmap for scaling up to deliver more functional changes & fixes at the same speed and with the same reliability.

Consider the following when defining a delivery process;

  • Define requirements as “simple unambiguous testable propositions” (a sketch of this follows the list)
  • Design solutions that can be divided into simple, single-function components
  • Build the software in a manner that is robust, fault-tolerant and efficient; focus on test-driven development approaches
  • Deliver the software to a robust QA / UAT environment that is an accurate representation of production
  • Use automated testing tools where possible
  • Use automated delivery tools where possible
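
As a sketch of the first and third bullets above (the domain, names and figures are invented), a requirement stated as a “simple unambiguous testable proposition” can be captured directly as an automated test;

```python
# Proposition: "A premium quote for a driver under 25 includes a 15% loading."
# The proposition becomes the test; the code is then written to satisfy it.
import unittest

def quote_premium(base_premium: float, driver_age: int) -> float:
    """Apply a 15% young-driver loading to the base premium."""
    loading = 0.15 if driver_age < 25 else 0.0
    return round(base_premium * (1 + loading), 2)

class TestYoungDriverLoading(unittest.TestCase):
    def test_under_25_gets_15_percent_loading(self):
        self.assertEqual(quote_premium(100.00, 24), 115.00)

    def test_25_and_over_gets_no_loading(self):
        self.assertEqual(quote_premium(100.00, 25), 100.00)

if __name__ == "__main__":
    unittest.main()
```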

Having a rapid, process-controlled delivery model allows for;

  • Delivering small
  • Delivering often
  • Changes in decisions
  • Fast response to critical business issues
  • Agility in the analyse-design-build model

Focus on building a robust production line for delivering change to production systems.  Invest in the tooling and environments needed to ensure consistency and quality.  Make it easy and reliable to deliver change.  Once this is achieved, press the accelerator and speed up to the point that best suits your organization.

Posted in Delivery | Leave a comment

Migrations should be Easy!

Migrating data from a legacy application to a new application is probably one of the most fraught activities companies undertake.  Most companies view it as the biggest risk in moving to a more modern system, and a significant number will drag legacy data structures into the new application, making it behave, look and feel “just like the old one”.  Just why this happens is open to debate, but in many cases it comes down to;

  • Lack of a strategy outlining primary reasons for bringing data over
  • Lack of a well thought out process structure
  • Fear of leaving something important behind
  • Uncertainty in what has to be moved

In its simplest form, a migration comprises;

  • Data Structure Assessment
  • Data Quality Assessment
  • Data Cleanup
  • Extraction
  • Transformation
  • Data Load
  • Reporting
  • Historical Records

Basic Principles

Before even getting into the details of the data, the first deliverable should be a comprehensive report outlining the strategy, a detailed set of processes, a RACI matrix and a broad set of timelines.  By defining in detail the end-to-end process to be executed, everyone involved can see what the target is (and is not), who is responsible for what, and when the activities are to commence / finish.  The surprising thing is that most companies either fail to produce such a document as a first step, or re-invent the approach from scratch each time a migration comes around.

Yes, there will always be items that are specific to your migration, but in principle every migration can be broken into a discrete set of steps that vary little from project to project.  The list of activities above can be considered level 1; each of these can be broken down into 20 level 2 activities, and each of those should again be divided into 5-10 level 3 activities.  The purpose is to define the migration process at such a low level that no individual activity is seen as a “black hole” or “difficult to envisage”.

In addition, the migration should be viewed and managed as an “automated” and “repeatable” set of activities.  The toolsets used during migration should allow the insertion of data fixes, data moves, etc. as SQL or command scripts which are executed whenever the process is run.  This allows the team to build up a “Migration Instruction Path Repository” that can be signed off in UAT and then replayed in production without people having to “remember” to perform X at a specific point in time.
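
As a sketch of what such a repository runner might look like (the directory layout and the SQLite engine below are assumptions, not a prescription), the key point is that the same ordered set of scripts is replayed identically in every environment;

```python
# Replays every numbered SQL script in the repository, in file-name order,
# so UAT and production execute exactly the same instruction path.
import sqlite3  # stand-in engine; a real project would use its own DB driver
from pathlib import Path

def run_instruction_path(db_path: str, repo_dir: str) -> None:
    """Execute every *.sql script in the repository, in sorted order."""
    conn = sqlite3.connect(db_path)
    try:
        for script in sorted(Path(repo_dir).glob("*.sql")):
            print(f"applying {script.name}")
            conn.executescript(script.read_text())
        conn.commit()
    finally:
        conn.close()

# The same call is made in UAT and, once signed off, in production:
# run_instruction_path("staging.db", "migration_instructions/")
```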

Data Structure Assessment

Once the approach and process are signed off, the detailed analysis can start.  Many projects I have witnessed begin by looking at the source data, figuring out what is there, and then using this to dictate what is dragged into the target system.  This is a dangerous practice, as source data will then drive target system requirements and force changes to the target system to cater for source system processing models.  The right approach is to define the target system data requirements, define a staging database, and then take only what is necessary from the source to satisfy those needs.  I have come across projects where people trawl through screens looking for the data users currently see / interact with in order to define “what is required” – this leads to unnecessary levels of change in the target system to accommodate old-style data structures and perceived user needs.

A staging database should always be used as the transition point for the data – you should never go directly from source to target.  Ideally the staging database is a de-normalised entity model of the business domain and should reflect neither the source nor the target physical database structures.  Its purpose is to capture the “Critical Data” needed to build the required data model in the target environment.

Data Quality & Cleanup

Data quality and cleanup are activities that belong firmly to the source data system.  Before any attempt is made to extract data, you should make sure it is valid; any inconsistencies or data errors should be fixed inside the source database.  At a minimum, any quality issues that cannot be fixed must be logged and audited for final verification.  As a rule, fixes to data quality should not be attempted in the staging database or in the target database, particularly fixes to financial data: what was in the source must be reflected in the target, otherwise a migration is difficult to verify and sign off.  It is the responsibility of the source data owners to provide as clean a data set as possible for extraction to the staging database formats.

Data quality should be verified using a data quality toolset which allows quality conditions to be defined and executed automatically.  This allows the rapid creation of data quality conditions in a non-technical format, which are then automatically translated into standard SQL and executed.

A similar type of toolset can be used to build up data cleaning rules.

It is also advisable to have a set of automated quality checks which you can run over the staging and target databases.
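
A minimal sketch of the idea (the tables and rules below are invented); each quality condition is plain data that is translated into a standard SQL count and executed automatically;

```python
import sqlite3

# Hypothetical rule format: (rule name, table, condition that flags bad rows).
QUALITY_RULES = [
    ("client must have a name",        "client", "name IS NULL OR TRIM(name) = ''"),
    ("birth date must not be future",  "client", "date_of_birth > DATE('now')"),
    ("policy must reference a client", "policy", "client_id NOT IN (SELECT id FROM client)"),
]

def run_quality_checks(db_path: str) -> list:
    """Translate each rule into a COUNT(*) query and report violating rows."""
    failures = []
    with sqlite3.connect(db_path) as conn:
        for name, table, condition in QUALITY_RULES:
            count = conn.execute(
                f"SELECT COUNT(*) FROM {table} WHERE {condition}"
            ).fetchone()[0]
            if count:
                failures.append((name, count))
    return failures
```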

Extraction and Transformation

The extraction process will involve a degree of transformation, as it must make the source data fit the staging database model.  Transformation of individual data items to new formats (e.g. dates) or values (e.g. codes) can be done either inside the extraction process or as a separate transformation activity focused on the staging database.

The data provided to the staging database should be as clean as possible.  As mentioned before, no fixes to bad data should be made inside the staging database, it is a transition point only.

Only extract data for entities that will be loaded into the target database.  It is the responsibility of the extraction process to filter out anything that is not required (cancelled policies, old & unused clients, etc.).
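
As an illustration (the field names, date format and filter rule are invented), a transformation step maps each source record onto the staging entity model and drops what is not required;

```python
from datetime import datetime

# Hypothetical mapping of legacy status codes to the staging vocabulary.
STATUS_MAP = {"A": "ACTIVE", "L": "LAPSED", "C": "CANCELLED"}

def transform_row(source_row: dict):
    """Map one source record to the staging model; return None to filter it out."""
    if source_row["STATUS"] == "C":  # cancelled policies are not migrated
        return None
    return {
        "policy_ref": source_row["POL_NO"].strip(),
        "status": STATUS_MAP[source_row["STATUS"]],
        # legacy dates arrive as DDMMYYYY strings; staging stores ISO 8601
        "start_date": datetime.strptime(source_row["START_DT"], "%d%m%Y").date().isoformat(),
    }
```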

Data Load

Loading data from the staging database is the process where the target data is created and the processing nuances of the target system are applied, allowing this data to be processed going forward.  No transformation or data change activities should be part of the load process.

Reporting

Apart from testing the migrated data to prove the transfer has worked, the single most important task in any migration project is to prove that the data in the target database is a like-for-like record of the data extracted from the source database.  Reporting in this context should be performed as follows;

  • Define a comprehensive set of value summaries (hash totals, counts, financial, etc) to be tracked
  • Run the reports to produce these summaries over the source database, staging database and target database
  • Compare the source and staging summaries.  If there are discrepancies, either find and correct the issues or document the reason for their existence
  • Compare the staging and target summaries.  If there are discrepancies, either find and correct the issues or document the reason for their existence
  • Compare the source and target summaries.  Any differences should be accounted for in the documentation produced by the previous comparisons.
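
A sketch of the comparison step, assuming each database’s summaries have already been produced as a simple set of named totals;

```python
# Hypothetical summary shape: {"client_count": ..., "premium_total": ..., ...}
def compare_summaries(name_a: str, a: dict, name_b: str, b: dict) -> list:
    """Return a discrepancy line for every summary value that differs."""
    return [
        f"{key}: {name_a}={a[key]} {name_b}={b[key]}"
        for key in a
        if a[key] != b[key]
    ]

# source_vs_staging = compare_summaries("source", src, "staging", stg)
# staging_vs_target = compare_summaries("staging", stg, "target", tgt)
# Every line returned must be corrected or documented before sign-off.
```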

It is important to conduct the comparison in this way, as a direct comparison of source & target can be difficult to reconcile if major data structure transformations have occurred or any “fixing” has been executed as part of the extraction process.

As with the data quality check, having a good reporting toolset that allows easy definition of summary totals, automated build of reporting assets, automated execution in a repeatable manner and result deviation logging is essential to making this activity easy to manage.

History

In every migration there is the vexing question of how much history to take and how much history to build into the target database.  History is expensive to create, so at a minimum only historical data that is legally mandated and/or critical to the management / processing of the migrated data should be considered.

Building out a full set of history inside the target database, in a manner that resembles full processing from day one of a data entity’s life, is extremely expensive, difficult and very rarely worth the effort.  From experience, this degree of history generation (where it is even possible) can more than double a migration project’s cost and timeline.

However, users will need some access to historical events for normal day-to-day business needs.  This can be addressed in a number of ways.

  1. Keeping the old system available for enquiry-only access is the easiest solution.  However, this approach can incur significant costs and management effort, particularly for commercial software licences and the retention of out-of-date hardware, and is not generally recommended.
  2. Putting all the old data in the target database in “static data tables” and building enquiry facilities is a favoured approach.  However, this also has its disadvantages: the new system’s DB becomes cluttered with data it does not use, and the temptation is always there to drag this “source” data deeper into the new system’s engines, eroding its separation from the source system heritage.  An evolution of this approach is to dump the old data into a separate database, retaining its structure, and build a simple enquiry facility.
  3. A more flexible approach is to gather the data into entity-focused XML documents and build a simple intranet site to display the information in the manner required by the users (a sketch follows this list).  This approach makes it easier to display the data in different formats suitable to individual users/departments, as the data is held in a flexible form.  It also makes it easier to print or export full copies of data entities as required.
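
A rough sketch of the third option (the entity fields are invented), using nothing more than the standard library to bundle a client and its history into one XML document;

```python
import xml.etree.ElementTree as ET

def client_history_to_xml(client: dict, events: list) -> ET.Element:
    """Bundle one client and its historical events into a single XML document."""
    root = ET.Element("client", id=str(client["id"]))
    ET.SubElement(root, "name").text = client["name"]
    history = ET.SubElement(root, "history")
    for event in events:
        item = ET.SubElement(history, "event", date=event["date"])
        item.text = event["description"]
    return root

# doc = client_history_to_xml({"id": 42, "name": "A. Client"},
#                             [{"date": "2011-03-01", "description": "Policy issued"}])
# ET.ElementTree(doc).write("client_42.xml")
```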

and Finally

Every migration will have its own nuances, variations and special needs; however, the overall approach and execution differ little from project to project.  Approaching the task with the following in mind will make migrations easier to plan, manage and execute;

  • Document the strategy and define the low level processes to be executed so that everyone knows what, why, when and who
  • Automate as much as possible and mandate that “assured repeatability” be enforced in all activities
  • Use toolsets to help ensure repeatability, documentation & decision management
  • Do not let the source data model dictate the target state
  • Clean up at the source
  • Be frugal: import to the target database only what is critical to the success of the business

Posted in Migration | Leave a comment

Being Agile in a Waterfall World (Part 2)

… continued from the post “Being Agile in a Waterfall World (Part 1)”

So how do we work in an agile manner in our waterfall world?

While people like the adaptive nature and speed to delivery of an agile approach to software deployment, the one area that raises real concern in big projects is that the iterative approach will invariably lead to “Scope Creep” and attitudes of;

  • “I do not know what I want but will tell you when I see it”
  • OR – “This is my chance to get everything I always wanted”
  • OR – Continuously “tweaking” the requirement and never getting to a conclusion

This is a valid concern where users are not trained/encouraged to focus on what is really needed and/or the IT delivery staff are not empowered to push back.  It can and does lead to all sorts of problems with scope, deadlines, costs, etc.

What we really need is an approach that combines the accountability, scope control and predictability of a waterfall model with the agility and adaptability of the agile model.  Let’s call this approach the Agile-Waterfall and define a set of principles that we would like to govern our projects.

Accurate project scoping is absolutely critical for determining commercial & delivery agreements between the customer and the supplier/IT dept., so our Agile-Waterfall philosophy should emphasise the following;

  1. Define the minimum necessary requirements to satisfy the business need upfront (our “must haves”)
  2. Capture the remaining requirements into a “would be nice to have” grouping
  3. Organise the project to deliver based on business architecture with realistic functional area sequencing
  4. Promote the ethic of requesting & building just enough to work effectively
  5. Measure and report on progress by deliverables which are working components, not document artifacts
  6. Utilise Agile principles within the delivery of these components but keep the waterfall control elements to monitor spend of both time & money and also scope creep
  7. Build team-units that have all the requisite skills to take ownership of delivery of individual components – from requirement to working software
  8. Build an environment that encourages people getting invested in the project
  9. Involve the users actively in the execution of delivery

The following diagram is a representation of how a project could be structured to allow us to be Agile within the confines of our disciplined scope;

[Figure: Project structure]

We still perform a “Discovery” exercise to gather the vision of the project.  The level of detail gathered should be enough to Define Scope and in a Fixed Price Contract define Time and Cost.

The project is then organized around logical blocks of work that are executed in a sequence that makes business sense.  Foundational elements, such as major DB modifications and Product Definition & Parameterization extensions, are done first to allow the business components to progress without too much interdependency.  Foundational components also include big requirements that span many business areas, although these should be kept to a minimum.  High-level approaches & guidelines need to be defined to allow the teams to work in a consistent manner.

The requirements are divided into blocks of work that focus on specific business areas and are sized to allow delivery within a maximum targeted sprint time frame, i.e. 2, 3 or 4 weeks each.  The reporting of progress to the project sponsors should be at the Business Component Completion level and no lower.

Each business component should have a maximum man-day effort and cost allowance to achieve its target.  It is the responsibility of the team assigned AND the requirement owner(s) to ensure these limits are not exceeded.  Standard change control procedures can of course be implemented to cater for those situations where more time is necessary due to under-estimation or unexpected complexity.  The teams must build the bare minimum necessary to meet the “Needed” requirement; excess features & additional requirements can and should be pushed to subsequent phases.  This encourages users to participate actively, as they know the delivery organization is only saying “Not Now” rather than “No” to their requests.  The business components roll up into the deliverable “Phase of Work” for the formal SIT & UAT activities that are carried out once all the components are completed.  There should be a Test Support & QA activity running in parallel to sign off on the individual business components as they are completed.

Each Business Component can be viewed as follows;

[Figure: Business component structure]

Each requirement goes through an iterative process: first deliver what is necessary, then, within reason/scope, deliver what is additional.  Each business component can be comprised of one or many requirements relating to a particular area.  A business component’s team will deliver;

  • Working and tested software to the project phase
  • Sufficient documentation to justify the deliverable and its approval by the user community
  • Optionally, excess feature requests & requirements returned to the master list for inclusion in later phases

Each business component team must comprise people with the following skills: business analysts, developers, testers and users.  One member of the delivery team will assume the lead role for monitoring and reporting on progress and issues.

Consider the Principles contained in the Agile Manifesto below (also see http://agilemanifesto.org/principles.html)

  1. The highest priority is to satisfy the customer through early and continuous delivery of valuable software
  2. Welcome changing requirements, even late in development
  3. Deliver working software frequently
  4. Business Users and Development Teams must work together daily throughout the project
  5. Build projects around motivated individuals with the proper support and environment and trust them to get the job done
  6. The most effective and efficient way of conveying information to and within a development team is face-to-face conversation
  7. Working software is the primary measure of progress
  8. Agile processes promote a sustainable development cycle
  9. Continuous attention to technical excellence and good design enhances quality
  10. Simplicity – the art of maximising the amount of work not done
  11. The best architectures, requirements and designs emerge from self organising teams
  12. At regular intervals the team reflects on how to become more effective and changes accordingly

By rethinking how a project is organized and enforcing the scoping boundaries (for budget, effort and schedule control), I believe it is possible to practice each of the above principles within a Business Component delivery structure and work in an Agile manner inside a Waterfall-focused project environment.

Some additional thoughts on various approaches and practices that can assist us with moving to Agile in a Waterfall world;

  • Get the business community socially invested and involved early, and keep them involved throughout the project – To get the maximum benefit of using Agile principles in a project, it is advisable to set up and staff a Model Office at project commencement.  A Model Office is composed of a group of the best users, who have an intimate understanding of the company and are empowered to make decisions for the betterment of the organisation.  The Model Office is the liaison between IT delivery and the user community; from the IT perspective, they are the user community.  The Model Office manager should in theory report to the CEO or COO.  I will deal with the concept, value and structure of a Model Office in a later post.
  • The Model Office should be comprised of people whom the company finds difficult to extract from day-to-day operations due to their importance.  If you are going to spend $Xm on a project, surely you want the best people to see that the money is directed at the right requirements.
  • Promote the following basic design and coding principles.  These are development-language agnostic; I will discuss each of them in more detail in a later post;

Test Driven Development (TDD) & Iterative Coding – Write your program/object in steps, making sure each part works before moving on to the next, rather than knocking out 2000 lines of code and then trying to compile and test the lot.  This is a fundamental part of TDD.

Iterative Development – Get the basic requirements finished first for assessment, then add in any additional features or requirements which are not beyond the original agreed scope.

Single Function Design – provides discrete blocks of code, programs, objects or methods that do “One Thing”.  These blocks of code either work or they don’t; they are easy to test before you move on to the next one.  You then combine these pre-tested units into more complex structures, where all you are testing is the combination, not the core logic.  This leads to faster deliveries, higher quality code and a higher degree of re-usability.
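
A small illustration (names and rates are invented): each block does one thing and is tested on its own, so the combination only has to test the wiring;

```python
def parse_amount(raw: str) -> float:
    """One thing: turn '1,000' into 1000.0."""
    return float(raw.strip().replace(",", ""))

def apply_tax(amount: float, rate: float = 0.20) -> float:
    """One thing: add tax at the given rate."""
    return amount * (1 + rate)

def format_currency(amount: float) -> str:
    """One thing: render a number as a currency string."""
    return f"{amount:,.2f}"

def invoice_total(raw_amount: str) -> str:
    """The combination: only the wiring of pre-tested units needs testing here."""
    return format_currency(apply_tax(parse_amount(raw_amount)))

assert invoice_total("1,000") == "1,200.00"
```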

DRY (Don’t Repeat Yourself) – a coding model that promotes the avoidance of duplication.  It is about expressing information/logic in ONE PLACE, which leads to better consistency in code that is easier to maintain.  Be aware, though, that using the DRY approach does mean you need to refactor code as you progress, to remove duplication created by the introduction of new items.
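
A tiny illustration (the discount rule is invented): the rule is expressed in one place, so a change to it touches one line rather than every call site;

```python
MEMBER_DISCOUNT = 0.10  # the ONE PLACE the member-discount rule lives

def discounted_price(price: float, is_member: bool) -> float:
    """Every caller reuses the single definition of the rule."""
    return price * (1 - MEMBER_DISCOUNT) if is_member else price

checkout_total = discounted_price(80.0, is_member=True)   # 72.0
quote_total = discounted_price(80.0, is_member=False)     # 80.0
```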

Data Driven Engine – a programming technique where you build code whose behaviour is driven by the data used to configure it and the data passed to it for processing.  Where possible, building this type of solution allows greater flexibility for reuse and more solid code that does not have business logic embedded in it: when change happens, you change the control data, not the coded logic.  This works best with the “Single Function Design” approach.
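
A minimal sketch (the fee schedule is invented): the bands live in configuration data, so a pricing change is a data change, not a code change;

```python
# Configuration data: (minimum order value, shipping fee applied).
FEE_SCHEDULE = [
    (1000.0, 0.00),
    (100.0, 4.99),
    (0.0, 9.99),
]

def shipping_fee(order_value: float, schedule=FEE_SCHEDULE) -> float:
    """Walk the configured bands; the code never mentions a specific price point."""
    for minimum, fee in schedule:
        if order_value >= minimum:
            return fee
    raise ValueError("schedule must cover all order values")

assert shipping_fee(50.0) == 9.99      # falls into the lowest band
assert shipping_fee(1500.0) == 0.00    # free-shipping band, driven purely by data
```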

Defensive Programming – a technique where you check whether something will fail BEFORE executing it and decide on corrective action or controlled degradation of the application (e.g. checking whether divisor C is 0 before executing A = B / C, or whether program/object XYZ exists before using it).  Do not confuse defensive programming with checking the result of an operation AFTER execution; that is “Reactive Programming”.
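
To make the distinction concrete, a small sketch contrasting the defensive check-before with the reactive handle-after;

```python
def safe_ratio(b: float, c: float):
    """Defensive: check the divisor BEFORE dividing; degrade in a controlled way."""
    if c == 0:          # the check happens before the operation
        return None     # controlled degradation; the caller decides what to do next
    return b / c

def reactive_ratio(b: float, c: float):
    """Reactive (for contrast): attempt the operation, then handle the failure."""
    try:
        return b / c
    except ZeroDivisionError:
        return None
```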

Structured Programming Techniques – the correct design of code before you sit down and write it, and the proper use of structured elements within the coding model, to allow for ease of maintenance and understandability.

And finally, for any project – test iteratively and test often.  From a testing perspective, let the users see and play with what has been built as soon as possible.  It does not have to be a full-blown UAT-style exercise, but at least get confirmation that the deliverable is OK in the eyes of those who requested it.  Finding issues as close to the point of development as possible reduces the cost of changes / bug-fixing.  If possible, introduce a regime of “continuous testing”.

Posted in Uncategorized | Leave a comment