Open Orbit Diagnostic Methodology
Data Model

- A Project ideally has one core objective (though possibly more than one) which defines success, e.g. reducing cost or improving customer experience
- It can be tagged with industry vertical(s) and process horizontal(s) to enable knowledge content matching and searching
- It uses Permissions to control data confidentiality and visibility across multiple users
- It goes through Stages starting with Scoping and finishing with Benefits Tracking, and has defined Start and End dates
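To make the data model concrete, here is a minimal sketch of a Project entity in Python. It is illustrative only; the class, field and stage names are assumptions made for this document, not Open Orbit's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional


class Stage(Enum):
    # Only the first and last stages are named in this document; the rest are omitted.
    SCOPING = "Scoping"
    BENEFITS_TRACKING = "Benefits Tracking"


@dataclass
class Project:
    name: str
    objectives: List[str]                                          # ideally one core objective, possibly more
    industry_verticals: List[str] = field(default_factory=list)    # tags for knowledge content matching
    process_horizontals: List[str] = field(default_factory=list)   # tags for knowledge content matching
    permissions: List[str] = field(default_factory=list)           # controls confidentiality and visibility
    stage: Stage = Stage.SCOPING
    start_date: Optional[date] = None
    end_date: Optional[date] = None


# Hypothetical usage.
project = Project(
    name="Claims cycle-time reduction",
    objectives=["Improve customer experience"],
    industry_verticals=["Insurance"],
    process_horizontals=["Claims processing"],
)
```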

| Metric Type | Example |
| --- | --- |
| Wait Time | Queue wait time, Lead time |
| Touch Time | Transaction touch time, call handle time |
| Cost | Cost per operator per hour |
| Accuracy | % First Time Right |
| Capacity | Scheduled capacity, Present capacity |
| Volume | Calls per hour, Transactions per day |
| Revenue | Conversion, Ticket size |
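For readers who prefer code to tables, the same taxonomy can be expressed as a small enumeration. The identifier names below are assumptions, not Open Orbit's internal names.

```python
from enum import Enum

class MetricType(Enum):
    WAIT_TIME = "Wait Time"    # e.g. queue wait time, lead time
    TOUCH_TIME = "Touch Time"  # e.g. transaction touch time, call handle time
    COST = "Cost"              # e.g. cost per operator per hour
    ACCURACY = "Accuracy"      # e.g. % First Time Right
    CAPACITY = "Capacity"      # e.g. scheduled capacity, present capacity
    VOLUME = "Volume"          # e.g. calls per hour, transactions per day
    REVENUE = "Revenue"        # e.g. conversion, ticket size
```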
Open Orbit Concept Overview


This guidance reaches the user in three ways:
- Guidance and suggestions from the Mentored Thinking algorithm
- "Read more" links from the Workbench to the Wiki
- Links within the Wiki pages (also based on the same algorithms)

The underlying algorithm works along a chain:
- Given a Process and an Objective, it recommends Metrics (*)
- Given a Metric, it highlights possible Causes
- Given a Cause, it suggests potential Solutions
The suggestions cover the multiple Metric Types, and the software decodes the non-MECE (mutually exclusive and collectively exhaustive) list of options for the project objective into a MECE set of Metric Types. This is one of the fundamental building blocks of successful projects and of Open Orbit's ability to guide the user to success: knowing which metrics are relevant in which circumstances. Open Orbit encourages the use of this list to ensure the optimum level of divergent thinking while building the data collection plan (looking where experience says one may find the root cause, while not wasting time on areas that experience says are not insightful), followed by relatively convergent thinking later when it comes to Causes and Solutions.
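A minimal sketch of this decoding step is shown below, assuming a simple lookup table from free-form objective options to Metric Types. The mapping itself is invented for illustration and is not Open Orbit's actual rule set.

```python
from typing import Dict, List, Set

# Hypothetical mapping from (overlapping, non-MECE) objective options
# to the MECE set of Metric Types.
OBJECTIVE_TO_METRIC_TYPES: Dict[str, Set[str]] = {
    "Reduce cost": {"Cost", "Touch Time"},
    "Improve customer experience": {"Wait Time", "Accuracy"},
    "Grow revenue": {"Revenue", "Volume"},
}

def decode_objectives(objectives: List[str]) -> Set[str]:
    """Collapse a possibly overlapping list of objectives into a MECE set of Metric Types."""
    metric_types: Set[str] = set()
    for objective in objectives:
        metric_types |= OBJECTIVE_TO_METRIC_TYPES.get(objective, set())
    return metric_types

print(decode_objectives(["Reduce cost", "Improve customer experience"]))
# -> {'Cost', 'Touch Time', 'Wait Time', 'Accuracy'} (set order may vary)
```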
This chain is the core of the diagnostic method, and is accessible in the workbench (with “suggested” metrics, causes and solutions) as well as via the “read more” links from the workbench to the wiki pages. On the workbench, one can traverse from metric to cause to solution, and at each stage look up the relevant wiki page. On the wiki site, one can traverse in either direction between all three.
The business process may have a gap in performance (a symptom) for a given metric, with the baseline being different from the target. Each such gap in a metric may be explained by one or more root causes, each of which in turn may have one or more potential solutions. The solutions finally get tracked by a metric, very often the same as the metric which helped expose the symptom in the first place. But on occasion, a different metric may also need to be tracked for the solution to be sustained. Thus, metrics, causes and solutions work in a circular chain of interdependence.
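The chain can be pictured as two lookups joined end to end. The sketch below is a simplified, assumed structure (the suggestion data is invented for illustration) showing how one might traverse from a metric with a performance gap to candidate causes and on to candidate solutions.

```python
from typing import Dict, List

# Hypothetical suggestion data; in Open Orbit this comes from the knowledge base.
CAUSES_FOR_METRIC: Dict[str, List[str]] = {
    "Queue wait time": ["Unbalanced staffing", "Rework loops"],
}
SOLUTIONS_FOR_CAUSE: Dict[str, List[str]] = {
    "Unbalanced staffing": ["Reschedule shifts to match demand"],
    "Rework loops": ["Add an upstream quality check"],
}

def suggest(metric: str) -> Dict[str, List[str]]:
    """For a metric with a gap, list possible causes and candidate solutions for each."""
    return {
        cause: SOLUTIONS_FOR_CAUSE.get(cause, [])
        for cause in CAUSES_FOR_METRIC.get(metric, [])
    }

print(suggest("Queue wait time"))
```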
(*) There is another step prior to Processes and Objectives which has its own "suggestions" feature, highlighting key processes for a given industry; it is to be used only if needed. However, most projects already have their processes identified.

Visual Aids
Diagnostic Reports
Diagnostic Map
A graphical representation of the process steps and connections, the Diagnostic Map helps you identify where the waste and the cause of the problem lie. The report pulls together the key issues, their causes and data points for the entire process. It enables you to identify where and why a process isn't working, and where to focus your solution effort.
Fishbone
This visual helps you validate whether all causes are covered under the five factors.
Solution Checklist
Use the 'Solution Checklist' feature in Open Orbit to gain insight into how a problem or solution has been managed and rated by another user in your team. It is recommended that you record the actions taken and rate each action as well, building a repository that future users can draw on as a reference for upcoming projects.
Data Representation
Data Collection Report
Based on entries in the data values screen, this report pulls together all the processes, steps and data values in any given project into one spreadsheet-like structure, allowing for rapid data entry, visual comparison of gaps, and hence prioritization. One cannot create new rows in this spreadsheet, since that should be done on the main workbench with all its guidance features. However, targets and actuals (before the project, as a baseline) are entered in this module.
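As a rough mental model, the report behaves like a pivot of the project's data values into one grid: one row per process step and metric, with columns for baseline and target. The structure and field names in the sketch below are assumptions made for illustration only.

```python
from typing import Dict, List, Optional, Tuple

# Hypothetical raw entries from the data values screen:
# (process step, metric, kind, value), where kind is "baseline" or "target".
entries: List[Tuple[str, str, str, float]] = [
    ("Receive claim", "Queue wait time (min)", "baseline", 45.0),
    ("Receive claim", "Queue wait time (min)", "target", 15.0),
    ("Assess claim", "% First Time Right", "baseline", 82.0),
    ("Assess claim", "% First Time Right", "target", 95.0),
]

# Pivot into one row per (step, metric) with baseline/target columns.
rows: Dict[Tuple[str, str], Dict[str, Optional[float]]] = {}
for step, metric, kind, value in entries:
    rows.setdefault((step, metric), {"baseline": None, "target": None})[kind] = value

for (step, metric), cols in rows.items():
    print(f"{step:15} | {metric:25} | baseline={cols['baseline']} | target={cols['target']}")
```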
Performance Gap Analysis
Based on the data entered, this is a combined graph and spreadsheet which pulls together all entries for a given Metric Type across the various stages of your project. It is a better way to visually analyse, and where required annotate or comment on, where the biggest gaps are: Before you start the project, and After against the Target set, once you start entering data during or after the project is completed.
There are three subsets of this report which you can generate:
- Baseline vs. Target
- Achieved vs. Target
- Before vs. After
These reports give you the "Before vs. After" comparison and more. With them, one can easily answer questions like "Where is the biggest bottleneck on Wait Time?", "Where is the biggest source of defects?" or "Where are we relative to where we started?". It is recommended that this report be used to inform status reporting during the Data Collection phase. Similarly, "Achieved vs. Target" helps complete the picture of the project and its impact on key metrics by bringing together the target and the after picture; it is recommended that this be used to inform status reporting during the Benefits Tracking phase.
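A minimal sketch of the three comparisons, assuming each entry carries baseline, target and after values for one Metric Type (the data and field names are invented for illustration):

```python
# Hypothetical entries for one metric type ("Wait Time") across process steps;
# values and field names are invented for illustration.
entries = [
    {"step": "Receive claim", "baseline": 45.0, "target": 15.0, "after": 18.0},
    {"step": "Assess claim",  "baseline": 30.0, "target": 20.0, "after": 19.0},
]

def gaps(rows, left, right):
    """Return (step, left minus right) per process step, largest gap first."""
    ranked = [(row["step"], row[left] - row[right]) for row in rows]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

print("Baseline vs. Target:", gaps(entries, "baseline", "target"))  # biggest starting gap
print("Achieved vs. Target:", gaps(entries, "after", "target"))     # remaining gap vs. target
print("Before vs. After:", gaps(entries, "baseline", "after"))      # improvement delivered
```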