HealthCare.gov – Who is at fault?

Credits: Arsalan Khan, Ed Heironimus, Francis Wisseh, Maryam Moussavi and Udhayakumar Parerikkal


This research paper analyzes the Centers for Medicare & Medicaid Services (CMS)'s HealthCare.gov project in detail and makes recommendations on what could have been done differently. The project had 55 federal contractors working on it, but this research paper will concentrate on only three. These federal contractors are:

  • CGI Federal, which was developing and implementing the Federally-Funded Exchange (FFE). The estimated value of the contract was $93.7 million, and it was awarded in December 2011.
  • Optum/QSSI, which was developing the Data Services Hub that would verify citizenship, immigration status and tax information. The estimated value of the contract was $144.6 million, and it was awarded in January 2012. Optum/QSSI was also developing the Enterprise Identity Management (EIDM) system that would provide enterprise-wide credentials and single sign-on capability. The estimated value of that contract was almost $110 million, and it was awarded in June 2012.
  • Terremark Worldwide, Inc. (acquired by Verizon), which was going to help increase CMS' Platform-as-a-Service (PaaS) capabilities in the CMS cloud-computing environment. The total estimated value of the contract was $55.4 million, and multiple task orders were issued until the summer of 2013.

The following tables summarize this research paper:

Table 1: Key Inputs

Key Inputs of the Project

CMS:

·      Affordable Care Act

·      States

·      People/Team

CGI Federal:

·      FFE RFP

·      Requirements

Optum/QSSI:

·      Data Services Hub and EIDM RFP

Verizon:

·      PaaS RFP

Table 2: Key Components

Key Components of the Project

CMS:

·      Agile Methodology

·      Project/System Integrator

·      Parallel “stacking” of phases

CGI Federal:

·      CMMI Level 5 Maturity

·      Agile Methodology

Optum/QSSI:

·      CMMI Level 3 Maturity

·      Agile Methodology

·      Data Services Hub Documents

·      EIDM Documents

Verizon:

·      Architecture diagram

·      Security

Table 3: Quality of Project Management – Qualitative View

Qualitative View of Project Management

CMS:

·      Government vs. Private industry projects

·      Test plans and test reports

CGI Federal:

·      Requirement changes

·      Lessons learned from a state exchange

Optum/QSSI:

·      Requirement changes

·      Previous benchmarking and audits used

Verizon:

·      Issue escalation

·      Poor coordination

Table 4: Quality of Project Management – Quantitative View

Quantitative View of Project Management

CMS:

·      HHS Enterprise Life Cycle

CGI Federal:

·      Highly metrics-driven

Optum/QSSI:

·      Use of charts

Verizon:

·      Delayed processing of orders

Table 5: Project Management Successes and Failures

Key Successes and Failures of Project Management

CMS:

·      Pressure from White House

·      Lack of business processes

·      Miscalculated costs

·      Various technical options were not considered

CGI Federal:

·      FFE

·      Changing Requirements

·      Testing

Optum/QSSI:

·      Data Services Hub and EIDM

·      Buggy Data Services Hub and EIDM

Verizon:

·      Financial Success

·      Hardware Outage

·      Project Management

Table 6: Lessons Learned

Lessons Learned on Project Management Best Practices

·      Roles and Responsibilities Matrix

·      Full-Time Project Manager

·      Business Processes and Governance

·      Requirements Management

·      Communications and Sharing

·      Metrics and Measurements

·      Methodologies and Documentation

Table 7: Recommendations

Team Recommendations

Project Design:

·      Project Manager

·      Cross-Functional Team

Project Implementation:

·      Team Satisfaction Survey

·      Pilot Approach


The Affordable Care Act (ACA) is the nation’s healthcare reform law enacted on March 23rd, 2010. Under the law, a new “Patient’s Bill of Rights” gives the American people the stability and flexibility they need to make informed choices about their health. There were numerous reasons why healthcare reform was critically needed in the United States, including:

  • High health insurance rates and lack of coverage for many: In 2013 the Congressional Budget Office (CBO) estimated that 57 million Americans under the age of 65 were uninsured, representing roughly one out of five in that population.
  • Unsustainable healthcare spending: Healthcare spending represented 17.9% of the nation’s Gross Domestic Product (GDP) in 2011.
  • Lack of emphasis on prevention: 75% of healthcare dollars are spent treating preventable diseases, yet only 3 cents of each healthcare dollar go toward prevention.
  • Healthcare disparities: Healthcare inequalities related to income and access to coverage exist across demographic and racial lines.

On October 1st, 2013, HealthCare.gov went live as part of the technical implementation of the ACA reform to help Americans buy healthcare insurance; however, the release was a monumental failure. The causes and contributing factors that led to the issues with this project are explored in detail. Focus is placed on CMS’ capabilities from a Project Integration and Project Management perspective. Additionally, our analysis assesses the role of the major Federal contractors in the project. Examples are included to show how contributing factors such as scope creep, schedule constraints and lack of adequate testing led to a website that provided an inadequate customer experience.

This research paper provides a descriptive review and analysis of the HealthCare.gov project. During our analysis, we used Kathy Schwalbe’s (2014) Three-Sphere Model for Systems Management, which entails the organizational, business and technological perspectives of Project Management. We utilize these perspectives to determine what went wrong with the project from the points of view of the Federal Government, CGI Federal (contractor), United Healthcare QSSI (contractor) and Verizon (contractor). Furthermore, building on these unique perspectives, we analyzed the stated objectives and real implications of the project, the quality of the project management from qualitative and quantitative perspectives, key success and failure factors, key lessons learned, project management best practices, and recommendations for what might have been done differently.


In order to set the context of this research paper, we have to understand what CMS does, why the website was needed, and why the on-time completion of the website was a priority. Additionally, we will look at the key inputs, components and deliverables of this project.

3.1 About CMS

According to the agency’s websites, CMS is one of the operational divisions of the Department of Health and Human Services (HHS). CMS is responsible for overseeing the Medicare and Medicaid programs, the Health Insurance Marketplace and related quality assurance activities. It has 10 regional offices, with its headquarters in Maryland. The CMS Administrator is nominated by the US President and confirmed by the Senate. Figure 1 (Priorities, 2014) below shows CMS’s 2013 budget, which accounted for 22% of the entire US Federal Government budget.


Figure 1: Federal Budget Distribution


3.2 Why was CMS given the project?

CMS was the natural choice for the Obama administration. As discussed earlier, the purpose of the ACA was to enable all Americans to buy health insurance, and CMS is the organization that provides healthcare coverage to the elderly, the disabled, and those who are not financially capable of buying healthcare. The HealthCare.gov project began under a sub-agency of HHS called the Center for Consumer Information and Insurance Oversight (CCIIO), whose charter was to support a successful rollout of the ACA. In 2011, the Secretary of HHS, Kathleen Sebelius, citing efficiency gains, stated that the CCIIO would be moved under CMS (Reichard, John. “Sebelius Shuffles Insurance Oversight Office into CMS, Shifts CLASS Act to Administration on Aging.” Washington Health Policy Week in Review, January 10, 2011). The Obama administration insisted this was a way to control IT costs and leverage economies of scale through existing investments and infrastructure. The Republican opposition believed this was another example of “…resources being diverted from seniors’ health care to be used to advance the Democrats’ new government-run health care entitlement” (Reichard, 2011).

3.3 About the Federal Contractors and their relationship with the CMS

3.3.1 CGI Federal

CGI Group Inc., a Canadian company, acquired American Management Systems (AMS) in 2004 to enter the U.S. Federal Government market. Because an American Federal contractor was being acquired by a foreign entity, a “firewall” was created so that CGI Federal (formerly AMS) could continue to work on Federal contracts. Under this “firewall,” CGI Federal, a wholly owned subsidiary of CGI Group Inc., would not share Federal client information with its parent. The acquisition proved very lucrative for CGI Group Inc.: CGI Federal became one of its most profitable business units, largely due to the Healthcare and Human Services division. This division provides IT services in the areas of provider-based services, public health surveillance, portal integration, security, enterprise architecture, service-oriented architecture, business intelligence and applications development.

In September 2007, CGI Federal was one of 16 Federal contractors awarded the Enterprise Systems Development (ESD) Indefinite Delivery Indefinite Quantity (IDIQ) contract by CMS. The purpose of the ESD IDIQ was to support CMS’ Integrated IT Investment & Systems Life Cycle Framework and various IT modernization programs and initiatives. Although no task orders were issued under this contract at the time, it kept the door open for future task orders to the 16 contractors.

The HealthCare.gov project was competitively bid under the ESD IDIQ. The bid produced four finalists, and from those four contractors CGI Federal was selected in September 2011 as the awardee based on “best value”.

3.3.2 Optum/QSSI

Founded in 1997, Quality Software Services Inc. (QSSI) is an established Capability Maturity Model Integration (CMMI) Level 3 organization with a proven track record of delivering a broad range of solutions, with expertise in Health IT, Software Engineering, and Security & Privacy. Based in Columbia, MD, QSSI is a subsidiary of UnitedHealth’s Optum division, which acquired it in 2012.

Optum/QSSI is privately held with about 400 employees and collaborates with both the public and private sectors to maximize performance and create sustainable value for its customers. In its 15-year existence, the company has cultivated a process-driven, client-focused method of IT solution development, which has solidified its reputation as a capable IT partner in both the federal and commercial marketplace. In the federal landscape, Optum/QSSI has established itself as an industry leader in the field of Health IT, and this reputation was key in the company’s selection as a federal contractor for HealthCare.gov.

3.3.3 Verizon

Terremark is now a Verizon company, dedicated to combining a strong cloud-based platform with the security and professional services necessary to conduct today’s enterprise and public sector business on next-generation IT infrastructure. At the center of Verizon’s capabilities is its enterprise-class IT platform, which combines advanced IT infrastructure with world-class provisioning and automation capabilities; this is what CMS leveraged in this case. Verizon’s standards-based approach aligns with today’s enterprise business requirements driven by agility, productivity, and competitive advantage.

Verizon Terremark was a natural fit at CMS thanks to a long-standing relationship. Verizon manages and maintains the entire HHS Wide Area Network (WAN) along with ancillary services such as security services, mobile solutions, and unified communications. Verizon also developed a homegrown fraud-detection service, originally used to identify toll-free fraud, which it applied to pursue Medicare and Medicaid fraud, saving the agency millions.

3.4 Key Inputs of CMS

For the HealthCare.gov project, we identified the following key inputs for CMS:

  • Patient Protection and Affordable Care Act (ACA): The ACA became law on March 23rd, 2010 under the Obama Administration. The legislation was passed to address various consumer health insurance purchase issues such as denial of coverage due to pre-existing conditions, termination of coverage when patients became sick, lifetime benefit limits and access to affordable healthcare. The ACA mandated the creation of “exchanges” that would be used by consumers to compare and buy a Qualified Health Plan (QHP) based on their state of residency, income level, age and other factors. These exchanges could be created at the state level or at the federal level. If a state decided not to create its own exchange, it had the option to redirect its constituents to the federal exchange to buy healthcare insurance.
  • States: The various states and Washington D.C. informed CMS whether they intended to create their own exchanges or utilize the exchange developed by the Federal government. States also had the flexibility to move to the federal exchange when they wanted; initially, 26 states opted to have their constituents go to the federal exchange to purchase healthcare insurance.
  • People/Team: CMS assigned a part-time Project Manager to the project.

3.5 Key Inputs for Federal Contractors

The HealthCare.gov project is one of the most complex federal IT undertakings in recent times. The project entailed 55 contractors working on various aspects of the system. These contractors were responsible for the creation of a robust network/infrastructure, the development of the website front-end, the Federally-Funded Exchange (FFE), the Data Services Hub and the EIDM. Additionally, the system receives eligibility and verification information from various other federal government agencies as the consumer fills out the online form. The following figure (Ariana Cha, 2013) shows the complexity of the information flow of the entire system.

Figure 2: HealthCare.gov contractors and agencies processes

3.5.1 CGI Federal

CGI Federal was one of the prime contractors for the HealthCare.gov project. It had the following key inputs:

  • Request for Proposal (RFP): An RFP was one of the first key inputs for the project. It required the establishment of an FFE that would be used for eligibility and enrollment, plan management, financial management, oversight, communication and customer service. The following figure (Desai, 2013) shows the FFE Concept of Operations:


Figure 3: FFE Concept of Operations

  • Requirements: After the contract was awarded, first the CCIIO and then various other representatives within CMS provided requirements to CGI Federal for the FFE. These representatives came from policy, legal and Office of Information Services (OIS).


3.5.2 Optum/QSSI

  • Request for Proposal (RFP): This was the primary input for the project. It required the development of a data services hub for information exchange and the EIDM for user account registration.

3.5.3 Verizon

  • Request for Proposal (RFP): One of the first RFPs, released in 2010, was for the infrastructure, essentially Platform as a Service (PaaS), for the ACA. A Cloud Solutions Executive for Verizon Terremark said Verizon received its award before the additional contractors became involved. Following the award, CGI Federal and the others were asked to develop the system to conform to the Verizon environment.

3.6 Key Components for CMS

  • Systems Development Methodology: A presentation from April 2012 by CMS’ OIS shows that an “Agile” methodology was used for the HealthCare.gov project, as shown in the following figure (Services, 2012).


Figure 4: CMS Agile Methodology

  • Project/System Integrator: CMS took on the role of “system integrator” to manage all 55 contractors.
  • Implementation Consideration: A McKinsey report shows the parallel “stacking” of all phases for this project as shown below (CMS, Red Team Discussion Document, 2013):


Figure 5: McKinsey Report for CMS

3.7 Key Components for Federal Contractors

3.7.1 CGI Federal

  • Process Methodology: Patrick Thibodeau indicates in a Computerworld article that CGI Federal attained Capability Maturity Model Integration (CMMI) Level 5 maturity, making it only the 10th company in the US to achieve this level. By extension, we can assume that CGI Federal brought CMMI best practices to the project.
  • Systems Development Methodology: In Federal contracting, a contractor typically either uses its own development methodology or adopts the client’s (in this case, CMS’). Research indicates that CGI Federal used an Agile methodology to develop the FFE.

3.7.2 Optum/QSSI

  • Process Methodology: As a CMMI Level 3 organization, Optum/QSSI has a reputation for process-driven and client-focused methods of IT solution development. Based on our research, it is evident that the company implemented CMMI best practices on the project.
  • Systems Development Methodology: According to a senior analytics consultant with a major health provider who worked on the project, Optum/QSSI used an agile development methodology similar to the one depicted below (Group, 2014) based on iterative and incremental development with continuous visibility and opportunity for feedback from CMS.


Figure 6: Agile Methodology

  • Requirements Documentation for Data Services Hub: The Data Services Hub is a central function of the federal exchange that connects and routes information among trusted data sources including Treasury, Equifax, the Social Security Administration (SSA), etc. The inputs from CMS changed in late September 2013 to require account creation before shopping for health plans.
  • Requirements Documentation for EIDM: EIDM enables healthcare providers to use one credential to access multiple applications, serving the identity management needs of new and legacy systems. The inputs from CMS changed in late September 2013 to require account creation before shopping for health plans.

3.7.3 Verizon

  • System Architecture Design: There was an architecture diagram and overall design for the entire system, but it lost effectiveness due to a lack of accountability for ensuring each component was delivered.
  • Security: Security was a huge component of the infrastructure requirements, and Verizon Terremark offered a highly secure architecture designed to meet all of the critical compliance and certification requirements. Verizon had been audited against FISMA at the moderate level and against NIST 800-53 for federal customers. Verizon was also asked to provide advanced security options on the platform such as intrusion detection/intrusion prevention (IDS/IPS), log aggregation, and security event management.

3.8 Key Deliverables for CMS

  • Website: A website that provides residents the ability to compare QHPs.
  • Exchange: An exchange that enrolls residents by verifying their eligibility based on income level, age and other factors.

3.9 Key Deliverables for Federal Contractors

3.9.1 CGI Federal

  • FFE: A fully functional FFE ready to go live by October 1st, 2013. The FFE would be the backbone of HealthCare.gov and would seamlessly integrate with the website, the Data Services Hub and the EIDM.

3.9.2 Optum/QSSI

  • Data Services Hub: This system determines eligibility for financial help. It sends customer data to various government agencies (VA, DHS, Treasury, etc.) to verify eligibility.
  • EIDM (Proof of Identity): Upon account creation, this system verifies identity with Experian. The system also enables healthcare providers to use one credential to access multiple applications, serving the identity management needs of new and legacy systems.
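The hub-and-spoke eligibility flow described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual hub implementation; the agency checks and field names are hypothetical stand-ins:

```python
# Illustrative sketch of a hub-style eligibility check: the hub fans an
# applicant's data out to several verification sources and aggregates the
# results. Agency checks and field names are hypothetical stand-ins.

def check_ssa(applicant):
    # e.g., Social Security Administration: does the SSN look plausible?
    return len(applicant["ssn"].replace("-", "")) == 9

def check_dhs(applicant):
    # e.g., Department of Homeland Security: citizenship/immigration status
    return applicant["status"] in {"citizen", "lawful_resident"}

def check_treasury(applicant):
    # e.g., IRS/Treasury: income figure available for subsidy calculation
    return applicant["income"] >= 0

VERIFIERS = {"SSA": check_ssa, "DHS": check_dhs, "Treasury": check_treasury}

def verify_eligibility(applicant):
    """Route the applicant to every trusted data source; eligible only if
    all sources verify. Returns (eligible, per-source results)."""
    results = {name: fn(applicant) for name, fn in VERIFIERS.items()}
    return all(results.values()), results

eligible, detail = verify_eligibility(
    {"ssn": "123-45-6789", "status": "citizen", "income": 32000}
)
print(eligible, detail)
```

The design point is that the hub itself stores nothing: it routes requests to the trusted sources and aggregates their answers, which is why a late change in how often it is called (e.g., account creation before browsing) translates directly into load on every downstream agency.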

3.9.3 Verizon

  • PaaS: Fully operational infrastructure which provides servers and hosting for the exchange.
  • Environmentals: Supports power, connectivity, and memory requirements for the environment.
  • Service Level Agreement (SLA): Rolling out the infrastructure in a timely fashion and offering and executing upon the SLAs required by the Government, among other things.


4.1 Project Management Quality of CMS

4.1.1 Qualitative

  • Quality Planning: Quality planning for government releases is at a different scale than quality planning for private companies. Many factors come into play, such as the redistribution of resources through regulation, subsidization, and procurement. As part of CMS’ quality planning phase, the main scope aspects were functionality, features, and system outputs. However, performance, reliability and maintainability suffered heavily due to time constraints, as October 1st, 2013 was a hard deadline.
  • Quality Assurance: CMS used test plans and test reports to ensure that quality requirements were being met. The front-end web interface was indeed completed on time. However, verifying the quality of system integration was difficult due to the complexity of the back-end sub-systems.

4.1.2 Quantitative

  • Quality Monitor and Control: During the implementation phase, CMS did not take proactive measures to address the issues found one week before launch, specifically when the testers reported server crashes at a scale of 10,000 concurrent users. Additionally, CGI Federal had reported that more testing was required, yet CMS appeared insensitive to its recommendations. Status reports were supposed to be read, understood and acted upon. HHS followed the Enterprise Life Cycle, and CMS was supposed to follow these guidelines.

4.2 Project Management Quality for Federal Contractors

4.2.1 Project Management Quality of CGI Federal

Qualitative

  • Quality Planning: As a CMMI Level 5 organization, CGI Federal had optimized quality processes to deliver appropriate outcomes for the FFE. However, requirement changes seem to have been one of the main issues with the project. Requirements were still being revised in the summer of 2013 and kept evolving even a week before go-live. Additionally, the number of states joining the FFE increased from 26 to 34, which created another level of complexity in maintaining quality on the project.
  • Quality Assurance: According to Cheryl Campbell, Senior Vice President at CGI Federal, testifying at the Congressional hearings, CGI Federal developed the FFE per the contract requirements. It is interesting to note that CGI Federal was also one of the companies that developed the Massachusetts Health Exchange, which was used as a model for the FFE. Hence, we can assume that quality lessons learned from that project could have been applied to the FFE.

Quantitative

  • Quality Monitor and Control: CGI Federal is a highly metrics-driven organization. Each project is monitored and measured according to industry “best practices” and proprietary methodologies. Projects are evaluated based on scope, cost, schedule and other factors to check the health of the project and verify if they are keeping the customer satisfied. But if the requirements continue to evolve then even the best methodologies and measurements are not a match for customers changing their minds.

4.2.2 Project Management Quality of Optum/QSSI

Qualitative

  • Quality Planning: As a CMMI Level 3 organization, Optum/QSSI had a planned quality process to deliver appropriate outcomes for the Data Services Hub and EIDM deliverables. However, changing project requirements from CMS severely impacted quality planning efforts. For instance, the late September requirement change requiring consumers to create user accounts before browsing the exchange marketplace resulted in higher-than-expected simultaneous system usage and impacted the functionality of the EIDM tool, which was originally designed to let consumers access the system, browse the marketplace, and create an account only if they wanted a product. Because the EIDM is only one tool in the federal marketplace registration system, this late requirement change made it impossible to coordinate and plan quality processes with the other contractors who worked on portions of the registration system to ensure appropriate performance before the October 1st go-live date.
  • Quality Assurance: Both the Data Services Hub and EIDM deliverables met quality assurance standards, satisfying CMS’ requirements and all relevant quality standards for the project, according to the Congressional testimony of Andrew Slavitt, Group Executive Vice President at Optum/QSSI. It is also important to note that Optum/QSSI had developed an EIDM tool for two other CMS systems, and this EIDM tool followed benchmarking and quality audits taken from those existing EIDM solutions at CMS.

Quantitative

  • Quality Monitor and Control: Requirements changes greatly impacted quality monitoring and control. Although Optum/QSSI used quality control tools such as charts to guide acceptance decisions, rework, and process adjustments, the changing requirements severely impacted these controls. These changes introduced time-constraint challenges and limited system-wide testing, most importantly user acceptance testing.

4.2.3 Project Management Quality of Verizon

Quantitative

  • Extensive delays in processing orders for additional capacity, provisioning resources, and implementation caused Verizon a lot of friction with the CMS customer.

Qualitative

  • Management within Verizon also failed to run some of the concerns up the executive flagpole to make leadership aware of issues, which could have prevented delays or the numerous escalations by CMS.
  • Verizon’s project management failed on many accounts. Poor coordination among the multiple project managers assigned to the project within Verizon was to blame.

4.3 Project Management Key Successes and Failures

4.3.1 CMS

A review of the congressional hearings and documentation reveals that HealthCare.gov was a high-priority project for CMS. In conversations with Federal contractors, CMS would start by saying “this is what the White House wants…”. It is still unclear whether this preface was used because directions were actually coming from the White House or simply to signal the importance of the project. Regardless of the intentions, one thing is certain: the words were not followed by action, since there was no dedicated full-time Project Manager to manage the project from kickoff to implementation. Most likely, decisions were made by committees, as is often the case with large government projects.

A big piece of the project involved behind-the-scenes business processes, even before the technology was considered. These business processes and governance entailed coordinating not only with 26 states but also with insurance companies. The figure below depicts an exhaustive list of stakeholders affected by the project:


Figure 7: FFE Stakeholders

From a business standpoint, CMS failed to calculate in advance the true cost of the entire project. Additionally, even after McKinsey reports indicating the danger of not doing end-to-end testing and warnings from CGI Federal in its August 2013 status report (Federal, 2013) that testing could be an issue, CMS ignored these experts and went full steam ahead with the October 1st, 2013 launch.

Research indicates that a mix of COTS products and custom software was developed to stand up HealthCare.gov. It also seems that CMS failed to account for the various internal and external “firewalls” the system needed to pass through.

4.3.2 Federal Contractors

CGI Federal

  • FFE: According to Congressional hearings, the CGI Federal representative indicated that they had provided a fully functioning FFE as per contract requirements by October 1st, 2013. This was their success factor.
  • Changing Requirements: CGI Federal was responsible for developing the FFE. It was put under the spotlight for not providing holistic recommendations for the entire project. Requirements were evidently changing and new states were being added, but there was no push-back from CGI Federal indicating that the requirement changes would result in quality issues on its end that would affect the entire system.
  • Testing: While the system did work for the initial users, login delays resulted in a poor customer experience. Research indicates that no end-to-end testing was performed to see holistically how the system would work. CGI Federal could have used its vast industry expertise to inform CMS that the absence of end-to-end testing would result in major issues.

Optum/QSSI

  • Data Services Hub & EIDM: Based on Andrew Slavitt’s Congressional testimony on HealthCare.gov, Optum/QSSI successfully developed and delivered fully functional Data Services Hub and EIDM tools. For example, according to Slavitt, on October 1st the Data Services Hub processed over 175,000 transactions, and millions more after the project launched.
  • Buggy Data Services Hub & EIDM: In the same Congressional hearings, however, Andrew Slavitt acknowledged that the Data Services Hub and EIDM tools, although they worked functionally as designed, experienced performance bottlenecks at launch because of the late requirement change requiring consumers to create accounts before browsing the marketplace. This change resulted in higher-than-expected simultaneous usage of the registration system and the Data Services Hub eligibility verification tool. Slavitt also admitted that Optum/QSSI identified and fixed bugs in the EIDM tool in the days after the October 1st launch. The release of code that had bugs was a quality failure and contradicts Slavitt’s earlier comments about delivering a fully functional EIDM tool.

Verizon

  • Financial Success: The primary success story for Verizon was that the company did far better financially as a result of this project than initially predicted, due in large part to the scope creep. Additionally, Verizon specifically was not the cause of the delays or outages on day one of the project and delivered the infrastructure to support the site by the launch date. A significant underestimation of capacity was to blame for the initial failures of HealthCare.gov.
  • Hardware Outage: A subsequent failure, which Secretary Sebelius cited in her testimony, was a hardware outage unrelated to project management (Krumboltz, Michael. “HealthCare.gov suffers outage as Sebelius testifies that it never crashed.”).
  • Project Management: Key project management failures within Verizon were inherent in the ordering, design and engineering phases, and implementation eventually suffered due to project management inefficiencies. As discussed previously, Verizon had several members of a project management team supporting the CGI Federal, QSSI, and CMS relationships. There should have been one central program manager supporting the ACA contract for Verizon. The communication failures across project management teams created bottlenecks that spread throughout the engineering teams.

4.4 Lessons Learned on Project Management Best Practices

  • Who’s on First: Even though this project needed input from various internal and external stakeholders, a clear roles and responsibilities matrix should have been developed. Contractors could have used this matrix to see who was responsible for the various activities of this large project and whom specifically to reach out to in case they ran into bottlenecks.
  • Project Manager: CMS did not assign a full-time Project Manager for the project. For this large-scale project, it would have been prudent to have a full-time project manager responsible for coordinating various activities internally and across various contractors.
  • Business Process and Governance: Since this was the first time such an endeavor was taking place with multiple stakeholders, it would have been useful to map the business processes of the future state prior to contract award. The overall business processes would also have supported governance structures to help track progress and verify alignment with the stated objectives of the project.
  • Requirements Management: Ever-changing requirements and a change in strategy can affect projects dramatically. For HealthCare.gov, baseline requirements should have been established earlier in the project; these would entail the basic functionality needed by CMS. It would have been useful to maintain a Requirements Traceability Matrix (RTM) available to everyone on the project. The RTM would not only keep all stakeholders informed of what was going on but would also keep the entire team honest, and perhaps identify issues before they became problems later.
  • Communications and Sharing of Information: The way the project was set up, it seems the left hand did not know what the right hand was doing. This creates problems in understanding issues from a holistic perspective and in recognizing dependencies that should be coordinated.
  • Metrics and Measurements: Reasonable metrics should have been created to assess the health of the project. These metrics should measure stakeholder and team satisfaction at the beginning of and throughout the project life cycle to determine where adjustments need to be made. Based on these metrics, remediation processes should have been set up so that nothing fell through the cracks.
  • Methodologies and Documentation: Although it seems CMS was supposed to follow HHS’ Enterprise Life Cycle (ELC), research indicates that an Agile methodology was used to develop the system. This points to a conflict between what was supposed to be used and what was actually being used. Additionally, the vendors helping CMS came with their own methodologies. There were too many methodologies and no consistent alignment of them across the organization, so the teams were not all on the same page. In this scenario, the advice would be to understand the various methodologies at play and select the most appropriate one. This selection might also entail creating some sort of hybrid methodology that everyone conforms to. Having one methodology would reduce the amount of documentation that needs to be developed, thus freeing up resources to work on the actual needs of the project.
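As a concrete illustration of the Requirements Traceability Matrix idea discussed in the lessons above, the sketch below models each requirement as a row linking a design artifact and test cases, and flags requirements that cannot be traced through to testing. All IDs, descriptions and statuses are hypothetical examples, not artifacts from the actual project.

```python
# Minimal sketch of a Requirements Traceability Matrix (RTM).
# Every identifier below is an invented example for illustration only.

rtm = [
    {"req": "REQ-001", "description": "Verify citizenship via Data Services Hub",
     "design": "DES-014", "tests": ["TC-101", "TC-102"], "status": "passed"},
    {"req": "REQ-002", "description": "Single sign-on through EIDM",
     "design": "DES-022", "tests": ["TC-205"], "status": "failed"},
    {"req": "REQ-003", "description": "Browse plans without an account",
     "design": None, "tests": [], "status": "not started"},
]

def trace_gaps(rtm):
    """Flag requirements that lack a design artifact or test coverage."""
    gaps = []
    for row in rtm:
        if row["design"] is None or not row["tests"]:
            gaps.append(row["req"])
    return gaps

print(trace_gaps(rtm))  # -> ['REQ-003']
```

Even a simple table like this, shared with every contractor, makes untested or undesigned requirements visible long before launch.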


5.1 Project Design Recommendations (CMS only)

  • Project Manager (PM): The project had 55 federal contractors working on it at various times, and the system components these contractors were developing depended on each other. For example, the website needed to communicate with the EIDM and the Data Services Hub, which in turn communicated with the FFE. Due to these complex dependencies, a significant amount of communication and coordination needed to happen on this project. Additionally, someone needed to verify not only that these system components would work with each other but also that they were thoroughly tested, since that testing would be the difference between a failed project and a successful one. Despite the complexity of managing such a large group of contractors, CMS did not have a full-time PM for HealthCare.gov. While the real reasons for this decision are unknown, we can extrapolate from research that the lack of an astute full-time PM was one of the major causes of the issues with HealthCare.gov.

We recommend that a full-time PM with experience in large-scale implementations should have been assigned to HealthCare.gov. This PM would have the authority to push back on unrealistic timelines, maintain a holistic view of the project, and understand that even when individual system components are being developed separately, time should be allocated in the project for effective end-to-end testing.

  • Cross-Functional Teams: As discussed earlier, the system components that the federal contractors were developing had many dependencies on each other. Despite these dependencies, no proof has been found that cross-functional teams were established.

We recommend that in designing the project, the development of cross-functional teams should have been given high priority. These cross-functional teams would comprise government and contractor personnel. They would not only create synergies among the various people but should also be designed in a way that encourages the sharing of lessons learned and recommendations.

  • Team Satisfaction Survey (TSS): When a project falls off track, it is often because by the time management pays attention to it, the project is at a point of no return. This is what seems to have happened at CMS. By the time people working on the project started voicing their concerns that testing could be an issue, CMS either ignored them or had no choice, and thus went ahead and released a buggy version of the system to the masses.

We recommend that a TSS should be incorporated into the project, its purpose being to ask people at all levels about their concerns about and recommendations for the project. The TSS should collect this information at the beginning of and periodically during the project. It should also have a mechanism whereby management can act promptly on issues that seem to be recurring. The TSS is not a status report but a mechanism to check the pulse of the project.

5.2 Project Implementation Recommendations (CMS only)

  • Pilot Approach: The project used a “big bang” approach to release the software on October 1st, 2013. This approach overwhelmed the system: users who tried to log in found that their online forms either took too long to verify their information or that they were simply kicked out of the system. Additionally, it is apparent that the federal government did not anticipate the system errors it would encounter when the system went live.

We recommend that a pilot program should have been created for one of the states. This pilot program would be used to see how the system would perform when it goes live and what kind of issues it might have once it is open to the masses. Lessons learned from this pilot program could have been used to provide a better customer experience once the system went live.


To summarize, it should come as no surprise to those familiar with IT projects that most IT projects fail. A recent Gartner user survey showed that large IT projects are more likely to fail than small projects, and that around half of all IT projects fail in part due to unrealistic requirements, technical complexity, integration responsibilities, aggressive schedules and inadequate testing. All of these causes were present in this project and contributed to its failure. As outlined in this case analysis, these fundamental missteps were the contributing factors that led to the project's issues and ultimately its failure. Based on our analysis, CMS did not have the project integration and project management know-how to manage such a major project. The agency's assignment of a part-time project manager is evidence that leadership did not fully understand the magnitude and importance of the project, or what it takes to implement IT projects. Based on our recommendations, it is our hope that such project management missteps are avoided in future IT project implementations.


  1. Ariana Cha, L. S. (2013, 10 24). What went wrong with HealthCare.gov. Retrieved 04 16, 2014, from The Washington Post.
  2. CMS. (2012, 04 01). Health Insurance Exchanges. Office of Information Services, 25.
  3. CMS.GOV. (2014, 04 01). CMS covers 100 million people. Retrieved 05 01, 2014, from cms.gov.
  4. Conerly, B. (2013, 07 16). ObamaCare’s Delays: Lessons For All Businesses About Project Management. Retrieved 04 21, 2014.
  5. C-SPAN. (2013, 10 24). Implementation of Affordable Care Act. Retrieved 03 15, 2014.
  6. Daconta, M. (2013, 11 01). Media got it wrong: HealthCare.gov failed despite agile practices. Retrieved 04 21, 2014, from GCN.
  7. Desai, C. (2013, 03 05). Federally Facilitated Exchange. Health and Compliance Programs, 44.
  8. DHHS. (2012, 06). Guide to Enterprise Life Cycle Processes, Artifacts, and Reviews. Centers for Medicare & Medicaid Services.
  9. Dudley, T. (2012, 06 10). The Affordable Care Act and Marketplace Overview. CMS Office of Public Engagement.
  10. CGI Federal. (2013). Monthly Status Report – August 2013. CMS. MD: CGI Federal.
  11. Hardin, K. (2013, 11 13). HealthCare.gov rollout lesson: Push back on unrealistic launch dates. Retrieved 05 02, 2014, from TechRepublic.
  12. U.S. Government Accountability Office. (n.d.). Patient Protection and Affordable Care Act. DC: GAO.
  13. Simon & Co., L. (2013, 12 04). Major Contracts to Implement the Federal Marketplace and Support State Based Marketplaces. DC: Simon & Co.
  14. Congress of the United States. (2013). 113th Congress. DC: House of Representatives.
  15. Thibodeau, P. (2013, 12 30). The firm behind HealthCare.gov had top-notch credentials, and it didn’t help. Retrieved 03 04, 2014, from Computerworld.
  16. Thompson, L. (2013, 12 03). HealthCare.gov Diagnosis: The Government Broke Every Rule Of Project Management. Retrieved 04 01, 2014, from Forbes.
  17. Turnbull, J. (2013, 12 12). What The US Government Learned About Software From The Healthcare.Gov Failure (And Some Of Us Already Knew). Retrieved 05 02, 2014, from Gaslight.
  18. Walker, R. L. (2013). Response of CGI Federal Inc. to Additional Questions for the Record and Member Requests for the Record. DC: Wiley Rein LLP.




Lessons Learned in Creating a Corporate System


This article discusses the various strategic, political and cultural factors that were associated with the decision to develop an online employee portal at SmFedCon. The causes and contributing factors that led to the software development project are explored in detail. Focus is placed on SmFedCon’s decision process behind the initial decision not to develop the employee portal.

Examples are included that show how factors such as personal biases and financial conservatism prevented SmFedCon from realizing the potential of the online employee portal.


SmFedCon was a small US Federal government contractor that provided Information Technology (IT) services in the areas of strategic planning, project management and software development. It had approximately 240 employees, and 98% of them worked onsite at various government locations across 14 states and Washington DC. The company was growing rapidly and was involved in multiple high-level projects. Due to this rapid growth, a decision needed to be made about whether SmFedCon would spend the time and resources to create an online employee portal.


Within a few months of joining SmFedCon, the CIO noticed a pattern where the quality of documentation deliverables was declining due to the lack of a version control system and a central document repository. The straw that broke the camel’s back was an incident where one of the federal clients was about to receive different versions of the same document from the main office, the project manager and a project team member. Although this was stopped in time, the CIO realized that this was an issue that needed to be addressed. The issue was also confirmed by some of the employees who worked onsite at federal client facilities.


The following table shows the reporting structure, roles and actors involved in the decision-making process for creating an online employee portal:



Chief Executive Officer (CEO)
  • Role: Corporate priorities decision maker
  • Reported to: N/A

Chief Financial Officer (CFO)
  • Role: Corporate Financial Management
  • Reported to: CEO

Chief Information Officer (CIO)
  • Roles: Corporate Technology Management; US Federal Government Projects Management
  • Reported to: CEO
In regards to the online employee portal decision process, (1) the CFO’s role was to determine whether new project budget requests made financial sense, (2) the CIO’s role was to provide a 2-page business case document to the CEO and (3) the CEO’s role was to make the final decision.


Strategic Factors – Not in the Technology Roadmap

Although the initial decision not to develop the employee portal was later overturned due to changing circumstances, at the beginning it was based on SmFedCon’s technology roadmap. The technology roadmap had been written some years back, before the company started to see rapid growth, and did not take into account potential issues that might occur due to mismanagement and miscommunication.

Political Factors – Power

The CFO and CIO both reported to the CEO; however, the CFO had more power at SmFedCon and could easily influence the CEO on certain decisions. The CFO’s power came from a 20-year friendship with the CEO and a role as the CEO’s trusted advisor. The CFO had also been responsible for IT prior to the CIO joining the company.

Cultural Factors – Small Business Mentality

While costs should be kept under control in all organizations, small businesses are especially sensitive to them. However, this sensitivity can blind small businesses to what is possible. This was the case with SmFedCon. Even though they saw how an online employee portal could help solve some of the issues they had, it was simply not in the budget to pursue this direction.


After the issues were identified, the CIO met with the CEO to discuss whether an employee portal could be the answer. The CEO requested a 2-page business justification document to show whether the employee portal could address the current and perhaps future needs.

The 2-page business case linked the current issues with quality degradation, loss of productivity and, eventually, loss of clientele. It also listed the various options that were considered for standing up an online employee portal, including proprietary software vs. open source, customization of an existing application vs. custom software development, and the associated costs. The document recommended the single option that was most feasible for SmFedCon.

The CEO discussed the 2-page business case with the CFO during one late, hectic evening. The next day the CEO informed the CIO that the company had decided not to move ahead with the online employee portal project.

The next week, the CEO was working on a federal solicitation response when the computer died. At that time the CEO was the only person who had the latest version of the document; the other writers collaborating on it only had previous versions. Although the documents were retrieved, the CEO realized how the online employee portal, with its document management system, could have saved time and been beneficial. The next day SmFedCon won a contract it had been pursuing, and the CEO asked the CIO to go ahead with managing the development of the online employee portal.

The following diagram shows the decision-making process at SmFedCon.

Decision Making Process at SmFedCon



In hindsight, there are a number of things that could have been handled better.

As companies grow, they have to realize that what worked in the past, when there were only a few employees, will not be sufficient in the future. Processes and tools should be in place and scalable to the growing needs of the organization. At SmFedCon, this was not the case. Although the company was growing rapidly, it did not invest in the processes and tools that could have helped it become a well-oiled machine. The online employee portal was a necessity, not a luxury, since the vast majority of employees were not at corporate locations but still needed to access the correct versions of documentation and collaborate with other team members.

Framing the Problem

When the CIO joined SmFedCon, the problems with documentation management, project management and team collaboration had not been defined. There was no framing of what was going on. Although the CIO was not hired to improve operations, s/he suspected that doing so might have been one of the underlying “Blink” moments the CEO had, because the CIO had previously worked with the CEO as a consultant and helped one of SmFedCon’s clients improve operations. It would have been advantageous to the company if they had given the CIO the opportunity to conduct a thorough study of the company to see what other areas could be improved.

Biases – We all had them

In the decision-making process for the online employee portal there were definitely some biases from all actors. The CIO’s bias came from working with small businesses in the past, where cost was always a major issue. Additionally, in those organizations the CIO had been responsible for recognizing patterns and improving operations, and thus tried to do the same with SmFedCon. Due to the CIO’s background in technology, the CIO believed that most operational issues can be solved through well-thought-out management and technology systems, which was another bias. The CIO’s decision not to get buy-in from the CFO prior to presenting the 2-page business case, and not to involve the CFO in determining the project budget, stemmed from an unpleasant experience working with a previous CFO. All of this played into the CIO developing the business case without working with the CFO.

There were some biases from the CFO as well. Since the CFO had handled IT before the CIO joined, it seems the CFO was reluctant to give up control. Looking back, the CIO remembers an incident where the CEO had to have a closed-door meeting with the CFO so that the CIO would be given login credentials for a corporate system. The CFO was skeptical about IT projects and was quick to make judgments about them. The CFO was also double the age of the CIO and might not have understood or accepted why SmFedCon hired a CIO who was only in their 20s. In regards to the online employee portal, all these biases might have played a role in the CFO convincing the CEO that it was not feasible to start the project.

Although the CEO was not quick to make judgments, the 20-year relationship with the CFO might have played a role in the decision. Additionally, the decision not to move ahead on the project might have been exacerbated by that hectic late evening.

Alternatives to Recommended Direction

The two pages the CIO chose to concentrate on stated what issues SmFedCon was having and how they could be solved through the online employee portal. The document did not offer any alternatives to select from. It only stated that SmFedCon could create the online employee portal (1) using open source technologies, (2) with the CIO guiding the developers and vendors in the design and (3) with the CIO managing its development.


Although a decision-making process was followed, it did not initially result in the desired outcome. As discussed earlier, while there are many reasons for this, establishing good relationships and getting buy-in would certainly have helped. Some other decision-making techniques that could have helped include:

  • Nominal Group Technique – This technique could have been helpful in determining the various issues employees were having. The online employee portal was the CIO’s idea, even though s/he had been with the company for only a few months while other employees had been around for a long time; this might have created some resentment towards the idea. The Nominal Group Technique could have made idea generation and problem solving more collaborative.
  • Framing – Proper framing of the issues would have helped too. The CIO did not frame the issues correctly and jumped to the solution. It would have been better to step back, frame each issue individually and then see how the issues could be resolved.
  • Personality Types – The CIO assumed that most people are like him/her. However, if the CIO had understood the various personality types and their motivations, then his/her recommendations could have appealed more to the CEO and CFO.


Future Considerations for Hewlett Packard Enterprise

A year ago Hewlett Packard (HP) decided that it was going to split into two companies. This decision became real last week when HP officially split into HP Inc. and Hewlett Packard Enterprise (HPE), as announced by Meg Whitman in her LinkedIn post. The main reason given for this split was focus. HP Inc. would focus on selling consumer products such as personal computers and printers. HPE would focus on selling enterprise products, software and services, such as cloud computing, big data and cyber security, to improve operations.

On the surface, the announcement of the split of HP into HP Inc. and HPE has received a mixed bag of optimism and skepticism from different corners of the tech industry. On the optimistic side, this is a good move since it helps the two companies focus on their core competencies and provide focused customer service and client experiences. On the skeptical side, it is a little too late, since the tech industry has been moving from merely selling computer products to selling technology software and services for at least 20 years.

If we observe the tech industry through a modern economics lens, we find that this split is not novel but quite predictable. Through that lens, the ‘primary sector’ of the tech industry focuses on hardware and products, the ‘secondary sector’ focuses on software, and the ‘tertiary sector’ focuses on technology services. What is interesting is that this split lets HP Inc. focus on the ‘primary tech sector’ for consumers while HPE focuses on both the ‘secondary tech sector’ and the ‘tertiary tech sector’ simultaneously for enterprises. Eventually, though, HPE would increase its focus on the ‘tertiary tech sector’ since the margins are much better in services than in products and software. In order for HPE to become a bigger player in the services market, it should consider the following:


Now | In the Future
Who is leading the services division? | Who should be leading the services division?
What processes are being followed to provide services? | What processes should be followed to provide services?
Where is a mix of tech and non-tech services being provided? | Where should a mix of tech and non-tech services be provided?
When are services bundled with hardware and software? | When should services be bundled with hardware and software?
Why are standalone services provided? | Why should standalone services be provided?

HPE leadership has to realize that organizational splits are not without consequences. These consequences could entail: (1) stocks becoming more volatile, as any budget cuts by enterprise clients could affect the bottom line, (2) competitors being able to provide the same level of service at a cheaper cost with better client experiences and (3) a lack of optimized processes, with no flexibility to adjust to enterprise clients’ needs, reducing the overall reputation of HPE.

One way to address the above-mentioned split issues would be to create independent mock enterprise client teams that would rate how easy or difficult it is to deal with HPE in light of changing economic conditions, client experiences, and the efficiency and effectiveness of its processes. These independent mock enterprise client teams would help HPE refine itself by putting itself in the shoes of its enterprise clients.

Organizational Changes - HPE

Future Considerations for Alphabet Inc.

A couple of weeks ago Alphabet Inc. emerged as the parent holding company of Google, as announced by Larry Page on Google’s blog. The two main reasons given for this move are to make the company cleaner and more accountable. By cleaner, it means that products that are not related to each other become separate wholly owned subsidiaries of Alphabet Inc., including Google, Calico, X Lab, Ventures and Capital, Fiber and Nest Labs. By more accountable, it means that the leaders of these wholly owned subsidiaries will be held to even higher standards of accountability for where money is and should be spent. This move helps Wall Street understand that Alphabet is willing and structurally capable of going into unrelated areas.

On the surface, the creation of Alphabet Inc. has been deemed a good move, as many pundits and professors have pointed out since its emergence. The reasons of cleanliness and accountability serve internal purposes well. However, if we dig a little deeper we find that there are external purposes at play as well. Firstly, due to Alphabet Inc.’s cleaner approach, mergers and acquisitions in unrelated industries become much easier, and the accountability of each wholly owned subsidiary becomes justifiable to Wall Street. Secondly, Alphabet Inc. is now able to enter new industries or create new industries altogether. This could mean that Alphabet Inc. becomes the next big 3D manufacturer of electronic equipment, or even the next big bank that finally removes paper-based transactions. Both of these examples are interesting and achievable due to Alphabet Inc.’s deep pockets. In order for Alphabet Inc. to really disrupt or create new industries, however, strategic consideration should be given to the following:


Now | In the Future
Who is leading the organization(s)? | Who should lead the organization(s)?
What processes are being followed? | What processes should be followed?
Where are products and services being deployed? | Where should products and services be deployed?
When do people, processes, technologies, products and services disrupt/create markets? | When should people, processes, technologies, products and services disrupt/create markets?
Why do the companies already bought make sense? | Why should companies be bought?

Alphabet Inc. leadership also has to realize that organizational structural changes are not without consequences. These consequences could entail: (1) stocks becoming more volatile, as even slightly negative news concerning the wholly owned subsidiaries could affect Alphabet Inc. stock, (2) collaboration across people, processes, technologies, products and services among the wholly owned subsidiaries being compromised due to autonomy and the creation of fiefdoms and (3) a rise of duplicative functional teams (e.g., HR, Finance, etc.) across the wholly owned subsidiaries, taking resources away from core business pursuits.

One way to address the above-mentioned conglomerate issues would be to create a task force with enough teeth within Alphabet Inc., along with cross-organizational teams across all wholly owned subsidiaries, to help find and remedy these issues. This task force and its teams could act like internal consultants whose lessons learned and methodologies could help Alphabet Inc. become more efficient and effective. Perhaps these practices could also open the door for Alphabet Inc. to dominate the Management Consulting industry as well.

Organizational Changes

Understanding and Applying Predictive Analytics

Executive Summary

This article proposes looking at Predictive Analytics from a conceptual standpoint before jumping into the technological execution considerations. For the implementation aspect, organizations need to assess the following keeping in mind the contextual variances:




  • Top Down
  • Bottom Up
  • Hybrid
  • Organizational Maturity
  • Change Management
  • Training
  • Practical Implications
  • Pros and Cons of Technology Infrastructure
  • Providing Enabling Tools to Users
  • Best Practices

Describing Predictive Analytics

Predictive Analytics is a branch of data mining that helps predict probabilities and trends. It is a broad term describing a variety of statistical and analytical techniques used to develop models that predict future events or behaviors. The form of these predictive models varies depending on the behavior or event they are predicting. Due to the massive amounts of data organizations are collecting, they are turning towards Predictive Analytics to find patterns in that data that can be used to predict future trends. While no data is perfect in predicting what the future may hold, there are areas where organizations are applying statistical techniques, supported by information systems at the strategic, tactical and operational levels, to change their organizations. Some examples of where Predictive Analytics is leveraged include customer attrition, recruitment and supply chain management.
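As a minimal illustration of the trend prediction described above, the sketch below fits a least-squares line to historical observations and extrapolates one period ahead. The monthly figures are invented example data, and real predictive models are of course far richer than a single linear trend.

```python
# A toy predictive model: ordinary least squares on historical data,
# then extrapolation. The sales figures are made-up example values.

def fit_line(xs, ys):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

months = [1, 2, 3, 4, 5, 6]
sales = [100, 110, 118, 131, 140, 152]   # historical observations

a, b = fit_line(months, sales)
forecast = a * 7 + b                     # predict month 7
print(round(forecast, 1))                # -> 161.5
```

The same pattern of fitting on past data and scoring future periods underlies far more sophisticated techniques, and its value depends entirely on whether the historical pattern continues to hold.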

Gartner describes Predictive Analytics as any approach to data mining with four attributes:

  1. An emphasis on prediction (rather than description, classification or clustering)
  2. Rapid analysis measured in hours or days (rather than stereotypical months of traditional data mining)
  3. An emphasis on the business relevance of the resulting insights (no ivory tower analyses)
  4. An (increasing) emphasis on ease of use, thus making tools accessible to business users

The above description highlights some important aspects for organizations to consider namely:

  1. More focus on prediction rather than just information collection and organization. Sometimes in organizations it is observed that information collection becomes the end goal rather than using that information to make decisions.
  2. Timeliness is important otherwise organizations might be making decisions on information that is already obsolete.
  3. Understanding of the end goal is crucial by asking why Predictive Analytics is being pursued and what value it brings to the organization.
  4. Keeping in mind that if the tools are more accessible to business users then they would have a higher degree of appreciation of what Predictive Analytics could help them achieve.

Relationship of Predictive Analytics with Decision Support Systems or Business Intelligence

The University of Pittsburgh describes Decision Support Systems as interactive, computer-based systems that aid users in judgment and choice activities. They provide data storage and retrieval, but enhance traditional information access and retrieval functions with support for model building and model-based reasoning. They support framing, modeling and problem solving. Business Intelligence, according to Gartner, is an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to and analysis of information to improve and optimize decisions and performance. These descriptions point to the fact that Decision Support Systems and Business Intelligence are used for decision making within organizations.

Interestingly, Predictive Analytics can act as the underlying engine for Decision Support Systems or Business Intelligence. What this means is that the predictive models that result from Predictive Analytics can sit under the hood of Decision Support Systems or Business Intelligence. Organizations should proceed with caution here: if the underlying assumptions used in building the predictive models are incorrect, then the decision-making tools will be more harmful than helpful. A balanced approach is to create expert systems where Decision Support Systems or Business Intelligence is augmented by human judgment and the underlying models are checked and verified periodically.
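The human-augmented approach suggested above can be sketched as a simple routing rule: predictions whose confidence falls below a threshold are sent to a human reviewer instead of being acted on automatically. The threshold value and the example records here are illustrative assumptions, not prescriptions.

```python
# Sketch of augmenting model output with human judgment: low-confidence
# predictions are routed to a human reviewer rather than auto-acted on.

REVIEW_THRESHOLD = 0.75  # assumed confidence cutoff, tuned per organization

def route(prediction, confidence):
    """Return the handling path for a model prediction."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto", prediction)
    return ("human-review", prediction)

print(route("likely churn", 0.92))  # -> ('auto', 'likely churn')
print(route("likely churn", 0.55))  # -> ('human-review', 'likely churn')
```

Periodic checks of the model itself can then concentrate on the cases the reviewers overturned, which is where the underlying assumptions are most likely wrong.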

Implementation Considerations for Predictive Analytics

As the descriptions above indicate, the aim of Predictive Analytics is to recognize patterns and trends that can be used to transform the organization. This requires organizations to first educate themselves on what value they want, and what can be derived, from Predictive Analytics. Predictive Analytics is about business transformation, and it needs to show what value it brings to the organization. In this regard, we have to assess the people, processes and technologies of the organization in terms of the current state (where the organization is right now) and the future state (where the organization wants to be). Typically, this revolves around Strategies, Politics, Innovation, Culture and Execution (SPICE), as shown below.

SPICE Factors


Assessing people for Predictive Analytics means understanding which users will be leveraging it, and whether they understand that simply relying on Predictive Analytics is not enough: for an effective system, they need to be part of the system, augmenting the analytical insights with human expertise to make intelligent decisions. Assessing processes entails looking at how the organization makes decisions now and how decisions would be made once Predictive Analytics is in place, including having appropriate governance structures. Assessing technology entails looking at which technologies already exist within the organization and whether they could be leveraged for Predictive Analytics; if not, it means evaluating which Predictive Analytics products on the market would work for the organization, and whether they are flexible enough for when the underlying assumptions of the predictive models change or the models become obsolete.

The advanced techniques mentioned in the book Seven Methods for Transforming Corporate Data into Business Intelligence would also be applicable to Predictive Analytics. These methods are:

  1. Data-Driven Decision Support
  2. Genetic Algorithms
  3. Neural Networks
  4. Rule-Based Systems
  5. Fuzzy Logic
  6. Case-Based Reasoning
  7. Machine Learning
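As an illustration, method 6, Case-Based Reasoning, can be sketched in a few lines of Python: a new problem is solved by retrieving the most similar stored case and reusing its outcome. The case base and the loan-approval framing here are invented for the example; a real system would also adapt and retain cases, not just retrieve them.

```python
# Toy Case-Based Reasoning sketch (hypothetical data): retrieve the nearest
# past case by Euclidean distance over numeric features and reuse its outcome.
import math

# Past cases: (feature vector, outcome). Features are invented
# loan-applicant attributes: (income in $k, debt ratio).
CASE_BASE = [
    ((85.0, 0.10), "approve"),
    ((40.0, 0.55), "deny"),
    ((60.0, 0.30), "approve"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def solve_by_case(query):
    """Retrieve the nearest stored case and reuse its outcome."""
    best_case, best_outcome = min(
        CASE_BASE, key=lambda case: distance(case[0], query)
    )
    return best_outcome, best_case
```

For a query such as `(80.0, 0.15)`, the nearest case is the approved applicant at `(85.0, 0.10)`, so the system reuses that outcome.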

Technologies Used for Predictive Analytics

Gartner has been publishing its Magic Quadrant for Business Intelligence and Analytics Platforms since 2006. Due to the increased importance of Predictive Analytics in the marketplace, Gartner created a separate Magic Quadrant for Advanced Analytics Platforms, which focuses on Predictive Analytics, and published its first version in February 2014. Since this is the first version of that Magic Quadrant, all vendors listed are new and none were dropped.


Gartner's Magic Quadrant for Advanced Analytics Platforms


As we can see, this Magic Quadrant includes well-known vendors, but also vendors that are not as big or as well known. It is interesting to note that open-source vendors such as RapidMiner (a Chicago company) and KNIME (a European company) are in the same Leaders quadrant as well-established vendors such as SAS and IBM. While the report notes some issues with these open-source vendors, perhaps this Magic Quadrant is also an indication of where the next generation of analytics will come from. Due to the very nature of open source, there are more opportunities for cheaper customization, giving organizations the flexibility to be as granular as they want. Of course, code stability and a lack of proper documentation are issues organizations need to be cognizant of. Organizations may also want to “try out” these open-source tools before making a big commitment to proprietary software, to see whether Predictive Analytics is something they want to invest heavily in.

Using Predictive Analytics in Specific Industries

There are many industries that utilize Predictive Analytics. The organizations in these industries either use Predictive Analytics to transform their business and/or to address certain areas that they would like to improve upon. Following is a list of some of the industries that utilize Predictive Analytics:

Retail
  • Customer Retention
  • Inventory Optimization
  • Low-Cost Promotions
Oil and Gas
  • Well and Field Asset Surveillance
  • Production Optimization
  • Equipment Reliability and Maintenance
Manufacturing
  • Adjust Production Schedules
  • Tweak Marketing Campaigns
  • Minimize Inventory
  • Human Resources Allocation
  • Supply Chain Optimization
Healthcare
  • Electronic Health Records
  • Nation-wide Blood Levels
Social Media
  • New Business Models
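To make one of these rows concrete, the Equipment Reliability and Maintenance use case can be sketched as a tiny trend-based prediction in Python. The sensor values, the window size and the 25% drift threshold are all hypothetical; the pattern is simply flagging equipment for maintenance when recent readings drift away from a healthy baseline.

```python
# Hypothetical predictive-maintenance sketch: flag equipment when the moving
# average of recent vibration readings drifts too far above a known baseline.

def moving_average(readings, window=3):
    """Average of the last `window` readings."""
    tail = readings[-window:]
    return sum(tail) / len(tail)

def needs_maintenance(readings, baseline, threshold=1.25):
    """Predict a maintenance need when recent vibration exceeds baseline * threshold."""
    return moving_average(readings) > baseline * threshold
```

A rising series such as `[1.0, 1.0, 1.4, 1.5, 1.6]` against a baseline of `1.0` would be flagged, while a flat series would not; production systems use far richer models, but the predict-before-failure idea is the same.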

While there are many examples of industries that have embraced Predictive Analytics, there are other industries that have not fully accepted it as a new reality. These industries have many excuses for not considering Predictive Analytics, but they typically revolve around scope, quality, cost and fear of the unknown. However, the tide might be changing for these industries as well, since industry bloggers are beginning to insist that Predictive Analytics could be leveraged for competitive advantage.

My Opinion

Predictive Analytics can help organizations become analytical and a better version of themselves. However, it can be a deal-breaker if the organization has attempted it and failed in the past, and for this very reason Predictive Analytics should start as a discussion first. This discussion should revolve around asking which areas need improvement and determining, among other things, whether Predictive Analytics could help. After a successful Predictive Analytics initiative, other areas could become potential candidates as well.

An important thing to note is that Predictive Analytics is an organization-wide initiative with touch points across the organization, so the maturity of the organization has to be seriously considered before embarking on a Predictive Analytics journey. No matter how good Predictive Analytics can be for the organization, if the organization is not mature enough and does not have the right governance, processes and feedback mechanisms in place, it might turn out to be another attempt at glory with nothing to show for it.


References

  1. Predictive Analytics for Dummies
  2. Seven Methods for Transforming Corporate Data into Business Intelligence
  3. H.P. Luhn, "A Business Intelligence System," IBM Journal
  4. Gartner report G00258011, Magic Quadrant for Advanced Analytics Platforms
  5. Gartner IT Glossary: Predictive Analytics
  6. Gartner IT Glossary: Business Intelligence
  7. SAP Predictive Analytics
  8. Marek J. Druzdzel and Roger R. Flynn, Decision Support Systems
  9. 5 Questions to Ask About Predictive Analytics
  10. 5 Factors for Business Transformation