HealthCare.gov – Who is at fault?

Credits: Arsalan Khan, Ed Heironimus, Francis Wisseh, Maryam Moussavi and Udhayakumar Parerikkal


This research paper analyzes the Centers for Medicare & Medicaid Services' (CMS) HealthCare.gov project in detail and makes recommendations on what could have been done differently. The project had 55 federal contractors working on it, but this research paper will concentrate on only three. These federal contractors are:

  • CGI Federal, which was developing and implementing the Federally-Funded Exchange (FFE). The estimated value of the contract was $93.7 million, and it was awarded in December 2011.
  • Optum/QSSI, which was developing the Data Services Hub that would verify citizenship, immigration status, and tax information. The estimated value of that contract was $144.6 million, and it was awarded in January 2012. Optum/QSSI was also developing the Enterprise Identity Management (EIDM) system that would provide enterprise-wide credentials and single sign-on capability. The estimated value of that contract was almost $110 million, and it was awarded in June 2012.
  • Terremark Worldwide, Inc. (acquired by Verizon), which was going to help increase CMS' Platform-as-a-Service (PaaS) capabilities in the CMS cloud-computing environment. The total estimated value of the contract was $55.4 million, and multiple task orders were issued through the summer of 2013.

The following tables summarize this research paper:

Table 1: Key Inputs

Key Inputs of the Project

CMS:

·      Affordable Care Act

·      States

·      People/Team

CGI Federal:

·      FFE RFP

·      Requirements

Optum/QSSI:

·      Data Services Hub and EIDM RFP

Verizon:

·      PaaS RFP

Table 2: Key Components

Key Components of the Project

CMS:

·      Agile Methodology

·      Project/System Integrator

·      Parallel “stacking” of phases

CGI Federal:

·      CMMI Level 5 Maturity

·      Agile Methodology

Optum/QSSI:

·      CMMI Level 3 Maturity

·      Agile Methodology

·      Data Services Hub Documents

·      EIDM Documents

Verizon:

·      Architecture diagram

·      Security

Table 3: Quality of Project Management – Qualitative View

Qualitative View of Project Management

CMS:

·      Government vs. Private industry projects

·      Test plans and test reports

CGI Federal:

·      Requirement changes

·      Lessons learned from a state exchange

Optum/QSSI:

·      Requirement changes

·      Previous benchmarking and audits used

Verizon:

·      Issue escalation

·      Poor coordination

Table 4: Quality of Project Management – Quantitative View

Quantitative View of Project Management

CMS:

·      HHS Enterprise Life Cycle

CGI Federal:

·      Highly metrics-driven

Optum/QSSI:

·      Use of charts

Verizon:

·      Delayed processing of orders

Table 5: Project Management Successes and Failures

Key Successes and Failures of Project Management

CMS:

·      Pressure from White House

·      Lack of business processes

·      Miscalculated costs

·      Various technical options were not considered

CGI Federal:

·      FFE

·      Changing Requirements

·      Testing

Optum/QSSI:

·      Data Services Hub and EIDM

·      Buggy Data Services Hub and EIDM

Verizon:

·      Financial Success

·      Hardware Outage

·      Project Management

Table 6: Lessons Learned

Lessons Learned on Project Management Best Practices

·      Roles and Responsibilities Matrix

·      Full-Time Project Manager

·      Business Processes and Governance

·      Requirements Management

·      Communications and Sharing

·      Metrics and Measurements

·      Methodologies and Documentation

Table 7: Recommendations

Team Recommendations

Project Design:

·      Project Manager

·      Cross-Functional Team

Project Implementation:

·      Team Satisfaction Survey

·      Pilot Approach


The Affordable Care Act (ACA) is the nation’s healthcare reform law enacted on March 23rd, 2010. Under the law, a new “Patient’s Bill of Rights” gives the American people the stability and flexibility they need to make informed choices about their health. There were numerous reasons why healthcare reform was critically needed in the United States, including:

  • High health insurance rates and lack of coverage by many: In 2013, the Congressional Budget Office (CBO) estimated that 57 million Americans under the age of 65 were uninsured, representing roughly one out of five people in that group.
  • Unsustainable healthcare spending: Healthcare spending represented 17.9% of the nation’s Gross Domestic Product (GDP) in 2011.
  • Lack of emphasis on prevention: 75% of healthcare dollars are spent on treating preventable diseases, yet only 3 cents of each healthcare dollar goes toward prevention.
  • Healthcare disparities: Healthcare inequalities related to income and access to coverage exist across demographic and racial lines.

On October 1st, 2013, HealthCare.gov went live as part of the technical implementation of the ACA reform to help Americans buy healthcare insurance; however, the release was a colossal failure. The causes and contributing factors that led to issues with this project are explored in detail. Focus is placed on CMS’ capabilities from a Project Integration and Project Management perspective. Additionally, our analysis will assess the role of the major federal contractors in the project. Examples are included to show how contributing factors such as scope creep, schedule constraints, and lack of adequate testing led to a website that provided an inadequate customer experience.

This research paper provides a descriptive review and analysis of the HealthCare.gov project. During our analysis, we used Kathy Schwalbe’s (2014) Three-Sphere Model for Systems Management, which entails the organizational, business, and technological perspectives of Project Management. We utilize these perspectives to determine what went wrong with the project from the points of view of the Federal Government, CGI Federal (contractor), United Healthcare QSSI (contractor), and Verizon (contractor). Furthermore, building on these unique perspectives, we analyze the stated objectives and real implications of the project, the quality of the project management from qualitative and quantitative perspectives, key success and failure factors, key lessons learned, project management best practices, and recommendations for what might have been done differently.


In order to set the context of this research paper, we have to understand what CMS does, why the website was needed, and why its on-time completion was a priority. Additionally, we will look at the key inputs, components, and deliverables of this project.

3.1 About CMS

According to the CMS and HHS websites, CMS is one of the operational divisions of the Department of Health and Human Services (HHS). CMS is responsible for providing oversight of the Medicare and Medicaid programs, the Health Insurance Marketplace, and related quality assurance activities. It has 10 regional offices, with its headquarters in Maryland. The Administrator of CMS is nominated by the US President and confirmed by the Senate. Figure 1 (Priorities, 2014) below shows CMS’s 2013 budget, which accounted for 22% of the entire US Federal Government budget.


Figure 1: Federal Budget Distribution


3.2 Why was CMS given the project?

CMS was the Obama administration’s natural choice for this project. As discussed earlier, the purpose of the ACA was to enable all Americans to buy health insurance, and we defined CMS as the organization that provides healthcare to the elderly, the disabled, and those who are not financially capable of buying healthcare. The HealthCare.gov project began under a sub-agency of HHS called the Center for Consumer Information and Insurance Oversight (CCIIO), whose charter was to support a successful rollout of the ACA. In 2011, the Secretary of HHS, Kathleen Sebelius, citing efficiency gains, stated that the CCIIO would be moved under CMS (Reichard, 2011). The Obama administration insisted this was a way to control IT costs and leverage economies of scale through existing investments and infrastructure. The Republican opposition believed this was another example of “…resources being diverted from seniors’ health care to be used to advance the Democrats’ new government-run health care entitlement” (Reichard, John. “Sebelius Shuffles Insurance Oversight Office into CMS, Shifts CLASS Act to Administration on Aging.” Washington Health Policy Week in Review, January 10, 2011).

3.3 About the Federal Contractors and their relationship with the CMS

3.3.1 CGI Federal

CGI Group Inc., a Canadian company, acquired American Management Systems (AMS) in 2004 to enter the U.S. Federal Government market. Because an American federal contractor was being acquired by a foreign entity, a “firewall” was created so that CGI Federal (formerly AMS) could continue to work on federal contracts. This “firewall” entailed that CGI Federal, a wholly owned subsidiary of CGI Group Inc., would not share federal client information with its parent. The acquisition proved to be very lucrative for CGI Group Inc.: CGI Federal became one of its most profitable business units thanks to the Healthcare and Human Services division. This division provides IT services in the areas of provider-based services, public health surveillance, portal integration, security, enterprise architecture, service-oriented architecture, business intelligence, and applications development.

In September 2007, CGI Federal, among 16 federal contractors, was awarded the Enterprise Systems Development (ESD) Indefinite Delivery Indefinite Quantity (IDIQ) contract by CMS. The purpose of the ESD IDIQ was to support CMS’ Integrated IT Investment & Systems Life Cycle Framework and various IT modernization programs and initiatives. Although no task orders were issued under this contract at the time, it kept the door open for future task orders to the 16 contractors.

The HealthCare.gov project was competitively bid under the ESD IDIQ. The bid produced four finalists, and out of those four contractors, CGI Federal was selected in September 2011 as the awardee based on “best value”.

3.3.2 Optum/QSSI

Founded in 1997, Quality Software Services Inc. (QSSI) is an established Capability Maturity Model Integration (CMMI) Level 3 organization with a proven track record of delivering a broad range of solutions, with expertise in Health IT, Software Engineering, and Security & Privacy. Based in Columbia, MD, Optum/QSSI is a subsidiary of UnitedHealth’s Optum division and was acquired by Optum in 2012.

Optum/QSSI is privately held, with about 400 employees, and collaborates with both the public and private sectors to maximize performance and create sustainable value for its customers. In its 15-year existence, the company has cultivated a process-driven, client-focused method of IT solution development, which has solidified its reputation as a capable IT partner in both the federal and commercial marketplaces. In the federal landscape, Optum/QSSI has established itself as an industry leader in the field of Health IT, and this reputation was key to the company’s selection as a federal contractor for HealthCare.gov.

3.3.3 Verizon

Terremark, now a Verizon company, is dedicated to combining a strong cloud-based platform with the security and professional services necessary to conduct today’s enterprise and public-sector business on next-generation IT infrastructure. At the center of Verizon’s capabilities is its enterprise-class IT platform, which combines advanced IT infrastructure with provisioning and automation capabilities; this is what CMS leveraged in this case. Verizon’s standards-based approach aligns with today’s enterprise business requirements driven by agility, productivity, and competitive advantage.

Verizon Terremark was a natural fit at CMS through a long-standing relationship. Verizon manages and maintains the entire HHS Wide Area Network (WAN) along with ancillary services such as security services, mobile solutions, and unified communications. Verizon also applied a homegrown fraud-detection service, originally used to identify toll-free fraud, to pursue Medicare and Medicaid fraud, saving the agency millions.

3.4 Key Inputs of CMS

For the HealthCare.gov project, we identified the following key inputs for CMS:

  • Patient Protection and Affordable Care Act (ACA): The ACA became law on March 23rd, 2010 under the Obama Administration. The legislation was passed to address various consumer health insurance issues, such as denial of coverage due to pre-existing conditions, termination of coverage when patients became sick, lifetime benefit limits, and access to affordable healthcare. The ACA mandated the creation of “exchanges” that consumers would use to compare and buy a Qualified Health Plan (QHP) based on their state of residency, income level, age, and other factors. These exchanges could be created at the state level or at the federal level. If a state decided not to create its own exchange, it could redirect its constituents to the federal exchange to buy healthcare insurance.
  • States: The various states and Washington D.C. informed CMS whether they intended to create their own exchanges or utilize the exchange developed by the federal government. States also had the flexibility to adopt the federal exchange later; initially, 26 states opted to have their constituents go to the federal exchange to purchase healthcare insurance.
  • People/Team: CMS assigned a part-time Project Manager to the project.
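The exchanges' core task of matching consumers to QHPs by state of residency, age, and other factors can be sketched as a simple filter. The plan data, field names, and premiums below are illustrative assumptions, not CMS's actual schema or rating rules:

```python
# Hypothetical sketch of the kind of plan lookup an exchange performs.
# Plans, fields, and premiums are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class QHP:
    name: str
    state: str
    min_age: int
    max_age: int
    monthly_premium: float

PLANS = [
    QHP("Bronze Basic", "TX", 18, 64, 210.0),
    QHP("Silver Plus", "TX", 18, 64, 320.0),
    QHP("Gold Family", "VA", 26, 64, 450.0),
]

def eligible_plans(state: str, age: int) -> list[QHP]:
    """Return plans offered in the consumer's state that cover their age."""
    return [p for p in PLANS
            if p.state == state and p.min_age <= age <= p.max_age]

matches = eligible_plans("TX", 30)
print([p.name for p in matches])  # ['Bronze Basic', 'Silver Plus']
```

In the real system this comparison step also folds in income-based subsidy calculations, which is one reason the exchange depended on live data from other agencies.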

3.5 Key Inputs for Federal Contractors

The HealthCare.gov project is one of the most complex federal IT undertakings in recent times. The project entailed 55 contractors working on various aspects of the system. These contractors were responsible for the creation of a robust network/infrastructure, the development of a website front end, the Federally-Funded Exchange (FFE), the Data Services Hub, and the EIDM. Additionally, the system receives eligibility and verification information from various other federal government agencies as the consumer fills out the online form. The following figure (Ariana Cha, 2013) shows the complexity of the information flow of the entire system.

Figure 2: contractors and agencies processes

3.5.1 CGI Federal

CGI Federal was one of the prime contractors for the project. It had the following key inputs:

  • Request for Proposal (RFP): An RFP was one of the first key inputs for the project. It required the establishment of an FFE that would be used for eligibility and enrollment, plan management, financial management, oversight, communication, and customer service. The following figure (Desai, 2013) shows the FFE Concept of Operations:


Figure 3: FFE Concept of Operations

  • Requirements: After the contract was awarded, first the CCIIO and then various other representatives within CMS provided requirements to CGI Federal for the FFE. These representatives came from policy, legal and Office of Information Services (OIS).


3.5.2 Optum/QSSI

  • Request for Proposal (RFP): This was the primary input for the project. It required the development of a data services hub for information exchange and the EIDM for user account registration.

3.5.3 Verizon

  • Request for Proposal (RFP): One of the first RFPs, released in 2010, covered the infrastructure and, essentially, Platform as a Service (PaaS) for the ACA. A Cloud Solutions Executive for Verizon Terremark said Verizon received its award before the other contractors became involved. Following the award, CGI Federal and others were asked to develop the system to conform to the Verizon environment.

3.6 Key Components for CMS

  • Systems Development Methodology: A presentation from April 2012 by CMS’ OIS shows that an “Agile” methodology was used for HealthCare.gov, as shown in the following figure (Services, 2012).


Figure 4: CMS Agile Methodology

  • Project/System Integrator: CMS took on the role of “system integrator” to manage all 55 contractors.
  • Implementation Consideration: A McKinsey report shows the parallel “stacking” of all phases for this project as shown below (CMS, Red Team Discussion Document, 2013):


Figure 5: McKinsey Report for CMS

3.7 Key Components for Federal Contractors

3.7.1 CGI Federal

  • Process Methodology: Patrick Thibodeau indicates in a Computerworld article that CGI Federal attained Capability Maturity Model Integration (CMMI) Level 5 maturity, making it only the 10th company in the US to achieve this level. By extension, we can assume that CGI Federal brought CMMI best practices to the project.
  • Systems Development Methodology: Based on Federal contracting experience, the Federal contractor would either have their own development methodology or they would use the development methodology of the client (CMS in this case). Research indicates that CGI Federal used an Agile methodology to develop the FFE.

3.7.2 Optum/QSSI

  • Process Methodology: As a CMMI Level 3 organization, Optum/QSSI has a reputation for process driven and client focused methods of IT solutions development. Based on our research it is evident that the company implemented CMMI best practices on the project.
  • Systems Development Methodology: According to a senior analytics consultant with a major health provider who worked on the project, Optum/QSSI used an agile development methodology similar to the one depicted below (Group, 2014) based on iterative and incremental development with continuous visibility and opportunity for feedback from CMS.


Figure 6: Agile Methodology

  • Requirements Documentation for Data Services Hub: The Data Services Hub is a central function of the federal exchange that connects and routes information among trusted data sources, including Treasury, Equifax, the Social Security Administration (SSA), and others. Inputs from CMS changed in late September 2013 to require account creation before shopping for health plans.
  • Requirements Documentation for EIDM: EIDM enables healthcare providers to use one credential to access multiple applications, serving the identity management needs of new and legacy systems. Inputs from CMS changed in late September 2013 to require account creation before shopping for health plans.

3.7.3 Verizon

  • System Architecture Design: There was an architecture diagram and overall design for the entire system, but it lost effectiveness due to a lack of accountability for ensuring each component was delivered.
  • Security: Security was a huge component of the infrastructure requirements, and Verizon Terremark offered a highly secure architecture designed to meet all of the critical compliance and certification requirements. Verizon had been audited against FISMA at the moderate level and against NIST 800-53 for federal customers. Verizon was also asked to provide advanced security options on the platform, such as intrusion detection/intrusion prevention (IDS/IPS), log aggregation, and security event management.

3.8 Key Deliverables for CMS

  • Website: A website that provides residents the ability to compare QHPs.
  • Exchange: An exchange that enrolls residents by verifying their eligibility based on income level, age, and other factors.

3.9 Key Deliverables for Federal Contractors

3.9.1 CGI Federal

  • FFE: A fully functional FFE ready to go live by October 1st, 2013. The FFE would be the backbone of HealthCare.gov and would seamlessly integrate with the website, the Data Services Hub, and the EIDM.

3.9.2 Optum/QSSI

  • Data Services Hub: This component of HealthCare.gov determines eligibility for financial help. It sends customer data to various government agencies (VA, DHS, Treasury, etc.) to verify eligibility.
  • EIDM (Proof of Identity): Upon account creation, this system verifies identity with Experian. It also enables healthcare providers to use one credential to access multiple applications, serving the identity management needs of new and legacy systems.
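The Data Services Hub's routing role described above can be sketched as a fan-out to trusted verification sources. The agency names, checks, and field names below are hypothetical stand-ins, not the hub's real interfaces:

```python
# Illustrative sketch of the hub's routing role: it holds no data itself,
# but fans a consumer's application out to trusted verification sources.
# The verifier functions below are hypothetical stand-ins.
def verify_ssn(application):      # stand-in for the SSA check
    return application.get("ssn", "").isdigit()

def verify_income(application):   # stand-in for the Treasury/IRS check
    return application.get("income", -1) >= 0

def verify_status(application):   # stand-in for the DHS status check
    return application.get("citizen_or_lawful", False)

VERIFIERS = {
    "SSA": verify_ssn,
    "Treasury": verify_income,
    "DHS": verify_status,
}

def route_through_hub(application: dict) -> dict:
    """Query each trusted source and collect per-source results."""
    return {source: check(application) for source, check in VERIFIERS.items()}

result = route_through_hub(
    {"ssn": "123456789", "income": 30000, "citizen_or_lawful": True})
print(result)  # {'SSA': True, 'Treasury': True, 'DHS': True}
```

The design point this sketch captures is that the hub is a router, not a database: each determination depends on live responses from every downstream agency, which is why hub throughput became critical once account creation was forced ahead of browsing.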

3.9.3 Verizon

  • PaaS: Fully operational infrastructure which provides servers and hosting for the exchange.
  • Environmentals: Supports power, connectivity, and memory requirements for the environment.
  • Service Level Agreement (SLA): Rolling out the infrastructure in a timely fashion and offering and executing upon the SLAs required by the Government, among other things.


4.1 Project Management Quality of CMS

4.1.1 Qualitative

  • Quality Planning: Quality planning for government releases is on a different scale than quality planning for private companies. Many factors come into play, such as redistribution of resources through regulation, subsidization, and procurement. As part of CMS’ quality planning phase, the main scope aspects were functionality, features, and system outputs. However, performance, reliability, and maintainability suffered heavily due to time constraints, as October 1st, 2013 was a hard deadline.
  • Quality Assurance: CMS used test plans and test reports to ensure quality requirements were being met. The front-end web interface was indeed completed on time. However, verifying the quality of system integration was difficult due to the complexity of the back-end sub-systems.

4.1.2 Quantitative

  • Quality Monitoring and Control: During the implementation phase, CMS did not take proactive measures to address the issues found one week before launch, specifically the testers’ reports of server crashes at a scale of 10,000 concurrent users. Additionally, CGI Federal had reported that more testing was required, yet CMS appeared insensitive to the recommendation. Status reports were supposed to be read, understood, and acted upon. HHS prescribed the Enterprise Life Cycle, and CMS was supposed to follow its guidelines.
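The pre-launch finding above, server crashes at 10,000 concurrent users, is the kind of result a simple capacity model makes concrete. The capacity figure below is an assumption for illustration, not a measured CMS number:

```python
# Minimal sketch of a burst-capacity check: how many of N simultaneous
# sessions a server with a fixed capacity can serve. The capacity value
# is an illustrative assumption.
def run_load_test(concurrent_users: int, capacity: int) -> dict:
    """Model a burst of simultaneous sessions against a fixed capacity."""
    served = min(concurrent_users, capacity)
    failed = concurrent_users - served
    return {"served": served,
            "failed": failed,
            "failure_rate": failed / concurrent_users}

# The 10,000-user scenario the testers reported a week before launch,
# against an assumed capacity of 5,000 sessions:
report = run_load_test(concurrent_users=10_000, capacity=5_000)
print(report)  # half the sessions fail once capacity is exceeded
```

Even this crude model shows why the testers' report was actionable: once demand exceeds capacity, the failure rate grows with every additional user, so the launch-day traffic (far above 10,000) was bound to do worse than the test.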

4.2 Project Management Quality for Federal Contractors

4.2.1 Project Management Quality of CGI Federal

Qualitative

  • Quality Planning: As a CMMI Level 5 organization, CGI Federal had optimized quality processes to deliver appropriate outcomes for the FFE. However, requirement changes appear to be one of the main issues with the project: requirements were still being revised in the summer of 2013 and kept evolving even a week before go-live. Additionally, the number of states joining the FFE increased from 26 to 34, which created another level of complexity in maintaining quality on the project.
  • Quality Assurance: According to Cheryl Campbell, Senior Vice President at CGI Federal, in the Congressional hearings, CGI Federal developed the FFE as per the contract requirements. It is interesting to note that CGI Federal was also one of the companies that developed the Massachusetts Health Exchange, which was used as a model for the FFE. Hence, we can assume that quality lessons learned from that project could have been applied to the FFE.

Quantitative

  • Quality Monitoring and Control: CGI Federal is a highly metrics-driven organization. Each project is monitored and measured according to industry “best practices” and proprietary methodologies. Projects are evaluated based on scope, cost, schedule, and other factors to check the health of the project and verify that the customer remains satisfied. But if requirements continue to evolve, even the best methodologies and measurements are no match for a customer changing its mind.

4.2.2 Project Management Quality of Optum/QSSI

Qualitative

  • Quality Planning: As a CMMI Level 3 organization, Optum/QSSI had a planned quality process to deliver appropriate outcomes for the Data Services Hub and EIDM project deliverables. However, changing project requirements from CMS severely impacted quality planning efforts. For instance, the late September requirement change that forced consumers to create user accounts before browsing the exchange marketplace resulted in higher-than-expected simultaneous system usage. This impacted the EIDM tool, which was originally designed to let consumers first access the system, browse the marketplace, and create an account only if they wanted a product. Because the EIDM is only one piece of the federal marketplace registration system, this late change made it impossible to coordinate and plan quality processes with the other contractors working on portions of the registration system to ensure appropriate performance before the October 1st go-live date.
  • Quality Assurance: According to Andrew Slavitt, Group Executive Vice President at Optum/QSSI, in his Congressional testimony, both the Data Services Hub and EIDM deliverables met quality assurance, satisfying CMS’ requirements and all relevant quality standards for the project. It is also important to note that Optum/QSSI had developed an EIDM tool for two other CMS systems, and this EIDM tool drew on benchmarking and quality audits from those existing EIDM solutions at CMS.

Quantitative

  • Quality Monitoring and Control: Requirement changes greatly impacted quality monitoring and control. Although Optum/QSSI used quality control tools such as charts to guide acceptance decisions, rework, and process adjustments, changing requirements severely impacted these controls. These changes introduced time constraints and limited system-wide testing, most importantly user acceptance testing.

4.2.3 Project Management Quality of Verizon

Quantitative

  • Extensive delays in processing orders for additional capacity, provisioning resources, and implementation caused Verizon a lot of friction with the CMS customer.

Qualitative

  • Management within Verizon also failed to run concerns up the executive flagpole to make leadership aware of issues, which could have prevented delays or the numerous escalations by CMS.
  • Verizon’s project management failed on many accounts. Poor coordination between the multiple project managers assigned to the project within Verizon was to blame.

4.3 Project Management Key Successes and Failures

4.3.1 CMS

A review of the Congressional hearings and documentation reveals that HealthCare.gov was a high-priority project for CMS. In conversations with federal contractors, CMS would start by saying, “this is what the White House wants…”. It is still unclear whether this preface was used because directions were actually coming from the White House or simply to signal the importance of the project. Regardless of the intentions, one thing is certain: the words were not followed by action, since there was no dedicated full-time Project Manager to manage the project from kickoff to implementation. Most likely, decisions were made by committees, as is often the case with large government projects.

A big piece of the project included behind-the-scenes business processes, even before the technology was considered. These business processes and governance entailed coordinating not only with 26 states but also with insurance companies. The figure below depicts an exhaustive list of stakeholders affected by the project:


Figure 7: FFE Stakeholders

From a business standpoint, CMS failed to calculate in advance the true cost of the entire project. Additionally, even after McKinsey reports indicating the danger of not doing end-to-end testing, and warnings from CGI Federal in its August 2013 status report (Federal, 2013) that testing could be an issue, CMS ignored these experts and went full steam toward going live on October 1st, 2013.

Research indicates that some COTS products and custom software were developed to stand up HealthCare.gov. It also seems that CMS failed to look at the various internal and external “firewalls” the system needed to pass through.

4.3.2 Federal Contractors

CGI Federal

  • FFE: According to Congressional hearings, the CGI Federal representative indicated that they had provided a fully functioning FFE as per contract requirements by October 1st, 2013. This was their success factor.
  • Changing Requirements: CGI Federal was responsible for developing the FFE. It was put under the spotlight for not providing holistic recommendations for the entire project. It is evident that requirements were changing and new states were being added, but there was no pushback from CGI Federal to indicate that the requirement changes would result in quality issues on its end that would affect the entire system.
  • Testing: While the system did work for the first few users, login delays resulted in a poor customer experience. Research indicates that no end-to-end testing was performed to see holistically how the system would work. CGI Federal could have used its vast industry expertise to inform CMS that skipping end-to-end testing would result in major issues.

Optum/QSSI

  • Data Services Hub & EIDM: Based on Andrew Slavitt’s Congressional testimony, Optum/QSSI successfully developed and delivered fully functional Data Services Hub and EIDM tools. For example, according to Slavitt, on October 1st the Data Services Hub processed over 175,000 transactions, and millions more after the project launched.
  • Buggy Data Services Hub & EIDM: In the same Congressional hearings, however, Andrew Slavitt acknowledged that the Data Services Hub and EIDM tools, although they worked functionally as designed, experienced performance bottlenecks when the project launched because of the late requirement change requiring consumers to create accounts before browsing the marketplace. This change resulted in higher-than-expected simultaneous usage of the registration system and the Data Services Hub eligibility verification tool. Slavitt also admitted that Optum/QSSI identified and fixed bugs in the EIDM tool in the days after the October 1st launch. The release of code that had bugs was a quality failure and contradicts Slavitt’s earlier comments about delivering a fully functional EIDM tool.

Verizon

  • Financial Success: The primary success story for Verizon was financial: the company did far better as a result of this project than initially predicted, due in large part to the scope creep. Additionally, Verizon specifically was not the cause of delays or outages on day one of the project and delivered the infrastructure to support the site by the launch date. A significant underestimation of capacity was to blame for the initial failures of HealthCare.gov.
  • Hardware Outage: A subsequent failure, which Secretary Sebelius cited in her testimony, was a hardware outage unrelated to project management (Krumboltz, Michael. “HealthCare.gov suffers outage as Sebelius testifies that it never crashed.”).
  • Project Management: Key project management failures within Verizon were inherent in the ordering, design, and engineering phases, and implementation eventually suffered due to project management inefficiencies. As discussed previously, Verizon had several members of a project management team supporting the CGI Federal, QSSI, and CMS relationships. There should have been one central program manager supporting the ACA contract for Verizon. The communication failures across project management teams created bottlenecks that spread throughout the engineering teams.

4.4 Lessons Learned on Project Management Best Practices

  • Who’s on First: Even though this project needed input from various internal and external stakeholders, a clear roles and responsibilities matrix should have been developed. The contractors could have used this matrix to see who was responsible for each activity of this large project and whom specifically to reach out to in case they ran into bottlenecks.
  • Project Manager: CMS did not assign a full-time Project Manager for the project. For this large-scale project, it would have been prudent to have a full-time project manager responsible for coordinating various activities internally and across various contractors.
  • Business Process and Governance: Since this was the first time such an endeavor was taking place with multiple stakeholders, it would have been useful to map the business processes of the future state prior to contract award. The overall business processes would also support governance structures that would help in checking progress and alignment with the stated objectives of the project.
  • Requirements Management: Ever-changing requirements and a change in strategy can affect projects dramatically. For HealthCare.gov, baseline requirements should have been established earlier in the project; these would entail the basic functionalities needed by CMS. It would also have been useful to maintain a Requirements Traceability Matrix (RTM) available to everyone on the project. The RTM would not only keep all stakeholders informed of what is going on but would also keep the entire team honest, and perhaps surface issues before they became problems later.
  • Communications and Sharing of Information: The way the project was set up, it seems the left hand did not know what the right hand was doing. This creates problems in understanding issues from a holistic perspective and in knowing whether there are dependencies that should be coordinated.
  • Metrics and Measurements: Reasonable metrics should have been created to assess the health of the project. These metrics should measure stakeholder and team satisfaction at the beginning of and throughout the project life cycle to determine where adjustments need to be made. Based on these metrics, remediation processes should have been set up so that nothing falls through the cracks.
  • Methodologies and Documentation: Although it seems CMS was supposed to follow HHS’ Enterprise Life Cycle (ELC), research indicates that an Agile methodology was used to develop the system. This points to a conflict between what was supposed to be used and what was actually being used. Additionally, the vendors who were helping CMS came with their own methodologies. There were too many methodologies and no consistent alignment of them across the organization, so the teams were not all on the same page. In this scenario, the advice would be to understand the various methodologies at play and select the most appropriate one. This selection might also entail creating some sort of hybrid methodology that everyone conforms to. Having one methodology would reduce the amount of documentation that needs to be developed, freeing up resources to work on the actual needs of the project.
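To make the RTM idea above concrete, here is a minimal sketch in Python; the requirement IDs, artifacts and test names are hypothetical:

```python
# Minimal Requirements Traceability Matrix (RTM) sketch.
# Each requirement is traced to the design artifacts and tests that cover it,
# so coverage gaps are visible before they become late-stage surprises.

rtm = {
    "REQ-001": {"description": "User can create an account",
                "design": ["EIDM spec"], "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"description": "Eligibility is verified via the Data Hub",
                "design": ["Hub interface doc"], "tests": []},
    "REQ-003": {"description": "User can browse plans anonymously",
                "design": [], "tests": []},
}

def untraced(rtm):
    """Return requirement IDs missing either design coverage or test coverage."""
    return sorted(req for req, row in rtm.items()
                  if not row["design"] or not row["tests"])

print(untraced(rtm))  # requirements that need attention
```

A report like this, regenerated on every status cycle, is what would have kept all stakeholders honest about untested functionality.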


5.1 Project Design Recommendations (CMS only)

  • Project Manager (PM): The project had 55 federal contractors working on it at various times. The system components that these federal contractors were developing depended on each other. For example, the website needed to communicate with the EIDM and Data Hub, which would communicate with the FFE. Due to these complex dependencies, a significant amount of communication and coordination needed to happen on this project. Additionally, someone needed to see not only whether these system components would work with each other but also how thorough testing of these systems would be the difference between a failed and a successful project. Despite the complexity of managing such a large group of contractors, CMS did not have a full-time PM for HealthCare.gov. While the real reasons for this decision are unknown, we can extrapolate from research that the lack of an astute full-time PM was one of the major causes of the issues with HealthCare.gov.

We recommend that a full-time PM with experience in large-scale implementations should have been assigned to HealthCare.gov. This PM would have the authority to push back on unrealistic timelines, have a holistic view of the project, and understand that even though individual system components are being developed, time should be allocated in the project to perform effective end-to-end testing.

  • Cross-Functional Teams: As discussed earlier, the system components that the federal contractors were developing had many dependencies on each other. Despite these dependencies, no proof has been found that cross-functional teams were established.

We recommend that in designing the project, the development of cross-functional teams should have been given a high priority. These cross-functional teams would comprise government staff and contractors. These teams would not only create synergies among the various people but should also be designed in a way that encourages the sharing of lessons learned and recommendations.

  • Team Satisfaction Survey (TSS): When a project falls off track, it is often because, by the time management pays attention to it, the project is at a point of no return. This is what seems to have happened at CMS. By the time people working on the project started voicing their concerns that testing could be an issue, CMS either ignored the warnings or felt it had no choice, and thus went ahead and released a buggy version of the system to the masses.

We recommend that a TSS should be incorporated into the project to ask people at all levels about their concerns and recommendations for the project. The TSS should collect this information at the beginning of and periodically during the project. The TSS should also have a mechanism through which management can act promptly on issues that seem to be recurring. The TSS is not a status report but a mechanism to check the pulse of the project.
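As a rough sketch of the TSS mechanism described above, the following Python snippet flags concerns that recur across survey rounds; the survey data and concern labels are made up for illustration:

```python
# Team Satisfaction Survey (TSS) sketch: collect periodic responses and flag
# concerns that recur across survey rounds so management can act early.
from collections import Counter

rounds = [
    ["testing window too short", "unclear requirements"],       # kickoff
    ["testing window too short", "integration owner unclear"],  # month 3
    ["testing window too short", "integration owner unclear"],  # month 6
]

def recurring_concerns(rounds, min_rounds=2):
    """Concerns raised in at least `min_rounds` distinct survey rounds."""
    counts = Counter(c for r in rounds for c in set(r))
    return sorted(c for c, n in counts.items() if n >= min_rounds)

print(recurring_concerns(rounds))
```

On HealthCare.gov, a mechanism like this would have surfaced the testing-window concern long before launch.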

5.2 Project Implementation Recommendations (CMS only)

  • Pilot Approach: The project used a “big bang” approach to release the software on October 1st, 2013. This approach resulted in overwhelming the system, as users who tried to log in found that their online forms either took too long to verify their information or that they were simply kicked out of the system. Additionally, it is apparent that the federal government did not anticipate the system errors it was going to encounter when the system went live.

We recommend that a pilot program should have been created for one of the states. This pilot program would be used to see how the system would perform when it goes live and what kind of issues it might have once it is open to the masses. Lessons learned from this pilot program could have been used to provide a better customer experience once the system went live.


To summarize, it should come as no surprise to those familiar with IT projects that many IT projects fail. A recent Gartner user survey showed that, while large IT projects are more likely to fail than small projects, around half of all IT projects fail in part due to unrealistic requirements, technical complexities, integration responsibilities, aggressive schedules, and inadequate testing, all causes that were present in this project and contributed to its failure. As outlined in this case analysis, these fundamental missteps were the contributing factors that led to issues with this project and ultimately its failure. Based on our analysis, CMS did not have the Project Integration and Project Management know-how to manage such a major project. The Agency’s assignment of a part-time project manager to the project is evidence that leadership did not fully understand the magnitude and importance of the project, and what it takes to implement IT projects. Based on our recommendations, it is our hope that such project management missteps are avoided in future IT project implementations.


  1. Ariana Cha, L. S. (2013, 10 24). What went wrong with HealthCare.gov. Retrieved 04 16, 2014, from The Washington Post.
  2. CMS. (2012, 04 01). Health Insurance Exchanges. Office of Information Services, 25.
  3. CMS.GOV. (2014, 04 01). CMS covers 100 million people. Retrieved 05 01, 2014, from CMS.gov.
  4. Conerly, B. (2013, 07 16). ObamaCare’s Delays: Lessons For All Businesses About Project Management. Retrieved 04 21, 2014.
  5. C-SPAN. (2013, 10 24). Implementation of Affordable Care Act. Retrieved 03 15, 2014.
  6. Daconta, M. (2013, 11 01). Media got it wrong: HealthCare.gov failed despite agile practices. Retrieved 04 21, 2014, from GCN.
  7. Desai, C. (2013, 03 05). Federally Facilitated Exchange. Health and Compliance Programs, 44.
  8. DHHS. (2012, 06). Guide to Enterprise Life Cycle Processes, Artifacts, and Reviews. Centers for Medicare & Medicaid Services.
  9. Dudley, T. (2012, 06 10). The Affordable Care Act and Marketplace Overview. CMS Office of Public Engagement.
  10. CGI Federal. (2013). Monthly Status Report – August 2013. MD: CGI Federal.
  11. Hardin, K. (2013, 11 13). HealthCare.gov rollout lesson: Push back on unrealistic launch dates. Retrieved 05 02, 2014, from TechRepublic.
  12. U.S. Government Accountability Office. (n.d.). Patient Protection and Affordable Care Act. DC: GAO.
  13. Simon & Co., L. (2013, 12 04). Major Contracts to Implement the Federal Marketplace and Support State Based Marketplaces. DC: Simon & Co.
  14. Congress of the United States. (2013). 113th Congress. DC: House of Representatives.
  15. Thibodeau, P. (2013, 12 30). The firm behind HealthCare.gov had top-notch credentials and it didn’t help. Retrieved 03 04, 2014, from Computerworld.
  16. Thompson, L. (2013, 12 03). HealthCare.gov Diagnosis: The Government Broke Every Rule Of Project Management. Retrieved 04 01, 2014, from Forbes.
  17. Turnbull, J. (2013, 12 12). What The US Government Learned About Software From The Healthcare.Gov Failure (And Some Of Us Already Knew). Retrieved 05 02, 2014, from Gaslight.
  18. Walker, R. L. (2013). Response of CGI Federal Inc. to Additional Questions for the Record and Member Requests for the Record. DC: Wiley Rein LLP.




Understanding and Applying Predictive Analytics

Executive Summary

This article proposes looking at Predictive Analytics from a conceptual standpoint before jumping into the technological execution considerations. For the implementation aspect, organizations need to assess the following keeping in mind the contextual variances:




  • Top Down
  • Bottom Up
  • Hybrid
  • Organizational Maturity
  • Change Management
  • Training
  • Practical Implications
  • Pros and Cons of Technology Infrastructure
  • Providing Enabling Tools to Users
  • Best Practices

Describing Predictive Analytics

Predictive Analytics is a branch of data mining that helps predict probabilities and trends. It is a broad term describing a variety of statistical and analytical techniques used to develop models that predict future events or behaviors. The form of these predictive models varies, depending on the behavior or event they are predicting. Due to the massive amounts of data organizations are collecting, they are turning towards Predictive Analytics to find patterns in this data that could be used to predict future trends. While no data is perfect in predicting what the future may hold, there are certain areas where organizations are utilizing statistical techniques, supported by information systems at strategic, tactical and operational levels, to change their organizations. Some examples of where Predictive Analytics is leveraged include customer attrition, recruitment and supply chain management.

Gartner describes Predictive Analytics as any approach to data mining with four attributes:

  1. An emphasis on prediction (rather than description, classification or clustering)
  2. Rapid analysis measured in hours or days (rather than stereotypical months of traditional data mining)
  3. An emphasis on the business relevance of the resulting insights (no ivory tower analyses)
  4. An (increasing) emphasis on ease of use, thus making tools accessible to business users

The above description highlights some important aspects for organizations to consider namely:

  1. More focus on prediction rather than just information collection and organization. Sometimes in organizations it is observed that information collection becomes the end goal rather than using that information to make decisions.
  2. Timeliness is important otherwise organizations might be making decisions on information that is already obsolete.
  3. Understanding of the end goal is crucial by asking why Predictive Analytics is being pursued and what value it brings to the organization.
  4. Keeping in mind that if the tools are more accessible to business users then they would have a higher degree of appreciation of what Predictive Analytics could help them achieve.
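As a minimal illustration of the emphasis on prediction rather than mere information collection, the following Python sketch estimates churn probabilities per customer segment from historical records; the data and segment names are invented for illustration:

```python
# Toy predictive model: estimate the probability that a customer churns,
# by segment, from historical records. A real model would use far richer
# features, but the shift from "collecting data" to "predicting with data"
# is the same.

history = [
    # (segment, churned?)
    ("month-to-month", True), ("month-to-month", True), ("month-to-month", False),
    ("annual", False), ("annual", False), ("annual", True),
    ("annual", False), ("month-to-month", True),
]

def churn_rates(history):
    totals, churns = {}, {}
    for segment, churned in history:
        totals[segment] = totals.get(segment, 0) + 1
        churns[segment] = churns.get(segment, 0) + (1 if churned else 0)
    return {s: churns[s] / totals[s] for s in totals}

rates = churn_rates(history)
print(rates["month-to-month"])  # 0.75
print(rates["annual"])          # 0.25
```

Even this crude segment-level estimate is actionable: it tells the business which customers to target with retention offers, which is the "business relevance" attribute Gartner highlights.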

Relationship of Predictive Analytics with Decision Support Systems or Business Intelligence

The University of Pittsburgh describes Decision Support Systems as interactive, computer-based systems that aid users in judgment and choice activities. They provide data storage and retrieval but enhance the traditional information access and retrieval functions with support for model building and model-based reasoning. They support framing, modeling and problem solving. Business Intelligence, according to Gartner, is an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to and analysis of information to improve and optimize decisions and performance. These descriptions point to the fact that Decision Support Systems and Business Intelligence are used for decision making within organizations.

Interestingly, Predictive Analytics appears to be the underlying engine for Decision Support Systems and Business Intelligence: the predictive models that result from Predictive Analytics could be under the hood of these tools. It should be noted that organizations should proceed with caution here, since if the underlying assumptions behind the predictive models are incorrect, then the decision-making tools would be more harmful than helpful. A balanced approach would be to create expert systems where Decision Support Systems or Business Intelligence are augmented by human judgment and the underlying models are checked and verified periodically.
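A hedged sketch of what "checking and verifying the underlying models periodically" could look like in Python; the error threshold and the numbers are illustrative assumptions:

```python
# Periodic model verification sketch: compare predicted probabilities against
# observed outcomes and flag the model for human review when its error drifts
# past a threshold.

def model_needs_review(predicted, observed, threshold=0.15):
    """Mean absolute error between predictions and outcomes vs. a threshold."""
    errors = [abs(p - o) for p, o in zip(predicted, observed)]
    mae = sum(errors) / len(errors)
    return mae > threshold

predicted = [0.9, 0.2, 0.7, 0.1]   # the model's churn probabilities
observed  = [1,   0,   0,   1]     # what actually happened
print(model_needs_review(predicted, observed))
```

A check like this, run on a schedule, is one way to keep the human-judgment loop in the expert-system approach described above.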

Implementation Considerations for Predictive Analytics

As the descriptions above indicate, the aim of Predictive Analytics is to recognize patterns and trends that can be utilized to transform the organization. This requires organizations to first educate themselves on what value they want and what can be derived from Predictive Analytics. Predictive Analytics is about business transformation, and it needs to show what value it brings to the organization. In this regard, we have to assess the people, processes and technologies of the organization in terms of the current state (where the organization is right now) and the future state (where the organization wants to be). Typically, this revolves around Strategies, Politics, Innovation, Culture and Execution (SPICE) as shown below.

SPICE Factors


The assessment of people for Predictive Analytics means understanding which users will be leveraging Predictive Analytics and whether they understand that simply relying on Predictive Analytics is not enough; to have an effective system, they need to be part of the system. This means that the analytics insights need to be augmented by human expertise to make intelligent decisions. The assessment of processes for Predictive Analytics entails looking at how the organization makes decisions right now and how future decisions would be made if Predictive Analytics were put into place. This includes having appropriate governance structures in place. The assessment of technology entails looking at what technologies exist within the organization and whether they could be leveraged for Predictive Analytics. If not, it entails looking at which Predictive Analytics products on the market would work for the organization, and whether they are flexible enough in case the underlying assumptions for the predictive models change or the predictive models become obsolete.

The advanced techniques mentioned in the book, Seven Methods for Transforming Corporate Data into Business Intelligence would be applicable to Predictive Analytics. These methods are:

  1. Data-driven decision support
  2. Genetic Algorithms
  3. Neural Networks
  4. Rule-Based Systems
  5. Fuzzy Logic
  6. Case-Based Reasoning
  7. Machine Learning
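As a small example of one of these methods, here is a minimal rule-based system in Python; the credit-decision rules are purely illustrative:

```python
# Minimal rule-based system: each rule is a condition plus an action, and the
# first matching rule fires. Real systems add conflict resolution, chaining
# and explanations, but the core idea is this simple.

rules = [
    (lambda a: a["late_payments"] > 3, "decline"),
    (lambda a: a["income"] >= 50000 and a["debt_ratio"] < 0.4, "approve"),
    (lambda a: True, "manual review"),  # default rule
]

def decide(applicant):
    for condition, action in rules:
        if condition(applicant):
            return action

print(decide({"late_payments": 0, "income": 60000, "debt_ratio": 0.2}))  # approve
print(decide({"late_payments": 5, "income": 60000, "debt_ratio": 0.2}))  # decline
```

Because the rules are explicit data rather than buried logic, domain experts can review and adjust them, which is one reason rule-based systems remain popular for regulated decisions.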

Technologies Used for Predictive Analytics

Gartner has been publishing its Magic Quadrant on Business Intelligence and Analytics Platforms since 2006. Due to the increased importance of Predictive Analytics in the marketplace, Gartner decided to create a separate Magic Quadrant for Advanced Analytics Platforms, which focuses on Predictive Analytics, and published its first version in February 2014. Since it is the first version of the Magic Quadrant, all vendors listed are new and no vendors were dropped.


Gartner's Magic Quadrant for Advanced Analytics Platforms


As we can see, this Magic Quadrant includes well-known vendors but also vendors that are not as big or as well-known. It is interesting to note that open-source vendors such as RapidMiner and KNIME are in the same Leaders quadrant as well-established vendors such as SAS and IBM. While there are some issues with these open-source vendors, as stated in the report, perhaps this Magic Quadrant is also an indication of where the next generation of analytics will come from. Due to the very nature of open source, there are more opportunities for cheaper customization, which gives organizations the flexibility to be as granular as they want to be. Of course, code stability and lack of proper documentation are issues that organizations need to be cognizant of. Organizations may also want to “try out” these open-source tools before they make a big commitment to proprietary software, to see if Predictive Analytics is something they want to invest heavily in.

Using Predictive Analytics in Specific Industries

There are many industries that utilize Predictive Analytics. The organizations in these industries either use Predictive Analytics to transform their business and/or to address certain areas that they would like to improve upon. Following is a list of some of the industries that utilize Predictive Analytics:

Industry How is Predictive Analytics used?

Retail
  • Customer Retention
  • Inventory Optimization
  • Low-Cost Promotions
Oil and Gas
  • Well and Field Asset Surveillance
  • Production Optimization
  • Equipment Reliability and Maintenance
Manufacturing
  • Adjust production schedules
  • Tweak marketing campaigns
  • Minimize Inventory
  • Human Resources Allocation
  • Supply Chain Optimization
Healthcare
  • Electronic Health Records
  • Nation-wide Blood Levels
Social Media
  • New Business Models

While there are many examples of industries that have embraced Predictive Analytics, there are other industries that have not fully accepted it as a new reality. These industries have many excuses for not considering Predictive Analytics, typically revolving around scope, quality, cost and fear of the unknown. However, the tide might be changing for these industries as well, since industry bloggers are beginning to show how Predictive Analytics could be leveraged for competitive advantage.

My Opinion

Predictive Analytics can help organizations become analytical and better versions of themselves. However, Predictive Analytics can be a deal-breaker if organizations have attempted it and failed in the past, and for this very reason Predictive Analytics should start as a discussion first. This discussion should revolve around asking which areas need improvement and, among other things, determining whether Predictive Analytics could help. After a successful Predictive Analytics initiative, other areas could become potential candidates as well.

An important thing to note is that Predictive Analytics is an organization-wide initiative with touch points across the organization, so the maturity of the organization has to be seriously considered prior to embarking on a Predictive Analytics journey. No matter how good Predictive Analytics can be for the organization, if the organization is not mature enough and does not have the right governance, processes and feedback mechanisms in place, the initiative might turn out to be another attempt at glory with nothing to show for it.


  1. Predictive Analytics for Dummies
  2. Seven Methods for Transforming Corporate Data Into Business Intelligence
  3. IBM Journal Paper on A Business Intelligence System by H.P. Luhn
  4. Gartner report (G00258011) Magic Quadrant for Advanced Analytics Platforms
  5. Gartner IT Glossary on Predictive Analytics
  6. Gartner IT Glossary on Business Intelligence
  7. SAP Predictive Analytics
  8. Decision Support Systems by Marek J. Druzdzel and Roger R. Flynn
  9. 5 Questions to Ask About Predictive Analytics
  10. 5 Factors for Business Transformation

Identifying Organizational Maturity for Data Management

The maturity of an organization is determined by how that organization can collect, manage and exploit data. This is a continuous improvement process where data is used to make strategic decisions and strategic decisions are made to collect data that creates competitive advantages. But in order to create strategic advantages through data, an organization needs to have data management and related processes in place to discover, integrate, bring insight and disseminate data within the entire organization. In terms of data, organizations need to understand where they are currently and where they want to be in the future and thus they need to ask the following questions:


Currently In the Future

Who receives the data? Who should receive the data?
What happens to data? What should happen to data?
Where does data come from? Where should data come from?
When is the data being shared? When should data be shared?
Why is data collected? Why should data be collected?

After an organization understands and documents the above then they need to develop metrics to measure the relevance of their data as it pertains to the entire organization. Since being a data-driven organization is a continuous improvement journey, organizations can use the following adaptation of the Capability Maturity Model (CMM) to determine their maturity of data management and related processes:

Data Management Maturity Levels

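As a rough sketch of how an organization might self-assess against such maturity levels, the following uses the standard CMM level names (an assumption; the adapted model may differ) and a hypothetical practice checklist:

```python
# Rough data-management maturity self-assessment. The level names follow the
# standard CMM (an assumption), and the practice checklist and the simple
# count-based mapping are illustrative, not a formal appraisal method.

LEVELS = ["Initial", "Repeatable", "Defined", "Managed", "Optimizing"]

def maturity_level(practices):
    """Map the count of adopted data-management practices to a maturity level."""
    adopted = sum(1 for done in practices.values() if done)
    return LEVELS[min(adopted, len(LEVELS) - 1)]

practices = {
    "documented data flows": True,
    "org-wide data governance": True,
    "metrics on data relevance": False,
    "continuous feedback loop": False,
}
print(maturity_level(practices))
```

The point is not the scoring mechanics but that maturity becomes something measured and revisited, which is what makes the continuous improvement journey concrete.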

Additionally, organizations can have governance and processes that can help them assemble, deploy, manage and model data at each level of CMM as shown below:





  1. Khan, Arsalan. “5 Questions to Ask About Your Information.” Arsalan Khan., 16 May 2014. Web.

2 Management Challenges with Really Simple Syndication (RSS)

According to the Interactive Advertising Bureau (IAB) and PricewaterhouseCoopers (PwC) US, in the first quarter of 2014 Internet advertising revenues reached USD $11.6 billion. The President of IAB indicated that “Digital screens are a critical part…” of why these numbers are so high. Typically, these advertisements are done through images and/or text ads displayed with online articles and websites.

Really Simple Syndication (RSS) and other types of syndicated Internet sharing protocols strip away the images and/or text ads and display only content such as the title, first sentence, summary or complete article. This content is typically read through third-party feed readers. In addition to content ownership issues, the other two management challenges are tracking subscribers and higher traffic demands.
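To see why feed readers never display page ads, consider what they actually consume: the raw RSS XML. A minimal Python sketch with a hypothetical inline feed:

```python
# What a feed reader does with RSS: parse the XML and keep only the syndicated
# content (titles and summaries). Page images and ad markup never appear in
# the feed, which is why they never reach the subscriber.
import xml.etree.ElementTree as ET

feed = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><description>Summary one.</description></item>
  <item><title>Second post</title><description>Summary two.</description></item>
</channel></rss>"""

root = ET.fromstring(feed)
items = [(i.findtext("title"), i.findtext("description"))
         for i in root.iter("item")]
print(items)
```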

Tracking of Subscribers 

To address the tracking of subscribers, organizations should request that RSS feed readers provide this information to them, incentivizing both the owners of the RSS feed readers and the content subscribers to share tracking information. Another way to track and direct subscribers to their website would be to create some sort of paywall that either asks subscribers to pay for content and/or asks them to create free login accounts to access more content.

Higher Traffic Demands 

One of the other issues that RSS feeds create is higher traffic demands on the servers that house the content. Feed readers access content on websites more frequently than a person reading the information would. To address this, a possible solution is to integrate desktop applications into a P2P network that would distribute the load among hundreds of clients.
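Alongside the P2P idea, a standard way to reduce the load of frequent polling is HTTP conditional GET, where the server answers 304 Not Modified with no body when the feed is unchanged. A simulated sketch (the timestamp and feed body are placeholders):

```python
# Simulated HTTP conditional GET for a feed endpoint: when the client sends
# the timestamp of its last fetch and the feed has not changed since, the
# server returns 304 Not Modified with an empty body, saving bandwidth on
# every repeated poll.

FEED_LAST_MODIFIED = 1700000000  # server-side timestamp of last feed change

def serve_feed(if_modified_since=None):
    """Return (status, body); 304 responses carry no body."""
    if if_modified_since is not None and if_modified_since >= FEED_LAST_MODIFIED:
        return 304, ""
    return 200, "<rss>...full feed...</rss>"

status, body = serve_feed()                              # first poll
print(status)                                            # 200
status, body = serve_feed(if_modified_since=1700000000)  # later polls
print(status)                                            # 304
```

Well-behaved feed readers already send If-Modified-Since (or ETag) headers, so publishers who honor them can cut most of the repeated-request load without any P2P infrastructure.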

RSS Management Challenges


As we can see from the above management challenges, beyond ownership issues there are issues of maintenance (e.g., optimizing server capacity for repeated requests) and standardization (e.g., creating standard ways of tracking subscribers across multiple feed readers).


  1. “Pros and Cons for RSS.” Pros and Cons for RSS., n.d. Web. 05 July 2014.
  2. Singel, Ryan. “Will RSS Readers Clog the Web?” WIRED. WIRED, 30 Apr. 2004. Web. 05 July 2014.

What is the relationship between Cloud Computing and SOA?

According to the publication from Mitre, Cloud Computing and Service Oriented Architecture (SOA), cloud computing has many services that can be viewed as a stack of service categories. These service categories include Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Storage-as-a-Service, Components-as-a-Service, Software-as-a-Service (SaaS) and Cloud Clients. The following figure shows the service categories stack as depicted in the Mitre publication:

Mitre's Cloud Stack


SOA is an architectural framework that exposes business processes as services in order to deliver interoperability and rapid delivery of functionality. It helps system-to-system integration by creating loosely coupled services that can be reused for multiple purposes. The concept of SOA is similar to Object-Oriented Programming, where objects are generalized so that they can be reused for multiple purposes.
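The loose coupling described above can be sketched as a service contract that consumers depend on without knowing the implementation; the service and method names below are illustrative:

```python
# SOA-style loose coupling sketch: the consumer depends only on the service
# contract, so implementations can be swapped or reused without changing the
# caller, which is the reuse property both SOA and cloud services share.
from abc import ABC, abstractmethod

class EligibilityService(ABC):  # the service contract
    @abstractmethod
    def is_eligible(self, person_id: str) -> bool: ...

class LocalEligibilityService(EligibilityService):
    def is_eligible(self, person_id):
        return person_id.startswith("OK")  # stand-in for real business logic

def enroll(person_id, service: EligibilityService):
    # The consumer never knows which implementation it is talking to; a
    # remote (cloud-hosted) implementation could be substituted unchanged.
    return "enrolled" if service.is_eligible(person_id) else "rejected"

print(enroll("OK-123", LocalEligibilityService()))  # enrolled
print(enroll("NO-456", LocalEligibilityService()))  # rejected
```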

Now that we have an understanding of the various types of Cloud Computing services and SOA, let’s explore how Cloud Computing and SOA are similar and different.

Similarities between Cloud Computing and SOA:

  • Reuse – Conceptually speaking, the idea of reuse is inherent both in Cloud Computing and SOA.
  • As needed basis – In Cloud Computing, the services are provided to the users on demand and as needed. SOA is similar to this since the system-to-system services are on demand and as needed as well.
  • Network Dependency – Cloud Computing and SOA both require an available and reliable network. If a network does not exist then the cloud services provided over the Internet would not be possible. Similarly, if a network does not exist then the communications between systems would not be possible. Thus, both Cloud Computing and SOA are dependent on a network.
  • Cloud Contracts – In Cloud Computing, contracts entail the mutual agreement between an organization and cloud service providers. In cloud contracts, there is a cloud service provider and a cloud service consumer (the organization). In the case of SOA, contracts are equally important and can be either external (e.g., Yahoo! Pipes) and/or internal (e.g., organizational system integration). In SOA contracts, there are service producer(s) and service consumer(s), which are conceptually similar to cloud contracts.

Differences between Cloud Computing and SOA:

Despite the similarities between Cloud Computing and SOA, they are not the same. Following are some of the differences between them:

  • Outcome vs. Technology – In Cloud Computing, we are paying for the outcome but in SOA we are paying for technology.
  • External vs. External and/or Internal Point-of-View – In Cloud Computing, the services that organizations get come from external organizations, but in SOA these services can come either from external organizations (e.g., Yahoo! Pipes) and/or internally (e.g., system-to-system integration between two or more systems).
  • IaaS, PaaS, SaaS vs. Software Components – In Cloud Computing, the services provided can go up and down the stack but in SOA the services are software components.


  1. Raines, Geoffrey. “Cloud Computing and SOA.” The MITRE Corporation. The MITRE Corporation, Oct. 2009. Web.
  2. Gedda, Rodney. “Don’t Confuse SOA with Cloud.” CIO. CIO Magazine, 28 July 2010. Web.