Open Source Maturity Model for a Department/Enterprise/Institution

The acceptance and adoption of Free/Open Source Software (FOSS) is widespread and expanding rapidly across most industries. The use of FOSS is an attractive option for many parts of an organisation, including development, IT operations and IT strategy. While most enterprises today use FOSS in some way or the other, there is no quantitative measure to characterize the extent of FOSS assimilation within an enterprise. For example, one enterprise might be using just the mail client Thunderbird, whereas another might have migrated its entire IT infrastructure as well as its applications to FOSS; there is no way this difference can be stated in quantitative terms today. Both enterprises can claim to be using FOSS, but there is no way to distinguish between the extents of their FOSS use, as there are no tools for quantifying it. Differentiating between the levels of FOSS use and adoption in an entity (department/institution/enterprise) would be needed if, say, the government wants to give incentives to those at higher levels of FOSS adoption. If a unit today makes a claim for such incentives, there is no way to assess the veracity of that claim, because there is no metric to objectively assess the extent of FOSS usage within an enterprise.

Such “maturity models” do exist for FOSS products, and are widely used when an organisation wants to evaluate the suitability of a FOSS product for its use. There also exist models for assessing the “FOSS-friendliness” of countries and ranking them accordingly. The proposal here is to extend this idea to cover the maturity of departments/enterprises/institutions in the usage of FOSS, borrowing many ideas and tools from similar maturity models, including the CMM. A model will be built to quantify and assess the maturity of departments/enterprises/institutions in their adoption of FOSS, and to assign levels so that they can be ranked suitably. Such a ranking can be used, for example, to identify “FOSS success stories” and highlight them for FOSS promotion. The tool may also be used by management to identify where the gaps in greater FOSS use lie and how they can be overcome to move the organisation to higher levels of “FOSS maturity”.

This activity would be original Ph.D.-level research work carried out in collaboration with experts in software quality assessment. Primary data will be gathered from different types of Indian entities through appropriate surveys, and models will be built using this data. Openly available data from other sources will also be used to validate the model and make it globally relevant and applicable.

This document has three sections: the first section summarizes the existing Open Source Product Maturity Models; the second section is about the Open Source Index developed to assess the adoption of open source in different countries; finally, the proposed model for assessing the FOSS maturity of departments/enterprises/institutions is presented in Section-III.

Section-I: Open Source Product Maturity Models

What is Product Maturity?
A product is said to be mature if it has a full feature set, high quality, longevity in the market, good support, and robust behavior in error situations.

There are four existing Open Source Product Maturity Models:

1. Open Source Maturity Model from Navica
The OSMM assesses a product's maturity in three phases:

  • Assess each product element's maturity and assign a maturity score.
  • Define a weighting for each element based on the organization's requirements.
  • Calculate the product's overall maturity score.

The first phase identifies key product elements and assesses their maturity. Key elements are those that are critical to implementing a product successfully. The key product elements are as follows:

  • Product software
  • Support
  • Documentation
  • Training
  • Product integrations
  • Professional services

Each element is assessed and assigned a maturity score via a four-step process:
  • Define organizational requirements
  • Locate resources
  • Assess element maturity
  • Assign element maturity score on a scale of 1 to 10

The OSMM assigns a weighting to each element's maturity score. Weighting allows each element to reflect its importance to the overall maturity of the product. The weighted score of each element is summed to provide an overall maturity score for the product.
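As an illustration, the weighted-sum arithmetic can be sketched as follows. The element scores are invented, and the weightings shown are only one example of a set summing to 10, not Navica's prescribed defaults:

```python
# Maturity score for each key OSMM element, on a scale of 1 to 10
# (example values, not a real assessment).
element_scores = {
    "product_software": 8,
    "support": 6,
    "documentation": 7,
    "training": 4,
    "product_integrations": 5,
    "professional_services": 3,
}

# Example weightings; the OSMM requires that they sum to 10.
weightings = {
    "product_software": 4,
    "support": 2,
    "documentation": 1,
    "training": 1,
    "product_integrations": 1,
    "professional_services": 1,
}

assert sum(weightings.values()) == 10, "weightings must sum to 10"

# Overall maturity: weighted sum of element scores, giving a score out of 100.
overall = sum(element_scores[e] * weightings[e] for e in element_scores)
```

With these example numbers the weighted sum yields an overall maturity score of 63 out of 100.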

Organizations can choose to adjust the default weightings based on their preferences and requirements. The only constraint on adjusting the weightings is that they must still sum to 10 for the OSMM score to remain valid.

2. Qualification and Selection of Open Source Software (QSOS)

The QSOS method (Qualification and Selection of software Open Source), conceived by Atos Origin, is a four-step process.

1. Definition: The objective of this step is to define various elements of the typology re-used by the three remaining steps of the general process. The various elements are software families, types of licenses and types of communities.

2. Evaluation: The objective of this step is to carry out the evaluation of the software. It consists of collecting information from the open source community, in order to:

  • Build the identity card of the software
  • Build the evaluation sheet of the software, by scoring criteria split along three major axes.

3. Qualification: This step defines filters translating the needs and constraints related to the selection of free or open source software in a specific context. This is achieved by qualifying the user's context, which will be used later in Step 4, "Selection". The filters are:

  • Filter on ID card
  • Filter on Functional grid, where each functionality of the functional grid is attributed a requirement level selected among required functionality, optional functionality and not required functionality.
  • Filter on User's risks, where the relevance of each criterion of this axis is positioned according to the user's context.
  • Filter on Service Provider's risks

4. Selection: This step identifies software fulfilling the user's requirements, or more generally compares software from the same family.

  • Weighting of functionalities: the weighting value is based on the level of requirement defined for each functionality of the functional grid.
  • Weighting on the User's risk axis: the weighting value is based on the relevance of each criterion on the user's risk axis. The sign of the weight's value represents a positive or negative impact relating to the user's requirements.
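The selection arithmetic can be sketched as follows. All product criteria, scores and weight values here are invented, and the mapping of requirement levels to weights (required = 3, optional = 1, not required = 0) is an assumption for illustration, not taken verbatim from the QSOS specification:

```python
# QSOS scores each evaluation criterion 0, 1 or 2 (example values).
functional_scores = {"import_export": 2, "encryption": 1, "plugins": 0}

# Requirement level chosen for each functionality during Qualification
# (the numeric weights are an assumed mapping).
requirement_weight = {"required": 3, "optional": 1, "not_required": 0}
requirements = {
    "import_export": "required",
    "encryption": "optional",
    "plugins": "not_required",
}

# Criteria on the user's risk axis may carry a negative weight when a
# high score has a negative impact in this user's context.
risk_scores = {"vendor_lock_in": 2}
risk_weights = {"vendor_lock_in": -1}

# Weighted selection score: functional grid plus signed risk weights.
weighted = sum(
    functional_scores[f] * requirement_weight[requirements[f]]
    for f in functional_scores
)
weighted += sum(risk_scores[r] * risk_weights[r] for r in risk_scores)
```

Candidate products from the same family can then be compared by this weighted score; the negative risk weight illustrates how a high-risk criterion pulls the total down.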

3. OSMM Capgemini model

The OSMM is part of the process used by Capgemini to produce independent advice. Here an open source product is assessed using “product indicators”, which are objective and measurable facts about the product, and “application indicators”, which are driven by the needs of the customers, such as maintainability, training facilities and interoperability with other products. Product indicators are grouped into four groups:

  • Product (age, licensing, human hierarchies, selling points, developer community)
  • Integration (modularity, collaboration with other products, standard)
  • Use (support, ease of deployment)
  • Acceptance (user community, market penetration)

Scoring criteria for the indicators: each product indicator is assigned a score of 1, 3 or 5 based on the amount of maturity it shows. Some indicators are not measured in a purely numeric sense; there the score is determined by a panel of Capgemini experts who have demonstrated knowledge of Open Source and have worked with similar products. If an indicator isn’t applicable to a particular product, the score is set to 3.

Application indicators: to properly assess a product, one must also take into account several environmental aspects and, naturally, the present and future demands of the user. The Capgemini OSMM takes these factors into account by defining the following application indicators:

Usability, Interfacing, Performance, Reliability, Security, Proven technology, Vendor independence, Platform independence, Support, Reporting, Administration, Advice, Training, Staffing, Implementation

The data for all these indicators is collected and the user requirements are determined, so that it becomes possible to assess whether the product is suitable or not. In addition, the importance the client attaches to each of these indicators is recorded, scored on a scale of 1 to 5, 1 being ‘not important’ and 5 being ‘extremely important’. All of this data is combined into a final score that indicates the suitability of the product for the given demands. Determining one single score allows easy comparison between candidate products.
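The combination step can be sketched as follows. Indicator names, scores and importance values are invented, and treating the final score as an importance-weighted average is an assumption, since the exact aggregation function is not specified above:

```python
# Indicator scores of 1, 3 or 5; None marks a not-applicable indicator,
# which defaults to 3 as described in the model.
indicator_scores = {"usability": 5, "security": 3, "reporting": None}
NOT_APPLICABLE_DEFAULT = 3

# Importance the client attaches to each indicator, on a scale of
# 1 ('not important') to 5 ('extremely important').
importance = {"usability": 5, "security": 4, "reporting": 1}

def score(name):
    s = indicator_scores[name]
    return NOT_APPLICABLE_DEFAULT if s is None else s

# Single combined score (assumed here to be an importance-weighted
# average) for easy comparison between candidate products.
total_weight = sum(importance.values())
final = sum(score(n) * importance[n] for n in importance) / total_weight
```

With these example numbers the combined score works out to 4.0 on the 1-5 indicator scale, and the same formula applied to each candidate product gives directly comparable values.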

4. OpenBRR – An Open Source Maturity Model

SpikeSource, the Centre for Open Source Investigation at Carnegie Mellon West, and Intel Corporation jointly proposed the Business Readiness Rating (BRR).

This model offers proposals for standardizing different types of evaluation data and grouping them into categories. To allow this assessment model to be adopted for any usage requirements the software may have to meet, the assessment process is separated into four phases.

Phase 1 – Quick Assessment

  • Identify a list of components to be evaluated.
  • Measure each component against the quick assessment criteria.
  • Remove any components that do not satisfy user requirements from the list.

Phase 2 – Target usage assessment

Category weights: rank the 12 categories according to importance (1 – highest, 12 – lowest). Take the top 7 (or fewer) categories for that component, and assign a percentage of importance to each, totalling 100% over the chosen categories.

Metric weights: for each metric within a category, rank the metric according to its importance to business readiness, then assign a percentage of importance, totalling 100% over all the metrics within that category.

Phase 3 – Data collection and processing

Gather data for each metric used in each category rating, and calculate the applied weighting for each metric. This includes normalizing metric measurements, processing functionality metrics, and applying category rating weighting factors.

Phase 4 – Data translation

Use the category ratings and the functional orientation weighting factors to calculate the Business Readiness Rating score. Publish the software’s Business Readiness Rating score.
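The two-level weighting across these phases can be sketched as follows. The categories, metrics and all numbers are invented; only the arithmetic follows the phase descriptions above:

```python
# Phase 2: importance percentages for the chosen top categories,
# totalling 100% (here expressed as fractions).
category_weights = {"functionality": 0.40, "usability": 0.35, "security": 0.25}

# Metric weights within each category also total 100%.
metric_weights = {
    "functionality": {"feature_coverage": 1.0},
    "usability": {"install_time": 0.6, "docs_quality": 0.4},
    "security": {"cve_response": 1.0},
}

# Phase 3: normalized metric measurements (invented example data on a
# 1-5 scale).
metrics = {
    "feature_coverage": 4,
    "install_time": 3,
    "docs_quality": 5,
    "cve_response": 2,
}

assert abs(sum(category_weights.values()) - 1.0) < 1e-9

# Phase 4: roll metrics up into category ratings, then combine the
# weighted category ratings into the final BRR score.
brr = 0.0
for cat, cat_weight in category_weights.items():
    rating = sum(metrics[m] * w for m, w in metric_weights[cat].items())
    brr += rating * cat_weight
```

The two weighting levels are what distinguish the BRR from a flat weighted sum: metrics are first aggregated within their category, and only then are the category ratings combined.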


Section-II: Open Source Index for Countries

Red Hat and Georgia Tech have published the results of a collaborative research project which attempted to measure relative open source software adoption by countries and regions. The factors used to evaluate the level of open source activity in each region include government procurement and research policies, the number of Red Hat Certified Engineers and registered Linux users, the volume of discussion about open source topics in regional media, and Linux localization support for the region's dominant language.

These statistics were used as the basis for formulating a global open source activity index. Countries are ranked by overall activity and scores are also provided to indicate the level of open source adoption in government and industry. The following is a list of the top ten and their respective scores:

  • France (1.35)
  • Spain (1.07)
  • Germany (1.05)
  • Australia (1.04)
  • Finland (1.03)
  • United Kingdom (1.00)
  • Norway (0.95)
  • Estonia (0.89)
  • United States of America (0.89)
  • Denmark (0.79)

India is ranked 23rd among a total of 75 countries.
The OSPI Model

Index Construction. Two indices are developed here, following a large literature on the development of indices to measure (unobserved) environmental conditions (e.g., index of sustainability, human development index). The first index captures the OSS-related activity in a country. The second index captures the OSS-related potential of a country.
The conceptual layout of the indices follows this form:

INDEX = f(Dimension 1, Dimension 2, ..., Dimension I), where f is a function that aggregates the I different dimensions (i.e., sub-indices). Each dimension of the index is in turn based on several indicators.

Dimension i = g(Indicator 1, Indicator 2, ... , Indicator J) , where g is another aggregating function for the J different indicators in the ith dimension. In practice, each indicator is derived from a variable that measures or proxies for that indicator. The variables typically require some transformation and normalization, as they are generally measured in wildly different units.

Let Indicator j = h(Variable j), for some transformation function h that might include some imputation, transformation, and normalization of Variable j.
To summarize the general form: INDEX = f(Dimension 1, ..., Dimension i, ...., Dimension I)
Dimension i = g(Indicator 1, ..., Indicator j, ... , Indicator J)
Indicator j = h(Variable j)

In the present context, there are two different indices – each with the same general structure of multiple dimensions, several indicators for those dimensions, and variables measuring those indicators. The two indices are for activity (A) and potential (P). Both of the indices generated here follow the same dimensional structure: government (G), firms or commercial enterprises (F), and community and education system (C). Although the dimensions are the same in the A and the P index, the indicators for each of the dimensions are different between the two.
G = government;
F = firms or commercial enterprises;
C = community and educational system
Active = f(GA, FA, CA)
Potential = f(GP, FP, CP)
The final step in initially constructing the indices involves deciding on the aggregation function f to compile the three dimensions into a single index value.
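The nested structure INDEX = f(g(h(...))) can be sketched as follows. Using unweighted means for f and g, and min-max normalization for h, is an assumption made for illustration, since the aggregation functions are left open above; all country data shown is invented:

```python
def h(variable, lo, hi):
    """Normalize a raw variable to [0, 1] given its observed range."""
    return (variable - lo) / (hi - lo)

def g(indicators):
    """Aggregate one dimension's indicators (assumed: unweighted mean)."""
    return sum(indicators) / len(indicators)

def f(dimensions):
    """Aggregate the G, F and C dimensions (assumed: unweighted mean)."""
    return sum(dimensions) / len(dimensions)

# Activity index for one hypothetical country: dimensions G, F and C,
# each built from normalized indicator variables (invented values).
GA = g([h(40, 0, 100), h(7, 0, 10)])   # government activity indicators
FA = g([h(55, 0, 100)])                # firm activity indicators
CA = g([h(3, 0, 10), h(80, 0, 200)])   # community/education indicators
activity = f([GA, FA, CA])
```

The potential index would follow the same three-layer composition, with different indicator variables feeding GP, FP and CP.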

A critique of the OSPI
  • The model is built on secondary data, i.e., data from literature surveys and published articles on government policy and open source adoption. There is no primary or empirical data to support its claims.
  • The model does not clearly explain the variable classifications; for example, the classification “Best” is used for classifying variables but is never explained.
  • The mathematical model is not clear. How were these numbers determined? The report never explains how the original numbers were derived, only how the final numbers were computed. And how much more is 1.35 than 0.79? Is it a ranking out of a certain number, a percentage, or unbounded (which would make the numbers pointless)? Is 1.35 very low, or very high?
  • The variable RHCE (Red Hat Certified Engineer) per capita does not correctly capture the adoption of open source software: there can be users with RHCE certification who still do not use open source.
  • The choice of some of the variables, such as the number of Linux user groups and the volume of discussion in the media, might not give a correct picture of FOSS adoption.
  • The install and user indicators cannot give an exact picture of the number of users, since some people might just install and test, and some users might get the software distributed through external media (CD, DVD, USB drive).
  • What would motivate a company (Red Hat) to fund a study of the market for its core business around the world?
  • “Linux localization support for the region's dominant language” is probably not the ideal measure of linguistic compatibility, given, for example, the percentage of English speakers in India.
  • What does television per capita have to do with the potential of open source? It may indicate the potential for buying a computer rather than the potential for open source.

Section-III: Development of an Open Source Maturity Model for an Enterprise/Institution

There is no quantitative data on the extent of FOSS usage in an institution, enterprise or government department. Some enterprises may be using just one open source product, while some may be working exclusively with FOSS-based products and solutions; there is no way this difference can be stated in quantitative terms. The proposal therefore is to build a model, validated by collecting data from different government departments, institutions and enterprises, that will rank an entity based on its usage, adoption, development and support of open source software. Maturity models exist for open source products, but there is no model that assesses the maturity of an enterprise in open source; the model proposed here extends the product maturity model idea to assess the maturity of an enterprise/institution in Free and Open Source Software.

This proposed model looks into the following elements for assessing the FOSS adoption in an enterprise/institution.

1. FOSS Policy and Management: A consistent set of policies and guidelines is needed to govern FOSS within an organization. These policies and guidelines must be carefully developed to ensure that all issues that may affect the interests of the organization and its people are addressed. This element will look into three sub-elements:

FOSS Policy will look into the presence of a written ICT policy that includes FOSS for procurement and usage, the incentives for following this policy, and the reasons for using FOSS.

FOSS Product Policy will look into the maturity of an organization in choosing, acquiring, using and supporting a FOSS product, and in tracking the FOSS product (for changes to versions, license and community).

FOSS and Management will look at whether an organisation has effective management structures, such as a FOSS program office which manages FOSS at every point it touches the organisation.

2. People and FOSS (HRD): Human resources are one of the main pillars of any organisation/department. This element will look at three things:

Employees' usage of and contribution to FOSS: It will look at the number of employees who are aware of FOSS, use FOSS, are trained in FOSS, require training in FOSS, and hold some FOSS-related certification (RHCE, web development, ...).

Documentation: It will look at whether there is FOSS policy documentation, product documentation, support documentation and contribution documentation for employees.

Training: It will look at whether employees are trained in FOSS policy and FOSS product usage, and at the type of training, i.e. in-house, commercial, community-supported or any other form of training.

3. FOSS Usage and Support (IT infrastructure): The usage of FOSS in organizations is steadily increasing. This element will measure the following:
Hardware: the number of computers (servers and desktops) using FOSS (fully or partially)
Software: the number of FOSS products being used
Support: the support available for FOSS products, which can be internal, commercial or community support

4. Business/Domain-specific applications: This element will look into any business/domain-specific applications the organisation is using, their licenses and their support. Some applications may be developed in-house using FOSS components.

Each element is assigned a predefined weight (a value between 1 and 10). Weighting allows each element to reflect its importance to the overall FOSS maturity of the organisation. Data regarding each element is collected, a score is calculated for each, and the weights are then applied to compute the final scores. The weighted element scores are summed to give an overall maturity score on a scale of 1 to 100.
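The proposed scoring can be sketched as follows. The element scores and weights are invented, and the constraint that the weights sum to 10 (so that scores of 1-10 yield a maximum of 100, as in the Navica OSMM) is an assumption consistent with, but not explicitly stated in, the description above:

```python
# For each of the four assessment elements: (score 1-10, weight 1-10).
# All values are illustrative, not survey results.
elements = {
    "foss_policy_and_management": (6, 3),
    "people_and_foss_hrd":        (5, 3),
    "foss_usage_and_support":     (8, 2),
    "domain_specific_apps":       (4, 2),
}

# Assumed constraint: weights sum to 10, keeping the maximum score at 100.
assert sum(w for _, w in elements.values()) == 10

# Overall FOSS maturity of the organisation, on a scale of 1 to 100.
overall = sum(score * weight for score, weight in elements.values())
```

With these example numbers the organisation would score 57 out of 100, and the per-element weighted contributions show where the largest gaps lie (here, domain-specific applications).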

A survey has been conducted to collect data from 17 Tamil Nadu state government departments, in order to validate this model and further improve it. The survey will soon be available online.

Further, the proposed model can also be used to rank organisations based on their usage, development and support of open source software.

Future work: More data will be collected through surveys to validate this model for educational institutions, public sector undertakings and SMEs (Small and Medium Enterprises).