9
Configuration Management

According to CMMI-DEV v1.3, the goal of configuration management – or version management, which is closely related – is to establish and maintain the integrity of deliverables (hardware and software) through the identification, control and monitoring of configurations, as well as configuration audits. In systems-of-systems, as in any moderately complex system, configuration management is an essential discipline to master, given the multitude of components to manage, the complexity of the systems and their functionalities, and the interactions between requirements, tests, test results, data structures, software components, equipment, products and systems to be controlled, coordinated and managed. This holds from the design of the systems and of the system-of-systems, and throughout the life of the systems and the system-of-systems, for their maintenance and logistical support.

9.1. Why manage configuration?

According to Humble and Farley (2011), if we can answer “yes” to all the following questions, we have a good configuration management system for our components:

– Am I able to reproduce any of my environments, including operating system versions, patches, network configuration and the application stack for the various applications that interact with my system-of-systems?

– Can I easily make a change to any of the above and deploy to one or more of my environments?

– Am I able to visualize any change that has taken place on a particular environment and trace it back to the nature of the change, the author of the change and the date of the change?

– Can I meet each of the regulatory obligations that apply to my system in terms of providing evidence, traceability or coverage?

– Is it easy for each member of the team to have the information they need and to make the necessary changes, or does the deployment strategy negatively impact the delivery by increasing the delivery cycle?

We will notice that these questions apply to the code as well as to the data, the settings and the tests at each level.

If we answered “no” to one or more of the above questions, we will need to improve our configuration management.
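To illustrate the first two questions – reproducing an environment and tracing what it contains – here is a minimal sketch, in Python, that records the operating system version and the installed Python packages into a manifest file that can itself be placed under configuration management. The file name environment-manifest.json and the use of pip freeze are assumptions made for the example, not a prescription.

```python
import json
import platform
import subprocess
from datetime import datetime, timezone

def capture_environment_manifest(path: str = "environment-manifest.json") -> dict:
    """Record OS and package versions so the environment can be reproduced later."""
    manifest = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "os": platform.platform(),            # e.g. "Linux-5.15.0-x86_64-with-glibc2.35"
        "python": platform.python_version(),
        # 'pip freeze' lists the exact versions of the installed Python packages
        "packages": subprocess.run(
            ["pip", "freeze"], capture_output=True, text=True, check=True
        ).stdout.splitlines(),
    }
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(manifest, fh, indent=2)
    return manifest

if __name__ == "__main__":
    capture_environment_manifest()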

9.2. Impact of configuration management

During the lifetime of the system – from initial design through to end of support and decommissioning – configuration management interacts with all engineering activities:

– requirements management, including interfaces;

– the design of hardware and software components;

– construction of hardware and software components;

– component architecture management;

– component integration;

– component testing;

– delivery (including management of delivery notes);

– operation and maintenance;

– project management and reporting.

The software components interact with each other and interact with the hardware components of the system. The interactions can be at the electrical level (e.g. power supply, connectors), physical (e.g. size, weight, diameter, heat released and its dissipation capacity, etc.) or logical (e.g. frames and messages exchanged, communication protocols, etc.).

In a system-of-systems, the ability to replace one component with another – which uses the same interfaces and provides the same service – is an element that will facilitate the maintenance of the system. The identification of each component – and the versions of the software that compose it – is the responsibility of configuration management and its organization will impact logistics and maintenance.

9.3. Components

Many different components must be managed in configuration, for example (non-exhaustive list):

– hardware components, equipment and tools (including test benches, maintenance, diagnostic tools, etc.);

– drawings, plans and diagrams, architectural descriptions, technical publications of products, etc.;

– requirements and specifications of the products and their components;

– development and test tools (IDEs, compilers, etc.), operating system versions, their settings and configurations, installation logs, etc.;

– code and libraries, as well as the configuration data;

– application configuration files and data;

– test tools, test suites, test conditions, test cases, test scripts and test data (input data and expected results), test logs;

– defects corrected and functionalities developed;

– elements related to development processes (data architecture diagram – physical and conceptual data models – user stories, iteration backlogs, description of processes, etc.);

– reports, minutes of meetings (e.g. during important milestones on the project), results of FAI (First Article Inspection) or other elements to guarantee the integrity of the system in the event of an audit, etc.;

– contracts, amendments, applicable standards and regulations, as well as any other contractual and/or reference document.

We realize that everything must be managed in configuration if we want to master our project and respond positively to the questions at the beginning of this chapter.

9.4. Processes

The primary configuration management processes are:

– planning and management;

– identification of configurations;

– management of configuration changes;

– configuration status accounting;

– configuration audit;

– delivery management.

The supporting processes are:

– interface control;

– third party/vendor controls.

9.5. Organization and standards

As configuration management will have to handle the components for many, many years, the organization of the components must facilitate the processes throughout the life of the system-of-systems, and not only during the development of the software. Since the development phase is short compared to the lifetime of the system-of-systems, the benefit of optimizing the management of components for maintenance and logistic support is all the more obvious. In a system-of-systems, the software elements are never (or almost never) modified without changing the hardware components in which they are installed. In these elements (SALP: software-predominant systems), the software, parameterization or configuration files are an integral part of the hardware, and the hardware configuration reference may include a reference to the version of the software installed.

The naming rule chosen for the components of the system-of-systems must also take into account that components coming from suppliers or co-contractors may be organized according to different criteria. It is recommended to have a configuration identification scheme that makes it possible to track both the hardware and the software installed in that hardware, and even the version of the parameterization files of that hardware and the particularities of its inbound and outbound interfaces. In this way, two components with the same configuration reference will be interchangeable, which will facilitate logistical support throughout the lifetime of the system-of-systems.
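As a minimal sketch of such a composite identification, the following Python fragment groups the hardware reference, the installed software version, the settings version and the interface specification into a single configuration reference; the field names and the example values are assumptions made for the illustration, not prescribed by any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConfigurationReference:
    """Composite identification of a configuration item (field names are illustrative)."""
    hardware_ref: str        # e.g. part number and hardware revision
    software_version: str    # version of the software installed in the hardware
    settings_version: str    # version of the parameterization/configuration files
    interface_spec: str      # reference of the inbound/outbound interface definition

def interchangeable(a: ConfigurationReference, b: ConfigurationReference) -> bool:
    """Two items with the same composite reference can replace each other."""
    return a == b

# Illustrative values only
unit_in_stock = ConfigurationReference("PN-1234-C", "2.3.1", "S-07", "ICD-A1")
unit_in_field = ConfigurationReference("PN-1234-C", "2.3.1", "S-07", "ICD-A1")
assert interchangeable(unit_in_stock, unit_in_field)
```

With such a scheme, logistics can decide on interchangeability by comparing references alone, without inspecting the content of each unit.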

Various standards exist regarding configuration management:

– ACMP-2009 Guidance on Configuration Management, Edition A, Version 2, March 2017, published by the NATO Standardization Office (NSO);

– MIL-HDBK-61B (draft, 10 September 2002) Configuration Management Guidance, published by the US Department of Defense;

– IEEE 828-2012 Standard for Configuration Management in Systems and Software Engineering, published by IEEE;

– ISO 10007:2017, Quality management – Guidelines for configuration management, published by ISO.

Further information on configuration management processes is available in CMMI-DEV v1.3.

9.6. Baseline or stages, branches and merges

The terms baseline (or stage), branch and merge come up constantly in configuration management. Consider the example below:


Figure 9.1 Version branches and merges

The v1.1 component has just completed unit testing in the development environment and has moved into integration testing. Developers add new features on a new branch (evolution) of the component, producing v1.2. At the same time, the testers identify a defect (on the v1.1 component) that needs to be corrected. The developers correct the anomaly and generate a version v1.1.1. As the anomaly must also be corrected in the v1.2 branch, the developers produce an evolution of v1.2, named v1.2.1. The component in v1.1.1 is tested in integration and validated, which makes it possible to promote it to system testing (for this subsystem). At the same time, v1.2.1 of the component is provided for integration testing, with the new features and the fixes previously identified. During system testing of v1.1.1, an anomaly is identified that needs to be fixed. It will therefore be necessary to apply the correction to the v1.1.1 branch and generate a v1.1.2. Simultaneously, the v1.2.1 branch will also have to be updated, which will generate a v1.2.2. We will therefore have at this moment:

– v1.1.1 in system tests;

– v1.2.1 in integration tests;

– v1.1.2 and v1.2.2 in unit testing.

We realize that this complexity of versions for a single component increases exponentially when several components – hardware and/or software – are combined to compose a system-of-systems.

In addition to this, it will be necessary to consider the versions that developers create for their own needs, for their own tests, or to provide stubs or drivers for the testers.

If configuration management is not properly mastered, it will be impossible to manage everything correctly; no one will be able to ensure that a version or a set of components has been correctly tested.

9.6.1. Stages

The various components – hardware and/or software – that make up a system or a system-of-systems evolve at different speeds and are developed by separate organizations. It is necessary to identify those that are compatible and those that are not. Software components evolve quickly and hardware components more slowly, so we will call a “baseline” the set of components – hardware and/or software – that work properly together.

9.6.2. Branches

A branch occurs when a component is copied and then each of the two copies evolves separately. We could, for example, have a component in development (v1.1) which is copied to a test environment (this copy is the v1.1 branch), while a new feature is added to it in development (becoming the v1.2 branch). Testing identifies defects, and the fixes applied to the v1.1 branch move it to version v1.1.1. These corrections will also have to be applied to the v1.2 branch, thus creating version v1.2.1.
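With a modern version control tool, this propagation of corrections from one branch to another can be scripted. The sketch below, in Python, assumes a Git repository and the illustrative branch names maintenance/v1.1 and evolution/v1.2; it shows one possible way of doing it, not an imposed procedure.

```python
import subprocess

def git(*args: str) -> str:
    """Thin wrapper around the git command line (assumes git is installed)."""
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout

# Branch and tag names below are illustrative, not imposed by any standard.
git("checkout", "-b", "maintenance/v1.1", "v1.1")   # fix branch created from the v1.1 tag
# ... commit the correction of the defect found during integration testing ...
git("tag", "v1.1.1")

git("checkout", "evolution/v1.2")                    # branch carrying the new features
git("cherry-pick", "v1.1.1")                         # propagate the same correction
git("tag", "v1.2.1")
```

The cherry-pick here re-applies the single correction commit on the evolution branch; a full merge of the branches (section 9.6.3) would instead bring over all the commits of the maintenance branch.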

9.6.3. Merge

Branch merge activity occurs when two parallel and partially identical versions of a component are merged into a single component that will be managed in configuration.

9.7. Change control board (CCB)

The evolutions of the components, whether for the addition or modification of functionalities or for the correction of anomalies, must be properly mastered, especially since these components evolve separately in the case of systems-of-systems. It is therefore imperative to control the interactions of the components with each other, since the correct behavior of the functionalities often depends on several components. These changes must be overseen by a change control board (CCB).

The purpose of the CCB is to control the evolutions of the components delivered on the various environments, to approve or disapprove the changes and to ensure the implementation of the approved changes. Projects may have more than one CCB. In this case, there will be a hierarchy of CCBs according to the level of integration impacted, with the highest level CCB dealing with the product. In a system-of-systems, the CCBs will be specific to each of the systems (each of the co-contractors) and will feed the configuration information of the complete system-of-systems, through a higher-level CCB.

Each configuration item – both software and hardware – should be able to be uniquely identified and – in the case of software-intensive hardware – it may be necessary to reference the version of the hardware, the OS and each embedded software. This information can be grouped together in a single reference integrating the various elements.
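One possible way of grouping this information into a single reference is sketched below in Python; the naming scheme and the use of a truncated hash are assumptions made purely for the illustration.

```python
import hashlib

def composite_reference(hardware_ref: str, os_version: str,
                        software_versions: dict[str, str]) -> str:
    """Derive a single configuration reference from the hardware reference,
    the OS version and each embedded software version (scheme is illustrative)."""
    parts = [hardware_ref, os_version] + [
        f"{name}={version}" for name, version in sorted(software_versions.items())
    ]
    digest = hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()[:8]
    return f"{hardware_ref}-{digest}"

# The same hardware, OS and embedded software always yield the same reference.
print(composite_reference("RADAR-UNIT-07", "RTOS 4.2", {"tracking": "3.1.0", "bit": "1.4.2"}))
```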

Using a common configuration management tool reduces the difficulties caused by disparate tools. However, it will be necessary to manage the references of configuration elements coming from the various co-contractors (and external suppliers), whose element numbering structures will certainly vary.

The CCBs are also responsible for the test environments of each test level to be implemented.

9.8. Delivery frequencies

Various component delivery frequencies exist, each with its advantages and disadvantages.

It can be decided to deliver only once per period (e.g. per week, month, quarter, etc.), in order to guarantee greater stability of the environment for the tests. However, this implies that a defect already present will only be corrected – at the earliest – in the next delivery.

The other extreme is to deliver continuously (CD for Continuous Delivery), as advocated by DevOps. This makes it possible to respond as quickly as possible to business and customer needs but involves automated management of all tests at all levels.

Another major benefit of frequent component delivery is that changes to a component from its previous version will be minimal, for example, only what can be developed in a day. If configuration management is linked to automatic generation of builds – and testable environments – then component validation can take place immediately and anomalies can be detected quickly.

It is possible to combine – partially or totally – these delivery frequencies in order to respond to emergencies (e.g. a blocking anomaly in production) while allowing an exhaustive search for anomalies in the earlier environments.

In any case, it is critical to identify the requirements (or user stories) developed or modified and the anomalies corrected in a delivery.
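As a minimal sketch of this identification, the following Python fragment lists the user stories and anomalies referenced by the commits between two delivery tags. It assumes a Git repository and commit messages that cite identifiers such as US-123 or BUG-45; these conventions are purely illustrative.

```python
import re
import subprocess

def delivery_content(previous_tag: str, new_tag: str) -> dict[str, set[str]]:
    """List the user stories and anomalies referenced by the commits of a delivery.
    Assumes commit messages cite identifiers such as US-123 or BUG-45 (illustrative)."""
    log = subprocess.run(
        ["git", "log", f"{previous_tag}..{new_tag}", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {
        "user_stories": set(re.findall(r"US-\d+", log)),
        "anomalies": set(re.findall(r"BUG-\d+", log)),
    }

# Example (tags are illustrative): print(delivery_content("v1.1.1", "v1.2.1"))
```

Such an extract can feed the delivery note described in section 9.11.1.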

9.9. Modularity

Systems-of-systems are always complex and often very large. One way to simplify component management is to create smaller modules that can be managed more simply and easily. However, the increase in the number of components will imply an increase in the number of interaction interfaces, and therefore the need for coordination between services.

While configuration management makes it possible to guarantee the traceability of components with each other, and from a requirement to the component materializing that requirement, version management makes it possible to track the versions of the components to be installed in the environments.

9.10. Version management

In systems-of-systems, it is necessary to properly manage all versions of each of the components – hardware and software – that make up the system-of-systems. This implies that each test level must have its own test environment, with the correct versions of the components – software, OS, data, hardware – which work correctly together.


Figure 9.2 Example of test levels with multiple different configurations
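A minimal sketch of such a check is given below in Python: each test level declares the component versions of its baseline, and any deviation in the installed environment is reported. The component names and version numbers are illustrative assumptions.

```python
# Versions expected by each test level (illustrative values only).
EXPECTED = {
    "integration": {"flight_sw": "1.2.1", "ground_sw": "2.0.3", "os": "RTOS 4.2"},
    "system":      {"flight_sw": "1.1.1", "ground_sw": "2.0.3", "os": "RTOS 4.2"},
}

def check_environment(level: str, installed: dict[str, str]) -> list[str]:
    """Return the components whose installed version differs from the baseline of the level."""
    expected = EXPECTED[level]
    return [
        f"{component}: expected {version}, found {installed.get(component, 'absent')}"
        for component, version in expected.items()
        if installed.get(component) != version
    ]

print(check_environment("system", {"flight_sw": "1.1.1", "ground_sw": "2.0.2", "os": "RTOS 4.2"}))
# ['ground_sw: expected 2.0.3, found 2.0.2']
```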

9.11. Delivery management

Release management is not limited to delivering the system-of-systems to customers. There are indeed deliveries at multiple levels, as shown in Figure 9.3.


Figure 9.3 Test levels in systems-of-systems

Delivery of a software release, a system release, or a system-of-systems release involves several phases or stages. They can be synthesized in checklists that will be specific to each environment and each system-of-systems.

Various points must be defined and verified (a minimal checklist sketch is given after this list):

– Completeness of the software component(s): it must be ensured that all the components are identified and correctly managed in configuration. This implies that all components, their interfaces, their settings files, their URLs, APIs and all other supporting applications (ETL, etc.) are correctly identified and validated in the starting environments and in the target environment.

– Quality level of the software component(s): this involves ensuring – through testing – that all the components defined for the target environment work correctly with each other in an environment (for example, qualification environment). Sometimes called “golden build”, these components can be considered as a stage and should be tested together. Depending on the level of criticality of the system, it is recommended to run on this “golden build” all the tests run previously or only a subset.

– Availability of user profiles: it is certain that we will have to deal with the authorizations and access rights of users and administrators. In addition to these profiles, it will also be necessary to think of test profiles so that – throughout the production phase – tests can be run in order to validate the behavior of the system in the target environment.

– Availability, sizing and validation of migration tools: many components will have to be migrated to the new system. It will be necessary to ensure that these components will be correctly migrated within a defined period of time. It is particularly important to ensure that all operations for converting production data and/or transferring components of the new version are carried out within a period compatible with the use of the system-of-systems. This involves validating the sizing of the environments and the capacity of the systems to support the load.

– Identification of the components to be replaced: this involves identifying all the components of the current version which will have to undergo a version upgrade. By cross-referencing this list with the list of components required for the new version, we can identify the added components and the components that should be removed, as well as all the components – with their version – that must be changed.

– Identification of health-checks: when a new version is put on an environment, it is necessary to ensure – through rapid and extensive tests – that this new version is correctly deployed. One way to do this is to take a subset of the existing tests covering all the components and features present, to make sure they work well in the target environment.

– Validation of roll-back tools: as any activity can include failures, it is necessary to ensure that, if it is necessary – in the target environment – to return to the current version, it is possible to do so. This involves taking an image of the environment before the start of the delivery (including data, software components, settings, etc.) and restoring this image if necessary.

– Verification of the completeness of checklists, with validation of each action: a delivery of a system-of-systems is an action involving many actors, systems, processes and data. It is important that each action – and its timing – is identified, down to its most mundane ramifications (e.g. do the staff who will be involved in a night-time delivery have access rights to the site?). It is therefore recommended to ensure, through dry runs, that all actions have been identified and that each activity has been verified and validated.
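The checklist sketch announced above is shown here in Python; the structure, the environment name and the item wordings are assumptions made for the illustration, since a real checklist is specific to each environment and each system-of-systems.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    description: str
    verified: bool = False
    verified_by: str = ""

@dataclass
class DeliveryChecklist:
    environment: str
    items: list[ChecklistItem] = field(default_factory=list)

    def pending(self) -> list[str]:
        """Actions that still have to be verified before the delivery is authorized."""
        return [item.description for item in self.items if not item.verified]

# Illustrative content only.
checklist = DeliveryChecklist("qualification", [
    ChecklistItem("All components identified in configuration management"),
    ChecklistItem("Health-check test subset selected and available"),
    ChecklistItem("Roll-back image taken and restore procedure validated"),
])
print(checklist.pending())
```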

9.11.1. Preparing for delivery

In general, a delivery is carried out with a certain preparation varying (depending on the level) from informal to very formal. Whatever the level of formalism, it is necessary to clearly identify what is delivered:

– the components with their configuration management reference, in order to correctly control the configurations of the subsystems and the system-of-systems;

– the functionalities and their reference documentation, in order to be able to compare these with the level of contractual progress expected;

– the link with the requirements and the proof of conformity to be provided (tests, demonstrations, etc.) so as to ensure traceability from the requirements to the software or components provided;

– for the lower levels (e.g. component tests or integration tests), the anomalies that have been corrected, which will make it possible to rerun the necessary tests to ensure that the corrections are effective and that there are no regressions.

This information is identified in a delivery sheet which will reference the components, their dependencies and links to other components or documentation. Any delivery that does not have an exhaustive description of what it includes should be refused insofar as a lack of mastery of the configuration is very detrimental to the quality of the project.

Simultaneously with the verification of the elements delivered, it is necessary to ensure that the environment where the elements will be delivered is correctly controlled. This involves – if necessary – cleaning up the environment by removing unnecessary or redundant elements, resetting data and settings, and even reinstalling the necessary components. A backup of the entire environment can be useful to ensure that we have a reference version.

Of course, it is also necessary to ensure that the backup, delivery and rollback processes (backtracking or restoring the previously saved environment) also work correctly. If this is not the case and a problem occurs during a delivery, the environment may be totally or partially unusable.
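A minimal sketch of this backup and rollback mechanism is given below in Python; it assumes that the environment can be captured as a directory tree, and the paths shown are purely illustrative.

```python
import shutil
from pathlib import Path

def backup_environment(env_dir: str, backup_dir: str) -> str:
    """Take an image of the environment (data, components, settings) before delivery."""
    archive = shutil.make_archive(str(Path(backup_dir) / "pre-delivery"), "gztar", env_dir)
    return archive  # path of the .tar.gz image

def rollback_environment(archive: str, env_dir: str) -> None:
    """Restore the saved image if the delivery has to be abandoned."""
    shutil.rmtree(env_dir, ignore_errors=True)
    shutil.unpack_archive(archive, env_dir)

# image = backup_environment("/opt/app", "/var/backups")   # paths are illustrative
# rollback_environment(image, "/opt/app")
```

Testing the restore step on a copy of the environment, before the delivery, is what actually validates the rollback process.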

9.11.2. Delivery validation

Two stages can take place within the framework of the delivery of configuration components: on the one hand, for the supplier – or the team that delivers the components – to check that what is delivered is complete and works; on the other hand, for the customer – the one who receives the components – to check that what is delivered works and corresponds to what is expected. Each step requires a specific environment to avoid side effects linked to other components still in development or delivered previously. Deliveries often also include settings and data; these elements are an integral part of the components delivered and must also be validated.

The first step, validation of what will be delivered, is to ensure that everything in the configuration package that will be delivered is complete and correct. This involves checking which bug fixes and features are provided and making sure they work properly. Similarly, it will be necessary to ensure that the documentation is complete and up to date, that the interfaces with the other components are validated and that all the configuration and installation data are present and valid. It may be necessary to create installation scripts, to provide backups of the data and components present, to provide rollback scripts (if the installation does not go as planned) and to identify the minimum set of test cases to be performed to ensure that the anomalies have actually been corrected and that the new functionalities are correctly implemented. This first step can be considered as a verification before delivery, sometimes carried out in the premises of the development team (a step sometimes called factory validation).

The second step, the validation that what is delivered corresponds to what is expected, is carried out at the customer’s premises and comprises at least two parts. The first is a documentary check which will ensure that the planned anomaly corrections are indeed listed and that the expected functionalities are those described; this part will make the link between the components and the contractual requirements. The second part concerns the validation of what is delivered. It is recommended to carry out this validation in a specific environment, separate from the test environment of the level in progress at the customer (often the system, system integration and/or acceptance environment), to avoid the negative impact of anomalies or regressions. In addition to the validation of new functionalities and bug fixes, it will also be necessary to perform a minimum set of tests – called smoke tests – to ensure that the main functionalities of the application, the system and/or the system-of-systems continue to operate and allow a full testing phase to be implemented with minimal risk. These “smoke tests” will include functional tests of each area, as well as performance tests, security tests, etc.
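One common way of maintaining such a reduced suite is to tag the selected tests. The sketch below, in Python with pytest, uses an illustrative marker named smoke; the marker name and the test content are assumptions, not an imposed convention.

```python
# test_smoke.py – a reduced "smoke" suite selected by a marker (names are illustrative).
# Declare the marker in pytest.ini ("markers = smoke: reduced post-delivery suite")
# to avoid unknown-marker warnings.
import pytest

def application_is_up() -> bool:
    """Placeholder for a real health probe of the deployed system."""
    return True

@pytest.mark.smoke
def test_application_responds():
    assert application_is_up()

@pytest.mark.smoke
def test_reference_user_can_authenticate():
    assert application_is_up()   # a real test would exercise the login service

def test_full_regression_scenario():
    assert application_is_up()   # executed only during the complete campaigns

# After a delivery, run only the smoke subset with:  pytest -m smoke
```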

The implementation of delivery checklists – for each of the two stages – ensures that no element is forgotten. Another solution – technical and dependent on the tools used – is to use the concept of pipelines in the context of DevOps.

9.12. Configuration management and deployments

As we have seen in this chapter, configuration management is essential throughout the existence of a system or system-of-systems. This importance is reflected in the interaction of configuration management tools with development frameworks. This integration allows user stories to be linked with the code and with the test case(s) that verify them, as well as with the versions where these test cases are executed. This results in traceability which makes it possible to ensure that the software is of good quality, as well as that the component has correctly passed the verification/validation stages imposed by the quality assurance process. This may include, upon release of the component, static verification of the code, execution of automated unit tests, deployment in an integration environment, execution of integration tests applicable to the released component, deployment in a system test environment with the execution of the automated tests necessary to validate the various versions, releases and configurations used by the customers.
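The chain of verifications mentioned above can be sketched as a simple sequential pipeline. In the Python fragment below, the stage names, the commands and the deploy.py script are illustrative assumptions; real projects usually describe this chain in their CI/CD tool (Jenkins, GitLab CI, etc.).

```python
import subprocess

# Stage names and commands are illustrative; adapt them to the project's tooling.
PIPELINE = [
    ("static verification", ["python", "-m", "pyflakes", "src"]),
    ("unit tests",          ["python", "-m", "pytest", "tests/unit"]),
    ("deploy to INT",       ["python", "deploy.py", "--env", "integration"]),
    ("integration tests",   ["python", "-m", "pytest", "tests/integration"]),
]

def run_pipeline() -> None:
    """Run each stage in order; stop the promotion of the component at the first failure."""
    for stage, command in PIPELINE:
        print(f"--- {stage} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            raise SystemExit(f"stage '{stage}' failed; the component is not promoted")

if __name__ == "__main__":
    run_pipeline()
```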

Configuration management is often neglected even though it is the basis of controlled version management, as well as of the identification of defects and their corrections. As part of DevOps and CI/CD activities, configuration management is mandatory. Without solid configuration management, considering the industrialization of tests (automation and execution associated with each evolution) is illusory, because it would be too costly in terms of manual test effort and delay.

As we have just seen in the previous paragraph, deployment requires not only configuration management, but also access to various environments (e.g. DEV, INT, TEST, SYST, UAT and PPROD) and the authorizations necessary to deploy in these environments. In a system-of-systems, direct deployment to production is not realistic, but deployment to a user acceptance or staging environment can be considered. It will therefore be necessary to coordinate with the teams in charge of ensuring security actions prior to production.
