13 CERTIFICATION

Requirements and approach to designing exams

Certification is used to validate an individual’s skills, competencies and level of proficiency. Most IT companies implement certification programmes to validate their channel partners and to ensure that customers have the skills required to be competent in their roles.

With certification gaining broad acceptance across most industries, there are three main objectives that an organisation looks to achieve when implementing certification:

  1. Ensuring customers and channel partners have access to a network of trusted training expertise and experience, which provides product knowledge and the skills required to be competent in their roles.
  2. Establishing a method of testing which proves that knowledge, skills, competency and performance improvements have been gained.
  3. Ensuring customers, partners and their sponsors gain a return on their technology investment.

In this chapter, the certification development process, legal defensibility, item writing and proctored versus unproctored examinations are discussed in detail.

CERTIFICATION AND TESTING TYPES

There are three classifications and four types of testing used throughout the technical training arena, all of which reference and apply the ISO 17024 standard.1 The key bodies behind this standard are:

  • the International Organization for Standardization (ISO);
  • the American National Standards Institute (ANSI);
  • the United Kingdom Accreditation Service (UKAS);
  • the International Accreditation Forum (IAF).

The main issues ISO 17024 addresses can be summarised as:

  • defining what is being examined (competencies);
  • exploring required knowledge, skills and personal attributes;
  • ensuring examinations are independent;
  • checking that examinations provide a valid test of competence.

Testing types

Testing types are:

  • Written or practical: based on levels of understanding rather than an ability to implement competently within the working environment, and referred to as a low-stakes examination.
  • Accreditation: accreditation and certification are often used interchangeably, but they are not the same. Accreditation is typically used to validate that channel partners can perform their role as authorised practitioners on behalf of a vendor. These examinations are classified as medium stakes.
  • Certification: used to validate skills and competencies and is referred to as a high-stakes examination.
  • Performance-based: performance-based testing is classified as high stakes because it tests skills, competencies and a candidate’s ability to perform an actual role, with demonstrable and measurable results.

Classifications

Examinations are classified as low, medium or high stakes. The term is derived from gambling and is based on what is at stake in terms of risk level. In a high-stakes assessment, students who have been inappropriately certified could jeopardise the stakeholders they work for, whereas those who deserve to be certified, but are turned down, may have grounds for legal action against the certifying body.

High-stakes testing is used to assess competency where it would provide a mechanism for a student to demonstrate their ability to apply knowledge, skills and attributes associated with a technology they have studied. It is normal practice for a high-stakes examination to be sat after the student has attended a course and been given the opportunity to put it into practice.

Medium-stakes examinations tend to be more skills assessment-based and are in some quarters used to award accreditation, particularly for channel partners rather than customers. Low-stakes examinations are used to measure levels of understanding and are often inserted at the end of a training course.

The important point to consider regarding the three stakes categories is that what is low stakes for one individual may be high stakes for another. Therefore, legal defensibility can apply to all three levels.

ADVANTAGES AND DISADVANTAGES OF CERTIFICATION

There are plus and minus points to certification, depending upon how the value, cost and time of both developing and studying for the examinations are viewed. The two sections below provide a summary of some of the advantages and disadvantages.

Advantages

  • Certification improves industry recognition.
  • Certification can improve job opportunities and employee compensation.
  • Certification ensures that channel partners attain recognised levels of competency, which in turn provides confidence for customers.
  • Certification can be used to add value to the training content offering.

Disadvantages

  • Certification is costly in terms of development time and money.
  • It can be expensive for students to sit the examination.
  • Not all students will undertake certification, resulting in a low return on investment.
  • Every vendor has its own certification programme, which is not always aligned to a specific industry need.
  • Certification can lead to loss of employees who expect increases in their compensation packages aligned with higher skill levels.
  • Certification amendments and updates are costly in relation to staying abreast of the latest product releases.

THE CERTIFICATION DEVELOPMENT LIFECYCLE

To ensure compliance with ISO 17024, a robust development process must be established and implemented. Table 13.1 reflects a typical process to be followed to ensure the development of a valid certification examination to assess knowledge, skills and competencies within the context of a legally defensible framework.

DEFENSIBILITY

From an industry perspective, defensibility refers to the ability of a testing entity (the certification team within a training group) to withstand legal challenges. Challenges can come from those undertaking certification and typically concern the validity of the test development process. Legal defensibility therefore requires proof that the process and programme of test development is valid in a court of law. It is more to do with the validity of the certification examination process than with someone’s, or their company’s, ability to do the job.

Table 13.1 Certification development process

Job task analysis: To ensure the examination content to be developed is valid, the knowledge, skills and competencies required to be a minimally certified professional are assessed and documented by a group of subject matter experts. The knowledge, skills and competencies are typically grouped in terms of tasks, which are then validated by running a survey to gather feedback on their importance, relevance and criticality from a group of actual practitioners.

Blueprint development: The output from the job task analysis survey is used to develop the blueprint for the examination. The tasks are assessed regarding their importance, criticality and relevance, which is then converted into an agreed number of items to be developed per task area. The blueprint guides the item development and examination processes to ensure they satisfy the relative importance of the required knowledge and skills to perform the role, within the criteria of a minimally certified professional.

Item development and validation: Item development is undertaken by subject matter experts, who need to be trained in writing, reviewing and editing questions. Each item is classified by a content category, assigned a cognitive level and validated according to its relevancy to the minimally certified professional requirements.

Examination assembly: Items from each content category, as defined in the blueprint, are reviewed and validated by subject matter experts. The questions that are validated are formulated into an initial version of the examination.

Beta examination and psychometric analysis: The initial version of the examination is released as a beta, with selected candidates invited to participate under test conditions. The questions and results are checked for technical accuracy and psychometric integrity, which can lead to further refinements of the overall examination and assists in strengthening its accuracy and validity.

Cut score: Based on the output from the psychometric analysis, a pass/fail mark is agreed, based on a minimum competence level that is legally defensible in terms of being fair, reliable and valid.

Examination administration: A secure examination environment must be established, as well as the ability to report and document candidate results. Some organisations undertake this themselves and others contract it to specialist companies, which minimises overheads associated with infrastructure, administration and security.

Examination review: In line with the training content development process, the examination needs to be reviewed for relevancy and validity in the market as the product and job requirements change.
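
To illustrate the kind of checks carried out during the beta examination and psychometric analysis stage, the sketch below computes two common item statistics: the difficulty index (the proportion of candidates answering an item correctly) and the point-biserial correlation between an item and the total score. It is a minimal illustration in Python; the response data and the formula variant chosen are assumptions for the sketch, not part of any specific vendor’s process.

  import statistics

  def item_statistics(responses):
      """Compute per-item difficulty and point-biserial discrimination.

      responses: one list per candidate, each a list of 0/1 item scores
      (1 = correct). Illustrative data only.
      """
      n_candidates = len(responses)
      n_items = len(responses[0])
      totals = [sum(candidate) for candidate in responses]
      mean_total = statistics.mean(totals)
      sd_total = statistics.pstdev(totals)

      results = []
      for i in range(n_items):
          item_scores = [candidate[i] for candidate in responses]
          p = sum(item_scores) / n_candidates  # difficulty index
          if 0 < p < 1 and sd_total > 0:
              mean_correct = statistics.mean(
                  t for t, s in zip(totals, item_scores) if s == 1)
              r_pb = ((mean_correct - mean_total) / sd_total) * ((p / (1 - p)) ** 0.5)
          else:
              r_pb = 0.0  # no variance on this item: flag it for review
          results.append({'item': i + 1, 'difficulty': round(p, 2),
                          'discrimination': round(r_pb, 2)})
      return results

  # Five beta candidates answering four items (assumed data).
  beta_responses = [
      [1, 1, 0, 1],
      [1, 0, 0, 1],
      [0, 1, 0, 1],
      [1, 1, 1, 1],
      [0, 0, 0, 1],
  ]
  for row in item_statistics(beta_responses):
      print(row)

Items with very low discrimination, or with difficulty at either extreme, would typically be flagged for SME review or removal before the live examination is assembled.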

In relation to recorded legal cases, there are four main areas where tests are typically legally challenged:

  1. Reliability: regarding how consistently the test measures a construct (a construct is the latent variable that is being assessed, such as mathematical ability or mechanical aptitude).
  2. Validity: regarding whether the test is measuring what it is supposed to measure.
  3. Fairness: regarding the test measuring the construct(s) it was designed to measure, with no unfair advantage for any given demographic group or subpopulation.
  4. Cut scores: which are the ‘pass/fail’ benchmarks used to determine whether participants have demonstrated an appropriate level of knowledge or skill on a test. A test may be legally challenged if it is believed that these have been unfairly set.

The training certification group must assemble and maintain a portfolio of legal defensibility evidence to promote best practices, and document the processes and procedures that need to be followed in a consistent manner.

As certification rigour is labour and cost intensive, some training groups may find themselves fiscally challenged. In this instance, if it is important to maintain certification, then any reduction in rigour needs to be documented and must not have a significant impact on the validity, reliability and fairness of the exam.

Regarding test question development, to have a sound basis for legal defensibility, the ISO 17024 standard needs to be applied in the following areas:

  • Reliability: how consistently the assessment measures a student response.
  • Validity: whether the test measures what it is supposed to measure.
  • Fairness (and bias): whether the test performs fairly for different groups or demographics.
  • Cut scores: benchmarks used to determine whether students have demonstrated an appropriate level of knowledge or skill on a test. These scores are very common in high- and medium-stakes assessments and are used to define pass and fail levels. Many well-established processes and methods exist for setting pass/fail scores. One of the more popular, based on a test- and question-centred approach, is the modified Angoff method (Thorndike et al., 1971), which requires:
    • SMEs to be briefed on the Angoff method and take the test with performance levels in mind.
    • SMEs to provide estimates for each question of the proportion of borderline or ‘minimally acceptable’ participants that they would expect to get the question correct.
    • Several rounds of assessment, in which the SME estimates are balanced against empirical data from a beta exam.
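
The sketch below shows, in Python, how a panel’s ratings might be combined into a provisional cut score under the modified Angoff approach described above. The ratings and example figures are assumptions for illustration; an actual programme would follow its own documented procedure and reconcile the result with beta examination data.

  def angoff_cut_score(ratings):
      """Derive a provisional cut score from modified Angoff ratings.

      ratings: one list per SME, each holding the estimated proportion of
      minimally acceptable candidates expected to answer each question
      correctly. Returns the expected number of correct answers for a
      borderline candidate, i.e. the provisional pass mark.
      """
      n_items = len(ratings[0])
      # Average the SME estimates question by question.
      per_item = [sum(sme[i] for sme in ratings) / len(ratings)
                  for i in range(n_items)]
      # The provisional cut score is the sum of the per-item expectations.
      return sum(per_item)

  # Illustrative ratings from three SMEs for a five-question exam (assumed data).
  sme_ratings = [
      [0.8, 0.6, 0.9, 0.5, 0.7],
      [0.7, 0.5, 0.9, 0.6, 0.6],
      [0.9, 0.6, 0.8, 0.5, 0.7],
  ]
  cut = angoff_cut_score(sme_ratings)
  print(f"Provisional cut score: {cut:.1f} out of {len(sme_ratings[0])}")
  # Roughly 3.4 out of 5 (about 69 per cent) before beta-exam normalisation.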

ITEM WRITING DEVELOPMENT

As discussed, when developing questions for a low-, medium- or high-stakes examination, an important consideration is that of legal defensibility: specifically, how consistently the questions measure a student’s response and whether they measure what they are supposed to measure.

To achieve this, it is important to understand the structure and approach that should be followed. The first task is to access the course objectives, which allows extraction of the key concepts and learning points by topic area. From this, meaningful and valid test questions can be developed that target the higher levels of cognition in Bloom’s taxonomy (Bloom, 1956).

Bloom’s taxonomy is based around three hierarchical models, which classify learning objectives into levels of complexity and specificity: cognitive, affective and sensory. The cognitive domain is where most focus has been applied in the technical training arena.

On the cognitive level, test questions are structured to measure a learner’s ability to:

  • remember and recall facts, terms and basic concepts from the course studied;
  • apply and use the acquired knowledge to solve problems;
  • analyse, examine and divide information into parts and decide how they relate to one another;
  • synthesise and compile information into different solutions;
  • evaluate, present and defend findings through the application of valid judgements based on access to known information.

Evaluation is the highest level, which requires critical thinking to be applied and is useful when developing performance-based tests. The other levels are more commonly used in the low-, medium- and high-stakes examination questions.
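
As a simple aid when tagging items during development, the cognitive levels listed above can be mapped to typical stem verbs. The mapping below is an illustrative Python sketch; the verbs chosen are assumptions rather than a normative taxonomy.

  # Illustrative mapping of the cognitive levels above to typical stem verbs.
  BLOOM_STEM_VERBS = {
      "remember": ["define", "list", "recall", "identify"],
      "apply": ["use", "solve", "configure", "demonstrate"],
      "analyse": ["compare", "differentiate", "examine", "relate"],
      "synthesise": ["design", "compile", "construct", "propose"],
      "evaluate": ["justify", "assess", "defend", "recommend"],
  }

  def suggest_verbs(level):
      """Return candidate stem verbs for a given cognitive level."""
      return BLOOM_STEM_VERBS.get(level.lower(), [])

  print(suggest_verbs("Analyse"))  # ['compare', 'differentiate', 'examine', 'relate']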

Writing a multiple choice item requires a structure in which a problem statement, known as a stem, is constructed. The stem needs to be written around a definite problem that focuses on specific learning outcomes. This is followed by the development of a list of alternative answers or solutions. One of the alternatives is correct and the others, known as distractors, are incorrect.

Multiple choice test items have several advantages. One is versatility: the test item can assess different levels of learning outcome, from basic recall through analysis to evaluation. The other relates to legal defensibility.

Legal defensibility requires a test to be reliable and valid. Multiple choice items do not suffer from the limitations of true or false questions and can be written to test specific aspects of a learning outcome. This significantly enhances test reliability; validity is also improved, mainly because multiple choice items cover the broader aspects of a course better than an essay-type examination. However, to ensure multiple choice items are written well, the following approach is recommended:

  • The stem needs to:
    • define a problem statement that relates to a learning outcome;
    • focus on known and relevant content covered in the course, otherwise it can be challenged as unreliable and invalid;
    • be a question allowing students to answer by applying the knowledge learned, rather than a partial statement requiring completion from one of the alternative answers.
  • When writing alternative answers for the stem, there are several guidelines that should be followed:
    • Consider four options per stem. Most examinations standardise on this number, as writing additional options does not achieve any extra validity or defensibility.
    • Distractor answers should be concise, comparable, plausible and, where possible, relate to common errors or misconceptions.
    • The answers should be grammatically consistent with the stem and of similar length, structure and concept.
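
The structure described above can also be captured in a simple data representation for use by item writers and reviewers. The sketch below is a minimal Python illustration, assuming an in-house item bank; the field names and checks are illustrative and are not drawn from any particular certification platform.

  from dataclasses import dataclass, field

  @dataclass
  class MultipleChoiceItem:
      """A single multiple choice item: one stem, one key and several distractors."""
      stem: str
      options: list            # all alternative answers, including the key
      correct: str             # the key (must appear in options)
      content_category: str    # blueprint task area the item belongs to
      cognitive_level: str     # e.g. 'remember', 'apply', 'analyse'
      issues: list = field(default_factory=list)

      def review(self):
          """Run basic checks against the item-writing guidelines above."""
          self.issues.clear()
          if len(self.options) != 4:
              self.issues.append("expected four options per stem")
          if self.correct not in self.options:
              self.issues.append("key is not among the options")
          lengths = [len(opt) for opt in self.options]
          if max(lengths) > 2 * min(lengths):
              self.issues.append("options differ greatly in length")
          if not self.stem.strip().endswith("?"):
              self.issues.append("stem is a partial statement, not a question")
          return self.issues

  item = MultipleChoiceItem(
      stem="Which company runs the Firefox web browser?",
      options=["Mozilla", "Google", "Yahoo", "Safari"],
      correct="Mozilla",
      content_category="Web fundamentals",
      cognitive_level="remember",
  )
  print(item.review() or "no issues found")

Checks such as these complement, rather than replace, SME review of plausibility, fairness and relevance to the minimally certified professional.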

Items in general should focus on the application of knowledge and problem solving, rather than on the recall of knowledge or facts. When developing items, a team of SMEs comprising content developers, instructors, technical support and technology design staff should be enlisted to review and assess all items against agreed criteria, including setting the pass or cut score. This enables items to be checked for validity, fairness and reliability to ensure legal defensibility criteria are satisfied.

Example stem and alternative answers:

  1. Which company runs the Firefox web browser?
    • Mozilla [correct answer]
    • Google
    • Yahoo
    • Safari
  2. Files included in messages are often referred to as:
    • client-side scripts
    • cookies
    • attachments [correct answer]
    • server-side scripts

Table 13.2 is an example of a sample stem with alternative answers. The purpose of the stem is to define a problem statement that relates to a learning outcome. The alternative answers comprise a correct answer and two or more incorrect answers, known as distractors, which should be plausible and relate to common misconceptions.

Table 13.2 Sample stem and alternative answers

[Image not reproduced: a sample stem with one correct answer and plausible distractor options.]

The modified Angoff method is a popular approach used to determine passing scores for an examination. It is based on the judgement of a panel of subject matter experts, as mentioned above, and their view of which questions a minimally qualified practitioner should be able to answer. The SMEs then consider how many practitioners in a typical cross-section of 100 would be likely to answer each question correctly. Standard deviation calculations are then applied on a question-by-question basis to determine a pass score, which is further ratified by running a beta examination for a selected audience of 100 participants, with the results used to normalise the passing score before the examination is released.

PROCTORED VERSUS UNPROCTORED EXAMINATIONS

Over the past few years there has been a transition from paper-based examinations towards internet-based ones, the majority requiring candidates to be tested in a controlled physical environment monitored by a proctor. This method ensures that every candidate takes the exam in a secure and quiet environment where the use of mobile phones, private notes, reference books or other means of accessing answers is not allowed. This helps to minimise cheating.

In the real world, employees reference their course notes and search the internet for answers to questions to assist them in the execution of their role. If this was applied to the test environment, there would be no requirement for a proctor. The main disadvantage of unproctored examinations is that there is no way of validating who the actual candidate is.

As proctored examinations are expensive to develop, run and support, consideration could be given to running a mix of the two. If the complexity of the technology being tested is high, then a proctored high-stakes exam could be developed. On the other hand, if expected volumes are low and the examination is geared to testing knowledge and defined as low stakes, then the approach could be to go unproctored.

When looking to balance the training budget, a judgement can be made about the level of importance placed on going low, medium or high stakes, which is another way of saying: develop the examination as proctored or unproctored based on its fitness for purpose.

EXAMINATION-RELATED REVENUE

Revenue from examinations, when compared to content and delivery provision, will normally be low. Most technical certification-based examinations are in the range of £100–400, depending on the level of complexity. For example, many Microsoft examinations are £100 each, whereas an Agile examination with higher complexity will be £450.

The student course to certification conversion rate will be less than 100 per cent, and will depend on vendor influence, promotional activities and relevancy to industry and the employer base. Certification revenue can be improved by the promotion of vouchers, especially if discounted, or by bundling certification in with the purchase of the course.

When vendors provide partner accreditation, conversion rates can be high because it is a requirement to gain entry into the partner programme and be eligible for discounts. Some vendors charge partners the same as they do customers, others discount and some provide the examination free of charge. For vendors who use third parties to provide customer and partner access to their examinations, the actual net revenue is lowered, typically by 40 to 60 per cent.
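
As a rough illustration of the arithmetic involved, the sketch below estimates net certification revenue for a cohort, assuming an exam fee, a course-to-certification conversion rate and a third-party delivery share. All of the figures are assumptions chosen to sit within the ranges quoted above.

  def net_certification_revenue(candidates, exam_fee, conversion_rate, third_party_share):
      """Estimate net exam revenue after conversion and third-party costs."""
      exams_sold = candidates * conversion_rate
      gross = exams_sold * exam_fee
      net = gross * (1 - third_party_share)
      return exams_sold, gross, net

  # 200 course attendees, a £150 exam fee, 40 per cent going on to certify,
  # and a third party retaining 50 per cent of the fee (all assumed figures).
  sold, gross, net = net_certification_revenue(200, 150, 0.40, 0.50)
  print(f"Exams sold: {sold:.0f}, gross: £{gross:,.0f}, net: £{net:,.0f}")
  # Exams sold: 80, gross: £12,000, net: £6,000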

FUTURE TRENDS (BADGING, PERFORMANCE)

Many in the world of education understand the importance of the term ‘lifelong learning’, but what does it mean in the context of technical training?

Well, it is the process of combining what we understand, apply and utilise with experience over time. Primarily, it is using what we have learned in the past and expanding our overall knowledge, skills and understanding to perform or interpret more effectively what is happening around us. Sometimes this is a subconscious activity, as opposed to a programmed or developmental activity.

With the shift from manufacturing to high technology, having the right attitudes, approach and skills for learning is becoming increasingly vital for survival and important for business and economic success. The technical training industry needs to understand and adapt the way it provides access to training, based on providing customers and their employees with skills leading to improved individual and corporate performance.

This is where lifelong learning and testing provision need to be aligned. Attendance at a single course and passing the examination is a small part in the overall process. What is required is a broader approach whereby technical training becomes an integral part of the overall employee developmental experience.

Employee development covers training, career development, performance management, coaching, mentoring, succession planning and assessment. For it to be successful, assessment needs to be applied and used in a manner that is progressive and reflects all aspects of employee development. This is where badges and performance assessment can be combined to measure the overall influence and growth of an employee’s developmental progress.

As technology plays a major part in the success of most companies, it is incumbent upon training providers to understand how it supports the broader success of both employees and employers. From this standpoint, training providers can align their training offering in a more effective and meaningful way.

For example, for each stage of the employee’s development plan, whether knowledge or skills acquisition, a badge can be issued (similar in principle to what a Boy Scout or Girl Guide would receive) to recognise and reward progress. Further badges could be awarded when specific tasks or competencies were proven. This then becomes evidence of an employee’s lifelong learning journey and value to a company.

Badging can also be used to recognise specific areas of employee performance. For example, employees could demonstrate their skills by referencing projects they have completed, where actual tangible proof can be provided to verify that all requirements were met. This validation is undertaken by evaluators who would compare the project goals to the result achieved.

For technical training providers, designing a badging programme is worthy of consideration. Many employers look to hire staff who are task-based and therefore only need to be trained on the elements of the task that the technology supports. To a certain extent, this is accelerating the need for on-demand training, which requires examinations to reflect that. Badging provides a means to support this, which can be further enhanced with performance-based testing.

Mozilla’s Open Badges project, which started in 2012 with funding provided by the MacArthur Foundation, has developed an infrastructure that provides companies with the ability to use and share resources globally by way of an open standard. To date, more than 1,000 organisations have issued several million badges. The Open Badge Infrastructure2 is defined by a technical standard and a Badge Backpack, a service providing badge earners with a way to collect and manage their awards. The standard itself defines the metadata that must be included in a badge for it to be considered OBI-compliant. This includes how it was earned, where it was earned, who earned it and when it expires. This is not too dissimilar from the certification process.
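
The sketch below shows, in Python, the general shape of the metadata such a badge might carry: who earned it, how and where it was earned, and when it expires. The field names are illustrative assumptions and are not taken from the normative Open Badges specification.

  # Illustrative badge metadata, loosely modelled on the ideas described above.
  # Field names are assumptions for the sketch, not the normative OBI schema.
  badge_assertion = {
      "recipient": "jane.doe@example.com",     # who earned it
      "badge_name": "Network Troubleshooting: Level 2",
      "criteria": "https://training.example.com/badges/net-l2/criteria",  # how it was earned
      "issuer": "Example Training Provider",   # where it was earned
      "evidence": "https://training.example.com/projects/jane-doe/42",
      "issued_on": "2024-03-01",
      "expires": "2027-03-01",                 # when it expires
  }

  def is_expired(assertion, today):
      """Return True if the badge carries an expiry date that has passed."""
      expires = assertion.get("expires")
      return expires is not None and expires < today  # ISO dates compare lexically

  print(is_expired(badge_assertion, "2025-01-15"))  # False: still valid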

On the performance assessment side, employees are required to demonstrate or provide evidence that they can undertake the task proficiently. This is more effective than traditional multiple choice certification examinations, as it reflects the candidate’s ability to actually do the task by providing proven evidence. There are three main categories of performance assessment:

  1. Project-based: employees are required to show actual tangible proof regarding completion of a project to an expected and agreed level. Evaluators discuss and question how the project was completed and what lessons were learned.
  2. Portfolios: especially useful for employees involved in programme management where they show evidence of how an overall programme was constructed and executed.
  3. Demonstration: employees are required to demonstrate mastery of particular tasks. Evaluators make judgements by way of observation and questions.

As with certification examinations, badges and performance-based testing require robust, fair, reliable and valid processes to be in place to ensure that legal defensibility is maintained when the stakes for the employee are important.

SUMMARY

Training organisations that implement certification should reference and apply the ISO 17024 standard, which is controlled by four internationally recognised bodies. Certification is used to validate skills and competencies through what are referred to as high-stakes examinations, which must adhere to the ISO 17024 standard.

In order for the certification examination to be compliant with ISO 17024, a robust certification development lifecycle must be established and implemented within the context of a legally defensible framework.

Legal defensibility is a key component of certification, which must ensure that the assessment results and the testing programme are defensible in a court of law. Therefore, a training certification group must maintain a portfolio of legal defensibility evidence to promote best practices and document the processes and procedures that need to be followed in a consistent manner.

Question development also comes under legal defensibility requirements, specifically regarding how consistently the questions measure a student’s response and measure what they are supposed to measure, which is where the application of Bloom’s taxonomy comes into play.

The final stage of certification development is validation of the passing score, which is determined by the application of the modified Angoff method.

Employee development tends to be progressive and reflect not just skills proficiency but also performance contribution. Many companies are looking for training providers to align their training offerings in a more effective and meaningful way. In support of this there is a move towards the adoption of badges.

As a meaningful extension to certification, the Open Badge Infrastructure (OBI), which is defined by a technical standard, provides a process by which training providers can establish project-based activities where they can award badges to students who provide demonstrable evidence of proficiency.

1 ISO 17024 is an international standard (https://en.wikipedia.org/wiki/International_Organization_for_Standardization) specifying the criteria for the operation of a personnel certification body (https://en.wikipedia.org/wiki/Personnel_certification_body) and the requirements for the development and maintenance of the certification scheme.

2 The Open Badges Infrastructure (OBI) Group focuses on developing and maintaining the technical infrastructure that supports open badge systems.
