Training and certification are two important concepts in aviation security performance. Both have key roles to play in the formation and assurance of screener competency, and both are strongly linked to – and even reliant on – each other. However, each concept serves a specific objective within a National Aviation Security Training Programme, and each has its own regulatory provision. Their distinct purposes mean that both are required for successful screener competency.
In this blog, Sophie Hibbin explores how the two concepts differ, what is important about each, and why a successful application of the regulations requires both.
By Sophie Hibbin

Senior Technical Advisor – Aviation Security

What is Training?

Training has varying definitions across the field of civil aviation. Generally, training is a means to deliver knowledge to and enhance the competency of a selected group of persons. For each category of persons performing screening tasks, training should result in specific competencies.

What is Certification?

Certification is defined in ICAO Annex 17 as: "A formal evaluation and confirmation by or on behalf of the appropriate authority for aviation security that a person possesses the necessary competencies to perform assigned functions to an acceptable level as defined by the appropriate authority."

Competency evaluation for screeners can be done through Threat Image Projection (TIP) scores, covert testing, and an image-based interpretation test.

Therefore, we can say that training has the goal of developing competency, whereas certification has the goal of validating competency, often following a standardised assessment. Competency is a dimension of human performance that is used to reliably predict successful performance on the job. A competency is manifested and observed through behaviours that mobilise the relevant knowledge, skills and attitudes to carry out activities or tasks under specified conditions. [1]

A good training programme within aviation security comprises the following factors: [2]
  • Identification of learner needs and general training gaps.
  • Appropriate engagement and support, including embedding security culture.
  • Consideration of the target audience.
  • Integrated human factors principles.
  • Sustainability, leading to long-term security improvements.
  • Effective quality assurance.
  • Measured outcomes.
A good certification programme for X-ray screeners comprises the following factors:
  • Responsibilities for certification are well-defined.  
  • The Appropriate Authority has suitable oversight of the process, can review it, and requires certification evidence.
  • Each screener is tested first on completion of training and then on an ongoing basis.
  • Screeners are evaluated on all the training they receive, e.g. by written exams, oral tests, computer-based image interpretation tests, and practical skills exams in an operational environment.
  • The Appropriate Authority defines minimum pass marks.
  • An image-based interpretation test is used to evaluate prohibited item detection competency, including both images containing prohibited articles and clear images (see the scoring sketch after this list).
  • An appropriate retake process is facilitated.  
  • Certification is objective and standardised.
  • Certification is fair, timely, reliable, and valid.
  • Re-certification is set by the Appropriate Authority and occurs at an appropriate frequency.
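
To make elements such as the minimum pass mark and the mix of prohibited-article and clear images more concrete, the sketch below shows one hypothetical way a result from an image-based interpretation test could be scored. The scoring rule, the equal weighting of hits and correct rejections, and the 0.80 pass mark are illustrative assumptions only; in practice, pass marks and scoring rules are defined by the Appropriate Authority.

```python
# Minimal sketch of scoring an image-based interpretation test result.
# The scoring rule and the 0.80 pass mark are illustrative assumptions,
# not values taken from any regulation or national programme.

from dataclasses import dataclass


@dataclass
class ImageResult:
    contains_prohibited_item: bool  # ground truth for the test image
    flagged_by_screener: bool       # the screener's decision


def score_test(results: list[ImageResult], pass_mark: float = 0.80) -> dict:
    """Return hit rate, false-alarm rate and a simple pass/fail outcome."""
    if not results:
        raise ValueError("no test images supplied")

    threat_images = [r for r in results if r.contains_prohibited_item]
    clear_images = [r for r in results if not r.contains_prohibited_item]

    hits = sum(r.flagged_by_screener for r in threat_images)
    false_alarms = sum(r.flagged_by_screener for r in clear_images)

    hit_rate = hits / len(threat_images) if threat_images else 0.0
    false_alarm_rate = false_alarms / len(clear_images) if clear_images else 0.0

    # Illustrative rule: every correct decision (a hit on a threat image or a
    # correct rejection of a clear image) counts equally towards the score.
    correct = hits + (len(clear_images) - false_alarms)
    overall = correct / len(results)

    return {
        "hit_rate": hit_rate,
        "false_alarm_rate": false_alarm_rate,
        "overall_score": overall,
        "passed": overall >= pass_mark,
    }
```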
Airport security checkpoint during boarding: diverse passengers put personal belongings in plastic trays for screening.

IMAGE: FRAME STOCK FOOTAGE / Shutterstock.com

So, why do they need to be different?

The aviation security system of today is complex, sitting within the context of advanced technology, defined processes and procedures, and challenging operational environments. This complexity has grown in recent times, both from a technological perspective and from the way the system is governed and regulated. The human element remains a key component of aviation security, with the operator of an X-ray machine – the human reviewer – relied upon to make critical and final decisions in respect of potential threats. It is clear that we need to be able to assess the competency of this human reviewer in a robust and consistent manner.

This is where the certification process comes in; screener certification offers us a route to assure that the standard of screening is maintained via an assessment of all the required competencies that our human reviewer needs to be able to do their job. The advantages of certification go beyond this – a mature certification process facilitates quality assurance, highlights potential vulnerabilities and strengths in the security system, validates training processes, and even provides motivation and drive for the screeners themselves.  

To maintain these advantages, however, a certification system should be independent of a training regime. Naturally, certification validates training, but the most effective validations are objective assessments; a blended training and certification approach risks introducing a vested interest in the certification process’s success. 

The three pillars of certification:
  • Certification Pillar 1: Theoretical Testing
  • Certification Pillar 2: Practical Testing
  • Certification Pillar 3: Image-Based Interpretation Test
Formative vs Summative Assessments

A standardised image-based interpretation test, which is also a regulatory requirement, is the only way to assess the operational competency of newly trained screeners and the effectiveness of their training. Theoretical and practical training and testing are, of course, significantly important, as are methods for ongoing competency assessment such as covert testing and TIP. Nevertheless, it is worth exploring the value of a distinct image-based assessment.

It is clear that training, including that mandated under a National Aviation Security Programme, must provide a path for the initial creation of competency and its continuous upkeep. Training also includes a component of assessment – assessment for learning or formative assessment.

Formative assessments, also known as classroom assessments, continuous assessments or assessments for learning, are those carried out by teachers and students as part of day-to-day activity. There are multiple interpretations of formative assessment, but most literature takes the broad definition: “encompassing all those activities undertaken by teachers, and/or by students, which provide information to be used as feedback to modify the teaching and learning activities in which they are engaged”. [3]

For X-ray screeners, an example of this would be the requirement to undergo six hours of computer-based training every six months, using a package that provides feedback.
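
As a rough illustration of how such a formative requirement might be tracked (not a depiction of any real compliance system), the sketch below checks a screener's logged computer-based training sessions against a six-hour target. The session log structure and the rolling six-month window, approximated as 183 days, are assumptions for illustration; whether the requirement applies per fixed period or on a rolling basis depends on the regulatory regime.

```python
# Rough sketch of checking a "six hours of CBT in six months" requirement
# against a screener's session log. The log format, the rolling window and
# the 183-day approximation of six months are illustrative assumptions.

from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class CbtSession:
    session_date: date
    minutes: int  # duration of the image-interpretation CBT session


def meets_cbt_requirement(sessions: list[CbtSession],
                          today: date,
                          required_minutes: int = 6 * 60,
                          window_days: int = 183) -> bool:
    """True if at least six hours of CBT fall inside the rolling window."""
    window_start = today - timedelta(days=window_days)
    recent_minutes = sum(s.minutes for s in sessions
                         if window_start <= s.session_date <= today)
    return recent_minutes >= required_minutes
```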

Summative assessments [4] are used to evaluate trainee learning, skill acquisition, and competency achievement at the conclusion of a defined instructional period. There are generally three main components to summative assessment:

  • The test evaluates whether the trainees have achieved the requisite competencies through their training. What makes an assessment “summative” is not the format of the test but the way in which it is used—i.e., as an assessment of a specific competency.
  • The test is delivered at the end of a training programme or at a specifically defined interval (e.g., every 13 months for the UK X-Ray Competency Test). This means the test provides insight into the effectiveness of the training as a whole and the skills realised rather than a diagnosis of what further learning might be needed, with the aim of then bridging any gaps. Trainees generally enter summative assessments having fully completed a training course designed to give them the relevant competencies.
  • The test produces quantitative scores or grades that are appended to a record and can be used for further evaluative purposes and decision-making.

For X-ray screeners in the UK, this would be the Digital National X-Ray Competency Test, or DXCT.
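
As an illustration of the third summative component above (a quantitative score appended to a record and used for decision-making), the sketch below checks whether a stored certification result is still current against a recertification interval. The record fields are hypothetical, and the 13-month default simply mirrors the UK example mentioned earlier; it is not a description of the DXCT system itself.

```python
# Illustrative sketch of checking whether a summative certification result
# is still current. The record fields and the 13-month default mirror the
# UK example in the text but are assumptions, not a real system's schema.

import calendar
from dataclasses import dataclass
from datetime import date


@dataclass
class CertificationRecord:
    screener_id: str
    test_name: str    # e.g. an image-based competency test
    score: float      # quantitative result appended to the screener's record
    pass_mark: float
    test_date: date


def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


def is_certification_current(record: CertificationRecord,
                             today: date,
                             recert_interval_months: int = 13) -> bool:
    """Current only if the screener passed and the result has not expired."""
    passed = record.score >= record.pass_mark
    expiry = add_months(record.test_date, recert_interval_months)
    return passed and today < expiry
```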

There is a clear difference between formative and summative assessments in our evaluation of an X-ray screener. But why is this difference so important? The next section will explore how the blending of formative and summative approaches can reduce the quality of the overall assessment and, therefore, of the certification process.


Issues with undefined assessment strategies
  • Vested Interests: Having a separate, stand-alone certification process to validate a training programme removes the risk of bias in the certification assessment. Biased assessments risk screeners being "certificated" with an unsuitable level of competency. Where certification is delivered decoupled from the training, it is more likely to stand as a valid and reliable confirmation of competency.
  • Loss of integrity of the final assessment: Having a separate training and certification system ensures that the question bank/image bank is maintained. An assessment for learning may not be subject to the same quality controls, and therefore, there may be instances of malpractice. Where previously seen images are used in a final assessment, the test may become unfair – in reality, a screener is not going to be able to see the images of bags from their live operational environment multiple times.
  • Absence of controlled conditions: A certification assessment is designed to be a true test of acquired competencies, in line with assessment principles. Where controlled conditions are not applied, the result risks being an invalid measure of competency.
  • Diluting the value: An assessment for certification carries a certain weight or credit. This value is diluted without a separate certification assessment process. Conversely, where only summative testing is used, the learning and motivational value of formative assessment is lost (Ismail et al., 2022 [5]; Woods, 2015 [6]).
  • Not compliant with the regulations: Our regulatory regime clearly defines certification and training as different concepts.
  • Difficult to standardise: Without a controlled test with a specific image bank, it becomes harder to effectively compare test scores across time or locations. This removes the data insight advantages brought by an assessment product and makes it harder to quality assure the test itself.
  • High-stakes definition: Linked to 'Diluting the value' above, a test for certification could be defined as "high-stakes" for purposes of accountability. This means that the assessment has direct consequences for passing or failing (in our example, continuing employment as a screener) and carries a potential risk of harm if the test is invalid. Pedagogically, high-stakes tests generally have final, definitive results rather than results that can be discussed and improved (or that may not even be relevant to the trainee at all). They also tend to test whether the answer is right or wrong rather than the process of getting to that answer. High-stakes tests help ensure the safety of the public and, therefore, have a higher level of standardisation and assurance applied to them (Dolin et al., 2018 [7]).
Baggage inspection system

IMAGE: MIKE SHOTS / Shutterstock.com

What can we learn from educational good practice?

Education is fundamentally important to society as a whole; as such, the topic has been extensively researched. Security education – including the training and certification processes defined in our regulations – must look to the wider education world to find solutions to some of its issues.

Conclusions

An image-based interpretation test is one element of an overall certification process for screeners. It can be considered a summative assessment, confirming that the relevant competencies are in place. Educational good practice leads the design of certification systems towards using assessments that match the purpose of the evaluation: if the assessment is being used to inform training and to give knowledge, then it is formative. If the assessment is being used to identify a standard or fulfil a purpose (certification or selection), then it is summative [8].

Often, formative and summative assessments are combined, to the detriment of the goals of one or the other (Harlen & James, 1997 [9]). There will, of course, be certain competencies or training regimes where different assessment practices suit different goals – the literature points to examples of portfolio assessment, where a series of formative assessments can be combined to make a summative judgement (Dolin et al., 2018 [7]). However, this must be done with caution, finding a way of relating them that preserves their distinct purposes and removes vested interest and risk. In practice, for X-ray screener competency, this means maintaining a training programme that is separate from the certification system.

Certification under ICAO Annex 17 is a combination of the evaluation of on-the-job training, practical assessment, theoretical assessment, and the image-based interpretation test for screeners. The full certification process of a screener will therefore include numerous assessments; it should be noted that any method of assessment (e.g. written exam, oral quiz, image interpretation test) can be used for formative or summative purposes (Dixon & Worrell, 2016 [10]). For screener certification, this means an image-based interpretation test can form a valid part of training, but in order to maintain the reliability, validity, and fairness of the summative component, the final image-based interpretation test for certification must be delivered separately.
