4.1. Introduction

In the pharmaceutical industry, analytical methods play a vital role in all experiments performed during the development of a drug product. If the quality of an analytical method is doubtful, then the whole set of decisions based on its measurements is questionable.

Consequently, assessment of the quality of an analytical method is far more than a statistical challenge; it is a matter of good ethics and good business practices.

Many regulatory documents have been released in the pharmaceutical industry to address quality issues. These are primarily ICH and FDA documents. Those that are related to analytical and bioanalytical method validation (ICH, 1995, 1997; FDA, 2001) suggest that analytical methods must comply with specific acceptance criteria to be recognized as validated procedures. The primary aim of these documents is to require evidence that the analytical methods are suitable for their intended use. Unfortunately, discrepancies exist among these documents with respect to the definition of acceptance criteria, and limited guidance is provided for estimating the performance criteria.

In this chapter, background information will be provided on analytical method validation concepts, and apparent inconsistencies will be addressed from a statistical perspective. Statistical methods will be described for the estimation of analytical performance parameters, and decision criteria will be illustrated that are consistent with the concept of a "good" analytical procedure. The impact of these methods on the design of experiments needed to obtain reliable estimates of the performance criteria will be considered. A major emphasis of this chapter is the use of SAS programs to illustrate the computation of assay performance parameters, with only limited discussion of the philosophy and practice of assay validation.

To save space, some SAS code has been shortened and some output is not shown. The complete SAS code and data sets used in this book are available on the book's companion Web site at http://support.sas.com/publishing/bbu/companion_site/60622.html.

4.1.1. Method Classification Based on Data Types

The ultimate goal of an analytical method or procedure is to measure accurately a quantity, such as the concentration of an analyte, or to measure a specific activity, as for example for a biomarker. However, many assays such as cell-based and enzyme activity biomarker assays may not be very sensitive, may lack precision, and/or may not offer definitive reference standards. Assays based on physicochemical (such as chromatographic methods) or biochemical (such as ligand binding assays) properties of an analyte assume that these quantifiable characteristics are reflective of the quantity, concentration, or biological activity of the analyte. For the purpose of analytical validation, we will follow the recently proposed classifications for assay data by Lee et al. (2003). These classifications, summarized below, provide a clear distinction with respect to analytical validation practice and requirements.

Qualitative methods generate data that do not have a continuous proportionality relationship with the amount of analyte in a sample; the data are categorical in nature. Data may be nominal, such as a present/absent call for a gene or gene product. Alternatively, data might be ordinal in nature, with discrete scoring scales (e.g., 1 to 5 or −, +, +++), such as for immunohistochemistry assays or Fluorescence In Situ Hybridization (FISH).

Quantitative methods are assays where the response signal has a continuous relationship with the quantity or activity of the analyte. These responses can therefore be described by a mathematical function. Inclusion of reference standards at discrete concentrations allows the quantification of sample responses by interpolation. A well-defined reference standard may be of limited availability, or it may not be representative of the in vivo presentation, so quantification may not be absolute. Accordingly, three types of quantitative methods have been defined:

  • A definitive quantitative assay uses calibrators fit to a known model to provide absolute quantitative values for unknown samples. Typically, such assays are only possible where the analyte is not endogenous. An example of this is a small molecule drug.

  • A relative quantitative assay is similar in approach, but generally involves the measurement of endogenously occurring analytes. In this case, even a "zero" or blank calibrator may contain some amount of analyte, and quantification can only be done relative to this "zero" level. Examples of this include immunoassays for cytokines such as sTNFRII, or gene expression assays, e.g., Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR).

  • A quasi-quantitative assay does not involve the use of calibrators, mostly due to the lack of suitable reference material, so the analytical result for a test sample is reported only in terms of the assay signal (e.g., optical density in ELISA).
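The interpolation step that underlies definitive and relative quantitative assays can be sketched briefly. The chapter's own programs are written in SAS; the following Python fragment is only an illustrative sketch with hypothetical calibration data and a deliberately simple linear response model (real immunoassay responses are usually nonlinear, e.g., four-parameter logistic):

```python
import numpy as np

# Hypothetical calibration samples: known concentrations (e.g., ng/mL)
# and the instrument responses they produced.
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
resp = np.array([0.11, 0.20, 0.52, 1.02, 2.05, 4.98])

# Fit a simple linear calibration curve: response = a + b * concentration.
b, a = np.polyfit(conc, resp, 1)   # polyfit returns [slope, intercept]

def back_calculate(response):
    """Interpolate an unknown sample's concentration from its response."""
    return (response - a) / b

# Quantify a hypothetical unknown sample from its observed response.
print(round(back_calculate(1.50), 2))
```

For a relative quantitative assay, the same inversion is applied, but the calibrators themselves are expressed relative to a "zero" calibrator that may already contain some endogenous analyte.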

This chapter deals with the assessment of definitive and relative quantitative assays. A full discussion of quasi-quantitative and qualitative assays and the statistical considerations thereof is beyond the scope of this chapter. A good reference on the analytical validation of a typical quasi-quantitative assay is the white paper on immunogenicity by Mire-Sluis et al. (2004).

4.1.2. Objective of an Analytical Method

The objective of a definitive or relative quantitative analytical method is to be able to quantify as accurately as possible each of the unknown quantities that the laboratory will have to determine. In other words, what all analysts expect from an analytical procedure is that the difference between the measurement or observation (X) and the unknown "true value" μT of the test sample be smaller than an acceptance limit λ:

|X − μT| < λ    (4.1)
The acceptance limit λ can differ depending on the requirements of the analyst and the objective of the analytical procedure. The objective is linked to the requirements usually admitted by practice (e.g., 1% or 2% on bulk material, 5% on pharmaceutical specialties, 15% for biological samples). Acceptance limits vary in clinical applications depending on factors such as physiological variability and the intended use.

4.1.3. Objective of the Pre-Study Validation Phase

The aim of the pre-study validation phase is to generate information to guarantee that the analytical method will provide, in routine use, measurements close to the true value (DeSilva et al., 2003; Findlay, 2001; Hubert et al., 2004; Smith and Sittampalam, 1998; Finney, 1978) without being affected by other elements present in the sample. In other words, the validation phase should demonstrate that the inequality described in Equation (4.1) holds for a certain proportion of the sample population.

The difference between the measurement X and its true value is a sum of a systematic error (bias or trueness) and a random error (variance or precision). The true values of these parameters are unknown but they can be estimated based on the validation experiments. The reliability of these estimates depends on the adequacy of these experiments (design, size).
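As a concrete illustration of this decomposition, the bias and the within-run and between-run variance components can be estimated from an r × s validation layout with a one-way random-effects ANOVA (method of moments). The sketch below is a hypothetical Python illustration on simulated data, not the chapter's SAS code; the true bias and variance components are chosen arbitrarily so the estimates can be compared against them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical validation data: r runs, s replicates per run, at one
# concentration level whose nominal (true) value is 100.
true_value = 100.0
r, s = 6, 3
run_effects = rng.normal(0.0, 2.0, size=r)            # between-run SD = 2
data = (true_value + 1.0                              # systematic error = 1
        + run_effects[:, None]
        + rng.normal(0.0, 1.5, size=(r, s)))          # within-run SD = 1.5

run_means = data.mean(axis=1)
bias = data.mean() - true_value                       # estimated systematic error

# One-way random-effects ANOVA, method-of-moments estimates:
ms_within = ((data - run_means[:, None]) ** 2).sum() / (r * (s - 1))
ms_between = s * ((run_means - run_means.mean()) ** 2).sum() / (r - 1)
var_within = ms_within
var_between = max((ms_between - ms_within) / s, 0.0)  # truncate at zero

print(round(bias, 3), round(var_within, 3), round(var_between, 3))
```

With only six runs of three replicates, these estimates are noisy, which is exactly why the adequacy of the design (number of runs and replicates) matters.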

Consequently, the objective of the validation phase is to evaluate whether, given the estimates of bias and variance, the expected proportion of measurements that will fall within the acceptance limits is greater than a predefined level, say β:

P(|X − μT| < λ) ≥ β    (4.2)
Although Equation (4.2) cannot be solved exactly within a frequentist framework, Section 4.6 will discuss approximate solutions that can be used in practice.
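One common building block for such approximations assumes that the measurements are normally distributed with bias δ and total standard deviation σ, so the proportion in Equation (4.2) becomes Φ((λ − δ)/σ) − Φ((−λ − δ)/σ), where Φ is the standard normal CDF. The following minimal Python sketch (hypothetical numbers, not the chapter's SAS code) evaluates this proportion and compares it to β:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def proportion_within_limits(bias, sd, lam):
    """P(|X - mu_T| < lam) for X ~ Normal(mu_T + bias, sd**2)."""
    return phi((lam - bias) / sd) - phi((-lam - bias) / sd)

# Hypothetical method: 2% bias, 3% total SD, 10% acceptance limits.
p = proportion_within_limits(bias=2.0, sd=3.0, lam=10.0)
beta = 0.90
print(round(p, 4), p >= beta)
```

In practice δ and σ are unknown and must be replaced by their estimates, which is where the approximate frequentist solutions of Section 4.6 come in.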

4.1.4. Classical Design in Pre-Study Validation

Experiments performed during pre-study validation are designed to mimic the processes and practices to be followed during routine application of a method. All aspects of the analytical method should be taken into account, such as the lot of a solvent, operator, preparation of samples, etc. If measurements generated under these "simulated" conditions are acceptable (see Section 4.6) then the method will be declared valid for routine use. Usually, two sets of samples will be prepared for simulating the real process: calibration and validation samples.

  • Calibration samples (CS) must be prepared according to the protocol that will be followed during routine use, i.e., the same operational mode, the same number of concentration levels for the standard curve, and the same number of repetitions at each level.

  • Validation samples (VS) must be prepared in the sample matrix when applicable. In the validation phase, they mimic the unknown samples that the analytical procedure will have to quantify in routine use. Each validation standard should be prepared independently, in order to have realistic estimates of the variance components.

The minimum design of a pre-study validation phase is at least two replicates per run, or series, in a minimum of three runs. However, it is highly recommended to consider at least six runs in order to obtain a good estimate of the between-run variance. The number of runs and replicates to perform at each concentration level to demonstrate that an analytical procedure is valid can be estimated (by simulation) and depends, of course, on the inherent but unknown properties of the analytical procedure itself. The more variable the method, the more experiments are necessary.

Table 4.1 displays the minimal sample size for r runs and s replicates per run for 10% acceptance limits (the table was computed via simulations, assuming a potential small bias of 2%). It is clear that the number of runs increases with increasing between-run variance. A higher number of runs can be compensated for by more replicates per run, but this leads to a larger total number of experiments (rs). Also, as expected, when the sum of the bias (2%) and the within-run and between-run variances becomes greater than 10%, it becomes unlikely that the method will ever be validated for such acceptance limits; more development work in the laboratory is then required. Reproducibility, which requires between-laboratory experiments, will not be discussed in this chapter.

Table 4-1. Minimal Sample Size for r Runs and s Replicates per Run for 10% Acceptance Limits
Between-run    Within-run variance
variance       1%        2%        3%        4%        5%
               r    s    r    s    r    s    r    s    r    s
1%             4    3    4    3    4    3    4    4    5    9
               5    3    5    3    5    3    5    4    6    7
2%             4    3    4    3    4    3    4    6    8    9
               5    3    5    3    5    3    5    6    9    7
3%             4    4    4    6    5    6    7   10
               5    3    5    3    6    5    8    9
4%             7   10    9    8
               8    7   10    6
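The kind of simulation behind Table 4.1 can be sketched as follows. This is a minimal, hypothetical reconstruction in Python, not the actual program used to build the table: for a candidate design with r runs and s replicates, it repeatedly simulates a validation experiment with a 2% bias and given variance components, estimates the bias and total variance by one-way ANOVA, and counts how often the estimated proportion of measurements within ±10% of the true value reaches a target level β (the value β = 0.80 below is an assumption for illustration):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def pass_rate(r, s, bias, sd_between, sd_within,
              lam=10.0, beta=0.80, n_sim=2000):
    """Fraction of simulated validation experiments (r runs, s replicates,
    percent-of-nominal scale with true value 100) in which the estimated
    proportion of measurements within +/- lam reaches beta."""
    passed = 0
    for _ in range(n_sim):
        runs = rng.normal(0.0, sd_between, size=r)
        data = 100.0 + bias + runs[:, None] + rng.normal(0.0, sd_within, (r, s))
        est_bias = data.mean() - 100.0
        run_means = data.mean(axis=1)
        # Method-of-moments estimates of the variance components.
        ms_w = ((data - run_means[:, None]) ** 2).sum() / (r * (s - 1))
        ms_b = s * ((run_means - run_means.mean()) ** 2).sum() / (r - 1)
        sd_total = sqrt(ms_w + max((ms_b - ms_w) / s, 0.0))
        # Estimated proportion within the acceptance limits, Equation (4.2).
        p = phi((lam - est_bias) / sd_total) - phi((-lam - est_bias) / sd_total)
        passed += p >= beta
    return passed / n_sim

# Example: 2% bias, 1% between-run and 1% within-run SD, 10% limits.
print(pass_rate(r=4, s=3, bias=2.0, sd_between=1.0, sd_within=1.0))
```

A sample-size table is then obtained by scanning (r, s) combinations for each pair of variance components and retaining the smallest designs whose pass rate is acceptably high.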

4.1.5. Example: A Sandwich ELISA Assay

A sandwich ELISA assay, optimized by statistical design of experiments and validated at Lilly Research Laboratories, will be used throughout this chapter for illustration purposes. This data set is available on the book's companion Web site.

The objective of this assay was to quantify a protein used as a biomarker in neurological disease therapeutic research. The ELISA consisted of incubation of samples on plates pre-coated with a capture antibody specific to the protein of interest followed by immunological detection of the specific bound protein by an enzyme conjugate and measurement (optical density) of the colored product. In order to validate this assay, calibration standards and validation samples were prepared in appropriate matrices from stock protein solution via serial dilution and then tested in triplicate. The procedure was followed for four independent runs with two plates per run over four days.
