ANALYTICAL PERFORMANCE CHARACTERISTICS
Accuracy
Definition
The accuracy of an analytical method is the closeness of test results obtained by that method to the true value. The accuracy of an analytical method should be established across its range.
Determination
In assay of a drug substance, accuracy may be determined by application of the analytical method to an analyte of known purity (e.g., a Reference Standard) or by comparison of the results of the method with those of a second, well-characterized method, the accuracy of which has been stated or defined.
In assay of a drug in a formulated product, accuracy may be determined by application of the analytical method to synthetic mixtures of the drug product components to which known amounts of analyte have been added within the range of the method. If it is not possible to obtain samples of all drug product components, it may be acceptable either to add known quantities of the analyte to the drug product (i.e., to spike) or to compare results with those of a second, well-characterized method, the accuracy of which has been stated or defined.
In quantitative analysis of impurities, accuracy should be assessed on samples (of drug substance or drug product) spiked with known amounts of impurities. Where it is not possible to obtain samples of certain impurities or degradation products, results should be compared with those obtained by an independent method. In the absence of other information, it may be necessary to calculate the amount of an impurity on the basis of comparison of its response to that of the drug substance; the ratio of the responses of equal amounts of the impurity and the drug substance (response factor) should be used if known.
Accuracy is calculated as the percentage of recovery by the assay of the known added amount of analyte in the sample, or as the difference between the mean and the accepted true value, together with confidence intervals.
The ICH documents recommend that accuracy be assessed using a minimum of nine determinations over a minimum of three concentration levels, covering the specified range (i.e., three concentrations and three replicates of each concentration).
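As an illustration of the recovery calculation described above, the following is a minimal sketch in Python. The spiked amounts, assay results, and the tabulated t-value are hypothetical assumptions used only for the example, not values taken from this chapter.

```python
import statistics

# Illustrative spiked-sample data (hypothetical): amount of analyte added (mg)
# and amount found by the assay (mg) for nine determinations (3 levels x 3 replicates).
added = [0.80, 0.80, 0.80, 1.00, 1.00, 1.00, 1.20, 1.20, 1.20]
found = [0.79, 0.81, 0.80, 1.01, 0.99, 1.00, 1.19, 1.22, 1.18]

# Percentage recovery for each determination
recovery = [100.0 * f / a for f, a in zip(found, added)]

mean_rec = statistics.mean(recovery)
sd_rec = statistics.stdev(recovery)   # sample standard deviation
n = len(recovery)

# 95% confidence interval for the mean recovery;
# tabulated two-sided t-value for n - 1 = 8 degrees of freedom.
t_95 = 2.306
half_width = t_95 * sd_rec / n ** 0.5

print(f"Mean recovery: {mean_rec:.1f}%")
print(f"95% CI: {mean_rec - half_width:.1f}% to {mean_rec + half_width:.1f}%")
```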
Precision
Definition
The precision of an analytical method is the degree of agreement among individual test results when the method is applied repeatedly to multiple samplings of a homogeneous sample. The precision of an analytical method is usually expressed as the standard deviation or relative standard deviation (coefficient of variation) of a series of measurements. Precision may be a measure of either the degree of reproducibility or repeatability of the analytical method under normal operating conditions. In this context, reproducibility refers to the use of the analytical procedure in different laboratories, as in a collaborative study. Intermediate precision expresses within-laboratory variation, as on different days, or with different analysts or equipment within the same laboratory. Repeatability refers to the use of the analytical procedure within a laboratory over a short period of time using the same analyst with the same equipment. For most purposes, repeatability is the criterion of concern in USP analytical procedures, although reproducibility between laboratories or intermediate precision may well be considered during the standardization of a procedure before it is submitted to the Pharmacopeia.
Determination
The precision of an analytical method is determined by assaying a sufficient number of aliquots of a homogeneous sample to be able to calculate statistically valid estimates of standard deviation or relative standard deviation (coefficient of variation). Assays in this context are independent analyses of samples that have been carried through the complete analytical procedure from sample preparation to final test result.
The ICH documents recommend that repeatability should be assessed using a minimum of nine determinations covering the specified range for the procedure (i.e., three concentrations and three replicates of each concentration, or a minimum of six determinations at 100% of the test concentration).
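A minimal sketch of the repeatability calculation follows; the six replicate results are hypothetical and correspond to the six determinations at 100% of the test concentration mentioned above.

```python
import statistics

# Six independent determinations at 100% of the test concentration
# (hypothetical values, expressed as % of label claim)
results = [99.2, 100.1, 99.8, 100.4, 99.6, 100.0]

sd = statistics.stdev(results)                 # standard deviation
rsd = 100.0 * sd / statistics.mean(results)    # relative standard deviation (coefficient of variation)

print(f"SD = {sd:.2f}, RSD = {rsd:.2f}%")
```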
Specificity
Definition
The ICH documents define specificity as the ability to assess unequivocally the analyte in the presence of components that may be expected to be present, such as impurities, degradation products, and matrix components. Lack of specificity of an individual analytical procedure may be compensated for by other supporting analytical procedures.
[NOTE: Other reputable international authorities (IUPAC, AOAC) have preferred the term selectivity, reserving specificity for procedures that are completely selective.]
For the test or assay methods below, the above definition has the following implications:
IDENTIFICATION TESTS: ensure the identity of the analyte.
PURITY TESTS: ensure that all the analytical procedures performed allow an accurate statement of the content of impurities of an analyte (e.g., related substances test, heavy metals limit, organic volatile impurity limit).
ASSAYS: provide an exact result, which allows an accurate statement on the content or potency of the analyte in a sample.
Determination
In qualitative analyses (identification tests), the ability to select between compounds of closely related structure that are likely to be present should be demonstrated. This ability should be confirmed by obtaining positive results (perhaps by comparison to a known reference material) from samples containing the analyte, coupled with negative results from samples that do not contain the analyte, and by confirming that a positive response is not obtained from materials structurally similar to or closely related to the analyte.
In an analytical procedure for impurities, specificity may be established by spiking the drug substance or product with appropriate levels of impurities and demonstrating that these impurities are determined with appropriate accuracy and precision.
In an assay, demonstration of specificity requires that it can be shown that the procedure is unaffected by the presence of impurities or excipients. In practice, this can be done by spiking the drug substance or product with appropriate levels of impurities or excipients and demonstrating that the assay result is unaffected by the presence of these extraneous materials.
If impurity or degradation product standards are unavailable, specificity may be demonstrated by comparing the test results of samples containing impurities or degradation products to a second well-characterized procedure (e.g., a pharmacopeial or other validated procedure). These comparisons should include samples stored under relevant stress conditions (e.g., light, heat, humidity, acid or base hydrolysis, oxidation). In an assay, the results should be compared; in chromatographic impurity tests, the impurity profiles should be compared.
The ICH documents state that when chromatographic procedures are used, representative chromatograms should be presented to demonstrate the degree of selectivity, and peaks should be appropriately labeled. Peak purity tests (e.g., using diode array or mass spectrometry) may be useful to show that the analyte chromatographic peak is not attributable to more than one component.
Detection Limit
Definition
The detection limit is a characteristic of limit tests. It is the lowest amount of analyte in a sample that can be detected, but not necessarily quantitated, under the stated experimental conditions. Thus, limit tests merely substantiate that the amount of analyte is above or below a certain level. The detection limit is usually expressed as the concentration of analyte (e.g., percentage, parts per billion) in the sample.
Determination
For noninstrumental methods, the detection limit is generally determined by the analysis of samples with known concentrations of analyte and by establishing the minimum level at which the analyte can be reliably detected.
For instrumental procedures, the same method may be used as for noninstrumental. In the case of methods submitted for consideration as official compendial methods, it is almost never necessary to determine the actual detection limit. Rather, the detection limit is shown to be sufficiently low by the analysis of samples with known concentrations of analyte above and below the required detection level. For example, if it is required to detect an impurity at the level of 0.1%, it should be demonstrated that the procedure will reliably detect the impurity at that level.
In the case of instrumental analytical procedures that exhibit background noise, the ICH documents describe a common approach, which is to compare measured signals from samples with known low concentrations of analyte with those of blank samples. The minimum concentration at which the analyte can reliably be detected is established. Typically acceptable signal-to-noise ratios are 2:1 or 3:1. Other approaches depend on the determination of the slope of the calibration curve and the standard deviation of responses. Whatever method is used, the detection limit should be subsequently validated by the analysis of a suitable number of samples known to be near, or prepared at, the detection limit.
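The signal-to-noise comparison can be sketched as follows. The blank baseline readings, the use of peak-to-peak amplitude as the noise estimate, and the 3:1 criterion are illustrative assumptions for this example.

```python
# Hypothetical baseline readings from a blank injection and the peak height
# measured for a sample spiked at a low analyte concentration.
blank_baseline = [0.9, 1.2, 0.8, 1.1, 1.0, 0.7, 1.3, 0.9]   # detector response units
peak_height = 3.1                                           # same units, low-level spike

# Estimate noise as the peak-to-peak amplitude of the blank baseline.
noise = max(blank_baseline) - min(blank_baseline)
signal_to_noise = peak_height / noise

# A 3:1 signal-to-noise ratio is used here as the acceptance criterion.
detected = signal_to_noise >= 3.0
print(f"S/N = {signal_to_noise:.1f}, reliably detected: {detected}")
```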
Quantitation Limit
Definition
The quantitation limit is a characteristic of quantitative assays for low levels of compounds in sample matrices, such as impurities in bulk drug substances and degradation products in finished pharmaceuticals. It is the lowest amount of analyte in a sample that can be determined with acceptable precision and accuracy under the stated experimental conditions. The quantitation limit is expressed as the concentration of analyte (e.g., percentage, parts per billion) in the sample.
Determination
For noninstrumental methods, the quantitation limit is generally determined by the analysis of samples with known concentrations of analyte and by establishing the minimum level at which the analyte can be determined with acceptable accuracy and precision.
For instrumental procedures, the same method may be used as for noninstrumental. In the case of methods submitted for consideration as official compendial methods, it is almost never necessary to determine the actual quantitation limit. Rather, the quantitation limit is shown to be sufficiently low by the analysis of samples with known concentrations of analyte above and below the quantitation level. For example, if it is required to assay an analyte at the level of 0.1 mg per tablet, it should be demonstrated that the method will reliably quantitate the analyte at that level.
In the case of instrumental analytical methods that exhibit background noise, the ICH documents describe a common approach, which is to compare measured signals from samples with known low concentrations of analyte with those of blank samples. The minimum concentration at which the analyte can reliably be quantified is established. A typically acceptable signal-to-noise ratio is 10:1. Other approaches depend on the determination of the slope of the calibration curve and the standard deviation of responses. Whatever method is used, the quantitation limit should be subsequently validated by the analysis of a suitable number of samples known to be near, or prepared at, the quantitation limit.
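One of the "other approaches" mentioned above, based on the slope of the calibration curve and the standard deviation of the response, is commonly expressed (per ICH Q2) as DL = 3.3σ/S and QL = 10σ/S. A minimal sketch follows, assuming σ is taken as the residual standard deviation of a low-level calibration line; the calibration data are hypothetical.

```python
import statistics

# Hypothetical low-level calibration data: concentration (%) vs. detector response
conc = [0.05, 0.10, 0.15, 0.20, 0.25]
resp = [5.1, 10.3, 14.8, 20.2, 24.9]

n = len(conc)
mean_x, mean_y = statistics.mean(conc), statistics.mean(resp)

# Least-squares slope and intercept of the calibration line
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, resp)) / \
        sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x

# Residual standard deviation of the responses about the regression line
residuals = [y - (intercept + slope * x) for x, y in zip(conc, resp)]
sigma = (sum(r ** 2 for r in residuals) / (n - 2)) ** 0.5

dl = 3.3 * sigma / slope    # detection limit estimate
ql = 10.0 * sigma / slope   # quantitation limit estimate
print(f"DL ~ {dl:.3f}%, QL ~ {ql:.3f}%")
```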
Linearity and Range
Definition of Linearity
The linearity of an analytical method is its ability to elicit test results that are directly, or by a well-defined mathematical transformation, proportional to the concentration of analyte in samples within a given range.
Definition of Range
The range of an analytical method is the interval between the upper and lower levels of analyte (including these levels) that has been demonstrated to be determined with a suitable level of precision, accuracy, and linearity using the method as written. The range is normally expressed in the same units as test results (e.g., percent, parts per million) obtained by the analytical method.
Determination of Linearity and Range
Linearity should be established across the range of the analytical procedure. It should be established initially by visual examination of a plot of signals as a function of analyte concentration or content. If there appears to be a linear relationship, test results should be established by appropriate statistical methods (e.g., by calculation of a regression line by the method of least squares). In some cases, to obtain linearity between the response of an analyte and its concentration, the test data may have to be subjected to a mathematical transformation. Data from the regression line itself may be helpful for providing mathematical estimates of the degree of linearity. The correlation coefficient, y-intercept, slope of the regression line, and residual sum of squares should be submitted.
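A minimal sketch of the least-squares treatment described above, reporting the correlation coefficient, y-intercept, slope, and residual sum of squares; the five concentration levels and responses are hypothetical.

```python
import statistics

# Five concentration levels across the range (hypothetical): concentration vs. response
x = [80.0, 90.0, 100.0, 110.0, 120.0]   # % of test concentration
y = [801.0, 903.0, 998.0, 1102.0, 1201.0]

mean_x, mean_y = statistics.mean(x), statistics.mean(y)
sxx = sum((xi - mean_x) ** 2 for xi in x)
syy = sum((yi - mean_y) ** 2 for yi in y)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))

slope = sxy / sxx
intercept = mean_y - slope * mean_x
r = sxy / (sxx * syy) ** 0.5    # correlation coefficient

# Residual sum of squares about the fitted line
rss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

print(f"slope = {slope:.3f}, y-intercept = {intercept:.2f}, r = {r:.5f}, RSS = {rss:.2f}")
```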
The range of the method is validated by verifying that the analytical method provides acceptable precision, accuracy, and linearity when applied to samples containing analyte at the extremes of the range as well as within the range.
ICH recommends that, for the establishment of linearity, a minimum of five concentrations normally be used. It is also recommended that the following minimum specified ranges should be considered:
ASSAY OF A DRUG SUBSTANCE (or a finished product): from 80% to 120% of the test concentration.
DETERMINATION OF AN IMPURITY: from 50% to 120% of the specification.
FOR CONTENT UNIFORMITY: a minimum of 70% to 130% of the test concentration, unless a wider or more appropriate range, based on the nature of the dosage form (e.g., metered-dose inhalers), is justified.
FOR DISSOLUTION TESTING: ±20% over the specified range (e.g., if the specification for a controlled-release product covers a range from 20% after 1 hour up to 90% after 24 hours, the validated range would be 0% to 110% of the label claim).
Ruggedness
Definition
The ruggedness of an analytical method is the degree of reproducibility of test results obtained by the analysis of the same samples under a variety of conditions, such as different laboratories, analysts, instruments, lots of reagents, elapsed assay times, assay temperatures, or days. Ruggedness is normally expressed as the lack of influence on test results of operational and environmental variables of the analytical method. Ruggedness is a measure of reproducibility of test results under the variation in conditions normally expected from laboratory to laboratory and from analyst to analyst.
Determination
The ruggedness of an analytical method is determined by analysis of aliquots from homogeneous lots in different laboratories, by different analysts, using operational and environmental conditions that may differ but are still within the specified parameters of the assay. The degree of reproducibility of test results is then determined as a function of the assay variables. This reproducibility may be compared to the precision of the assay under normal conditions to obtain a measure of the ruggedness of the analytical method.
Robustness
Definition
The robustness of an analytical method is a measure of its capacity to remain unaffected by small but deliberate variations in method parameters and provides an indication of its reliability during normal usage.
System Suitability
If measurements are susceptible to variations in analytical conditions, these should be suitably controlled, or a precautionary statement should be included in the method. One consequence of the evaluation of ruggedness and robustness should be that a series of system suitability parameters is established to ensure that the validity of the analytical method is maintained whenever used. Typical variations include the stability of analytical solutions, different equipment, and different analysts. In liquid chromatography, typical variations are the pH of the mobile phase, the mobile phase composition, different lots or suppliers of columns, the temperature, and the flow rate. In the case of gas chromatography, typical variations are different lots or suppliers of columns, the temperature, and the flow rate.
System suitability tests are based on the concept that the equipment, electronics, analytical operations, and samples to be analyzed constitute an integral system that can be evaluated as such. System suitability test parameters to be established for a particular method depend on the type of method being evaluated. They are especially important in the case of chromatographic methods, and submissions to the USP should make note of the requirements under the System Suitability section in the general test chapter Chromatography 621.
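As an illustration only, the following is a minimal sketch of two chromatographic system suitability calculations (tangent-based plate count and the USP tailing factor); the retention time, peak widths, and acceptance limits used here are hypothetical assumptions, not requirements of Chromatography 621.

```python
# Hypothetical measurements from a standard injection (minutes)
t_r = 6.2        # retention time of the analyte peak
w_base = 0.45    # peak width at the baseline, by the tangent method
w_005 = 0.060    # peak width at 5% of peak height
f_005 = 0.026    # distance from the leading edge to the peak maximum, at 5% of peak height

plate_count = 16.0 * (t_r / w_base) ** 2   # column efficiency (theoretical plates), tangent method
tailing = w_005 / (2.0 * f_005)            # tailing factor

# Illustrative acceptance limits for this sketch only
suitable = plate_count >= 2000 and tailing <= 2.0
print(f"N = {plate_count:.0f}, T = {tailing:.2f}, system suitable: {suitable}")
```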
DATA ELEMENTS REQUIRED FOR ASSAY VALIDATION
Compendial assay procedures vary from highly exacting analytical determinations to subjective evaluation of attributes. Considering this variety of assays, it is only logical that different test methods require different validation schemes. This chapter covers only the most common categories of assays for which validation data should be required. These categories are as follows.
Category I: Analytical methods for quantitation of major components of bulk drug substances or active ingredients (including preservatives) in finished pharmaceutical products.
Category II: Analytical methods for determination of impurities in bulk drug substances or degradation compounds in finished pharmaceutical products. These methods include quantitative assays and limit tests.
Category III: Analytical methods for determination of performance characteristics (e.g., dissolution, drug release).
Category IV: Identification tests.
For each assay category, different analytical information is needed. Listed in Table 2 are data elements normally required for each of the categories of assays.
Table 2. Data Elements Required for Assay Validation
Analytical Performance Characteristics | Assay Category I | Assay Category II: Quantitative | Assay Category II: Limit Tests | Assay Category III | Assay Category IV
Accuracy | Yes | Yes | * | * | No
Precision | Yes | Yes | No | Yes | No
Specificity | Yes | Yes | Yes | * | Yes
Detection limit | No | No | Yes | * | No
Quantitation limit | No | Yes | No | * | No
Linearity | Yes | Yes | No | * | No
Range | Yes | Yes | * | * | No
* May be required, depending on the nature of the specific test.
Already established general assays and tests (e.g., titrimetric method of water determination, bacterial endotoxins test) should be revalidated to verify their accuracy (and absence of possible interference) when used for a new product or raw material.
The validity of an analytical method can be verified only by laboratory studies. Therefore, documentation of the successful completion of such studies is a basic requirement for determining whether a method is suitable for its intended applications. Appropriate documentation should accompany any proposal for new or revised compendial analytical procedures.