
Quality Risk Management for Laboratory Operations

Best practices for mitigating risks in all aspects of laboratory operations.


By: Paul Mason

Executive Director, Lachman Consultants

ICH Q9(R1), Quality Risk Management (QRM), defines risk as “the combination of the probability of occurrence of harm and the severity of that harm”1 and indicates that risk-based decision making is inherent throughout the product lifecycle. An obvious critical aspect of this is ensuring that the correct risk-based decisions are made, which goes hand in hand with a sound understanding of the subject matter to which the risk assessment is being applied. ICH Q9(R1) goes into significant detail on the QRM process and the associated phases and methodology (with further detail in Annex I). Annex 2 of ICH Q9(R1) suggests where QRM can be applied in the industry, with the understanding that Annex 2 is not an exhaustive list, as QRM can be applied any time a decision is made with a potential product quality/patient safety impact. So, how does this apply to laboratory operations?
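
Before turning to the laboratory-specific applications, it can help to see that definition in a worked form: risk can be expressed by combining a probability rating and a severity rating for each identified hazard. The short Python sketch below shows one way to do this; the rating scales, scores, and thresholds are illustrative assumptions only, as ICH Q9(R1) does not prescribe any particular scoring scheme.

```python
from dataclasses import dataclass

# Illustrative 1-5 ordinal scales; ICH Q9(R1) does not prescribe specific scales,
# so the values and thresholds below are assumptions for demonstration only.
PROBABILITY = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}

@dataclass
class Risk:
    hazard: str
    probability: str  # key into PROBABILITY
    severity: str     # key into SEVERITY

    @property
    def score(self) -> int:
        # Risk as a combination of the probability of occurrence of harm and its severity.
        return PROBABILITY[self.probability] * SEVERITY[self.severity]

    @property
    def level(self) -> str:
        # Arbitrary illustrative thresholds used only to rank and prioritise.
        if self.score >= 15:
            return "high"
        if self.score >= 8:
            return "medium"
        return "low"

# Hypothetical laboratory hazards, ranked from highest to lowest score.
risks = [
    Risk("Detector noise obscures impurity peak", "possible", "major"),
    Risk("Expired reagent used in sample preparation", "unlikely", "moderate"),
]
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.hazard}: score={r.score}, level={r.level}")
```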

Annex 2 of ICH Q9(R1) provides scenarios; for example, under Quality Risk Management as Part of Production (Validation):

“To identify the scope and extent of verification, qualification, and validation activities (e.g., analytical methods, processes, equipment and cleaning methods).”1

In addition, under Annex 2, Quality Risk Management as Part of Laboratory Control and Stability Studies (Out of Specification Results), it states:

“To identify potential root causes and corrective actions during the investigation of out of specification results.”1

Knowledge management

USP General Chapter <1220>2 demonstrates how QRM can be used during analytical procedure development (as part of analytical procedure lifecycle management) when the analytical control strategy is being established. QRM goes hand in hand with analytical development, where a prerequisite to QRM is an understanding of the critical method attributes in terms of their potential impact on the reported result and whether the proposed analytical control strategy will afford results that meet the method requirements as defined within the Analytical Target Profile (ATP). The example provided in USP <1220> is an Ishikawa diagram that lists the method variables across people, equipment, method, measurement, environment, and materials; these variables are then risk ranked according to their potential impact on the ATP, alongside the proposed control strategy.
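
To make such a risk ranking concrete, the sketch below captures a handful of hypothetical method variables grouped by Ishikawa (6M) category, ranks them by an assumed impact on the ATP, and pairs each with a proposed control. The variables, rankings, and controls are invented for illustration and are not taken from USP <1220>.

```python
# Hypothetical method variables grouped by Ishikawa (6M) category, each risk ranked
# for potential impact on the ATP, with a proposed control. All entries are
# illustrative assumptions, not content from USP <1220>.
method_variables = [
    # (category, variable, ATP impact ranking, proposed control)
    ("People",      "Analyst pipetting technique",   "medium", "Training and competency check"),
    ("Equipment",   "Detector baseline noise",       "high",   "System suitability S/N requirement"),
    ("Method",      "Mobile phase pH",               "high",   "PAR established during development"),
    ("Measurement", "Integration parameters",        "medium", "Locked processing method"),
    ("Environment", "Laboratory temperature",        "low",    "Controlled HVAC, monitored"),
    ("Materials",   "Reference standard purity",     "high",   "Qualified standard with CoA"),
]

# Focus the analytical control strategy on the highest-risk variables first.
for category, variable, impact, control in sorted(
    method_variables, key=lambda row: {"high": 0, "medium": 1, "low": 2}[row[2]]
):
    print(f"[{impact.upper():6}] {category:12} | {variable:30} -> {control}")
```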

This is where a key concept of QRM must be applied: the level of control should be commensurate with the level of risk. For example, with most analytical instrumentation there is an inherent risk associated with the background noise of a detector, which prompts the following questions: What effort should be taken to control that background noise? What risk does the detector noise pose for that detector with that method? These questions can be answered through knowledge management, which ICH Q10, Pharmaceutical Quality System,3 identifies (along with QRM) as an “enabler” by which a company can effectively and successfully implement a pharmaceutical quality system. ICH Q10 states under section E.1:

“Product and process knowledge should be managed from development through the commercial life of the product up to and including product discontinuation. For example, development activities using scientific approaches provide knowledge for product and process understanding.”3

Knowledge management is a necessity for effective QRM. So, returning to the question of the required control for the detector noise, an approach that can be taken (through method development) is to determine the impact that the detector's noise has on the analytical method's ATP, determine which method variables/settings affect the noise, and then identify the appropriate controls. ICH Q14, Analytical Procedure Development,4 refers to the establishment of proven acceptable ranges (PARs) and the method operable design region (MODR) through univariate or multivariate (enhanced approach) experiments, with the goal of understanding the relationship between analytical procedure variables (inputs) and the responses (outputs).
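
As a minimal sketch of that input/output relationship, the following example fits a simple response model to hypothetical univariate data (detector setting versus signal-to-noise ratio) and reports the range of settings predicted to meet an assumed ATP-derived requirement, i.e., a crude proven acceptable range for a single variable. The data, model, and acceptance value are all illustrative assumptions; an enhanced (multivariate) approach under ICH Q14 would use designed experiments and a formal model.

```python
import numpy as np

# Hypothetical univariate data: detector time-constant setting (input) vs observed
# signal-to-noise ratio for the reporting-threshold impurity (output). Values are
# invented for illustration; a real study would follow ICH Q14 and the firm's protocol.
setting = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0])      # seconds
s_to_n  = np.array([6.0, 8.5, 14.0, 22.0, 30.0, 26.0])  # observed S/N

requirement = 10.0  # assumed ATP-derived minimum S/N at the reporting threshold

# Fit a simple quadratic response model (an enhanced approach would use DoE/multivariate models).
coeffs = np.polyfit(setting, s_to_n, deg=2)
model = np.poly1d(coeffs)

# Scan the studied range and report the settings predicted to meet the requirement,
# i.e. a crude proven acceptable range (PAR) for this single variable.
grid = np.linspace(setting.min(), setting.max(), 200)
acceptable = grid[model(grid) >= requirement]
if acceptable.size:
    print(f"Predicted PAR for detector setting: {acceptable.min():.2f} - {acceptable.max():.2f} s")
else:
    print("No setting in the studied range meets the S/N requirement.")
```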

It is conceivable that, regardless of the controls implemented for the critical method attribute, the residual risk is determined to be unacceptable and the analytical technique needs to be changed altogether. Such a scenario could be that, to control the noise of the detector to an acceptable level and operate within the PAR/MODR, the instrument must run too close to its qualified range (per Analytical Instrument Qualification (AIQ)), creating a risk of continual system suitability failures (as manifested through Detection Limit/Quantitation Limit analyses). This leads to another aspect of QRM: continual risk review. Continuing with this scenario, a firm should evaluate instances of laboratory investigations where the root cause was linked to method capability and determine whether the method's analytical control strategy is still effective. Investigation trending is invaluable when assessing whether the level of residual risk is still acceptable.
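
One way to support such a risk review is to trend investigation root causes over time and flag when method-capability findings exceed an agreed trigger. The sketch below is a minimal illustration; the investigation records, root-cause categories, and quarterly threshold are hypothetical.

```python
from collections import Counter
from datetime import date

# Hypothetical laboratory investigation log: (date closed, root-cause category).
# Categories and the review threshold are assumptions for illustration.
investigations = [
    (date(2024, 1, 15), "analyst error"),
    (date(2024, 2, 3),  "method capability"),
    (date(2024, 2, 20), "method capability"),
    (date(2024, 3, 11), "instrument failure"),
    (date(2024, 3, 28), "method capability"),
]

REVIEW_THRESHOLD = 2  # assumed trigger: more than 2 method-capability findings per quarter

by_quarter = Counter(
    (d.year, (d.month - 1) // 3 + 1)
    for d, cause in investigations
    if cause == "method capability"
)

for (year, quarter), count in sorted(by_quarter.items()):
    flag = "RISK REVIEW REQUIRED" if count > REVIEW_THRESHOLD else "within expectation"
    print(f"{year} Q{quarter}: {count} method-capability investigations -> {flag}")
```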

QRM within the laboratory does not just apply to the lifecycle management of analytical procedures, but to all systems, procedures, and processes that pose a potential impact or risk to the quality of the data generated by the laboratory.

Therefore, to make an appropriate QRM decision for a given system/procedure/process, there needs to be an understanding of the potential impact/risk to the data, and that comes back to knowledge management. For example, a laboratory's reference standard program carries a high risk/potential impact to the quality of the generated data, and this should be reflected in the risk reduction controls, such as those addressing reference standard characterization requirements (based upon the type of standard), reference standard retesting (including changes in reference standards), reference standard monitoring, and reference standard storage/inventory control.

Conversely, the laboratory's housekeeping program does not have a similar level of risk/potential impact to the data and, as such, the level of controls should reflect this. For example, the housekeeping program may require a periodic “clean up” where any expired chemicals/reagents are removed along with general cleaning of work areas. As previously stated, any assumptions within the risk assessment relating to residual risk should be monitored for ongoing accuracy via risk review; in this instance, if there are laboratory investigations tied to housekeeping (for example, due to laboratory cross contamination), then the assumptions on residual risk and the required controls would need to be revisited.

QRM should be integral to foundational laboratory processes such as Documentation Control, Data Review, Change Control, and Laboratory Investigation. For example, the laboratory data review procedure should have risk reduction controls designed into the program that have been identified via a risk assessment of the process. The risk assessment should be based upon a flow diagram representing the process, with each process step clearly delineated along with the decision points and the identified risks. For a program as critical as data review, any residual risk needs to be scrutinized for acceptability.
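
A common way to work through such a process-based assessment is an FMEA-style risk register over the mapped process steps. The sketch below illustrates the idea for a few hypothetical data review steps; the steps, failure modes, 1-5 ratings, and action threshold are assumptions, and FMEA is only one of the tools discussed in ICH Q9(R1) Annex I.

```python
# Minimal FMEA-style sketch of a risk assessment over a data review process flow.
# Process steps, failure modes, and 1-5 ratings are hypothetical; the choice of an
# FMEA-style tool is one option among those described in ICH Q9(R1) Annex I.
process_steps = [
    # (step, failure mode, severity, occurrence, detectability) - higher detectability = harder to detect
    ("Verify sample weights against LIMS", "Transcription error missed",        4, 3, 2),
    ("Check chromatographic integration",  "Manual reintegration not reviewed", 5, 2, 4),
    ("Confirm system suitability",         "Failing injection overlooked",      5, 2, 2),
    ("Decision: results within spec?",     "OOS result not escalated",          5, 1, 3),
]

for step, failure, sev, occ, det in process_steps:
    rpn = sev * occ * det  # risk priority number
    action = "add/strengthen control" if rpn >= 30 else "current controls acceptable"
    print(f"RPN {rpn:3} | {step:38} | {failure:33} -> {action}")
```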

One of the challenges with a program such as data review, from a QRM perspective, is that there needs to be awareness of the risks associated with the test methods/procedures that generated the data. For example, most test procedures involve sample and standard weighing as part of preparing the sample and standard solutions, which is recognized as a critical step. The data review procedure should recognize what controls are built into the standard/sample weighing procedure and consider the following: Is there real-time second-person verification recorded in a logbook? Are there bar code scanners with automatic transfer to a LIMS system? Is the process highly manual or automated? What are the frequency and range of balance calibration?

The data review program should be designed with an understanding of the data integrity risks associated with the systems/processes that generated the data being reviewed. Returning to the data review of laboratory weighing activities, the data review process should confirm that the periodic balance check occurred per procedure, that its requirements were met, and that there were no errors with the sample and standard weighing process. The latter can be a challenge, but any residual risk associated with the laboratory weighing process that is not recognized when designing the laboratory data review process can result in laboratory incidents, for example, investigations due to a misalignment between what was weighed and the recorded sample/standard weight, or a standard being weighed in place of the sample. Potential laboratory weighing risk reduction controls are real-time verification, barcoding, and using a standard with a profile that is distinct from the samples that are weighed (to aid detection controls). Such risk reduction controls would then be factored into the data review program.
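
The sketch below illustrates how a couple of those checks might look if automated as part of data review: confirming that a passing balance check preceded each weighing, and that the balance printout matches the weight recorded in LIMS. The record layouts, tolerance, and same-day balance-check rule are assumptions for illustration, not a prescribed design.

```python
from datetime import datetime, timedelta

# Illustrative automated data review check for weighing records. Record layouts,
# tolerances, and the daily balance-check rule are assumptions, not a prescribed design.
balance_checks = {  # balance ID -> (last check passed, timestamp)
    "BAL-001": (True, datetime(2024, 5, 6, 7, 45)),
}

weighings = [
    # (balance ID, material, balance printout weight (mg), weight recorded in LIMS (mg), timestamp)
    ("BAL-001", "working standard", 25.13, 25.13, datetime(2024, 5, 6, 9, 10)),
    ("BAL-001", "sample 1",         250.4, 254.0, datetime(2024, 5, 6, 9, 25)),  # transcription error
]

def review_weighing(balance_id, material, printed_mg, lims_mg, when, tolerance_mg=0.05):
    findings = []
    passed, checked_at = balance_checks.get(balance_id, (False, None))
    # Assumed rule: a passing balance check must precede the weighing within 24 hours.
    if not passed or checked_at is None or not (timedelta(0) <= when - checked_at <= timedelta(hours=24)):
        findings.append("no valid balance check prior to weighing")
    if abs(printed_mg - lims_mg) > tolerance_mg:
        findings.append(f"printout ({printed_mg} mg) does not match LIMS entry ({lims_mg} mg)")
    return findings or ["no findings"]

for record in weighings:
    print(record[1], "->", "; ".join(review_weighing(*record)))
```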

Risk assessment

It is paramount that risk assessment is employed when executing an investigation. This applies to all investigation elements, including Root Cause Analysis, Impact Assessment, and CAPA. This is crucial because the investigation may identify multiple potential root causes, various scopes of potential impact, and multiple CAPAs, and risk analysis will then be required to home in on the most appropriate. Impact assessment highlights the importance of risk assessment as a means of defining the scope of potential impact, the data needed to determine impact, and the associated criteria.

When considering an OOS investigation of an API, the previously referenced Ishikawa approach can be used to assess the potential causes, and the most likely causes can then be further investigated. If the OOS is associated with a system suitability failure, there is the potential that the OOS is due to a laboratory root cause and may not be reflective of the material being tested. Therefore, there will be a focus on People, Equipment, Method, Measurement, Environment, and Materials as they relate to executing the laboratory test, to determine whether the cause of the system suitability failure is also the cause of the OOS.

If the cause of the OOS is linked to instrument performance, risk assessment can be used to determine whether the impact assessment should be expanded beyond the subject instrument. For example, should the impact assessment cover all previous testing on the instrument, or only the test method associated with the OOS? Can it be justified that the impact assessment be limited to the subject incident, or does it need to be expanded to other instruments? Obviously, it will come down to the specifics of the instrument failure and the capability of the associated instrument preventive and detection controls, but having the risk rationale documented within the investigation to justify the scope of the impact assessment puts the firm in a position to defend the approach that was taken.

QRM is a cornerstone of change control, where the risk assessment can be cross-departmental, such as when defining specification requirements, where input would be sought from development, manufacturing, regulatory, procurement, and the laboratory. In such a situation, risk management is paramount to setting appropriate specifications, with product/process knowledge employed to ensure there is a scientific basis for the material attributes included within the specification and the criteria that are set. For example, when considering specifications for incoming materials, it is unlikely to be appropriate to default to the vendor's specification, as that will likely result in an unacceptable level of risk to the associated product/process.

Through product and process knowledge, the critical material attributes are hopefully known, and these will be the basis for defining the specification. It is recommended that any residual risk be documented, as it could prove invaluable as a future reference. For example, for a raw material, a risk decision could be made regarding the approach taken for assaying residual solvents/moisture content: these could be assayed via a non-specific method such as Loss on Drying, or an analyte-specific approach could be taken. The change control should record the decision made along with the rationale, including any residual risk, as it may prove necessary to revisit that residual risk (and the associated rationale).

Ultimately, QRM is a powerful decision-making tool whereby risks associated with laboratory operations can be identified and appropriate approaches/actions defined. It should be employed in conjunction with product and process knowledge, and any assumptions/conclusions relating to residual risks should be re-evaluated via lifecycle management.

References
1. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, “ICH Harmonised Guideline – Quality Risk Management Q9(R1),” January 18, 2023, https://database.ich.org/sites/default/files/ICH_Q9%28R1%29_Guideline_Step4_2022_1219.pdf
2. United States Pharmacopeia, General Chapter <1220> Analytical Procedure Life Cycle, effective May 1, 2023, gc-1220-pre-post-20210924.pdf (uspnf.com)
3. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, “ICH Harmonised Tripartite Guideline – Pharmaceutical Quality System Q10,” June 4, 2008, https://database.ich.org/sites/default/files/Q10%20Guideline.pdf
4. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, “ICH Harmonised Guideline – Analytical Procedure Development Q14,” draft version, March 24, 2022, https://database.ich.org/sites/default/files/ICH_Q14_Document_Step2_Guideline_2022_0324.pdf


Note: If you have any questions relating to the employment of Quality Risk Management within laboratory operations, please feel free to reach out to Lachman Consultant Services, Inc. via www.lachmanconsultants.com


Paul Mason, Ph.D., is a Senior Director at Lachman Consultants who has 20+ years of experience in the pharmaceutical industry. He is a Quality Control chemist experienced in sterile parenteral, API, and solid oral dosage forms. His experience spans finished dosage form, CMOs, and API (intermediates) manufacture support in both a Quality Control and Analytical Development setting. Dr. Mason possesses a deep understanding of business strategy relating to drug research, development, quality assurance, quality control, CMC submissions, laboratory design, clinical and pre-clinical quality/analytical development support. In addition, he has provided expert scientific support for the timely resolution of complicated scientific issues raised by FDA application reviewers.
