
The Importance of Change Control within the Laboratory Setting

Exploring best practices for the use of quality risk management within laboratory operations.


By: Paul Mason

Executive Director, Lachman Consultants

Change management is essential to a pharmaceutical quality system and is identified as one of its primary elements within ICH Q10.1 It is expected and understood that change is inherent to a cGMP setting and, as such, there needs to be a systematic and structured process for addressing it. ICH Q10 states:

“[T]o evaluate, approve and implement these changes properly, a company should have an effective change management system… The change management system ensures continual improvement is undertaken in a timely and effective manner. It should provide a high degree of assurance there are no unintended consequences of the change.”

This leads to a critical concept of change management—risk. The September 2006 FDA “Guidance for Industry: Quality Systems Approach to Pharmaceutical CGMP Regulations”2 states:

“Quality risk management can, for example, help guide the setting of specifications and process parameters for drug manufacturing, assess and mitigate the risk of changing a process or specification…”

However, a company should not make the mistake of enabling a culture where the risk of change automatically blocks any change; rather, risk management should be viewed as a primary tool for enabling successful change, because it requires an understanding of the potential consequences of a change and facilitates definition of the controls necessary for the change to be implemented. Inherent to the “c” of cGMP is management’s encouragement of change, which is essential to a culture of continual improvement.

Within a laboratory setting, various triggers can result in a need to make a change, for example, a quality investigation that drives a corrective action, which in turn leads to a change control, or a continual improvement project. Whatever the trigger, it is imperative that the change is executed under a quality-approved procedure. EudraLex Volume 4, Annex 15, Section 11.2,3 states:

“Written procedures should be in place to describe the actions to be taken if a planned change is proposed to a starting material, product component, process, equipment, premises, product range, method of production or testing, batch size, design space or any other change during the lifecycle that may affect product quality or reproducibility.”

Inherent to a robust change management program is a comprehensive record of each change. Such documentation should map the process flow of the change: the intent of the change, the associated risk/potential impact, an initial review of the change record, an implementation plan, the results from executing the change, and monitoring of the change’s effectiveness. Commonly, there is a change review board, consisting of SMEs from the various impacted departments, that participates in the review and approval of a proposed change. The focus of a change review board is ensuring the adequacy of the risk assessment of a change along with the required controls.
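To make the record structure described above concrete, the following is a minimal, purely illustrative sketch (in Python) of how the elements of a change record could be captured in a structured form. The field names and the closure rule are hypothetical and would, in practice, be defined by each firm’s own quality procedures rather than by any regulation or guidance.

from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch of the elements a laboratory change record might capture;
# field names are illustrative only and are not taken from any regulation or guidance.
@dataclass
class ChangeRecord:
    change_id: str                             # unique identifier for traceability
    intent: str                                # what the change is meant to achieve
    trigger: str                               # e.g., CAPA reference or continual improvement project
    risk_assessment: str                       # documented risk / potential impact
    review_board_approvals: List[str]          # SMEs from the impacted departments
    implementation_plan: str                   # planned activities and required controls
    execution_results: Optional[str] = None    # results from executing the change
    effectiveness_check: Optional[str] = None  # post-implementation effectiveness monitoring

    def is_closed(self) -> bool:
        # The record is only considered closed once both the execution results
        # and the effectiveness assessment have been documented.
        return self.execution_results is not None and self.effectiveness_check is not None

Such a structure simply mirrors the documentation flow described above: a record is opened with its intent, trigger, and risk assessment, and can only be considered closed once the execution results and an effectiveness check have been documented.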

In the laboratory setting, it is expected that a considerable percentage of changes will relate to analytical test procedures. When assessing such changes, consideration must be given to their impact on the method (and the associated data), which in turn depends on the significance of the changes. ICH Q124 refers to Established Conditions (ECs), which “are legally binding information considered necessary to assure product quality” and are associated with the control strategy for the product.

ECs also apply to test procedures and relate to the importance of method development as part of analytical lifecycle management. ICH Q12 states:

“ECs related to analytical procedures should include elements which assure performance of the procedure. The extent of ECs and their reporting categories could vary based on the degree of the understanding of the relationship between method parameters and method performance, the method complexity, and control strategy.”


If a change to an analytical test procedure includes a modification to the method’s ECs, it is imperative that there is a clear rationale for the proposed change and an assessment of the potential risk/impact to the associated control strategy. Understandably, a change to a method’s ECs is more significant and carries a higher risk; as such, the justification supporting the change, through method development, method validation, and comparability studies, will need to be robust, and prior approval will likely be required. Change management is facilitated by a greater understanding of a method’s critical attributes and the control strategy required to ensure the quality of reported results. This is where an enhanced approach to analytical method development, as defined within ICH Q14,5 provides a benefit: it gives a better understanding of what the true analytical ECs are and thus reduces the risk of mischaracterizing the ECs, which can occur with a more limited method development approach (and which can also result in a higher number of ECs).

When revising a test procedure, one can either change an existing method, which may impact an EC, or replace the method entirely, e.g., switching from an offline HPLC in-process method to an inline spectroscopic PAT method. In either situation, the change is proposed to achieve a desired outcome. That improvement should be characterized/defined, as it forms the basis for a comparability assessment between the old and new methods during change execution and for the effectiveness assessment of the change, i.e., assessing the effectiveness of the controls implemented against any identified risks. The comparability assessment should consider validation of the new analytical procedure (as per ICH Q2(R2)6 and USP <1225> Validation of Compendial Procedures7), and it should be central to execution of the change control. The question, then, is what should be the basis for an analytical comparability assessment protocol? The FDA’s “Guidance for Industry: Analytical Procedures and Methods Validation for Drugs and Biologics”8 states, under analytical comparability studies, that the comparability protocol should demonstrate that:

•  The new method coupled with any additional control measures is equivalent or superior to the original method for the intended purpose.
•  The new analytical procedure is not more susceptible to matrix effects than the original procedure.

The guidance goes on to state that if the comparability protocol is addressing a stability-indicating method:

“Appropriate samples should be included that allow a comparison of the ability of the new and original method to detect relevant product variants and degradation species.”

The guidance then states that:

•  The number of batches analyzed for comparison should provide sufficient statistical power.
•  Equivalence, non-inferiority, or superiority studies should be performed with appropriate statistical methods to demonstrate that the performance of the new or revised method is comparable to, or better than, that of the original method (see the illustrative sketch following this list).
•  The statistical analyses performed to compare product testing should be identified.
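As a minimal, hypothetical illustration of the equivalence point above, the sketch below (in Python, using NumPy and SciPy) applies a two one-sided tests (TOST) approach to paired impurity results reported by the original and new methods on the same samples. The data values, the ±0.05% equivalence margin, and the 5% significance level are assumptions made purely for illustration; the actual statistical design, acceptance criteria, and sample size must be justified within the comparability protocol.

import numpy as np
from scipy import stats

# Illustrative TOST (two one-sided tests) equivalence assessment on paired
# impurity results (%) from the original and the new analytical procedure.
# The data, the +/-0.05% equivalence margin, and the 5% significance level
# are assumptions for illustration only.
original = np.array([0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15])
new      = np.array([0.13, 0.14, 0.12, 0.15, 0.13, 0.15, 0.13, 0.16])

diff = new - original
n = len(diff)
mean_diff = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)
margin = 0.05  # assumed equivalence margin (%)

# Two one-sided t-tests: H0 "difference <= -margin" and H0 "difference >= +margin"
t_lower = (mean_diff + margin) / se
t_upper = (mean_diff - margin) / se
p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)   # test against the lower bound
p_upper = stats.t.cdf(t_upper, df=n - 1)       # test against the upper bound
p_tost = max(p_lower, p_upper)

print(f"mean paired difference = {mean_diff:.4f}%, TOST p-value = {p_tost:.4f}")
# Equivalence (within +/-0.05%) is concluded at the 5% level if p_tost < 0.05.

The same framework extends to non-inferiority or superiority designs by adjusting the hypotheses accordingly.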

So, the change control for an analytical test procedure should address the suitability of the new or revised method towards the issue that drove the requirement for a change. For example, consider a situation where there is a need to switch to a new supplier of API for a commercial manufacturing process, which drives the need to change the drug product’s impurity release method because the existing NDA-filed method is not specific towards a degradant observed upon stability of drug product manufactured with API from the new supplier. The trigger for the change may have been a quality investigation associated with deficient specificity of the existing method, resulting in a CAPA that fed into a change control. The new method’s development and validation may occur outside of the change control but should be linked to the CAPA. If revising the method requires revising an established control, this needs to be recognized in the method’s development and validation, where the impact to the method’s control strategy needs to be assessed so that the impact to the method’s other attributes is considered. For example, if method adjustments are made towards the impurity associated with the new API supplier, will there be a detrimental impact to other impurities, such as stability degradants? Within the change control, it is important that there is a scientific basis supporting that the change is specific towards the desired outcome, with minimal risk of any unwanted outcome.

When generating a change control for implementation of a new method, the development and validation reports are key documents: they will be referenced to justify the suitability of the change towards the trigger (that prompted the change) and will also provide the source data for the risk/potential impact of the change. When implementing a new method, the risk is that “different” results will be generated. But remember, different results are the goal of a revised method insofar as they relate to addressing the issue. Going back to the scenario above, we expect different results for the specific degradant due to the superiority of the new method. However, we do not want other results to differ, i.e., other impurities/degradants (unless the results are superior to those generated using the existing method). This is, therefore, where a comparability protocol is required.

With such a comparability protocol, it is recommended that there first be an assessment of the existing and new methods’ capability based upon the results from method development and validation, specifically towards the issue/trigger that drove the change but also towards other aspects of the method. In such a comparison, it is important not only that the results/data are compared but also that there is a scientific assessment of the design of the new method versus the issue/trigger, so that the method development/validation results can be correlated to the expected outcome based upon the scientific rationale behind the new method. The other aspect of the comparability protocol is comparing results generated by the existing and new methods. USP <1010> Analytical Data – Interpretation and Treatment9 provides various study designs for comparing old and new procedures, with consideration of the selection of test materials, experimental design, and sample size determination. The goal is to stress test the comparison, such that the protocol requires comparison of historical release batches, stability samples (accelerated and long term), force-degraded samples, and new production batches. Any differences, as defined by the comparability protocol, need to be explained, and it should be clear whether the differences reflect superiority of the new method. For example, “Analytical Procedures and Methods Validation for Drugs and Biologics”10 states:

“If new process-related or product-related variants or any new impurities are discovered with the new procedure, testing on retention samples from historical batches should be performed to demonstrate that the variants/impurities detected by the new method are a result of an increase in the sensitivity or selectivity of the new procedure and not a result of a change to process-related impurities.”

Returning to the above scenario, the comparability protocol should focus on generating comparison data from retains, stability samples, new batches, force-degraded samples, etc. It is important to select samples that are close to specification so that the protocol’s stress-testing comparison is not only quantitative but also qualitative, determining whether the same quality decision would be made. It is quite common for such protocols to remain in place for a considerable length of time, during which comparison data is generated for new batches and ongoing stability studies to ensure there is sufficient statistical confidence that the risk associated with implementing the new method is minimal.
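As a rough, hypothetical aid to the sample-size and statistical-confidence points above, the sketch below uses a normal-approximation formula to estimate how many paired batch results might be needed to demonstrate equivalence within an assumed margin at a given power. The margin, variability, power, and significance values are illustrative assumptions only, and the actual determination should follow the protocol’s justified design (e.g., with reference to USP <1010>).

import math
from scipy.stats import norm

# Rough normal-approximation sample-size estimate for a paired equivalence
# (TOST) comparison of two analytical procedures.  All numerical values are
# illustrative assumptions, not recommendations.
alpha = 0.05       # one-sided significance level for each TOST test
power = 0.80       # desired power to conclude equivalence
sigma = 0.04       # assumed standard deviation of the paired differences (%)
margin = 0.05      # assumed equivalence margin (%)
true_diff = 0.0    # assumed true difference between the methods (%)

z_alpha = norm.ppf(1 - alpha)
z_beta = norm.ppf(1 - (1 - power) / 2)  # beta/2 term, as both one-sided tests must pass

n = math.ceil(((z_alpha + z_beta) ** 2) * sigma ** 2 / (margin - abs(true_diff)) ** 2)
print(f"approximate number of paired results required: {n}")
# Small estimates should be refined with t-distribution or simulation-based methods,
# since the normal approximation is optimistic at low sample sizes.

Ongoing comparison data from new batches and stability studies, as described above, can then be evaluated against the protocol’s pre-defined acceptance criteria.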

If you have any questions relating to the above topic, Lachman Consultants can help. Please contact LCS@lachmanconsultants.com for support with this critical undertaking.

References
1. International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use; “ICH Harmonized Tripartite Guideline, Pharmaceutical Quality System Q10”; June 2008; https://database.ich.org/sites/default/files/Q10%20Guideline.pdf.
2. U.S. Food and Drug Administration (FDA); “Guidance for Industry – Quality Systems Approach to Pharmaceutical CGMP Regulations”; September 2006; https://www.fda.gov/media/71023/download.
3. European Commission, “Eudralex, Volume 4, EU Guidelines for Good Manufacturing Practice for Medicinal Products for Human and Veterinary Use, Annex 15: Qualification and Validation”; March 2015; https://health.ec.europa.eu/system/files/2016-11/2015-10_annex15_0.pdf. 
4. International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use; “ICH Harmonized Guideline, Technical and Regulatory Considerations for Pharmaceutical Product Lifecycle Management Q12”; November 2019; https://database.ich.org/sites/default/files/Q12_Guideline_Step4_2019_1119.pdf.
5. International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use; “ICH Harmonized Guideline, Analytical Procedure Development Q14”; November 2023; https://database.ich.org/sites/default/files/ICH_Q14_Guideline_2023_1116.pdf.
6. International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use; “ICH Harmonized Guideline, Validation of Analytical Procedure Q2(R2)”; November 2023; https://database.ich.org/sites/default/files/ICH_Q2%28R2%29_Guideline_2023_1130.pdf.
7. The United States Pharmacopeial Convention; “USP <1225> Validation of Compendial Procedures”; Official as of 01-Aug-2017.
8. U.S. Food and Drug Administration (FDA); “Guidance for Industry – Analytical Procedures and Methods Validation for Drugs and Biologics”; July 2015; https://www.fda.gov/media/87801/download.
9. The United States Pharmacopeial Convention; “USP <1010> Analytical Data – Interpretation and Treatment”; Official as of 01-Dec-2021.
10. U.S. Food and Drug Administration (FDA); “Guidance for Industry – Analytical Procedures and Methods Validation for Drugs and Biologics”; July 2015; https://www.fda.gov/media/87801/download.



Paul Mason, Ph.D., is an Executive Director at Lachman Consultants who has more than 20 years of experience in the pharmaceutical industry. He is a Quality Control chemist experienced in sterile parenteral, API, and solid oral dosage forms. His experience spans finished dosage form, CMOs, and API (intermediates) manufacture support in both a Quality Control and Analytical Development setting. Dr. Mason possesses a deep understanding of business strategy relating to drug research, development, quality assurance, quality control, CMC submissions, laboratory design, clinical and pre-clinical quality/analytical development support. In addition, he has provided expert scientific support for the timely resolution of complicated scientific issues raised by FDA application reviewers.
