
CMC and Regulatory Affairs

See the forest and the trees

By: Edward A. Narke

DS InPharmatics

The Forest: Broadly written guidelines are open to interpretation and give rise to so many half-truths and misconceptions that it is difficult to know with any certainty what constitutes a recommendation, as opposed to an actual requirement, at any given phase of clinical development.

FDA regulations in 21 CFR Section 312.23(a)(7)(i) state that an IND for each phase of investigation must include “sufficient CMC information to ensure the proper identity, strength or potency, quality, and purity of the drug substance and drug product,” and they go on to say, “The type of information submitted will depend on the phase of the investigation, the extent of the human study, the duration of the investigation, the nature and source of the drug substance, and the drug product dosage form.”

Here’s the problem: drug development managers must translate guidance into clear program objectives, identify the data and information that need to be produced, and prioritize resources and expenditures.

The Trees: The goal of this article is to define the essential steps that must be undertaken to create a scientifically cohesive CMC development program that meets FDA requirements by addressing the key quality attributes of identity, strength, purity, potency and safety.

Step 1. Analytical Methods and Product Characterization
Analytical methods are the foundation for acquiring product knowledge. It is critical to develop and qualify a series of methods useful for characterizing product attributes such as structure, purity, chemical modifications and biological activity. Early in development, it is unlikely that a recognized reference standard already exists, so an in-house primary reference must be made and fully characterized through the development of analytical methods and tools.

A reference standard is an indispensable resource during early product development and throughout the product lifecycle. A reference standard has many uses. It is essential in the development of analytical methods and in monitoring their performance over time. It will serve as a valuable benchmark during process development to bridge lot-to-lot comparability and consistency. In certain cases, it may be used as a working assay standard to generate standard curves or as a control to set assay acceptance criteria. It will also be used to assess stability, track trends in product attributes, and guard against product drift over time.
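As a rough illustration of the working-standard use mentioned above, the sketch below fits a four-parameter logistic (4PL) standard curve to a hypothetical dilution series of an in-house reference standard; the concentrations, responses, and units are invented for the example and are not drawn from any real product.

```python
# Illustrative only: fitting a 4PL standard curve to a hypothetical
# reference-standard dilution series (all values are made up).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])     # hypothetical ng/mL
resp = np.array([0.05, 0.12, 0.35, 0.80, 1.40, 1.80, 1.95])  # hypothetical OD units

params, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 2.0, 3.0, 1.0])
bottom, top, ec50, hill = params
print(f"Reference-standard EC50: {ec50:.2f} ng/mL")

# Test samples can then be reported as relative potency against this curve.
```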

Step 2. Analytical Method Development and Stability
Both small molecule drugs and biologics are susceptible to a number of environmental influences, including temperature, pH, light, oxidation, ionic strength, chemical modification and drying. It is important to understand how different denaturation or degradation pathways affect the product. Accelerated stability and forced degradation studies can be useful tools for gaining insight into product stability.

Early efforts toward developing meaningful and reliable analytical methods that characterize pathways of chemical and physical instability are especially useful in conjunction with forced degradation studies. There are advantages to developing physicochemical methods as stability indicators because they are usually simpler and can provide greater precision. Such assays can sometimes reveal trends that are early predictors of stability issues and can be used to predict stability for later batches where real-time data may be limited.
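As a purely hypothetical sketch of how accelerated data can be extrapolated, the example below fits an Arrhenius relationship to invented first-order degradation rates measured at elevated temperatures and projects a time to 5% loss at the intended storage temperature; a real program would follow ICH stability guidance and use actual study data.

```python
# Illustrative only: Arrhenius extrapolation of hypothetical accelerated-stability
# degradation rates down to the intended storage temperature.
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

temps_K = np.array([313.15, 323.15, 333.15])  # 40, 50, 60 degrees C
rates = np.array([0.010, 0.028, 0.075])       # hypothetical fraction degraded per month

# ln(k) = ln(A) - Ea/(R*T): fit ln(k) versus 1/T to a straight line
slope, intercept = np.polyfit(1.0 / temps_K, np.log(rates), 1)
Ea = -slope * R  # apparent activation energy, J/mol

# Extrapolate to 5 degrees C and estimate time to 5% degradation (first-order loss)
k_5C = np.exp(intercept + slope / 278.15)
time_to_5pct = -np.log(0.95) / k_5C
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol; projected time to 5% loss at 5 C: {time_to_5pct:.0f} months")
```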

Stability-indicating assays should be developed and qualified early on, and stability programs should be designed to focus on product attributes that are critical to activity and safety.

Stability-related changes can result in safety issues such as immunogenicity and unwanted side effects or toxicity from degradation products. Correlating forced degradation studies with genotoxicity and/or immunogenicity experiments provides supporting data for identifying product attributes associated with safety and assessing the risk of degradants.

When selecting release tests, it is important to focus on those attributes that address product properties as they relate to its function and safety. These must be incorporated into a quality system for product release to ensure quality and consistency.

FDA Guidance tells us that the confidence in and reliability of analytical results come “from thorough assay optimization, qualification, validation, and tracking of performance over time.”

Step 3. Process Characterization and In-Process Testing
As the saying goes, timing is everything. Once human trials progress into Phase III, it becomes increasingly difficult, and is perceived as increasingly risky, to make critical changes. As a result, process characterization and the identification of critical product attributes should occur earlier, during the Phase I and II stages of development, rather than later.

The emphasis is now on proactively designing in product quality and process control through a better understanding of the underlying science and manufacturing design space. Process optimization and carefully planned designed experiments will help pinpoint variability and will produce key data and information useful for defining the ‘edge of failure.’ This information provides a scientific basis for setting acceptable limits around key process variables and in-process controls.

Specifications are usually set with broad limits early in development and then narrowed as processes and the level of product understanding are refined. Appropriate specifications should be established that balance product needs, manufacturing capabilities, and industry standards. Once again, such efforts rely heavily on the development of suitable analytical methods used to set acceptable limits around key process variables. Analytical methods capable of characterizing pathways of chemical and physical instability can be incorporated into process optimization studies to improve process robustness and extend product shelf life.
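As one hedged illustration of how early batch data might inform provisional limits, the sketch below computes a two-sided normal tolerance interval (Howe’s approximation) from invented potency results; any limits derived this way would still have to be reconciled with product needs, method capability, and compendial expectations.

```python
# Illustrative only: provisional acceptance limits from hypothetical batch data
# using a two-sided normal tolerance interval (Howe's approximation).
import numpy as np
from scipy import stats

batches = np.array([99.1, 101.4, 98.7, 100.2, 102.0, 99.6, 100.8])  # hypothetical % of label claim
n, mean, sd = len(batches), batches.mean(), batches.std(ddof=1)

coverage, confidence = 0.95, 0.95
z = stats.norm.ppf((1 + coverage) / 2)
chi2 = stats.chi2.ppf(1 - confidence, n - 1)
k = np.sqrt((n - 1) * (1 + 1 / n) * z**2 / chi2)

lower, upper = mean - k * sd, mean + k * sd
print(f"Provisional limits: {lower:.1f}% to {upper:.1f}% of label claim")
```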

Understanding and controlling critical process parameters is vital for monitoring product quality and consistency. Early investment in process characterization holds the prospect of reducing batch failure and provides supporting data and information to assess the impact of deviations and process changes as they arise without causing program delays.

At the end of the day, no matter what is suggested, there is no substitute for product knowledge.

Step 4: Process Modifications and Lot-to-Lot Consistency
Manufacturing process changes are inevitable during the course of clinical development, whether for scale-up or refinements to improve manufacturability for commercialization. It is essential to demonstrate comparable strength, stability, and especially safety (e.g., impurities and contaminants) following any significant process change.

Again, analytical methods that fully characterize the product and its critical quality attributes need to be in place in order to demonstrate lot-to-lot comparability following a process change. This is especially important for biologics, where the process largely defines the product.
Development programs that also demonstrate a scientific understanding of process and product characterization can greatly reduce the risk that regulatory authorities perceive to be associated with such changes.
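For a simplified, hypothetical view of what a lot-to-lot comparability check might look like for a single attribute, the sketch below applies a two one-sided tests (TOST) equivalence comparison of pre- and post-change potency results against a pre-specified margin; actual comparability exercises cover many attributes, methods, and study designs.

```python
# Illustrative only: TOST-style equivalence check of hypothetical pre- and
# post-change potency results against a pre-specified comparability margin.
import numpy as np
from scipy import stats

pre = np.array([98.9, 100.3, 99.5, 101.1, 100.0])   # hypothetical pre-change potency, %
post = np.array([99.8, 101.0, 100.4, 99.2, 100.9])  # hypothetical post-change potency, %
margin = 3.0                                         # pre-specified margin, percentage points

n1, n2 = len(pre), len(post)
diff = post.mean() - pre.mean()
sp2 = ((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

# Two one-sided t-tests: the shift must be shown to lie above -margin and below +margin
p_lower = 1 - stats.t.cdf((diff + margin) / se, df)
p_upper = stats.t.cdf((diff - margin) / se, df)
p_tost = max(p_lower, p_upper)
print(f"Mean shift {diff:+.2f} points; TOST p-value {p_tost:.3f}")
```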

Know Your Product
There is simply no substitute for having a sound scientific understanding of your product and the processes used to produce it. Proper application of the regulations — requirements versus recommendations — during preclinical and early-stage clinical development is vital.

The modern-day FDA initiative of quality by design (QbD) is intended to promote the idea of controlling the quality of final products through process and product understanding, and building in better process control using in-process analytics and knowledge of key variables.

You’ll never know everything there is to know about your product. However, the more effort you put into understanding your product and process throughout development, the better prepared you will be to make informed decisions and deal effectively with unanticipated problems that inevitably will arise.

Reference Standards
Establishment of a reference standard does not have to wait for a finalized process or extensive knowledge of product stability. As soon as, or even before, a process begins to resemble what will be used to generate clinical trial material, it is essential to store a batch in aliquots for use as a reference and to evaluate it fully with all available analytical tools.

Formulation
Early investments in optimizing the formulation and the resulting PK/PD profile of the API can mean the difference between clinical success and failure. Most drug candidates have strong scientific foundations, but therapeutic relevance depends on whether a drug ever gets to its target, in a consistent, active form and at the right dose, so that it has a chance to accomplish its intended task.

Variations in dose resulting from stability or delivery issues can severely confound the interpretation of clinical data and make the difference between achieving and failing to achieve statistical significance on clinical endpoints.

Ideally, analytical method development should be integrated with the formulation development program to identify, as early as possible, the appropriate product attributes, along with traditional elements such as optimal storage conditions.

Later formulation changes largely driven by stability concerns carry the risk of unanticipated effects on the clinical outcome. Frequently, regulatory authorities require extensive retesting of ADME and clinical safety studies. 

Avoid The Validation Bog
The word ‘validation’ has become a miniature industry in and of itself. Validation proves that you are measuring what you intend to measure, but validation won’t tell you what you are missing.

Too often, small biotech/pharma companies spend time and precious resources validating analytical methods prior to or during Phase I. It is not unusual for methods to be tweaked (e.g., solvent changes) to better resolve peaks, or even to be replaced entirely with a technology more appropriate for a larger-scale process. Sponsors then have to spend time and money revalidating methods. That money is better spent on advancing process knowledge.

It may not be necessary to validate analytical methods until later stages, but they should be suitably qualified to provide a high level of confidence that the results are dependable.

Making critical decisions based on unreliable data can have catastrophic consequences. It is better to invest resources in the development and qualification of analytical methods that balance product function and safety with manufacturing capabilities than in validating early methodology.

Expanded Change Protocols
Carefully written information packages that summarize the major technical issues and lay out a scientifically cohesive logic trail to their resolution provide necessary support for interactions with regulatory authorities.

Expanded Change Control procedures provide a mechanism for requesting pre-approval of comparability protocols and are consistent with current regulatory practice.


Edward A. Narke is co-founder and regulatory managing director of DS InPharmatics (DSI), a provider of CMC regulatory and operational services. He can be reached at enarke@dsinpharmatics.com.
