Corrective and Preventive Action (CAPA) is vital to the life sciences

Corrective and Preventive Action (CAPA) is essential in any industry, but its importance is all the more pronounced in the life sciences, given the direct bearing this discipline has on human lives. It is at the very core of a sound quality management system. CAPA, as the name suggests, is the set of corrective and preventive measures that need to be taken to ensure that a product meets its quality and regulatory expectations.

The essence of CAPA for the life sciences is that it should build the ability to respond to problems as they arise; more importantly, a CAPA system should help the life sciences organization anticipate problems and thus prevent them from happening.

CAPA in the life sciences is an amalgamation of the following core ingredients:

o  Change Control

o  Continuous improvement

o  Complaint management

The need for understanding and implementing CAPA

CAPA is the very edifice of Quality Management and a sine qua non for meeting regulatory requirements. Its importance to the life sciences industry vis-à-vis the FDA can be gauged from the fact that in just four years starting 2012, the number of Warning Letters the FDA issued shot up more than twelvefold. Understanding and implementing the right CAPA system is critical to meeting regulatory requirements and preventing such actions from the FDA.

Common pitfalls of CAPA implementation

Many life sciences organizations face a few obstacles when it comes to CAPA implementation. The most common ones among these are:

o  Failure to achieve an integrated view of the whole process that can be managed from any source

o  Lack of understanding of the highly intricate and difficult processes

o  Not documenting every step of the CAPA process

o  Lack of clarity about the difference between a corrective and a preventive action

o  Not implementing a uniform process across the product’s or company’s numerous sites.

An important line of thinking that needs to go into approaching CAPA is that, in the life sciences industry, understanding a product’s failures is as important as understanding its successes if it is to meet quality and regulatory requirements. Understanding and correcting problems before they become critical and impede the product’s progress is vital to ensuring quality and meeting regulatory requirements.

Given the extremely high importance CAPA has in the regulated industries, it is only natural that a number of regulations govern this area of the life sciences. How does an organization get its understanding of these regulations right? How does it implement them in the right manner, so that its products meet the regulatory and quality requirements?

Learn to get these aspects right

All of these questions will be the topic of a highly valuable webinar from Compliance4All, a leading provider of professional trainings for all the areas of regulatory compliance.

At this webinar, Charles H. Paul, who is the President of C. H. Paul Consulting, Inc., a regulatory, manufacturing, training, and technical documentation consulting firm, will be the speaker. To gain proper and complete insights into all the areas of CAPA for the life sciences, just register for this webinar by visiting http://www.compliance4all.com/control/w_product/~product_id=501216?Wordpress-SEO

At this session, Charles will explain how to apply the core aspects of the CAPA process, which include:

o  The critical steps

o  The timing of CAPA

o  Who in the organization needs to take part in the process

o  Their roles, responsibilities and functions, and

o  The snags and hazards of the CAPA investigation process.

The content of this webinar is designed to help participants achieve a highly effective CAPA system. It will help explain the purpose and function of CAPA, by which they will be able to:

o  Identify and explain the relevant CAPA regulations

o  Define exception/deviation reporting and explain the process of executing the reporting process

o  Explain and trace the CAPA flow from problem identification to resolution

o  Explain the challenges and pitfalls of the CAPA process and how they are overcome.

o  Explain CAPA’s role in risk mitigation.

o  Explain how root cause analysis is executed.

At this very valuable session, Charles will cover the following areas of CAPA for the life sciences:

o  CAPA defined

o  CAPA relevant regulations

o  Exception/deviation reporting

o  CAPA process flow

o  CAPA process steps explained

o  Challenges and pitfalls of CAPAs

o  CAPA and risk mitigation

o  Root Cause Analysis.

http://www.meritsolutions.com/life-sciences/top-8-pitfalls-and-challenges-of-life-sciences-capa-systems-and-processes/

The importance of Design of Experiments (DoE)

Design of Experiments (DoE) is an important component in many industries. It is a series of tests or runs carried out repeatedly and consistently over a period of time, whose outputs or responses are observed. Design of Experiments is very important in industry because it helps arrive at an understanding of the predictability and reproducibility of an experiment.

Design of Experiments bears directly on the important elements of a product, such as quality, reliability and performance. What Design of Experiments does is examine and investigate the inputs that lead to poor quality. This insight enables the entity carrying out the Design of Experiments to improve its quality standards.

 

Ruling out chance

Design of Experiments does not rely on chance or providence to bring about the quality that is required of a product. It arrives at the optimal set of procedures needed to reach the required quality standards through a series of tests and experiments, so that the final result shows in the process that goes into the product.

Fundamentally, Design of Experiments helps to put in place a system of control for a product. All the ingredients that go into the inputs needed for obtaining a product of a defined standard or quality are scientific and precise. This precision and accuracy is achieved by carrying out as many runs or series of Design of Experiments as needed.

An introduction to Design of Experiments

The ways of understanding Design of Experiments and applying its standards to production will be the topic of a webinar that is being organized by Compliance4All, a leading provider of professional trainings for all areas of regulatory compliance. At this webinar, the speaker, William Levinson, an ASQ Fellow, Certified Quality Engineer, Quality Auditor, Quality Manager, Reliability Engineer, and Six Sigma Black Belt, and the principal of Levinson Productivity Systems, P.C., will explain the fundamentals of Design of Experiments.

To gain a proper understanding of the principles of Design of Experiments and to get a grasp of how to implement this concept into your systems, please register for this webinar by logging on to http://www.compliance4all.com/control/w_product/~product_id=501202?Wordpress-SEO

An understanding of the significance level in hypothesis testing

William will help participants understand how to use Design of Experiments to identify and rule out the particular item or input that affects quality. The concept of the significance level in hypothesis testing, which serves as a basis not only for DoE, but also for Statistical Process Control and acceptance sampling, will be explained.
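
As a minimal illustration of the significance level (and not a preview of William’s material), the Python sketch below computes a two-sided p-value for a hypothetical one-sample z-test and compares it against an alpha of 0.05; the fill-volume figures are invented for the example.

```python
import math

def z_test_p_value(sample_mean, pop_mean, pop_sd, n):
    """Two-sided p-value for a one-sample z-test with known population SD."""
    z = (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))
    # Standard normal CDF expressed via the error function
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

# Hypothetical data: 25 fill-volume measurements average 10.4 mL
# against a target of 10.0 mL, with a known SD of 0.8 mL.
p = z_test_p_value(10.4, 10.0, 0.8, 25)
alpha = 0.05  # the significance level: the Type I (alpha) risk we accept
print(f"p-value = {p:.4f}")
print("reject null" if p < alpha else "fail to reject null")
```

Here p is about 0.012, below the 0.05 significance level, so the experiment is judged to differ from the control; alpha is exactly the risk of reaching that conclusion wrongly.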

A description of the other uses of DoE, such as supporting Corrective and Preventive Action (CAPA) and process improvement, where it helps to identify and optimize the factors that influence a Critical to Quality (CTQ) characteristic, will be part of the learning on offer at this webinar.

 

 

Levinson will cover the following areas at this webinar:

·        Economic benefits of DOE

·        Hypothesis testing: the foundation of DOE, SPC, and acceptance sampling

o  Null and alternate hypothesis

o  Type I or alpha risk of concluding wrongly that the experiment differs from the control (or that a process is out of control, or that an acceptable production lot should be rejected)

o  Type II or beta risk of not detecting a difference between the control and the experiment, not detecting an out of control condition, and accepting a production lot that should be rejected

·        Factors, levels, and interactions

o  Interaction = “the whole is greater or less than the sum of its parts”. One variable at a time experiments cannot detect interactions.

·        Randomization and blocking exclude extraneous variation sources from the experiment.

·        Replication means taking multiple measurements to increase the experiment’s power.

·        Interpret the experiment’s results in terms of the significance level, or quantifiable “reasonable doubt” that the experiment differs from the control.
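
The factors, levels, and interactions above can be sketched numerically. The hypothetical 2×2 full-factorial example below (the yield figures are invented) computes the main effects of factors A and B and their AB interaction; a one-variable-at-a-time experiment would miss the nonzero interaction.

```python
# Coded levels: -1 = low, +1 = high; responses are hypothetical yields.
runs = [
    # (A, B, response)
    (-1, -1, 60.0),
    (+1, -1, 72.0),
    (-1, +1, 68.0),
    (+1, +1, 94.0),
]

def main_effect(col):
    """Average response at the factor's +1 level minus at its -1 level."""
    hi = sum(y for a, b, y in runs if (a, b)[col] == +1) / 2
    lo = sum(y for a, b, y in runs if (a, b)[col] == -1) / 2
    return hi - lo

def ab_interaction():
    """Contrast on the A*B product column; nonzero means the factors interact."""
    hi = sum(y for a, b, y in runs if a * b == +1) / 2
    lo = sum(y for a, b, y in runs if a * b == -1) / 2
    return hi - lo

print(main_effect(0), main_effect(1), ab_interaction())  # 19.0 15.0 7.0
```

The AB interaction of 7.0 says the effect of A depends on the level of B, i.e., "the whole is greater than the sum of its parts".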

http://support.minitab.com/en-us/minitab/17/topic-library/modeling-statistics/doe/basics/what-is-a-designed-experiment/

How do laboratories deal with Out of Specification (OOS) results?

Laboratory testing is the soul of a drug maker’s successful operation. It is required as part of current Good Manufacturing Practices (cGMP) regulations to confirm that all the elements that go into a laboratory product, namely raw materials, in-process materials, finished materials, and containers, conform to set specifications. When a laboratory test throws up an Out of Specification (OOS) result, how do laboratories deal with it?

The FDA takes a very serious view of Out of Specification results

The FDA is very stern in dealing with laboratories that come up with Out of Specification results. It inspects laboratory operations very closely, and has clear guidance on how a laboratory should investigate Out of Specification and Out-of-Tolerance observations.

cGMP regulation Sec. 211.165 specifies that finished products which fail to conform to set specifications, safety standards and other quality standards must be rejected. These cGMP regulations also state that any unexplained deviation from the set specifications of a batch or its contents must be thoroughly investigated if its test results show an Out of Specification result. The rule is the same for batches that have been distributed into the market and for those that have not.

Steps to deal with Out of Specification results

cGMP regulation makes testing against specifications compulsory for the release of a batch. Whenever an Out of Specification result is confirmed, the batch gets rejected; if there is ambiguity in the result and the batch is nevertheless released, the company’s Quality Assurance (QA) will have to state the reasons for the release and justify it.

Section 501(a)(2)(B) of the Food, Drug, and Cosmetic Act, which underlies the cGMP guidelines on Out of Specification, requires that current Good Manufacturing Practices go into the manufacture of both active pharmaceutical ingredients and finished pharmaceuticals. Further, active pharmaceutical ingredients, raw material testing, in-process and stability testing, and Process Validation all come under the purview of the cGMP guidelines.

The FDA guidance on Out of Specification covers the following products:

o  Human drugs

o  Biological and biotechnological products

o  Combination products

o  Veterinary drugs

o  Type A medicated articles

o  Human tissues for transplantation

o  Medicated feed

o  Finished products & active pharmaceutical ingredients

o  Dietary supplements

Need for understanding Out of Specification

All the complexity and depth of the issues relating to Out of Specification results need to be fully understood if a laboratory is to meet the required standards. Important personnel in laboratories should have complete knowledge of the FDA’s expectations for Out of Specification results.

They have to use this knowledge to put in place procedures that define a complete, scientifically sound investigation of each Out of Specification and Out-of-Trend laboratory observation and to establish evidence that laboratory personnel are following the procedures.

A complete understanding of Out of Specification results and dealing with them

This will be the content of a training session that is being organized by Compliance4All, a highly popular provider of cost-effective professional trainings for all the areas of regulatory compliance.

At this session, Jerry Lanese, an independent consultant who focuses on Quality Systems and the components of an effective Quality System, will be the speaker. To understand the concept and workings of Out of Specification results and the ways of dealing with them, register for this webinar by logging on to  http://www.compliance4all.com/control/w_product/~product_id=501214?Wordpress-SEO

Tools that help deal with Out of Specification results

At this webinar, Jerry will help participants build the foundation for the implementation of adequate procedures that help avoid Out of Specification results, and will review existing procedures and practices. This webinar is aimed at helping participants develop an understanding of the steps a compliant laboratory needs to take to handle the investigation of Out of Specification test results.

Jerry will also explain the ways in which the laboratory has to interface with other units through the laboratory investigation process. The FDA guidance on handling OOS laboratory results will be the foundation for this webinar, which will offer a clear process for compliant laboratory Out of Specification investigations.

Jerry will cover the following areas at this webinar:

o  Why the regulators are concerned about the handling of OOS investigations

o  The FDA model for handling OOS investigations

o  Commonly accepted terminology such as repeat testing and retesting

o  How the laboratory can meet regulatory expectations for OOS investigations.

o  The interaction between the laboratory and other units in the organization.

http://www.fda.gov/downloads/drugs/guidances/ucm070287.pdf

http://sphinxsai.com/2013/JulySept13/phPDF/PT=11(943-948)JS13.pdf

Optimizing the use of Microsoft Outlook

From the time of its introduction by Microsoft in the late 1990s, Microsoft Outlook has been a standard package for innumerable organizations around the world. Part of the world-famous Microsoft Office system, Microsoft Outlook is a very popular personal information manager.

In the two decades since its launch, Microsoft Outlook has undergone various improvements. The additions made to it are aimed at improving the user experience and adding features that make it more useful for users.

Needs to be put to full use

Despite its being in the market for such a long time, many users do not know the full range of uses to which Microsoft Outlook can be put. When explored fully, Microsoft Outlook offers a bouquet of features that go beyond just sending and receiving emails and making appointments on the calendar.

The ways of optimizing Microsoft Outlook for office use and making it a more useful personal information manager will be the content of a webinar that is being organized by Compliance4All, a leading provider of very cost-effective professional trainings in all the areas of regulatory compliance.

Understand the ways of optimizing the use of Microsoft Outlook

At this webinar, Mike Thomas, a subject matter expert in a range of technologies including Microsoft Office and Apple Mac, who is a Fellow of The Learning and Performance Institute and has worked with and for a large number of global and UK-based companies and organizations across a diverse range of sectors, will be the speaker.

To get a better understanding of how to put your Microsoft Outlook package to much better use, please register for this webinar by logging on to

http://www.compliance4all.com/control/w_product/~product_id=501148LIVE?Wordpress-SEO

Control Microsoft Outlook; don’t let it control you

Mike will offer simple and easy guidance on how to put Microsoft Outlook to far greater use than people are normally used to. The point about Microsoft Outlook is that if it is put to the right use, it facilitates a number of functions that the user will find thoroughly useful and beneficial.

If it is not, Microsoft Outlook has the potential to throw the user’s work schedule into disorder, as many of its functions can confuse a user who does not know how to use it to the fullest.

Mike will show participants of this webinar the simple way of controlling Microsoft Outlook, rather than letting Microsoft Outlook control the user. He will cover the following areas at this session:

o  The 4D’s of email management

o  Configure Outlook for distraction free productivity

o  How to use Rules to automate email processing

o  How to use Quick Steps to automate a series of actions

o  The benefits of Tables

o  Creating data visualizations using Shapes and SmartArt

o  Converting emails into Tasks

o  Using Categories to tag emails, calendar items and Tasks

o  Using Views to display information in a way that suits you.

The Attribute Agreement Analysis

Humans can be calibrated, although most people like to think otherwise. The commonly used standard, Attribute Agreement Analysis, or what is called AAA, is a handy tool in helping to do this. At its barest, Attribute Agreement Analysis is a method in which the level of agreement or conformance between the appraisal made by the appraiser(s) and the standard is assessed. Then, the elements used for the appraisal that have the highest levels of disagreement with the standard are identified.

The two methods of the Attribute Agreement Analysis

The Attribute Agreement Analysis uses two primary methods of assessing the agreement of the attribute with the standard:

–       The percentage or extent to which the appraisals agree with the standard

–       Kappa statistics, which adjust the observed agreement between the appraisals and the standard for the percentage of agreement that would occur by chance
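
To make the kappa idea concrete, here is a minimal Python sketch of Cohen’s kappa for a single appraiser judged against the standard; the pass/fail dispositions are invented for illustration.

```python
from collections import Counter

def cohens_kappa(appraiser, standard):
    """Cohen's kappa: observed agreement adjusted for chance agreement."""
    n = len(appraiser)
    # Observed proportion of agreement
    p_o = sum(a == s for a, s in zip(appraiser, standard)) / n
    # Expected chance agreement from each rater's marginal frequencies
    ca, cs = Counter(appraiser), Counter(standard)
    p_e = sum(ca[k] * cs[k] for k in set(ca) | set(cs)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail dispositions for 10 parts
standard  = ["P", "P", "P", "P", "P", "P", "F", "F", "F", "F"]
appraiser = ["P", "P", "P", "P", "P", "F", "F", "F", "F", "P"]
print(round(cohens_kappa(appraiser, standard), 3))  # 0.583
```

The appraiser agrees with the standard on 8 of 10 parts (80%), but because roughly 52% agreement would be expected by chance alone, kappa credits only about 0.58 of the possible above-chance agreement.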

The three aspects of Attribute Agreement Analysis

Attribute Agreement Analysis has three aspects: agreement with oneself, agreement with a peer, and agreement with the standard. When calibrating humans, the use of Attribute Agreement Analysis calls for control plans to be put in place for “MSA” analysis on key processes. An AAA may be described as a “Measurement Systems Analysis” (MSA) for attributes.

The Attribute Agreement Analysis method is useful to auditing professionals, who benefit from understanding the effectiveness of these methods when they are used by their clientele and/or in their own organizations.

Gain learning of Attribute Agreement Analysis

The ways by which Attribute Agreement Analysis can be comprehended and used effectively will be the learning on offer at a webinar being organized by Compliance4All, a provider of cost-effective regulatory compliance trainings for a wide range of regulated industries.

The speaker at this webinar is Jd Marhevko, Vice President of Quality and Lean for Accuride Corporation, who has been involved in Operations and Quality/Lean/Six Sigma efforts across a variety of industries for more than 25 years. To gain insights into the inner aspects of Attribute Agreement Analysis, please register for this webinar by logging on to

http://www.compliance4all.com/control/w_product/~product_id=501073?Wordpress-SEO

The “Statistical AAA” and the Kappa value

At this webinar, Jd will review both the “Statistical AAA” and the Kappa value, as well as the confidence levels for the result bands and incorporation of AAA into the Control Plan and frequency of calibration.

She will assess the pros and cons of the method while discussing its general benefits: reductions in arguments over what is good and what is not, in internal and external rework, in returns, in premium freight, etc.

A number of uses from the Attribute Agreement Analysis method

This explanation will help participants understand ways by which they can apply this tool while learning how to bring down business costs. Jd will evaluate the benefits of human calibration by reviewing the three basic types of agreements.

The important takeaway from this session is that it will enable participants to learn the ways of developing, creating, executing and interpreting an Attribute Agreement Analysis so that an accurate and repeatable disposition can be made and rework and returns can be effectively reduced.

At this webinar on Attribute Agreement Analysis, which will be highly useful to professionals such as Quality and Engineering system practitioners, Directors, Engineers, Analysts and Managers, Jd will cover the following areas:

o  How AAA can be effectively utilized for mitigating business loss

o  How to actually perform the analysis

o  Building confidence in the ability to calibrate a human operator.

http://support.minitab.com/en-us/minitab/17/Assistant_Attribute_Agreement_Analysis.pdf

http://support.minitab.com/en-us/minitab/17/topic-library/quality-tools/measurement-system-analysis/attribute-agreement-analysis/what-is-an-attribute-agreement-analysis-also-called-attribute-gage-r-r-study/

Alternatives to AQL sampling plans do exist

Alternatives to AQL sampling plans do exist, but companies need to be aware of them and explore them. Acceptance Quality Limit, or AQL, is applied as a benchmark in most manufacturing organizations to inspect the quality of products they purchase. It is only when the product meets AQL that the receipt is acknowledged and the payment made.

So, what is AQL?

What is AQL? In simple terms, AQL, which expands to Acceptance Quality Limit, is the worst tolerable process average that can be accepted for the quality of a product. It is the defect ratio or percentage beyond which quality cannot be termed acceptable.

Acceptance Quality Limit sampling is accepted as a standard business practice by most medical device companies. Attribute sampling based on ANSI/ASQ Z1.4 and Zero Acceptance Number Sampling Plans by Nicholas L. Squeglia continue to be the most common applications used by companies.

Considering a viable alternative to AQL sampling plans

Although popular, these common methods are not always the best approaches. This is not to belittle the effectiveness of these methods, but to point out that they are in themselves insufficient. Medical device companies need to be aware of a variety of methods and of when and how to use them.

Establishing “processes needed to demonstrate [product] conformity” is a requirement of ISO 9001 and ISO 13485. Similarly, the FDA’s GMP regulation (21 CFR 820) requires that “sampling methods are adequate for their use”. Further, an FDA guideline states that “A manufacturer shall be prepared to demonstrate the statistical rationale for any sampling plan used”.

However, an AQL sampling plan does not provide what is needed to meet either of those requirements. Using only Attribute sampling based on ANSI/ASQ Z1.4 and Squeglia’s Zero Acceptance Number Sampling Plans, it is not possible to actually “demonstrate” that an AQL sampling plan ensures product quality.

This is where “Confidence/reliability” calculations come in as alternatives to AQL sampling plans. They are a better way to assess the quality of purchased parts. It is easy to make such calculations using tables and/or an electronic spreadsheet. It is also easy to use confidence/reliability calculations to provide evidence of product quality. The statistical rationale for such calculations is easy to explain and demonstrate, which is why these calculations constitute strong and reliable alternatives to AQL sampling plans.
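
As a minimal sketch of one such confidence/reliability calculation, the Python snippet below implements the "success-run" formula for attribute data with zero allowed failures; this is a standard example of the approach, not necessarily the exact method John will present.

```python
import math

def sample_size_zero_failures(confidence, reliability):
    """Success-run theorem: smallest n with 0 allowed failures such that
    passing all n demonstrates `reliability` at `confidence`: (1 - C) = R^n."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

def demonstrated_reliability(n, confidence):
    """Reliability demonstrated by n passes and 0 failures at `confidence`."""
    return (1 - confidence) ** (1 / n)

# The classic 95% confidence / 90% reliability plan
n = sample_size_zero_failures(0.95, 0.90)
print(n)                                          # 29 samples, zero failures
print(round(demonstrated_reliability(29, 0.95), 4))
```

In other words, inspecting 29 parts with no defects found demonstrates, with 95% confidence, that at least 90% of the lot conforms, which is the kind of statistical rationale the FDA expects a manufacturer to be able to show.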

A learning session on the alternatives to AQL sampling plans

These alternatives to AQL sampling plans will be the core of a learning session that Compliance4All, a leading provider of professional trainings for all the areas of regulatory compliance, will be organizing. John N. Zorich, a senior consultant for the medical device manufacturing industry, will be the Speaker at this webinar, to enroll for which, all that is needed is to visit http://www.compliance4all.com/control/w_product/~product_id=501099LIVE/~sel=LIVE/~John_N.%20Zorich/~Better_Alternatives_to_AQL_Sampling_Plans_for_Risk_Management_in_Incoming_QC

A complete heads-up on the alternatives to AQL sampling plans

At this webinar on the alternatives to AQL sampling plans, the speaker will explain the pros and cons of ANSI Z1.4, and Squeglia’s C=0 in detail. He will highlight the weaknesses of such plans vis-à-vis meeting regulatory requirements. John will offer real-world examples of how using such sampling plans leads to production of non-conforming product to fortify the learning on the alternatives to AQL sampling plans.

He will also examine ISO and FDA regulations and guidelines regarding the use of statistics, especially in regard to sampling plans. As part of the discussion of alternatives to AQL sampling plans, John will explain the advantages of “confidence/reliability” calculations. Such calculations are demonstrated for attribute data (pass/fail, yes/no data) as well as for variables data (i.e., measurements). If variables data is “Normally distributed”, the calculations are extremely simple. The webinar explains how to handle “non-Normal” data, and provides the methods, formulas, and tools to handle such situations.

The webinar on alternatives to AQL sampling plans ends with a discussion of how one OEM manufacturer has implemented “confidence/reliability” calculations instead of AQL sampling plans for all of its clients. The speaker will offer suggestions for how to use “confidence/reliability” QC specifications instead of “AQL” QC specifications. The use of “reliability plotting” for assessing product reliability during R&D is also discussed.

The speaker will talk on the following topics during this session:

·        AQL and LQL sampling plans

·        OC Curves

·        AOQL

·        ANSI Z1.4

·        Squeglia’s C=0

·        Confidence/Reliability calculations for

o  Attribute data

o  Normally-distributed variables data

o  Non-Normal data

·        Transformations to Normality

·        K-tables

·        Normal Probability Plot

·        Reliability Plotting
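
Of the topics above, the OC curve lends itself to a quick numeric sketch: for a single-sampling attribute plan, the probability of accepting a lot is a binomial sum over the allowed defect count. The n = 29, c = 0 plan below is an invented example of a C=0 plan, used here only to show the shape of the curve.

```python
from math import comb

def prob_accept(n, c, p):
    """OC curve ordinate: probability of accepting a lot with defect rate p
    under a single-sampling plan (inspect n, accept if defects <= c)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical C=0 plan: inspect 29 parts, accept only if zero defects found
for p in (0.001, 0.01, 0.05, 0.10):
    print(f"p = {p:5.3f}  Pa = {prob_accept(29, 0, p):.3f}")
```

Plotting Pa against p for a given plan yields the full OC curve, which is how the alpha (producer’s) and beta (consumer’s) risks of a plan are read off.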

Derive the benefit of powerful legal writing

The responses that organizations give to the FDA when they need to interact with it have to be very precise, technical and scientific. They are a test of the writing skills of the person who frames these responses. The answers have to be very carefully weighed. Responses to the FDA have to be weighty and impactful rather than creative or artistic.

The power of persuasive and precise writing comes into play when organizations have to defend their actions in relation to the method or process they adopt for developing new drugs, or when it comes to responding to FDA citations and other documents. When negotiating with the FDA, the organization has to have very strong and robust responses, which will go a long way in convincing the FDA of the veracity of the intent behind many of the organization’s actions.

These skills can be learnt

These writing skills can certainly be learnt. Documentation that is legally coherent and convincingly written goes further than documents that are merely well written in clearing the FDA hurdle. The ways of doing this properly and effectively will be the learning that a webinar from Compliance4All, a well-known provider of professional trainings for all the areas of regulatory compliance, will be offering.

At this webinar, at which Robert Michalik, a Massachusetts regulatory attorney and founder of RegulatoryPro.com will be the speaker, learning will be offered on a variety of methods by which legal writing can be tightened and compacted. To enroll for this webinar, just log on to http://www.compliance4all.com/control/w_product/~product_id=501030

At this valuable learning session, Michalik will cover the following areas:

o  Basic communications skills that all successful attorneys use to win arguments, in legal briefs and oral presentations

o  Step-by-step analysis of how to present both good and poor data in a persuasive manner

o  How to train scientists and engineers to generate “good” data to support legal, regulatory and quality claims

o  Tips and secrets to framing an argument that makes even poor data look good

o  Examples of good writing that can be useful templates for training and skills development

o  What you should never say in a quality or regulatory document