Cardiovascular Outcomes Trials and COPD Drugs – The Latest Regulatory Point of View

Written by: Dr. Robert Kleiman, Chief Medical Officer & Vice President, Global Cardiology

It’s quite common for patients with chronic obstructive pulmonary disease (COPD) to have concomitant cardiovascular (CV) disease. There’s no great mystery to this – COPD tends to occur in older patients, many of whom also have common CV risk factors such as hypertension, diabetes and hyperlipidemia. And of course, most cases of COPD are related to cigarette smoking, which is itself a strong risk factor for the development of CV disease. Patients who participate in COPD trials often develop CV adverse events that simply reflect their underlying CV disease. As a result, during the development of new drugs to treat COPD, it becomes very difficult to tell whether CV adverse events are directly related to therapy or simply represent the natural history of the patient’s underlying CV disease.

There are similarities with the development of new drugs for the treatment of diabetes, as patients with diabetes often have CV disease and increased rates of CV events. As part of the fallout from rosiglitazone, regulators have raised CV concerns about all new anti-diabetic agents and, despite some controversy, have required large CV outcome safety trials (learn more on a previous ERT Blog). Are sponsors expected to go down the same path for COPD therapies?

Probably not.  During a recent Cardiac Safety Research Consortium (CSRC) Think Tank Meeting, several regulatory representatives commented that there was no clear evidence that current COPD treatment modalities (long acting beta agonists and long acting muscarinic antagonists) produce adverse CV outcomes.  In fact, the results of two large randomized clinical trials, TIOSPIR and UPLIFT, suggest that COPD therapies don’t result in an excess of CV events.  The regulators present at the recent CSRC meeting stated that they did not feel that dedicated CV outcomes trials would be routinely required for new COPD drugs. The CV safety of new COPD medications, however, does remain an area of regulatory concern, and a dedicated CV outcomes trial might be required on a case-by-case basis, depending on the data submitted. This is good news for the industry as a blanket requirement for CV outcomes studies could delay or even halt the development of new treatments for COPD due to the huge expense of these studies.

So how can sponsors prospectively avoid big, long, expensive outcomes trials? Here are a few recommendations based on comments from the FDA, EMA and PMDA:

  1. Include a representative sample of COPD patients – including sicker patients – in Phase III trials.  This may require relaxing the exclusion criteria to allow at least a subset of patients with more severe COPD as well as baseline CV disease.
  2. Obtain better and more robust baseline assessments of CV risk factors and CV status during clinical trial enrollment.
  3. If a CV adverse event does occur, ensure that the site asks the right questions and collects adequate clinical data for later analysis. The CSRC provides forms which can and should be used by sites here.  Formal event adjudication may not be required, but sponsors should at least be prepared to help regulators understand any CV adverse events which have occurred.
Posted in Cardiac Safety, Clinical Research, Clinical Trials, COPD, FDA

A Global Crisis – The War on Suicide in Japan

Dr. Rene Duignan, an Irish economist rather than a professional filmmaker, was driven by a personal regret to create a film that would shed light on the crisis of suicide in Japan. Rene’s award-winning documentary Saving 10,000 — Winning a War on Suicide in Japan explores the true causes of the high suicide rate in the country and suggests practical solutions for suicide prevention. The goal: to save 10,000 lives.

The suicide rate in Japan is twice that of the United States, with 30,000 lives lost each year – 300,000 Japanese people in the last 10 years. While it is “taboo” to talk about suicide in Japan, manuals teaching readers how to kill themselves have sold over a million copies. Suicide is sometimes portrayed in the culture as “beautiful,” and it is often sensationalized or used as a source of entertainment on Japanese television. For many, suicide is also seen as a way to relieve their families of financial struggles: in Japan, life insurance companies pay out to the families of suicide victims, and due to a deep sense of personal responsibility, some would rather take their own lives than continue to face economic hardship.

As in any country, suicidal ideation and behavior often stems from mental illness such as depression, and it does not discriminate: it affects men, women, children, the middle-aged, the elderly, rich, poor, famous and ordinary alike. The documentary notes that two-thirds of all depression is triggered by a pressure of some sort – lack of sleep, being overworked, bullied or abused, loneliness, failure, guilt, and the list goes on. There is a lack of mental health treatment and resources in Japan. The average clinic sees upwards of 40-50 patients, which leaves only 3-4 minutes per individual. Therapy is too expensive for many people, as it is not covered by health insurance.

For every suicide, there are 10 suicide attempts – roughly 300,000 attempts every year, constituting up to 20% of all patients in the most critical emergency medical centers. Of the 30,000 suicides completed annually, approximately 10,000 victims were already in the mental health system – seeking active treatment, having consultations, taking medications or having been institutionalized. Healthcare providers therefore have an opportunity to address this public health crisis: people who commit suicide often see healthcare providers in the critical days before their death.

How can healthcare workers meet the increased demand for their time and address the challenge of rising suicides among patients? Standardized and efficient mental health screening and suicide risk assessment is a part of this answer.

In pharmaceutical product development, the Food and Drug Administration has mandated prospective monitoring of suicidal ideation and behavior for certain classes of drugs. Clinical researchers have used electronic, patient self-reported suicide risk assessment tools to ensure the safety of patients enrolled in clinical trials. AVERT™ is one such tool. AVERT utilizes the electronic Columbia-Suicide Severity Rating Scale (eC-SSRS) to capture the full range of evidence-based suicidal ideation and behavior data. It has been used to conduct more than 160,000 assessments of over 35,000 patients across the globe – including Japan. Global healthcare providers can realize the same benefits clinical researchers continue to find in AVERT.

Electronic, self-reported systems enable patients to complete the assessment privately via the Web, an electronic tablet or their smartphones. People can complete them in just over three minutes, and the results are reported immediately to their healthcare provider. If the assessment finds the patient to be at risk for suicide, an alert will instantly be issued and the appropriate follow-up can be provided.

In addition to the solutions proposed by Dr. Rene Duignan, we believe that healthcare providers have an opportunity to provide community benefit through quick, efficient and effective suicide risk assessment. Systems like AVERT, which leverage good technology and good science, can help do that.

Please take a few moments to view Saving 10,000 — Winning a War on Suicide in Japan below. For any questions regarding AVERT in clinical research or healthcare, you may contact us here.

Posted in Clinical Research, Clinical Trials, eC-SSRS, FDA, Suicidality Monitoring

Regulatory Trends in Reviewing Risk/Benefit Assessments: U.S. FDA Perspective

Today’s blog discusses regulatory trends in reviewing benefit/harm assessments of medical interventions. The important question here is: how do we balance the two? When we use the term “benefits,” we are referring to “the good” that actually happens to patients as a result of medical interventions – more specifically, the ability of a medical intervention to improve outcomes for patients by decreasing symptoms, improving function, or improving survival. It can also mean fewer adverse effects compared to other interventions. Benefit is also termed “efficacy” or “effectiveness.”

Harms, on the other hand, are “the bad”: the adverse, unwanted consequences reasonably associated with the use of medical interventions. These can include signs, symptoms, lab values, vital signs, ECG changes, etc. Harms are often erroneously termed “safety.” However, no intervention is completely “safe” in the sense of an absence of all harms. Safety is really the balance of benefits vs. harms, not harms alone. This concept was recognized early in the history of the FDA, even before the 1962 requirement for effectiveness.

Regulatory History:

Prior to 1938, there was no premarket review, only response to crises. However, based on the sulfanilamide tragedy that year, drugs had to be shown as “safe” prior to marketing and there was recognition that efficacy was important in this consideration. “If the drug that killed one person in ten thousand was of only minor use therapeutically, it might still be judged to be unsafe, whereas the drug that killed one in a thousand persons, if it had marked and undisputed therapeutic value it would still be a safe and valuable drug.” [J.J. Durett, Chief, Drug Division, FDA, December 1938] In other words, “safety” depends upon context of use – the magnitude of benefit, in what patient population, for what disease and at what dose/exposure.

In 1962, the requirement to demonstrate efficacy (benefits) to justify any potential harms, prior to marketing, was established. The standard of efficacy is “substantial evidence” from “adequate and well-controlled studies.”  Efficacy is not based on p-values alone. You need to show clinically meaningful differences as well as statistical significance. [Warner-Lambert v Heckler 1986] This entails judgment of what is considered “clinically meaningful.” A better way to frame that judgment is to have actual evidence from patients about what is meaningful for them through the development and use of patient reported outcomes (PROs).

The standard for evaluating harms is less clear in law and regulation. Section 505 of the Federal Food, Drug, and Cosmetic Act requires “adequate tests by all methods reasonably applicable to show whether or not such drug is safe for use under the conditions prescribed, recommended, or suggested in the proposed labeling.” What constitutes “adequate tests” is highly contextual, depending upon the patients and medical interventions being studied. In a recent court case [Matrixx v. Siracusano, March 22, 2011], Justice Sotomayor stated that statistical significance is not necessary to show harm: “Because adverse event reports can take many forms, assessing their materiality is a fact-specific inquiry…Something more than the mere existence of adverse event reports is needed to satisfy that standard, but that something more is not limited to statistical significance and can come from the source, content and context of the reports.” To improve the clarity and transparency of its benefit-risk assessment in human drug review, the FDA has recently moved toward more structured assessments, entailing both quantitative and qualitative decision making.

Current FDA Thinking:

As part of Prescription Drug User Fee Act (PDUFA) V negotiations, the FDA was tasked to develop structured benefit-risk assessment to serve as a template in product reviews. In February 2013, a document entitled “Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision Making: Draft PDUFA V Implementation Plan” was published to lay out the thinking on the approach to the assessment of benefits and harms when reviewing medical interventions.

One of the things people have been pushing the FDA to do is to use a more “quantitative” approach to benefit-harm assessments. On page four of this document it states: “The term ‘quantitative benefit-risk assessment’ can have various meanings depending on who is asked. Some hold the view that a quantitative benefit-risk assessment encompasses approaches that seek to quantify benefits and risks, as well as the weight that is placed on each of the components such that the entire benefit-risk assessment is quantitative.” “This approach is typical of quantitative decision modeling. It usually requires assigning numerical weights to benefit and risk considerations in a process involving numerous judgments that are at best debatable and at worst arbitrary. The subjective judgments and assumptions that would inevitably be embodied in such quantitative decision modeling would be much less transparent, if not obscured, to those who wish to understand a regulator’s thinking.”  In other words, if you reduce this to only a number, the meaning behind that number will be lost. So, much like efficacy assessments, you cannot boil down assessments of harm to a “p value” which is independent of its clinical meaningfulness.

This document provides a very helpful way to think about benefit/harm assessments, including the suggestion to divide the considerations into two major areas: therapeutic-area and product-specific. Therapeutic-area considerations cover the disease and the types of patients being studied, including an analysis of the condition as well as current treatment options; they lay out the problem but do not address whether the new intervention provides a solution. Product-specific considerations cover the benefits of the particular intervention (what they are, their magnitude, and in whom) and the harms (risks), including how those harms can be mitigated (risk management). In essence, you want to briefly frame the therapeutic-area considerations and then spend your time on how your product actually addresses the problem. Below is the matrix provided for the FDA Benefit-Risk Framework.

[Image: FDA Benefit-Risk Framework matrix]

“Uncertainty,” as presented in this matrix, refers to decisions based on assumptions that are not verifiable based on available information. The best way to de-risk development and decrease uncertainties regarding the nature and magnitude of benefits (to balance against harms) is through better measurement of benefits – using improved measurement tools, such as PROs. Oftentimes, sponsors are very hesitant to study additional potential benefits due to “regulatory uncertainty.” However, if we do not move beyond that thinking, FDA approval will become nothing more than a rubber stamp.

The FDA regulatory standard for most interventions, except for life-threatening or contagious diseases, is that you only have to be better than nothing. However, patients, clinicians, and payers have no interest in whether your product is slightly better than placebo when other interventions exist for that particular disease. People will look beyond FDA approval and compare how you have improved patients’ lives against other interventions in order to decide which they will pay for and ultimately use. Improved and additional information on benefits and harms through the use of PROs leads to more informed decision making. This will help you rely less on assumptions to address uncertainties and move toward a fact-based assessment that will actually help patients, clinicians, regulators, and payers understand which medical interventions are the better treatment options.


Drug-induced Hypertension – Potential New FDA Guidance?

Last month, we introduced Ambulatory Blood Pressure Monitoring and its use in clinical trials. Today, we will discuss drug-induced hypertension and why regulatory agencies, such as the FDA, have recently shown some concern. 

Some classes of drugs are known to have off-target hypertensive effects, including corticosteroids, androgens, estrogens, progestins, NSAIDs, and several others.  Currently, there is little empirical data regarding the safety (or lack thereof) of drug-induced increases in blood pressure (BP).  However, high blood pressure is well known to produce harmful outcomes such as stroke, heart attack, and kidney failure.

Concerns about drug-induced hypertension were heightened after torcetrapib, a cholesteryl ester transfer protein (CETP) inhibitor designed to raise good cholesterol and lower bad cholesterol, increased mortality despite having excellent effects on lipid levels – halting the drug’s development.  In Phase III trials, patients had a 50% increase in HDL and a 20% decrease in LDL, but a 60% excess in mortality and cardiovascular events.  The average blood pressure increase was merely 2.8-5.4 mm Hg, but 5-9% of patients had an increase of 15 mm Hg.  It is still unclear whether the increase in blood pressure caused the increased mortality, but it has made the FDA more concerned about off-target drug-induced hypertension.

The lack of clarity regarding drug induced changes in blood pressure is not surprising as there are many mechanisms by which non-cardiac drugs can alter the BP, and many factors that can influence the BP effect demonstrated in clinical trials.  These include the method of measurement, the population studied (e.g., normal subjects vs. target population vs. higher risk groups), dose and duration of exposure, and background therapies that might mitigate BP risk.

So, is there a new FDA guidance in the works? 

The FDA has held public meetings discussing the need for better blood pressure data on drugs. It is important to note that the FDA is unlikely to require a “Thorough BP Trial” for all drugs.  However, during the Cardiac Safety Research Consortium Think Tank meeting in July 2012, the FDA hinted at a blood pressure guidance that would likely be directed towards drugs in certain classes or drugs administered chronically to populations at risk.  For example, oral contraceptives or corticosteroids could be likely candidates for more extensive BP assessments, but an antibiotic administered for a few days, or a drug used to treat children, might not.  Since blood pressure behaves differently in different people, particularly in older populations or higher-risk groups, intensive blood pressure monitoring performed in healthy young volunteers wouldn’t tell us much about BP effects in other populations.

Pharmaceutical companies should be thinking ahead when developing a new member of a class of drugs that is known to increase blood pressure, or a drug for which there is pre-clinical or early clinical evidence of blood pressure effects. In these instances, the FDA may require you to go back and collect more thorough blood pressure data.  You can be prepared for this requirement by utilizing an Ambulatory Blood Pressure Monitoring (ABPM) solution during clinical development. 

Why use ABPM for intense BP assessment? Traditional stethoscope-and-cuff measurements are not very reliable or reproducible and are prone to “white coat hypertension,” which is estimated to cause artificially elevated BP readings in 15% of patients.  Also, drug effects on blood pressure may occur at any time throughout the day.  ABPM allows you to measure blood pressure over an extended period (typically 24 hours) to gather more precise and reproducible data.

Unsure if your compound in development will likely require more intense blood pressure data? Want to know when and how you should implement ABPM? Contact us for a complimentary 30-minute consultation with one of ERT’s leading cardiac safety experts to learn more.


Why Use Ambulatory Blood Pressure Monitoring (ABPM) in Clinical Research?

Written by: Dr. Robert Kleiman, Chief Medical Officer & Vice President, Global Cardiology

Hypertension, or high blood pressure (BP), is one of the leading risk factors influencing the global burden of cardiovascular disease, resulting in an increased incidence of cardiovascular mortality, sudden death, stroke, coronary heart disease, heart failure, atrial fibrillation, peripheral artery disease, and renal insufficiency. During clinical trials, BP has traditionally been measured at the bedside by a nurse or physician using a stethoscope and blood pressure cuff.  A more recent development has been the use of Ambulatory Blood Pressure Monitoring (ABPM), which collects a patient’s blood pressure throughout the day, even after they leave the physician’s office or clinical site.  The patient wears a BP cuff connected to a small unit which automatically inflates and deflates the cuff at pre-specified intervals, measuring and recording the BP at the desired times during the day and night.

Early ABPM studies demonstrated that many anti-hypertensive drugs which appeared to be quite efficacious – based on conventional blood pressure measurements – had much less impressive performance when assessed with ABPM.  As a result, many clinical trials were conducted in the US and Europe to assess the value of the fully-automated ABPM technique.

Early trials evaluated the role of ABPM in confirming the efficacy of anti-hypertensive drugs as well as in predicting major cardiovascular outcomes in patients with hypertension.  The results were clear.  In a clinical setting, ABPM correlated much better with efficacy for anti-hypertensive drugs and with cardiovascular outcomes/adverse events than random clinic BP values.

The ABPM technique has also become recognized as the most effective way to eliminate the phenomenon of “white coat hypertension”, which is very common in clinic BP assessments. At least 15% of people will have an artificially elevated BP while in a doctor’s office, yet are otherwise normotensive.  The prevalence appears to be even higher among older adults, females, and non-smokers.

Additionally, in pharmacological research, ABPM is superior to clinic BP measurements in determining appropriate effective drug doses, duration of action of drugs and dosing schedules, and efficacy in “covering” the last 6 hours of the 24-hour dosing interval (morning BP surge).

Many researchers believe that ABPM should become a mandatory tool during the development of new anti-hypertensive agents and when conducting additional studies of older medications.  They also advocate expanding the use of ABPM in pharmacological research beyond the assessment of anti-hypertensive drugs, and support its use as an additional tool when assessing the cardiac safety of new chemical entities – including drugs for Parkinson’s disease, Alzheimer’s disease and diabetes, as well as medications in areas such as oncology (where BP safety concerns have recently emerged).

It is well known that many classes of medications can elevate blood pressure, and ABPM can help accurately identify this concern. These medications include, but are not limited to, non-steroidal anti-inflammatory drugs (NSAIDs), cough and cold medications, migraine headache medications, weight loss drugs, some antidepressants, oral contraceptives, some antacids, corticosteroids, and cyclosporine.

ABPM enables biopharmaceutical companies to efficiently evaluate the efficacy and cardiac safety of both blood-pressure-raising and anti-hypertensive drugs, as well as endpoints for cardiovascular and cerebrovascular outcomes.  ERT offers an ABPM solution that captures accurate 24-hour readings of systolic and diastolic blood pressure, mean arterial pressure, and pulse rate, in conjunction with our suite of other cardiac safety services.

To learn more, you can register for on-demand viewing of ERT’s latest ABPM webinar presented by Dr. Kleiman.

Thank you for being a part of the global ERT community.

Posted in Clinical Research, Clinical Trials

Improving Your Drug’s Value Proposition throughout the Product Lifecycle: How Optimized Clinical Outcomes Assessment (COA) Strategies Can Drive Commercialization Success

Typically, pharmaceutical and biotech companies have focused their Clinical Outcome Assessment (COA) development efforts solely on the requirements necessary for regulatory approval.  While that is extremely important, today we are going to discuss how pharmaceutical companies may reap benefits from a longer-term approach – one that addresses long-term market access hurdles, leading to continued product differentiation in the face of ongoing competition and to subsequent long-term payer acceptance.  Drug development strategies that optimize COA data – including patient-, clinician-, and observer-reported outcomes (PRO, ClinRO, and ObsRO) – can be used strategically to support regulatory approval and, increasingly, post-marketing success.

The development of COA endpoints, wherever you are in the product lifecycle, should always answer three clear questions (in order):

1. What messages are you trying to test and convey?

  • Create the messages you want to deliver at the end of your work
  • Identify the concepts, or what you want to measure, that will support your messages
  • What you want to measure depends upon:
    • Who is in your target patient population
    • Who you want to influence with your results – regulators, payers, clinicians, and/or patients themselves

2. How are you going to measure the concepts behind the messages?

  • Select instruments, based on your own research, that measure exactly what you want to measure
  • Only select instruments after deciding on what to measure

3. How are you going to implement the study that gives you the data?

  • Implement the instruments in study designs that give you the optimal endpoints to evaluate what you want to measure

This takes you beyond narrow thinking about regulators and the limited concepts of what you measure, to a broader focus on the product’s impact on the patient’s Health-Related Quality of Life (HRQOL).  It should also get you thinking about how to convey that information in the most compelling way.  For example, if you are dealing with a debilitated population, maybe it’s mobility information you want to convey; if it is a very healthy population, maybe it’s capacity for strong physical exertion.  This is why really understanding the message and the patient population, and then focusing on the concept, is important.  Once this is established, you need to determine how you are going to measure and implement that concept, and finally select which instrument you will use.  We suggest following this process for all stakeholders – not just regulators.

With that foundation in place, we can now move to where we think the market is heading and how pharmaceutical companies can prepare for those changes. We believe that several combined elements – such as payer interest, big data and manufacturer needs – suggest that there will be an increased demand for and use of phase 4 economic reviews. Ultimately, payers and reviewers are going to evaluate your drug and make the decision whether to cover it (or not) and at what price.

Payer Interest:
As always, payers are interested in cost control, and soon they will be able to measure costs very well.  There is also increasing diffusion of more formal Health Technology Assessment (HTA) methods, which use only generic utility measures – unless other credible measures are provided by you.  Finally, there is always pressure to drive therapeutic substitution.

Big Data:
With the ability to slice and dice population data into subgroups, it is much easier to measure hard outcomes.  There is relentless pressure to measure outcomes, so a manufacturer will need to prove a drug’s benefits.

Manufacturer Needs:
Product differentiation is increasingly important, yet there is a need to control development costs.  You’ll want to be able to defend long-term product value without needing to run another huge trial or offer a contract discount.

Based on these factors, if you anticipate additional reviews, you’ll want to start thinking about the type of data payers will want to examine.  If you talk to payers directly, some may lead you to believe that they only care about budget impact and not much else.  However, there are good reasons for investing in the collection of COA data.  It is true that payers are looking to treat their patients at a good price; however, once the product characteristics are known, it is in payers’ interests to claim they care less about patient benefit.  The fact that they say it doesn’t matter doesn’t make it true – so never stop selling benefit to stakeholders!  This is where COAs can help, because you can come in with credible measures rather than qualitative statements. Payers need to gather product information, design the rules for selection, and conduct a selection.  All of these steps are influenced by the medical community – particularly clinicians within the payer organization – and by patients. The implication is that tangible benefits data will be helpful to a manufacturer during a negotiation, and in the “game” before the negotiation.

You may be thinking, “What will payers likely ask/want at these frequent, ad-hoc reviews?” Here is a list we’ve put together to help you get started:

  • Why is your drug better than your competitor’s?
  • Show us something new in our patient population – we’ve already seen your registrational data.
  • Your competitor went generic. We want a discount.
  • Please respond to your competitor’s observational studies in our patients showing that its life-cycle costs are lower than yours.
  • We just read an AHRQ/NICE/PCORI study that shows that all the drugs in this class have only tiny differences. Give us a reason not to implement the obvious implications of that finding – which is to engage in competitive contracting.

You’ll need to start early on answering these questions because the lead time to respond is usually very short (weeks or a few months).  How long might it take to produce credible data in publications, which could then be taken to the review bodies that control access? We estimate the total time from brainstorming to publication at approximately 5 years:

  • Planning COA strategy: 1 year
  • Beta testing COA instruments: 1 year
  • Full scale trial in the field: 2 years
  • Write up and publications: 1 year
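The arithmetic behind the five-year estimate can be sketched in a few lines (the phase names and durations are the post’s rough figures, not fixed rules):

```python
# Rough lead-time estimate from brainstorming to publication.
# Durations (in years) are the ballpark figures listed above.
phases = {
    "Planning COA strategy": 1,
    "Beta testing COA instruments": 1,
    "Full scale trial in the field": 2,
    "Write up and publications": 1,
}
total_years = sum(phases.values())
print(total_years)  # 5
```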

Clinical Outcomes Assessments and Economic Reviews

Post-marketing COA could be a valuable addition in a difficult market.  ERT is readily available to support you – whatever the size, complexity and therapeutic area of your clinical program.  Contact us to speak with an expert consultant on how to optimize your COA strategy.  This is an opportunity to speak with a Senior Scientist and discuss what reasonable steps you can take to reduce risks and ensure the success of your clinical development program.

If you enjoyed this blog and would like to learn more, you can view a full 1-hour presentation on this topic.

Thank you for being a part of the global ERT Community.


Metrics Champion Consortium – Cardiopulmonary Performance Metrics Initiative

ERT Blog with guest, Linda Sullivan – Vice President Operations, MCC

The Metrics Champion Consortium (MCC) is a non-profit organization composed of approximately 90 biotechnology, pharmaceutical, medical device and service provider organizations.  Its mission is to help sponsors and service providers improve their overall clinical trial development processes through the use of standardized clinical trial performance metrics covering time, cost and quality.  The MCC has been in operation since 2006, when the first set of metrics was defined and released.  ERT has been involved with the MCC’s working groups and initiatives since its inception.

In 2011, the MCC changed the name of the ECG working group from the ECG Performance Metric Steering Committee to the MCC Cardiopulmonary Performance Metrics Initiative.  The new name reflects the broadened scope of metrics that the initiative is currently discussing and developing.  The specific areas covered by this initiative include implementation support for the MCC ECG Performance Metrics version 2.0 and developing and implementing other MCC performance metrics in the areas of Ambulatory Blood Pressure Monitoring (ABPM), Spirometry and Echocardiography.  These performance metrics share a common set of “core” metrics and contain their own area-specific metrics as well.

On June 27, 2011, the MCC launched version 2.0 of the MCC ECG Performance Metrics.  Version 1.0 of these metrics was initially introduced to the industry in 2007.  This set of standardized performance metrics, selected and defined by MCC members (sponsors and ECG core labs), was implemented in the hope that it would facilitate discussions about process improvements and a better understanding of the processes within each organization that affect the management of centralized ECGs.  MCC members felt that this has been accomplished and believe that further improvements can be realized using the newest version of the MCC ECG Performance Metrics.  Version 2.0 contains 15 core metrics and 2 ECG-specific metrics.  The types of performance metrics include cycle time, timeliness, quality, efficiency/cost and tracking.  Additional and more specific information regarding version 2.0 of the MCC ECG Performance Metrics can be found here

On December 21, 2012, as part of the Cardiopulmonary Performance Metrics Initiative, the MCC released version 1.0 of the Spirometry Performance Metrics.  By way of background, in forced spirometry a patient takes a deep breath in and exhales as rapidly and forcefully as possible, and the amount of air exhaled at different time points is measured.  Centralized spirometry matters because forced expiratory volume in 1 second (FEV1) is a primary endpoint in respiratory clinical trials and a secondary endpoint for inhaled medications, and because spirometry is an effort-dependent test.  According to the ATS/ERS 2005 guidelines, best practice for forced spirometry requires three acceptable efforts, two of which are repeatable.  Acceptability entails a good start of test, no artifact (e.g., coughing) during the first second of expiration, and a satisfactory end of test.  Repeatability requires that, for subjects with a forced vital capacity (FVC) < 1 liter, the best and second-best acceptable efforts be within 100 mL of one another for both FEV1 and FVC; for subjects with an FVC ≥ 1 liter, they must be within 150 mL of one another for both FEV1 and FVC.
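To make the repeatability rule above concrete, here is a minimal sketch of how it could be checked in code. The function name, data layout and the idea of using the best FVC to pick the threshold are illustrative assumptions, not part of any actual overread system.

```python
# Minimal sketch of the ATS/ERS 2005 repeatability check described above.
# Names and data layout are illustrative, not from any real overread system.

def meets_repeatability(efforts_ml):
    """efforts_ml: list of (fev1_ml, fvc_ml) tuples for the acceptable efforts.
    Returns True when the best and second-best efforts agree within the
    FVC-dependent threshold for both FEV1 and FVC."""
    if len(efforts_ml) < 2:
        return False
    best_fvc = max(fvc for _, fvc in efforts_ml)
    threshold = 100 if best_fvc < 1000 else 150  # mL; 1 liter = 1000 mL
    fev1s = sorted((fev1 for fev1, _ in efforts_ml), reverse=True)
    fvcs = sorted((fvc for _, fvc in efforts_ml), reverse=True)
    return (fev1s[0] - fev1s[1] <= threshold
            and fvcs[0] - fvcs[1] <= threshold)

# Three acceptable efforts: best two FEV1s 50 mL apart, best two FVCs 100 mL apart.
print(meets_repeatability([(2100, 3000), (2050, 2900), (1900, 2850)]))  # True
```

Note that only the best two values for each parameter are compared, which mirrors the "highest to second-highest" framing of the guideline.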

Amy Furlong, Chief Operations Officer at ERT, and Jim Sowash, Director of Respiratory OverRead at ERT, have been instrumental in helping to lead group discussions regarding the development and implementation of the Spirometry Performance Metrics.  The spirometry metrics include the same 15 core metrics as the aforementioned ECG Performance Metrics, plus 3 additional metrics specific to spirometry.  The three additional metrics are designed to evaluate the overall quality of the data, from proficiency training of personnel through the quality of the data received by the central reader.  Unlike ECG data collection, spirometry requires significant interaction between site staff and the subject to obtain quality data.  Proper training of site personnel on how to coach subjects to complete a successful maneuver is critical.  As noted above, there are strict guidelines defining an acceptable test.  The central overread plays an important role in assuring that only the best data, in compliance with these standards, are selected by the site.  The “deselection” of data is a metric used to track the ability of the site to recognize and submit good data.  As with the ECG Performance Metrics, the goal here is to facilitate process improvements and open up the dialogue between sponsors and service providers, changing the way we look at performance and, ultimately, performance itself when necessary.

While the MCC has been initiating moves to develop and implement standardized performance metrics in other areas of clinical research, another major initiative involves closing the loop with the sites themselves.  This will help drive quality improvements at the site level with their own set of metrics.  What people sometimes don’t understand about the MCC metrics is that half of them measure the performance of the sites and are not always in the vendor’s control; they are often more about the quality and timeliness of interactions with the sites.  Sites are ultimately where clinical data comes from, and even though they are consistently being measured on their performance, they are often not given this feedback.  Sites would like to compare performance against themselves in order to make better business decisions.  They want to know whether they are the only ones having “this” problem, how they can do a better job, how they can become more efficient, and what they need to know to select better partners on projects and studies, as sponsors do.  The MCC Site Initiative group is working diligently to make this happen as a next step for the organization.  Linda Sullivan has said it best: “All parts have to work together.  The industry has to sink or swim together.  We all share a common goal, and that is to keep people safe and get products in the market place that will help them.”

To learn more about the MCC, please visit or contact Linda Sullivan, Vice President Operations, MCC, for membership inquiries.

For more information on ERT’s Centralized Spirometry services, please visit
Posted in Clinical Research, Clinical Trials, Pharmaceutical

The Importance of Inspiratory Capacity in COPD Clinical Trials

Written by: Dawn Patterson, Supervisor Respiratory OverRead

On average, it costs $539 per person annually to treat a patient with asthma and $4,150 to treat a patient with COPD.  These costs reveal a need for improved symptom management and reduction in exacerbations, which would reduce the need for escalated care and in turn reduce overall costs.  It would also increase patients’ quality of life by avoiding or preventing exacerbation events.

This post will address these three basic questions:

  • Can spirometry be used to measure hyperinflation?
  • Is inspiratory capacity a dependable parameter to measure hyperinflation?
  • What impact does centralization have on inspiratory capacity?

Before we get started, let’s step back to basics for a minute: what is hyperinflation and why do we measure it?

Hyperinflation is the volume of air trapped in a patient’s lungs at the end of exhalation.  The ability to fully exhale depends on the degree of airflow limitation and the time available for exhalation, which is why greater hyperinflation occurs during exacerbations or exercise.  Because COPD is an irreversible disease characterized by reduced expiratory airflow, COPD patients already start at a higher lung volume than a healthy individual.  With increased activity, anxiety or hypoxia, the patient’s respiratory rate increases, decreasing expiration time and ultimately creating dyspnea.  As the disease progresses, so does the hyperinflation, producing significant detrimental effects on the patient’s breathing.

There are several ways to confirm hyperinflation, including a physical exam, X-ray or CT scan.  Another, more accurate way to confirm hyperinflated lungs is to measure lung volumes and capacities.  Measurement of lung volumes provides a tool for understanding normal lung function as well as disease states.  The amount of air in the lungs is subdivided into two groups: lung volumes (VT, IRV, ERV and RV) and lung capacities (TLC, VC, IC, FRC).  Measuring lung volumes in COPD patients can give us a better understanding of hyperinflation and how treatments work to improve patients’ daily activity.
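The capacities listed above are simply sums of the four volumes, which a short worked example makes explicit. The mL figures below are made-up illustrative values for a healthy adult, not patient data.

```python
# Illustrative sketch of how the standard lung capacities are derived from the
# four lung volumes; the mL values are invented example figures.

def capacities(vt, irv, erv, rv):
    """vt: tidal volume; irv/erv: inspiratory/expiratory reserve volume;
    rv: residual volume (all in mL)."""
    ic = vt + irv          # inspiratory capacity
    vc = vt + irv + erv    # vital capacity
    frc = erv + rv         # functional residual capacity
    tlc = vc + rv          # total lung capacity
    return {"IC": ic, "VC": vc, "FRC": frc, "TLC": tlc}

print(capacities(vt=500, irv=3000, erv=1100, rv=1200))
# {'IC': 3500, 'VC': 4600, 'FRC': 2300, 'TLC': 5800}
```

Since IC = TLC − FRC, air trapping that raises FRC shows up directly as a reduced IC, which is one reason IC is a useful window on hyperinflation.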

So, how do we measure IC?  One way is with a body plethysmograph, also known as a body box, which is the gold standard for measuring lung volume.  What is nice about the body box is that it measures not only spirometry but also thoracic lung volumes and resistance.  Most importantly, it can be performed quickly and gives absolute values.  However, it does have a few drawbacks.  Some COPD patients are claustrophobic and may not want to be placed in an airtight closed system.  It is also rather large and takes up a lot of room.  In addition, severe COPD patients may require oxygen and have other devices that cannot go into the body box.  The most significant disadvantage, especially in clinical research, is having non-standardized equipment at each site.  With sites using their own equipment, there is a possibility of equipment variability, poor equipment feedback (inconsistent assessment) and inconsistent site training – this will likely result in poor quality data and thus inconclusive results.  Having the same body box at each site will provide a greater percentage of acceptable data, but unfortunately it can be quite costly.

An alternative to the body box is spirometry.  Spirometry (forced or slow) is used to confirm airway obstruction and puts physicians in a position to detect COPD in its early stages.  Spirometry is a relatively simple, noninvasive test which takes only a few minutes of the patient’s and technician’s time.  There are many different devices to choose from, which are smaller and much lighter than the body box.  More importantly, it’s affordable.  According to the GOLD guidelines, FEV1 is the parameter of choice for diagnosing and monitoring the progression of airway obstruction.  It also provides a starting point for determining a patient’s initial treatment plan; however, it does have its limitations.  Changes in FEV1 do not necessarily reflect changes in dyspnea or exercise performance, and FEV1 alone is of limited value for clinically assessing patients.  For example, patients with mild and severe COPD may have similar FEV1s yet be at complete opposite ends of the spectrum when it comes to quality of life.

Forced spirometry consists of the patient taking a deep breath in and forcefully expiring until no more air can be exhaled, then taking another deep breath in.  It is essential that the patient be clearly instructed in the procedure prior to the start of each test.  A very enthusiastic demonstration by the technician is crucial so that the patient makes a maximum effort when carrying out the forced expiratory test.  With this test, the forced expiration may cause airways to close prematurely and trap air, resulting in an inaccurate measurement.

The primary difference with slow spirometry is that the expiration into the spirometer is done slowly.  Patients who have trouble with the forced maneuver, due to an inability to complete a full exhalation, may do better with this test.  Patients should be relaxed and asked to breathe regularly for several breaths until the end-expiratory lung volume is stable (this usually requires at least 5 tidal breaths).  They are then encouraged to take a deep breath to TLC with no hesitation, followed by an expiration.  It is important, as with all the other tests, to make sure it is done properly.  If the inspiratory breath is too slow due to poor effort or hesitation, or if there is premature closure of the glottis, the IC may be underestimated.  As with the forced maneuver, the patient’s cooperation and understanding of the test are important.  The technician should be enthusiastic about the test, but not as demanding.

In a study of 93 patients in a primary care setting, it was shown that, in addition to tracking dyspnea and quality of life, IC is an established, reliable parameter for measuring hyperinflation.  It has also been noted that spirometry is highly reproducible and gives a clear indication of the extent of hyperinflation.  Once the data are gathered, it is important to make sure that the data collected in clinical trials are of the best possible quality.  The best way to eliminate transcriptional errors and reduce variability is to centralize the data.

Respiratory clinical trials can be challenging, and centralizing IC data plays an important role in clinical research.  Patients entering these trials already have compromised breathing, raising variability from the beginning, so performance by the patient is extremely important to producing accurate results.  Just as the technician must be an enthusiastic motivator for the patient undergoing a test, it is important that the site delivers acceptable data.  A centralized approach to spirometry helps increase data quality, providing the ability to better see treatment effects.  Centralization institutes quality control measures at each step of the clinical trial process, so the study has the advantage of cleaner data, which in turn minimizes the negative impact on your data and your budget.  For example, before centralization, there were 45,497 measurements, of which 6,094 (roughly 13%) were unusable.  After a team of expert reviewers assessed measurement and data quality against the current ATS/ERS standards, that percentage of unusable data dropped to 9%.
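A quick back-of-the-envelope check shows what those percentages mean in measurement counts. The calculation assumes, purely for illustration, that the post-centralization 9% rate applies to the same 45,497-measurement denominator quoted above.

```python
# Back-of-the-envelope check on the unusable-data figures quoted above.
# Assumes (for illustration only) that the 9% post-centralization rate
# applies to the same denominator of 45,497 measurements.
total = 45_497
unusable_before = 6_094
pct_before = unusable_before / total           # ~0.134, i.e. roughly 13%
unusable_after = round(total * 0.09)           # ~4,095 measurements
recovered = unusable_before - unusable_after   # ~2,000 measurements salvaged
print(f"{pct_before:.1%} unusable before; roughly {recovered} recovered")
```

On those assumptions, centralization salvages on the order of two thousand measurements that would otherwise have been discarded.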

As previously stated, improper training and poor patient effort result in poor quality data, and it is crucial to educate and monitor the technicians to improve the accuracy of data reporting.  With centralization, data is transmitted to a central database and graded according to a combination of the ATS/ERS standards and pharmaceutical company specifications.  Data quality is reviewed proactively during overread, so sites with a large amount of poor quality data can be targeted for retraining during the trial.  As noted in the chart below, the most frequent error is repeatability.  The 2005 ATS/ERS guideline on spirometry established a repeatability criterion (highest to second-highest value) of 150 mL for Vital Capacity (VC), while for Inspiratory Capacity (IC) there are no established criteria.  More studies need to be evaluated to estimate an achievable repeatability for IC.  Some of the other errors are unstable baseline, cough, unstable breathing, breathing frequency, and improper procedure.

*Source:  ERT Respiratory Solutions

In summary, although body plethysmography gives absolute values, it is an expensive way to monitor hyperinflation.  IC measured by spirometry, on the other hand, has been shown to be a dependable, inexpensive and simpler parameter that can indicate the presence of lung hyperinflation and guide its management.  The aim of centralization, with standardized equipment and training, is to increase the percentage of acceptable data and provide the best quality data – resulting in greater patient retention and increased statistical power at a reduced price.

For more information on ERT’s Respiratory Solutions, please visit: 
Posted in Clinical Trials, COPD, Inspiratory Capacity

CLEAR Study Update: A look inside the collaboration between ERT and UCLA

Written by: Michael Taylor – Senior Director, Healthcare Solutions

As some of you may remember, back in February 2012, ERT collaborated with the University of California, Los Angeles (UCLA) to launch the CLEAR study, an innovative research study aimed at helping patients suffering from Chronic Obstructive Pulmonary Disease (COPD). You can read the press release here.

For the first time in the industry, UCLA and ERT worked together to design a study that would enable home-based spirometry. If the trial is successful, it will represent a major step forward for the management and continued care of COPD. It is hoped that, if the methods used in this study yield better, more reproducible data with greater specificity or sensitivity for predicting exacerbations, they could change the future of clinical trials. The on-going study has already made significant progress in its first six months, something which couldn’t have been achieved without the combined expertise, knowledge and support of the ERT and UCLA teams. The broad range of skills and experience within the partnership has ensured that the methodology for the study is completely focused on improving the lives of individuals living with COPD.

ERT’s wide understanding of how to efficiently record data in clinical trials was the basis on which the study was built. Prior to meeting with the UCLA team, ERT had already developed a series of hypotheses for the trial and prepared an initial protocol covering the objectives to be achieved, the type of equipment that would be used and the specific aims of the program. Dr. Christopher Cooper, MD, Professor of Medicine and Physiology at the David Geffen School of Medicine at UCLA, took ERT’s protocol and applied his many years of experience to expand the hypotheses, getting to the very core of ERT’s objectives. Together, the two parties went on to co-develop an extensive protocol comprising primary and secondary outcomes, the entire study design, baseline assessments and specifications about how the study would work based upon the patients that it would be monitoring (view press release for more information). Once the study protocol was finalized, ERT implemented methods and criteria within its instruments that were specifically adapted for the study and trained UCLA’s team on how to efficiently and effectively record optimized data. This involved ensuring they knew how to use the devices, how to train patients to effectively use electronic PRO devices, and how to analyze the data reported on the online portal. This helped to expand UCLA’s knowledge of how technology can improve the quality of patient data in studies, demonstrating the mutual benefits of collaboration between manufacturers and academia.

The first of the study’s 200 patients were recruited in February 2012, with trial results to be reported in late 2013. Upon enrollment, each patient received a full physical examination, with an exit consultation planned for the study’s conclusion. To date, the study has been developed to address the needs of physicians, with the aim of achieving better treatment for patients living with COPD. The data being collated in the study are wide-ranging, to ascertain exactly which treatments and therapies are most effective. Due to the large population within the study, there has been a large volume of interest in the program across the industry.  Throughout the duration of the study, ERT and UCLA remain in constant contact to discuss what’s working, revisit the objectives, ensure progress is in line with their goals and ascertain whether there are any improvements that could be implemented. Some of the main challenges associated with the study have been the high number of participants required for the ambitious project and, consequently, the high volume of data resulting from the program. As a result, additional time and effort have had to be contributed by both parties to ensure that all the data are collected and analyzed in the most efficient and effective way possible.

Built on a 12-year relationship, ERT and UCLA both demonstrate a clear understanding of and commitment to helping each other achieve their aims. A key difference between this trial and similar ones is the dedication of both parties to developing new methods that will provide information about what is really important to COPD patients. This patient-centric approach goes further than simply mimicking existing research; it has resulted in the development of a questionnaire tailored to the specific objectives of the study by collating information on multiple parameters, including not only data on sufferers’ symptoms but also information on physical activity and patients’ medications. ERT and UCLA have combined their technologies, knowledge and expertise with the aim of changing the lives of patients living with a condition where the burden of illness is high and the need for quick intervention, vital. Without the involvement of either party, the volume of data and participant numbers would simply not be attainable by the organizations individually. Bringing their resources together means that the study will be able to successfully demonstrate the benefits of on-going symptom management, early intervention in exacerbations, and on-going patient assessment for medication; helping to progress the adoption of remote healthcare monitoring.

For more information on ERT’s Respiratory Solutions, please visit:

Brief Report – Changes to the FDA Draft Guidance Suicidal Ideation and Behavior: Prospective Assessment of Occurrence in Clinical Trials

The original Draft Guidance was released September 8, 2010

  • Prospective assessment of suicidality
    • Identify patients at risk
    • Collect complete, timely data
    • Perform in every phase, in every trial, at every visit
      • In all psychiatric indications
      • In all neurology compounds
      • For all other drugs pharmacologically similar to drugs about which there has been concern
  • The C-SSRS is an ‘acceptable’ prospective assessment 
  • Administration by ‘phone and computer’ is acceptable

Update: August 6, 2012

After two years of review, comment and redrafting, the FDA’s changes centered on reinforcing and clarifying its original objectives: patient safety and quality data.  Overall, there were no fundamental policy shifts.  These changes help sponsors understand, with more specifics, how to assess suicidal ideation and behavior in clinical trials.  The most important changes include:

  • Changes the term “Suicidality” to “Suicidal Ideation and Behavior”
    • This is a good change and makes the language much clearer
  • Expansion of the C-CASA categories
    • The Guidance shifts from four of the nine previous C-CASA classifications to 11.  Ten of those 11 are exactly the categories in the Columbia Suicide Severity Rating Scale (five for ideation, five for behavior), with non-suicidal self-injurious behavior added as the 11th.
  • Slight revisions on particular trials and patients that need assessment and timing of assessments
    • The updated draft Guidance outlines any drug for a psychiatric condition, epilepsy, neurologic drugs with CNS activity, drugs that are pharmacologically similar to isotretinoin and other tretinoins, beta blockers (especially those entering the brain), reserpine, drugs for smoking cessation, and drugs for weight loss.
  • Addresses concerns about the assessment time burden on site staff and patients – the FDA found the burden to be very small and not a reason to forgo these assessments
    • We have learned from the first 35,000 assessments of the electronic Columbia Suicide Severity Rating Scale that the mean completion time was 3.8 minutes – 3.5 minutes for the 98.3% of negative assessments and 7.7 minutes for the 1.7% of assessments with a positive signal for suicidal ideation or behavior.  The completion rate was 99.89%, confirming that the time burden, from both the patient and site perspectives, is not great.
  • Recognizes the value of the lifetime assessments in providing protection for patients at risk
    • Based on the study findings of 35,000 assessments (6,000 of which were performed at baseline and questioned lifetime ideation and behavior), we found quite strikingly that lifetime assessments did predict a substantial increase in the odds ratio of subsequent positive signals for suicidal ideation or behavior – between four and nine times, depending on which combination of ideation and behaviors was present.
  • Explicitly mentions the electronic Columbia Suicide Severity Rating Scale
    • “The eC-SSRS … is an alternative approach to obtaining data on suicidal ideation and behavior.”
    • C-SSRS & the eC-SSRS are the only two assessments specifically identified in the guidance
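An odds ratio like the "between four and nine times" figure above comes from a standard 2×2 table comparing patients with and without lifetime ideation or behavior at baseline. The sketch below shows the arithmetic; the patient counts are invented for illustration and are not from the eC-SSRS dataset.

```python
# Hedged sketch of how an odds ratio like the one above is computed from a
# 2x2 table; the counts below are invented for illustration only.

def odds_ratio(pos_hist, neg_hist, pos_no_hist, neg_no_hist):
    """pos/neg: subsequent positive or negative ideation/behavior signal;
    hist/no_hist: with or without lifetime ideation/behavior at baseline."""
    return (pos_hist / neg_hist) / (pos_no_hist / neg_no_hist)

# e.g. 40 of 1,000 patients with lifetime history later flag positive,
# versus 10 of 1,500 patients without:
print(round(odds_ratio(40, 960, 10, 1490), 2))  # 6.21
```

A ratio in that range means the odds of a later positive signal are several-fold higher when a lifetime history is present, which is what makes the baseline lifetime assessment protective in practice.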

To request more information or to speak with a PRO expert regarding this guidance, please fill out this form.  In the meantime, you can also review the following resources:
