Benefits of Standardized, Centralized DLCO in Respiratory Clinical Trials

The largest source of subject variability in a respiratory clinical trial is improper test performance. Such “noise” in spirometry and other pulmonary function testing (PFT), coupled with inconsistencies in device calibration and technician training, can obscure important clinical signals relating to treatment effects.

For your trial, this means a heightened risk of inaccurate, unusable data that can be prevented by using standardized and centralized respiratory services. Standardization means consistent devices and calibration across sites, as well as identical training regimens for technicians to ensure consistent patient performance. Centralization means digital collection of test results into a single database, which improves data quality and makes treatment effects easier to detect through accurate analysis. Centralization also builds quality control into each step of the clinical trial process, giving the study cleaner data and minimizing the negative impact on your data and your budget.

These benefits apply to the carbon monoxide diffusing capacity (DLCO) test, a commonly used pulmonary function test that measures the gas exchange ability of the lungs. While spirometry measures lung mechanics and lung volume tests ascertain lung size, DLCO indicates how well the lungs and heart are able to oxygenate the blood. DLCO measurements specifically test the integrity of the alveolar/capillary interface in the lungs. Repeated DLCO measurements aid in determining disease progression or therapy efficacy and enable clinicians to identify diagnostic details that are not detectable by spirometry or other PFT approaches.

Unfortunately, many DLCO measurements are subject to variability as a result of patient, equipment, or technician errors. To obtain consistent, credible data, DLCO equipment must be properly maintained and regularly calibrated. In addition, diffusion testing requires a specially trained technician who can validate the data by understanding both the measurements and their plausibility in relation to the subject’s condition.

According to a study published in Respiratory Care, DLCO measurements of the same patient in different labs can vary by as much as 50%. In an effort to reduce these differences, the American Thoracic Society (ATS) and European Respiratory Society (ERS) published standardized testing procedures and equipment recommendations for DLCO in 2005, which can be found here: http://www.thoracic.org/statements/resources/pfet/pft4.pdf.

Since improper training and poor patient effort lead to poor quality data, it is crucial to educate and monitor technicians to improve the accuracy of the data being reported. It is equally important to ensure that equipment and measurement procedures meet performance criteria so that end assessments are based on the most consistent data possible – significantly reducing variability.

Using standard equipment across all investigator sites is also vital to producing consistent, quality data. Standardized devices that meet ATS/ERS accuracy standards significantly reduce the variability introduced between instruments.

With centralization, data are transmitted to a central database and graded according to a combination of ATS/ERS standards and pharmaceutical company specifications. Overread data quality is reviewed proactively, so sites producing a large amount of poor quality data can be targeted for retraining during the trial.
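To make the grading step concrete, here is a minimal sketch (not ERT’s actual algorithm) of how centrally received DLCO replicates might be checked against an ATS/ERS 2005-style repeatability rule of two acceptable trials agreeing within 3 mL/min/mmHg of each other or within 10% of the highest value; the thresholds and pass/flag labels shown are illustrative assumptions, and real grading combines such criteria with sponsor specifications.

```python
def grade_dlco_session(replicates_ml_min_mmhg):
    """Grade one DLCO session against illustrative ATS/ERS 2005-style
    repeatability criteria: at least two acceptable trials agreeing within
    3 mL/min/mmHg of each other, or within 10% of the highest value."""
    if len(replicates_ml_min_mmhg) < 2:
        return "unacceptable: fewer than two acceptable trials"
    values = sorted(replicates_ml_min_mmhg, reverse=True)
    best, second = values[0], values[1]
    if (best - second) <= 3.0 or (best - second) <= 0.10 * best:
        return "acceptable: repeatability criterion met"
    return "flagged: repeatability criterion not met, target site for retraining"

# Example: two trials from one session record in the central database
print(grade_dlco_session([24.1, 22.9]))   # difference 1.2 -> acceptable
print(grade_dlco_session([28.0, 23.5]))   # difference 4.5 and >10% -> flagged
```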

Centralized, standardized DLCO measurement benefits the development of new compounds for the treatment of asthma, chronic obstructive pulmonary disease (COPD), and other indications by reducing the variability inherent to effort-dependent pulmonary tests. ERT provides standardized devices and training as well as central data collection, and we evaluate both digital and site-generated paper DLCO measurements for quality and performance. 100% data overread is provided by a highly qualified team with over 20 years of clinical experience, guided by multiple levels of quality control and assurance. Standardizing and centralizing with ERT significantly increases the percentage of acceptable data and provides the best quality data – resulting in greater patient retention, increased statistical power at a reduced price, and ultimately the determination of efficacy needed to support your new drug application’s regulatory approval.

Visit the ERT Resource Center at https://www.ert.com/clinical/resources for more information on centralizing and standardizing DLCO and other measures in respiratory trials. There you can register to view the recently recorded webinar, “Current Pharmacotherapeutic Treatment and DLCO Monitoring Strategies for the Management of Asthma and COPD.”


ERT Summary and Commentary on the ICH E14 Q&A(R2)

Written by: Dr. Robert Kleiman, Chief Medical Officer & Vice President, Global Cardiology

The ICH E14 Implementation Working Group (IWG) released a third set of Q&A responses on March 21, 2014. The latest document discusses concentration-response modelling of QTc data as well as three “special cases”. Overall, this Q&A adds valuable commentary and guidance on some of the open questions about Thorough QT Studies (TQTs), though a variety of additional topics would still benefit from further clarification.

The first topic addressed by the new Q&A document concerns the use of concentration-response relationship (CRR) modelling of TQT data. At the time the ICH E14 guidance was drafted, CRR assessment was still considered an area of “active investigation”, but it has since become a very important and well-respected part of the assessment of the proarrhythmic potential of new drugs. The new Q&A document explains that the CRR can be a useful part of a drug’s evaluation, and goes on to stress the importance of prospectively describing the pharmacokinetic-pharmacodynamic (PK-PD) models that will be used. This is to avoid post hoc attempts to try one PK-PD model after another until one yields the most desirable results. The IWG also acknowledges that there are situations in which a QT effect is delayed or persistent relative to plasma concentrations, resulting in an exposure-response relationship that shows hysteresis. This may occur when a long-acting metabolite has QT effects, when there is myocardial accumulation of a drug or metabolite, or when there are delayed effects on ion channel trafficking. The IWG briefly discusses the use of PK-PD models that incorporate hysteresis, and then discusses the situations in which analysis of the CRR may be useful. These include estimating the QT effect of doses that have not been formally tested, clarifying ambiguous QTc results from a TQT or early-phase study, and predicting the QTc effect of factors that may alter a drug’s PK.
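As a simple illustration of what a prospectively specified concentration-response analysis can look like, the sketch below fits a linear model of placebo-corrected QTc change versus plasma concentration and uses it to predict the effect at an exposure that was not formally tested. The data are invented for demonstration, and real CRR analyses typically use prespecified linear mixed-effects models and must address hysteresis where it exists.

```python
import numpy as np

# Illustrative paired observations: plasma concentration (ng/mL) and
# placebo-corrected change-from-baseline QTc (ms). Values are invented.
conc = np.array([0.0, 50.0, 120.0, 250.0, 400.0, 600.0])
ddqtc = np.array([0.5, 1.8, 3.1, 5.9, 9.2, 13.5])

# Ordinary least-squares fit of a linear concentration-response model:
# ddQTc = intercept + slope * concentration
slope, intercept = np.polyfit(conc, ddqtc, 1)

# Predict the QTc effect at an exposure not formally tested,
# e.g. a supratherapeutic concentration of 800 ng/mL.
predicted = intercept + slope * 800.0
print(f"slope = {slope:.4f} ms per ng/mL, "
      f"predicted ddQTc at 800 ng/mL = {predicted:.1f} ms")
```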

The IWG then proceeds to discuss three “special cases”. First, when a single-dose crossover TQT is not feasible, the IWG notes that alternatives such as a parallel-arm design, enrolling patients with the targeted disease rather than healthy volunteers, or other modified designs may still be possible. When a placebo-controlled arm is not appropriate (as is the case for many new oncologic agents), the IWG recommends that study designs incorporate as many of the usual TQT features as possible. In particular, intense ECG and PK collection during early-phase ascending dose studies, or even in a late-stage trial, may provide sufficient data to assess a drug’s proarrhythmic risk.

The IWG then briefly discusses QT assessment for combination drug products.  If the component drugs in a new product have all undergone TQTs which show no relevant QT effects, then a TQT or intense late stage ECG monitoring will probably not be necessary.  However, if one or more of the component drugs has never had a thorough QT/QTc assessment, the IWG states that they may be evaluated in combination or independently.

Finally, the IWG states that large targeted proteins and monoclonal antibodies have a low likelihood of direct interactions with ion channels, and therefore a TQT would usually not be required unless there are unusual circumstances suggesting the potential for proarrhythmia.

While this latest Q&A document does clarify some of the important issues that remain regarding the QT/QTc assessment of new drugs, further clarity would still be helpful on several other points, including:

  • Use of PK-PD models which better account for the effects of active metabolites
  • The need for use of a positive control when a drug can only be studied in patients
  • How best to assess combination products: the IWG statement about performing QTc evaluation “independently or in combination” does not clearly explain which design elements would best satisfy regulators
  • The best strategy for assessing the QT/QTc effects of large proteins and macromolecules.  Although a TQT may not be required, it is well known that many proteins do have QTc and other cardiovascular effects, and regulators do still expect submission of some ECG data for large molecules.  A further clarification concerning what type of ECG data is expected would be helpful.

In summary, the latest Q&A document further refines our understanding of the regulatory pathways for evaluating the proarrhythmic potential of new drugs. As always, there are further issues that may still require additional clarification, and we await future releases from the E14 IWG.

For more information on TQT studies, contact ERT here.


ERT Goes to the 50th Annual DIA in San Diego

This year at the DIA, the industry’s largest annual conference, ERT had lots of exciting news to share. With several product launches happening at the booth, ERT continues to improve clinical technology and innovate better health for patients.

We had the Senior Director of Global Cardiac Safety Solutions on hand to discuss the new QT Guard Plus Analysis System and an upcoming SITEpro/ELI-PC integration. QT Guard is the latest technology delivered through our EXPERT® platform as part of ERT’s centralized cardiac safety solutions. Using the standard 12-lead ECG, biopharmaceutical companies can perform quantitative analysis of drug-induced changes in T-wave morphology. This technology is set to improve cardiac safety while preserving sponsors’ drug development pipelines. ELI-PC and SITEpro provide ECG integration with eCOA studies, delivering value to clients through integrated service delivery. The goal is to use ELI-PC hardware and software (or its API) to collect ECG and eCOA data during the clinical trial patient visit, and to use SITEpro to seamlessly transfer the data to ERT.

eCOA Product Managers were also at the booth to demonstrate ERT’s new eCOA app, which enables a Bring Your Own Device (BYOD) approach for collecting patient data in clinical trials. Since this Apple® (iOS) and Android™ app runs on patients’ own smartphones, sponsors eliminate the need to purchase and manage hardware, reducing cost and logistical burden. Patients benefit from a familiar user interface that fits into their daily lives. Unlike mobile applications that run within web browsers and require a live internet connection, this BYOD solution operates fully offline, so data can be entered during strict time windows regardless of internet connectivity.
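The sketch below illustrates, in general terms, how an offline-first capture queue can enforce a strict diary time window without a live connection: each entry is timestamped and window-checked at the moment of capture and transmitted later when connectivity returns. The class, field names, and window hours are hypothetical and are not a description of ERT’s actual app.

```python
import json
import time
from collections import deque

class OfflineDiaryQueue:
    """Hypothetical sketch of an offline-first eCOA capture queue:
    entries are timestamped locally at capture time and synced later,
    so a strict entry window can be enforced without connectivity."""

    def __init__(self, window_start_hour=18, window_end_hour=22):
        self.pending = deque()
        self.window = (window_start_hour, window_end_hour)

    def capture(self, patient_id, responses):
        entry_time = time.localtime()
        in_window = self.window[0] <= entry_time.tm_hour < self.window[1]
        self.pending.append({
            "patient_id": patient_id,
            "captured_at": time.strftime("%Y-%m-%dT%H:%M:%S", entry_time),
            "in_window": in_window,  # compliance judged at capture, not at sync
            "responses": responses,
        })

    def sync(self, send):
        """Flush queued entries through `send` (e.g. an HTTPS POST) once online."""
        while self.pending:
            send(json.dumps(self.pending.popleft()))

# Usage: capture offline, sync whenever a connection is available
q = OfflineDiaryQueue()
q.capture("subj-001", {"breathlessness": 3, "rescue_inhaler_uses": 1})
q.sync(print)  # stand-in for a network call
```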

Respiratory Product Managers provided demos of the new MasterScope v2.0. This system delivers an improved and fully integrated solution for spirometry, ECG, home monitoring, and collection of exhaled nitric oxide (NIOX Mino). While at DIA, ERT also received 510(k) clearance from the U.S. Food and Drug Administration to market our AM3® GSM devices in respiratory clinical trials. The AM3 GSM device provides a robust and reliable wireless option for clinical trial patients to communicate key spirometry data during the clinical development of new respiratory treatments. With the availability of AM3 GSM, important respiratory data can be simply and cost-effectively transferred from a patient’s home, allowing near real-time access to patient data for investigators and sponsors and enabling better patient monitoring and, ultimately, better care.

MasterScope Respiratory Platform

In addition to all of this hard work and launch planning, we also made sure to have a little fun. We participated in the Medical Heroes 5k first thing Monday morning, where 300 professionals, patients, and community members celebrated the people who give the gift of participation in clinical research. At the ERT booth, we streamed the World Cup games live, held several contests, and gave away ice cream, beer and wine, healthy snacks, hacky sacks, flash drives, and gift cards. We also had two stations where customers could stop by to “refresh and recharge” – taking some time to talk with us while they added some juice to their mobile devices. One of the most memorable events was the Monday night DIA Castaway party. ERT, in conjunction with TransPerfect, put together the biggest event of the year at the Port Pavilion in San Diego. So many of our customers and colleagues (1,500 to be exact!) came out to join us for food, drinks, and music. The venue was amazing, the castaway/beach themed décor couldn’t have been better, and seriously, who doesn’t love a rockin’ 80’s cover band?


You can view more photos at www.diacastaway.com and follow all the social media buzz on www.tagboard.com/diacastaway.

See you next year!

 


ERT Clinical Applications Not Susceptible to Heartbleed OpenSSL Exploit

Message to our customers regarding the Heartbleed security bug – ERT clinical applications were not impacted:

As widely publicized, a vulnerability was recently discovered in a very popular encryption library known as OpenSSL. This vulnerability, known as Heartbleed, allows internet traffic thought to be secured via SSL encryption to be exposed to persons who have developed methods of exploiting it.

ERT maintains stringent security protocols during the development and production implementation of our applications, and we apply multiple layers of protection for our clients’ data. Following the identification of this vulnerability, we conducted exhaustive tests to verify that our systems remain secure. Because ERT’s clinical applications utilize the most current Java and Windows security stacks and do not use OpenSSL, they were not susceptible to this exploit.

As always, our security protocols and our package management process include checks for current and older security exploits, which we actively test for and correct when detected, in accordance with ERT Standard Operating Procedures.


Cardiovascular Outcomes Trials and COPD Drugs – The Latest Regulatory Point of View

Written by: Dr. Robert Kleiman, Chief Medical Officer & Vice President, Global Cardiology

It’s quite common for patients with chronic obstructive pulmonary disease (COPD) to have concomitant cardiovascular (CV) disease.   There’s no great mystery to this – COPD tends to occur in older patients, many of whom will also have common CV risk factors such as hypertension, diabetes and hyperlipidemia.  And of course, most cases of COPD are related to cigarette smoking, which itself is a strong risk factor for the development of CV disease.  Patients who participate in COPD trials often develop CV adverse events which are just related to their underlying CV disease.  As a result, during the development of new drugs to treat COPD, it becomes very difficult to tell whether CV adverse events are directly related to therapy or simply represent the natural history of the patient’s underlying CV disease.

There are similarities with the development of new drugs for the treatment of diabetes, as patients with diabetes often have CV disease, and have increased rates of CV events.  As part of the fallout from Rosiglitazone, regulators have raised CV concerns about all new anti-diabetic agents, and despite some controversy, have required large CV outcome safety trials (learn more on a previous ERT Blog). Are sponsors expected to go down the same path for COPD therapies?

Probably not.  During a recent Cardiac Safety Research Consortium (CSRC) Think Tank Meeting, several regulatory representatives commented that there was no clear evidence that current COPD treatment modalities (long acting beta agonists and long acting muscarinic antagonists) produce adverse CV outcomes.  In fact, the results of two large randomized clinical trials, TIOSPIR and UPLIFT, suggest that COPD therapies don’t result in an excess of CV events.  The regulators present at the recent CSRC meeting stated that they did not feel that dedicated CV outcomes trials would be routinely required for new COPD drugs. The CV safety of new COPD medications, however, does remain an area of regulatory concern, and a dedicated CV outcomes trial might be required on a case-by-case basis, depending on the data submitted. This is good news for the industry as a blanket requirement for CV outcomes studies could delay or even halt the development of new treatments for COPD due to the huge expense of these studies.

So how can sponsors prospectively avoid big, long, expensive outcomes trials? Here are a few recommendations based on comments from the FDA, EMA and PMDA:

  1. Include a representative sample of COPD patients – including sicker patients – in Phase III trials.  This may require relaxing the exclusion criteria to allow at least a subset of patients with more severe COPD as well as baseline CV disease.
  2. Obtain better and more robust baseline assessments of CV risk factors and CV status during clinical trial enrollment.
  3. If a CV adverse event does occur, ensure that the site asks the right questions and collects adequate clinical data for later analysis. The CSRC provides forms which can and should be used by sites here.  Formal event adjudication may not be required, but at least be prepared to help the regulators understand any CV adverse events which have occurred.

A Global Crisis – The War on Suicide in Japan

Dr. Rene Duignan, not a professional film-maker but an Irish economist, was driven by a personal regret to create a film that would shed light on the crisis of suicide in Japan. Rene’s award-winning documentary Saving 10,000 — Winning a War on Suicide in Japan explores the true causes of the country’s high suicide rate and suggests practical solutions for suicide prevention. The goal: to save 10,000 lives.

The suicide rate in Japan is twice that of America, with 30,000 lives lost each year – 300,000 Japanese people in the last 10 years. While it is “taboo” to talk about suicide in Japan, manuals teaching people how to kill themselves have sold over a million copies. The culture sometimes portrays suicide as “beautiful”, and it is often sensationalized or used as a source of entertainment on Japanese television. For some, suicide also becomes a way to relieve their families of financial struggles: in Japan, life insurance companies pay out to the families of suicide victims, and due to a deep sense of personal responsibility, some would rather take their own lives than continue to face economic hardship.

As in any country, suicidal ideation and behavior often stems from mental illness such as depression, and it does not discriminate: it affects men, women, children, the middle aged, the elderly, the rich, the poor, the famous, and the ordinary. The documentary notes that two-thirds of all depression is triggered by pressure of some sort – lack of sleep, overwork, bullying, abuse, loneliness, failure, guilt, and the list goes on. There is a lack of mental health treatment and resources in Japan. The average clinic sees upwards of 40-50 patients, which leaves only 3-4 minutes per individual. Therapy is too expensive for many people, as it is not covered by health insurance.

For every suicide, there are 10 suicide attempts – that’s 300,000 attempts every year, constituting up to 20% of all patients in the most critical emergency medical centers. Of the 30,000 suicides completed annually, approximately 10,000 of those victims were already in the mental health system – seeking active treatment, having consultations, taking medications, or having been institutionalized. Healthcare providers therefore have an opportunity to address this public health crisis: people who commit suicide see healthcare providers in the critical days before their death.

How can healthcare workers meet the increased demand for their time and address the challenge of rising suicides among patients? Standardized, efficient mental health screening and suicide risk assessment are part of the answer.

In pharmaceutical product development, the Food and Drug Administration has mandated prospective monitoring of suicidal ideation and behavior for certain classes of drugs. Clinical researchers have used electronic, patient self-reported suicide risk assessment tools to ensure the safety of patients enrolled in clinical trials. AVERT™ is one such tool. AVERT utilizes the electronic Columbia-Suicide Severity Rating Scale (eC-SSRS) to capture the full range of evidence-based suicidal ideation and behavior data. It has been used to conduct more than 160,000 assessments of over 35,000 patients across the globe – including Japan. Global healthcare providers can realize the same benefits clinical researchers continue to find in AVERT.

Electronic, self-reported systems enable patients to complete the assessment privately via the Web, an electronic tablet or their smartphones. People can complete them in just over three minutes, and the results are reported immediately to their healthcare provider. If the assessment finds the patient to be at risk for suicide, an alert will instantly be issued and the appropriate follow-up can be provided.
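As a rough illustration of how such instant alerting can work, the sketch below applies a simple threshold rule to C-SSRS-style ideation and behavior responses. The thresholds, function name, and return messages are hypothetical; AVERT and other validated systems apply their own clinically validated, protocol-specific rules.

```python
def assess_suicide_risk(ideation_severity, recent_behavior):
    """Hypothetical alerting rule in the spirit of eC-SSRS-based systems.
    ideation_severity: 0-5 on a C-SSRS-style ideation ladder.
    recent_behavior: True if any suicidal behavior was reported."""
    if recent_behavior or ideation_severity >= 4:
        return "ALERT: notify healthcare provider immediately"
    if ideation_severity >= 1:
        return "Flag for clinician review"
    return "No action required"

# Example: active ideation with some intent, no reported behavior
print(assess_suicide_risk(ideation_severity=4, recent_behavior=False))
```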

In addition to the solutions proposed by Dr. Rene Duignan, we believe that healthcare providers have an opportunity to provide community benefit through quick, efficient and effective suicide risk assessment. Systems like AVERT, which leverage good technology and good science, can help do that.

Please take a few moments to view: Saving 10,000 — Winning a War on Suicide in Japan below. For any questions regarding AVERT in clinical research or healthcare, you may contact us here.


Regulatory Trends in Reviewing Risk/Benefit Assessments: U.S. FDA Perspective

Today’s blog will discuss regulatory trends in reviewing benefit/harm assessments of medical interventions. The important question here is: how do we balance the two? When we use the term “benefits” we are referring to “the good” that actually happens to patients as a result of prescribing medical interventions; more specifically, the ability of a medical intervention to improve outcomes for patients by decreasing symptoms, improving function, or improving survival. It could also mean fewer adverse effects compared to other interventions. Benefit is also termed “efficacy” or “effectiveness.”

Harms, on the other hand, are “the bad.” They are the adverse unwanted consequences reasonably associated with use of medical interventions. These can include things like signs, symptoms, lab values, vital signs, ECG, etc. Harms are often erroneously termed “safety.” However, no intervention is completely “safe” in terms of absence of all harms. Safety is really the balance of benefits vs. harms, not just harms alone. This concept was recognized early on in the history of FDA, even before the 1962 requirement for effectiveness.

Regulatory History:

Prior to 1938, there was no premarket review, only response to crises. Following the sulfanilamide tragedy of 1937, however, drugs had to be shown to be “safe” prior to marketing, and there was recognition that efficacy was important in this consideration. “If the drug that killed one person in ten thousand was of only minor use therapeutically, it might still be judged to be unsafe, whereas the drug that killed one in a thousand persons, if it had marked and undisputed therapeutic value it would still be a safe and valuable drug.” [J.J. Durett, Chief, Drug Division, FDA, December 1938] In other words, “safety” depends upon the context of use – the magnitude of benefit, in what patient population, for what disease, and at what dose/exposure.

In 1962, the requirement to demonstrate efficacy (benefits) to justify any potential harms, prior to marketing, was established. The standard of efficacy is “substantial evidence” from “adequate and well-controlled studies.”  Efficacy is not based on p-values alone. You need to show clinically meaningful differences as well as statistical significance. [Warner-Lambert v Heckler 1986] This entails judgment of what is considered “clinically meaningful.” A better way to frame that judgment is to have actual evidence from patients about what is meaningful for them through the development and use of patient reported outcomes (PROs).

The standard for evaluating harms is not very clear in terms of law and regulations. Section 505 of the Federal Food, Drug, and Cosmetic Act requires “adequate tests by all methods reasonably applicable to show whether or not such drug is safe for use under the conditions prescribed, recommended, or suggested in the proposed labeling.” “Adequate tests” are very contextual, depending upon what kind of patients and medical interventions you are studying. In a recent court case [Matrixx v. Siracusano, March 22, 2011], Justice Sotomayor stated that statistical significance is not necessary to show harm – “Because adverse event reports can take many forms, assessing their materiality is a fact-specific inquiry…Something more than the mere existence of adverse event reports is needed to satisfy that standard, but that something more is not limited to statistical significance and can come from the source, content and context of the reports.” To address the need for improvement in the clarity and transparency of the FDA’s benefit-risk assessment in human drug review, the FDA has recently moved toward more structured assessments, entailing both quantitative and qualitative decision making.

Current FDA Thinking:

As part of Prescription Drug User Fee Act (PDUFA) V negotiations, the FDA was tasked to develop structured benefit-risk assessment to serve as a template in product reviews. In February 2013, a document entitled “Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision Making: Draft PDUFA V Implementation Plan” was published to lay out the thinking on the approach to the assessment of benefits and harms when reviewing medical interventions.

One of the things people have been pushing the FDA to do is to use a more “quantitative” approach to benefit-harm assessments. On page four of this document it states: “The term ‘quantitative benefit-risk assessment’ can have various meanings depending on who is asked. Some hold the view that a quantitative benefit-risk assessment encompasses approaches that seek to quantify benefits and risks, as well as the weight that is placed on each of the components such that the entire benefit-risk assessment is quantitative.” “This approach is typical of quantitative decision modeling. It usually requires assigning numerical weights to benefit and risk considerations in a process involving numerous judgments that are at best debatable and at worst arbitrary. The subjective judgments and assumptions that would inevitably be embodied in such quantitative decision modeling would be much less transparent, if not obscured, to those who wish to understand a regulator’s thinking.”  In other words, if you reduce this to only a number, the meaning behind that number will be lost. So, much like efficacy assessments, you cannot boil down assessments of harm to a “p value” which is independent of its clinical meaningfulness.
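To see why collapsing the assessment to a single number obscures the underlying judgments, consider the toy weighted score below: the weights are exactly the kind of subjective, debatable choices the FDA document describes, yet once the composite score is computed they disappear from view. All numbers are invented for illustration only.

```python
# Toy weighted benefit-risk score of the kind the FDA document cautions against.
# Outcome rates (fractions of patients) and weights are invented; the weights
# encode subjective judgments that the final number no longer reveals.
benefits = {"symptom_improvement": 0.7, "survival_gain": 0.2}
harms = {"serious_adverse_events": 0.4, "discontinuations": 0.1}
weights = {"symptom_improvement": 2.0, "survival_gain": 5.0,
           "serious_adverse_events": -3.0, "discontinuations": -1.0}

score = sum(weights[k] * v for k, v in {**benefits, **harms}.items())
print(f"composite benefit-risk score = {score:.2f}")  # reasoning behind the weights is lost
```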

This document provides a very helpful way to think about conducting benefit/harm assessments, including the suggestion to divide the considerations into two major areas: therapeutic area and product specific. Therapeutic area considerations cover the disease and the types of patients being studied, including an analysis of the condition as well as current treatment options. This lays out the problem but does not address whether the new intervention provides a solution. Product-specific considerations cover the benefits of the particular intervention (what they are, their magnitude, and in whom) and the harms (risk), including how those harms can be mitigated (risk management). In essence, you want to briefly frame the therapeutic area considerations and then spend your time on how your product actually addresses the problem. Below is the matrix provided for the FDA Benefit-Risk Framework.

FDA Benefit-Risk Framework matrix

Conclusions:

“Uncertainty,” as presented in this matrix, refers to decisions based on assumptions that are not verifiable based on available information. The best way to de-risk development and decrease uncertainties regarding the nature and magnitude of benefits (to balance against harms) is through better measurement of benefits – using improved measurement tools, such as PROs. Oftentimes, sponsors are very hesitant to study additional potential benefits due to “regulatory uncertainty.” However, if we do not move beyond that thinking, FDA approval will become nothing more than a rubber stamp.

The FDA regulatory standard for most interventions, except for life-threatening or contagious diseases, is that you only have to be better than nothing. However, patients, clinicians, and payers have no interest in whether your product is slightly better than placebo when there are other interventions available for that particular disease. People will begin to look beyond FDA approval and compare how you have improved patients’ lives against other interventions in order to decide which they will pay for and ultimately use. Improved and additional information on benefits and harms through the use of PROs leads to more informed decision making. This will help you rely less on assumptions to address uncertainties and move toward a fact-based assessment that will actually help patients, clinicians, regulators, and payers understand which medical interventions are the better treatment options.


Drug-induced Hypertension – Potential New FDA Guidance?

Last month, we introduced Ambulatory Blood Pressure Monitoring and its use in clinical trials. Today, we will discuss drug-induced hypertension and why regulatory agencies, such as the FDA, have recently shown some concern. 

Some classes of drugs are known to have off-target hypertensive effects, including corticosteroids, androgens, estrogens, progestins, NSAIDs, and several others. Currently, there is no empirical data on the safety, or lack of safety, of drug-induced increases in blood pressure (BP). However, high blood pressure is well known to produce harmful outcomes such as stroke, heart attack, and kidney failure.

Concerns about drug-induced hypertension were elevated after Torcetrapib, a cholesteryl ester transfer protein (CETP) inhibitor designed to increase good cholesterol and lower bad cholesterol, increased mortality despite having excellent effects on lipid levels – halting the drug’s development. In Phase III trials, patients had a 50% increase in HDL and a 20% decrease in LDL, but a 60% excess of mortality and cardiovascular events. The average blood pressure increase was merely 2.8-5.4 mm Hg, but 5-9% of patients had an increase of 15 mm Hg. It is still unclear whether the increase in blood pressure caused the increased mortality, but it has made the FDA more concerned about off-target drug-induced hypertension.

The lack of clarity regarding drug induced changes in blood pressure is not surprising as there are many mechanisms by which non-cardiac drugs can alter the BP, and many factors that can influence the BP effect demonstrated in clinical trials.  These include the method of measurement, the population studied (e.g., normal subjects vs. target population vs. higher risk groups), dose and duration of exposure, and background therapies that might mitigate BP risk.

So, is there a new FDA guidance in the works? 

The FDA has held public meetings in which the need for better blood pressure data on drugs has been discussed. It is important to note that the FDA is unlikely to require a “Thorough BP Trial” for all drugs. However, during the Cardiac Safety Research Consortium Think Tank meeting in July 2012, the FDA hinted at a blood pressure guidance that will likely be directed towards drugs in certain classes or drugs that are administered chronically to populations at risk. For example, oral contraceptives or corticosteroids could be likely candidates for more extensive BP assessments, but an antibiotic administered for a few days, or a drug used to treat children, might not. Since blood pressure behaves differently in different people, particularly in older populations or higher-risk groups, intensive blood pressure monitoring performed in healthy young volunteers wouldn’t tell us much about the BP effects in other populations.

Pharmaceutical companies should be thinking ahead when developing a new member of a class of drugs that is known to increase blood pressure, or a drug for which there is pre-clinical or early clinical evidence of blood pressure effects. In these instances, the FDA may require you to go back and collect more thorough blood pressure data.  You can be prepared for this requirement by utilizing an Ambulatory Blood Pressure Monitoring (ABPM) solution during clinical development. 

Why use ABPM for intense BP assessment? Traditional stethoscope and blood pressure cuff measurements are not very reliable or reproducible and are prone to “white coat hypertension” – which is estimated to cause artificially elevated BP readings in 15% of patients. Also, drug effects on blood pressure may occur throughout the day. ABPM allows you to measure blood pressure over an extended period of time (typically 24 hours) to gather more precise and reproducible data.

Unsure if your compound in development will likely require more intense blood pressure data? Want to know when and how you should implement ABPM? Contact us for a complimentary 30-minute consultation with one of ERT’s leading cardiac safety experts to learn more.


Why Use Ambulatory Blood Pressure Monitoring (ABPM) in Clinical Research?

Written by: Dr. Robert Kleiman, Chief Medical Officer & Vice President, Global Cardiology

Hypertension, or high blood pressure (BP), is one of the leading risk factors influencing the global burden of cardiovascular disease, resulting in an increased incidence of cardiovascular mortality, sudden death, stroke, coronary heart disease, heart failure, atrial fibrillation, peripheral artery disease, and renal insufficiency. During clinical trials, BP has traditionally been measured at the bedside by a nurse or physician using a stethoscope and blood pressure cuff.  A more recent development has been the use of Ambulatory Blood Pressure Monitoring (ABPM), which collects a patient’s blood pressure throughout the day, even after they leave the physician’s office or clinical site.  The patient wears a BP cuff connected to a small unit which automatically inflates and deflates the cuff at pre-specified intervals, measuring and recording the BP at the desired times during the day and night.
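As an illustration of the kind of summary statistics an ABPM recording supports, the sketch below computes daytime and nighttime means and the nocturnal “dip” from a handful of invented readings. The day/night windows and the 10% dipping convention are common analysis choices shown for illustration, not a description of any particular device’s output.

```python
# Minimal sketch (assumed data layout): each reading is (hour_of_day, systolic, diastolic).
readings = [(8, 134, 82), (12, 138, 85), (16, 131, 80), (20, 128, 79),
            (1, 112, 68), (3, 110, 66), (5, 115, 70)]

def mean_bp(subset):
    """Average systolic and diastolic pressure over a set of readings."""
    sys = sum(r[1] for r in subset) / len(subset)
    dia = sum(r[2] for r in subset) / len(subset)
    return sys, dia

# Illustrative daytime (07:00-23:00) and nighttime windows
day = [r for r in readings if 7 <= r[0] < 23]
night = [r for r in readings if not (7 <= r[0] < 23)]

day_sys, day_dia = mean_bp(day)
night_sys, night_dia = mean_bp(night)
dip_pct = 100 * (day_sys - night_sys) / day_sys  # nocturnal systolic dip

print(f"daytime mean {day_sys:.0f}/{day_dia:.0f} mmHg, "
      f"nighttime mean {night_sys:.0f}/{night_dia:.0f} mmHg, "
      f"nocturnal dip {dip_pct:.1f}% ({'dipper' if dip_pct >= 10 else 'non-dipper'})")
```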

Early ABPM studies demonstrated that many anti-hypertensive drugs which appeared to be quite efficacious – based on conventional blood pressure measurements – had much less impressive performance when assessed with ABPM.  As a result, many clinical trials were conducted in the US and Europe to assess the value of the fully-automated ABPM technique.

Early trials evaluated the role of ABPM in confirming the efficacy of anti-hypertensive drugs as well as in predicting major cardiovascular outcomes in patients with hypertension.  The results were clear.  In a clinical setting, ABPM correlated much better with efficacy for anti-hypertensive drugs and with cardiovascular outcomes/adverse events than random clinic BP values.

The ABPM technique has also become recognized as the most effective way to eliminate the phenomenon of “white coat hypertension”, which is very common in clinic BP assessments. At least 15% of people will have an artificially elevated BP while in a doctor’s office, yet are otherwise normotensive.  The prevalence appears to be even higher among older adults, females, and non-smokers.

Additionally, in pharmacological research, ABPM is superior to clinic BP measurements in determining appropriate effective drug doses, duration of action of drugs and dosing schedules, and efficacy in “covering” the last 6 hours of the 24-hour dosing interval (morning BP surge).

Many researchers believe that ABPM should become a mandatory tool during the development of new anti-hypertensive agents and when conducting additional studies of older medications. They also advocate expanding the use of ABPM in pharmacological research beyond the assessment of anti-hypertensive drugs, and support its use as an additional tool when assessing the cardiac safety of new chemical entities – including drugs for Parkinson’s disease, Alzheimer’s disease, and diabetes, as well as medications in areas such as oncology (where BP safety concerns have recently emerged).

It is well known that many classes of medications can elevate blood pressure, and ABPM can help accurately identify this concern. These medications include, but are not limited to, non-steroidal anti-inflammatory drugs (NSAIDs), cough and cold medications, migraine headache medications, weight loss drugs, some antidepressants, oral contraceptives, some antacids, corticosteroids, and cyclosporine.

ABPM enables biopharmaceutical companies to efficiently evaluate the efficacy and cardiac safety of both anti-hypertensive drugs and drugs that may raise blood pressure, as well as endpoints for cardiovascular and cerebrovascular outcomes. ERT offers an ABPM solution that captures accurate 24-hour readings of systolic and diastolic blood pressure, mean arterial pressure, and pulse rate, in conjunction with our suite of other cardiac safety services.

To learn more please visit www.ert.com/cardiac or you can register for on-demand viewing of ERT’s latest ABPM webinar presented by Dr. Kleiman.

Thank you for being a part of the global ERT community.


Improving Your Drug’s Value Proposition throughout the Product Lifecycle: How Optimized Clinical Outcomes Assessment (COA) Strategies Can Drive Commercialization Success

Typically, pharmaceutical and biotech companies have focused their Clinical Outcome Assessment (COA) development efforts solely on the requirements necessary for regulatory approval. While we recognize that this is extremely important, today we are going to discuss how pharmaceutical companies may reap benefits from a longer-term approach – one that addresses long-term market access hurdles and can lead to continued product differentiation in the face of ongoing competition, and to long-term payer acceptance. Drug development strategies that optimize COA data – including patient, clinician, and observer reported outcomes (PRO, ClinRO, and ObsRO) – can be used strategically to support regulatory approval and, increasingly, post-marketing success.

The development of COA endpoints, wherever you are in the product lifecycle, should always answer three clear questions (in order):

1. What messages are you trying to test and convey?

  • Create the messages you want to deliver at the end of your work
  • Identify the concepts, or what you want to measure, that will support your messages
  • What you want to measure depends upon:
    • Who is in your target patient population
    • Who you want to influence with your results – regulators, payers, clinicians, and/or patients themselves

2. How are you going to measure the concepts behind the messages?

  • Select instruments, based on your own research, that measure exactly what you want to measure
  • Only select instruments after deciding on what to measure

3. How are you going to implement the study that gives you the data?

  • Implement the instruments in study designs that give you the optimal endpoints to evaluate what you want to measure

This takes you beyond narrowly regulator-focused thinking about what to measure, to a broader focus on the product’s impact on the patient’s Health-Related Quality of Life (HRQOL). It should also get you thinking about how you are going to convey that information in the most compelling way. For example, if you are dealing with a debilitated population, maybe it’s mobility information you want to convey; if it is a very healthy population, maybe it’s strong physical exertion. This is why really understanding the message and the patient population, and then focusing on the concept, is important. Once this is established, you need to determine how you are going to measure and implement that concept, and finally, you need to select which instrument you will use. We suggest following this process for all stakeholders – not just for regulators.

With that foundation in place, we can now move to where we think the market is heading and how pharmaceutical companies can prepare for those changes. We believe that several combined elements – such as payer interest, big data and manufacturer needs – suggest that there will be an increased demand for and use of phase 4 economic reviews. Ultimately, payers and reviewers are going to evaluate your drug and make the decision whether to cover it (or not) and at what price.

Payer Interest:
As always, payers are interested in cost control, and soon they will be able to measure costs very well. There is also increasing diffusion of more formal Health Technology Assessment (HTA) methods, which use only generic utility measures unless you provide other credible measures. Finally, there is always pressure to drive therapeutic substitution.

Big Data:
With the ability to slice and dice data from populations into subgroups, it is much easier to measure hard outcomes.  There is relentless pressure to measure outcomes, so a manufacturer will need to prove a drug’s benefits.

Manufacturer Needs:
Product differentiation is increasingly important, yet there is a need to control development costs. You’ll want to be able to defend long-term product value without needing to run another huge trial or offer a contract discount.

Based on these factors, if you anticipate additional reviews, you’ll want to start thinking about the type of data payers will want to examine. If you talk to payers directly, some may lead you to believe that they only care about budget impact and not much else. However, there are good reasons for investing in the collection of COA data. It is true that payers are looking to treat their patients and to do so at a good price. However, once the product characteristics are known, it is in payers’ interests to say they care less about patient benefit – and the fact that they say it doesn’t matter doesn’t make it true. So never stop selling benefit to stakeholders! This is where COAs can help, because you can come in with credible measures rather than qualitative statements. Payers need to gather product information, design the rules for selection, and conduct a selection. All of these steps are influenced by the medical community, particularly clinicians within the payer organization, and by patients. The implication is that tangible benefits data will be helpful to a manufacturer during a negotiation, and in the “game” before the negotiation.

You may be thinking, “What will payers likely ask/want at these frequent, ad-hoc reviews?” Here is a list we’ve put together to help you get started:

  • Why is your drug better than your competitor’s?
  • Show us something new in our patient population – we’ve already seen your registrational data.
  • Your competitor went generic. We want a discount.
  • Please respond to your competitor’s observational studies in our patients showing that its life-cycle costs are lower than yours.
  • We just read an AHRQ/NICE/PCORI study that shows that all the drugs in this class have only tiny differences. Give us a reason not to implement the obvious implications of that finding – which is to engage in competitive contracting.

You’ll need to get started early on answering these questions because the lead time to respond is usually very short (weeks or a few months).  How long might it take to produce credible data in publications, which could then be taken to review bodies that control access? We estimate the total time from brainstorming to publication is approximately 5 years.

  • Planning COA strategy: 1 year
  • Beta testing COA instruments: 1 year
  • Full scale trial in the field: 2 years
  • Write up and publications: 1 year

Clinical Outcomes Assessments and Economic Reviews

Post-marketing COA could be a valuable addition in a difficult market.  ERT is readily available to support you – whatever the size, complexity and therapeutic area of your clinical program.  Contact us to speak with an expert consultant on how to optimize your COA strategy.  This is an opportunity to speak with a Senior Scientist and discuss what reasonable steps you can take to reduce risks and ensure the success of your clinical development program.

If you enjoyed this blog and would like to learn more, please visit www.ert.com/webinars to view a full 1-hour presentation on this topic.

Thank you for being a part of the global ERT Community.
