
Novel and nonstandard study designs are promoted both by pharmaceutical companies and regulatory agencies to streamline the current drug development and regulatory approval processes. Such designs are given heightened attention in research concerning rare diseases, oncology, and other areas with unmet medical needs. However, these designs are inherently complex, and are associated with important statistical and operational issues that require careful consideration. Notably, a recent guidance document from the US FDA (2018a) highlights four requirements for successful implementation of an adaptive design: controlling the chance of erroneous conclusions, reliable estimation of treatment effects, prespecification of relevant details of the design, and safeguarding trial integrity. Next, we give a summary of a few of the commonly used novel approaches, including adaptive and flexible designs, enrichment studies, and studies conducted under the so-called master protocols.

Adaptive Designs

Adaptive designs permit modifications to various attributes of the trial based on analysis of data from subjects in the study, while ensuring that the integrity of the trial is not compromised. The modification may involve study procedures, including eligibility criteria, dose levels and duration of treatment; sample size; or statistical methods. Examples of adaptive design methods include adaptive randomization, group sequential designs, sample size reestimation, adaptive dose-finding designs, as well as adaptive-seamless Phase II/III trial designs (Chow et al. 2005).

Adaptive Randomization

In comparative trials, assignment of study subjects to treatment groups may be adjusted either based on baseline characteristics (covariate-adaptive treatment assignment) or comparative outcome data. The former is conducted with a view to achieving balance between treatment groups with respect to important baseline covariates. An example is the so-called minimization approach, proposed by Pocock and Simon (1975), in which consecutive patients are systematically allocated to treatments so as to minimize imbalance across the selected prognostic factors.
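The minimization idea can be sketched as follows; this is an illustrative simplification, and the function names and the biased-coin probability `p_best` are assumptions for the example rather than part of Pocock and Simon's original formulation:

```python
import random

def minimization_assign(new_patient, patients, arms=("A", "B"), p_best=0.8):
    """Pocock-Simon-style minimization (illustrative sketch).

    new_patient: dict mapping prognostic factor -> level, e.g. {"sex": "F"}.
    patients: list of (arm, factor_dict) tuples for previously enrolled subjects.
    p_best: probability of choosing the imbalance-minimizing arm (biased coin).
    """
    scores = {}
    for arm in arms:
        imbalance = 0
        # Sum, over the new patient's factor levels, the spread of the
        # arm counts that would result from this hypothetical assignment.
        for factor, level in new_patient.items():
            counts = {a: 0 for a in arms}
            for assigned, covs in patients:
                if covs.get(factor) == level:
                    counts[assigned] += 1
            counts[arm] += 1  # hypothetical assignment of the new patient
            imbalance += max(counts.values()) - min(counts.values())
        scores[arm] = imbalance
    best = min(scores, key=scores.get)
    # Biased-coin step: take the minimizing arm with probability p_best.
    if random.random() < p_best:
        return best
    return random.choice([a for a in arms if a != best])
```

With `p_best = 1.0` the rule is deterministic: a new female patient following two female patients already on arm A would be sent to arm B.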

In response-adaptive randomization, the principal goal is to increase the probability of success by modifying the randomization schedule as a function of the observed treatment effect. An example is the randomized play-the-winner rule, which uses an urn model for patient allocation (Rosenberger 1999). An appealing feature of response-adaptive randomization is that, on average, the more efficacious treatment arm will be studied on a higher proportion of study subjects. In addition, there are also situations where the approach may lead to efficient statistical procedures, including reduced variability of treatment effect estimates. The assignment probability (π) may be determined using alternative approaches. One method, due to Thall and Wathen (2007), computes the probability as:

π = [P(T > C)]^r / {[P(T > C)]^r + [1 − P(T > C)]^r}

where r is a positive tuning parameter, and P(T > C) is the posterior probability that the new agent (T) is better than the control (C), based on accumulated data, and using the uniform prior distribution.
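A minimal sketch of this computation, assuming the posterior probability P(T > C) has already been obtained; the function name is illustrative:

```python
def thall_wathen_prob(p_t_better, r=0.5):
    """Randomization probability for the new agent T.

    p_t_better: posterior probability P(T > C) from the accumulated data.
    r: positive tuning parameter; r = 0.5 is a commonly cited choice.
    """
    num = p_t_better ** r
    return num / (num + (1.0 - p_t_better) ** r)
```

When the evidence is neutral (P(T > C) = 0.5) the assignment probability is 0.5, and it shifts toward the arm that appears better as evidence accumulates; smaller values of r damp this shift.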

Adaptive randomization strategies may not be advisable in all situations. In fact, the added complexity, in terms of execution, analysis, and interpretation, may not justify their use relative to standard trial designs. In particular, their application in Phase III studies may require caution due to the potential for bias arising from time trends associated with any prognostic factors. In such cases, block randomization and stratified analysis approaches are recommended. Korn and Freidlin (2011) provide a discussion of the pros and cons of adaptive randomization.

Sample Size Reestimation

In some situations, there may initially be inadequate information about certain parameters involved in sample size determination to achieve the desired power and Type I error rates. Examples of such parameters include the detectable effect size, measures of dispersion, or the null response rate in the comparison of two proportions. Study size, therefore, may be adjusted based on appropriately defined sample size reestimation techniques using data observed during an interim period.

One of the earliest approaches proposed by Wittes and Brittain (1990) uses an internal pilot study. More specifically, the trial is designed using an initial estimate of the parameter of interest. An interim analysis is then performed to estimate the parameter, which in turn is used to recalculate the sample size. This approach typically results in small inflation of the Type I error rate, which may be substantial when the interim analysis is based on very few observations.
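For the comparison of two means, the internal-pilot calculation can be sketched as follows, assuming a two-sided z-approximation for the sample size formula; the function names and defaults are illustrative, not taken from Wittes and Brittain:

```python
from math import ceil, sqrt
from statistics import NormalDist, stdev

def n_per_arm(sigma, delta, alpha=0.05, power=0.9):
    """Per-arm size for a two-sided, two-sample z-test comparison of means."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

def reestimate(pilot_a, pilot_b, delta, alpha=0.05, power=0.9, n_initial=0):
    """Recompute the per-arm size from the pooled interim SD,
    never shrinking below the initially planned size."""
    n_a, n_b = len(pilot_a), len(pilot_b)
    pooled_var = ((n_a - 1) * stdev(pilot_a) ** 2
                  + (n_b - 1) * stdev(pilot_b) ** 2) / (n_a + n_b - 2)
    return max(n_per_arm(sqrt(pooled_var), delta, alpha, power), n_initial)
```

For example, with σ = 1 and a detectable difference δ = 0.5 at 90% power, the initial calculation gives 85 subjects per arm; if the internal pilot suggests a larger dispersion, the reestimated size grows accordingly.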

When there is uncertainty about the effect size, several strategies may be followed to achieve the desired result. An adaptive approach, often called unblinded sample size reestimation, consists in starting with a modest sample size, and then increasing the size, following an interim comparative analysis. This approach has known shortcomings. First, a large sample size may lead to detection of clinically irrelevant effects. In addition, inadvertent dissemination of interim results may compromise the integrity of the conduct and reporting of the trial (Mauer et al. 2012). Further, without proper adjustment, such an approach can inflate the Type I error probability (Proschan and Hunsberger 1995).

To control the Type I error rate, combination tests have been proposed, using the p-values computed at the different stages of the trial. Specifically, let P₁ and P₂ be the p-values associated, respectively, with the test of the null hypothesis at the interim look and then at the end of the trial based on the reestimated sample size. Bauer and Köhne (1994) propose a test based on the product T = P₁P₂; under the null hypothesis, −2 log(T) follows a chi-square distribution with 4 degrees of freedom. One may also construct a test statistic using the inverse normal cumulative distribution transformation:

Z = w₁Z₁ + w₂Z₂

where Zᵢ = Φ⁻¹(1 − Pᵢ), and the wᵢ (i = 1, 2) are prespecified weights such that w₁² + w₂² = 1. Under the null, Z has a N(0, 1) distribution. See also Cui et al. (1999) and Denne (2001), among others, for related approaches.
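The inverse normal combination can be sketched directly from this definition; the equal default weighting w₁ = w₂ = √0.5 is an illustrative choice:

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=sqrt(0.5)):
    """Combine stage-wise p-values: Z = w1*Z1 + w2*Z2 with w1^2 + w2^2 = 1.

    Because the weights are fixed in advance, Z is N(0, 1) under the null
    even if the second-stage sample size was chosen using interim data.
    """
    nd = NormalDist()
    w2 = sqrt(1.0 - w1 ** 2)
    z1 = nd.inv_cdf(1.0 - p1)  # Z_i = Phi^{-1}(1 - P_i)
    z2 = nd.inv_cdf(1.0 - p2)
    return w1 * z1 + w2 * z2
```

With equal weights and both stage-wise p-values at 0.025, the combined statistic is √2 × 1.96 ≈ 2.77, comfortably exceeding the 1.96 critical value.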

It may be noted that approaches based on prespecified weights are often criticized on the grounds that they violate the sufficiency principle, and hence may not be efficient. Further, their dependence on nonstandard tests and p-values makes them less attractive. A more appealing strategy involves the group sequential approach, in which the trial is designed with a maximum sample size, and interim analyses are then performed with the goal of stopping the trial for efficacy or futility, or adjusting the sample size.

Notably, Mehta and Pocock (2011) proposed the promising-zone approach, in which the sample size is increased when interim results appear to be promising. More specifically, at the interim analysis, the promising zone can be characterized with respect to the estimated conditional power (CP), i.e., the probability of a statistically significant result at the end of the trial, given the observed data, and assuming no change in the observed treatment effect and planned sample size. Using the estimated CP, the interim outcome may then be classified into unfavorable, favorable, or promising zones. If the interim result falls in the promising zone, the sample size may be increased, assuming plausible values for any relevant parameters; otherwise, there will be no change to the design. The approach is appealing because of its ease of implementation, since conventional final inference can be performed without inflating the overall Type I error. Indeed, Chen et al. (2004) argue that with CP > 0.5, one can increase the sample size and use conventional test statistics while preserving the Type I error. However, it has been shown by Gaffney and Ware (2017) that the conventional statistic compared to the standard critical value (e.g., Z = 1.96 for α = 0.05) will be conservative.
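The conditional power under the current trend can be sketched as follows, assuming a Brownian-motion approximation for the test statistic with information fraction t; this is a generic textbook formulation, not the specific calculation of Mehta and Pocock:

```python
from math import sqrt
from statistics import NormalDist

def conditional_power(z_interim, info_frac, alpha=0.025):
    """Conditional power assuming the observed treatment effect persists.

    z_interim: standardized test statistic at the interim look.
    info_frac: fraction t of the planned information accrued (0 < t < 1).
    alpha: one-sided significance level for the final test.
    """
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)
    drift = z_interim / sqrt(info_frac)  # drift implied by the interim trend
    # P(final Z > z_crit | interim data, drift held at its estimate)
    return 1 - nd.cdf((z_crit - drift) / sqrt(1 - info_frac))
```

An interim statistic of Z = 2.0 at half information gives CP of roughly 0.89, which would fall in the favorable zone; intermediate CP values (e.g., between prespecified bounds) would define the promising zone where the sample size may be increased.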

In the above discussion, while the focus has been on preserving the Type I error, it should also be noted that determination of valid point estimates and confidence intervals following sample size reestimation may not be straightforward. This is, in fact, an issue of regulatory and methodological importance associated with all adaptive designs, requiring caution in the reporting of the accompanying study results (Wassmer and Brannath 2016).
