Bayesian robustness in meta‐analysis for studies with zero responses
Statistical meta‐analysis is mostly carried out with the help of the random‐effects normal model, including the case of discrete random variables. We argue that the normal approximation is not always able to adequately capture the underlying uncertainty of the original discrete data. Furthermore, when we examine the influence of the prior distributions considered, in the presence of rare events, the results from this approximation can be very poor. In order to assess the robustness of the quantities of interest in meta‐analysis with respect to the choice of priors, this paper proposes an alternative Bayesian model for b...
Source: Pharmaceutical Statistics - January 1, 2016 Category: Statistics Authors: F. J. Vázquez, E. Moreno, M. A. Negrín, M. Martel Tags: Main Paper Source Type: research

Distribution of the two‐sample t‐test statistic following blinded sample size re‐estimation
We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re‐estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non‐inferiority margins for non‐inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study...
Source: Pharmaceutical Statistics - January 1, 2016 Category: Statistics Authors: Kaifeng Lu Tags: Main Paper Source Type: research
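
The comparison described above can be sketched in a few lines of simulation. The following is a minimal illustration of the blinded re‐estimation idea under assumed settings (pilot size, target effect, nominal level are illustrative choices); it is not the author's algorithm, which additionally handles non‐inferiority margins and the adjusted significance level:

```python
# Minimal Monte Carlo sketch: empirical type I error of a two-sample t-test
# after blinded sample size re-estimation. All settings are illustrative.
import math
import numpy as np
from scipy import stats

def blinded_ssr_type1_error(n_pilot=20, delta=0.5, sd_true=1.0,
                            alpha=0.05, power=0.9, n_sims=500, seed=7):
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf
    rejections = 0
    for _ in range(n_sims):
        # Internal pilot, simulated under H0 (no treatment difference).
        a = rng.normal(0.0, sd_true, n_pilot)
        b = rng.normal(0.0, sd_true, n_pilot)
        # Blinded re-estimation: pool the pilot data, ignoring group labels.
        sd_blinded = np.std(np.concatenate([a, b]), ddof=1)
        # Standard per-arm sample size for the assumed effect delta.
        n_arm = math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2
                          * sd_blinded ** 2 / delta ** 2)
        n_arm = max(n_arm, n_pilot)
        # Second stage: recruit the remaining subjects, then run the t-test.
        a = np.concatenate([a, rng.normal(0.0, sd_true, n_arm - n_pilot)])
        b = np.concatenate([b, rng.normal(0.0, sd_true, n_arm - n_pilot)])
        rejections += stats.ttest_ind(a, b).pvalue < alpha
    return float(rejections) / n_sims
```

Pooling the pilot data without group labels keeps the interim look blinded; the empirical rejection rate under the null can then be compared with the nominal level.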

Idle thoughts of a ‘well‐calibrated’ Bayesian in clinical drug development
The use of Bayesian approaches in the regulated world of pharmaceutical drug development has not been without its difficulties or its critics. The recent Food and Drug Administration regulatory guidance on the use of Bayesian approaches in device submissions has mandated an investigation into the operating characteristics of Bayesian approaches and has suggested how to make adjustments so that the proposed approaches are, in a sense, calibrated. In this paper, I present examples of frequentist calibration of Bayesian procedures and argue that we need not necessarily aim for perfect calibration but should be allowed to ...
Source: Pharmaceutical Statistics - January 1, 2016 Category: Statistics Authors: Andrew P. Grieve Tags: Main Paper Source Type: research

Robust inference from multiple test statistics via permutations: a better alternative to the single test statistics approach for randomized trials
Source: Pharmaceutical Statistics - January 1, 2016 Category: Statistics Authors: Jitendra Ganju, Xinxin Yu, Guoguang (Julie) Ma Tags: Erratum Source Type: research

Confidence intervals for the symmetry point: an optimal cutpoint in continuous diagnostic tests
Continuous diagnostic tests are often used for discriminating between healthy and diseased populations. For this reason, it is useful to select an appropriate discrimination threshold. There are several optimality criteria: the North‐West corner, the Youden index, the concordance probability and the symmetry point, among others. In this paper, we focus on the symmetry point, which simultaneously maximizes the two types of correct classification. We construct confidence intervals for this optimal cutpoint and its associated specificity and sensitivity indexes using two approaches: one based on the generalized pivotal quant...
Source: Pharmaceutical Statistics - January 1, 2016 Category: Statistics Authors: Mónica López‐Ratón, Carmen Cadarso‐Suárez, Elisa M. Molanes‐López, Emilio Letón Tags: Main Paper Source Type: research
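
As a concrete illustration of the criterion above: the symmetry point is the cutpoint c at which sensitivity equals specificity. A minimal numerical sketch under an assumed binormal model follows; this only locates the point itself and is not one of the interval estimators studied in the paper:

```python
# Sketch: locate the symmetry point c where sensitivity equals specificity,
# assuming a binormal model (healthy ~ N(mu_h, sd_h), diseased ~ N(mu_d, sd_d)).
from scipy import stats, optimize

def symmetry_point(mu_h, sd_h, mu_d, sd_d):
    # Root of specificity(c) - sensitivity(c) = F_H(c) - (1 - F_D(c)).
    f = lambda c: stats.norm.cdf(c, mu_h, sd_h) - stats.norm.sf(c, mu_d, sd_d)
    lo = min(mu_h, mu_d) - 5 * max(sd_h, sd_d)
    hi = max(mu_h, mu_d) + 5 * max(sd_h, sd_d)
    return optimize.brentq(f, lo, hi)

c = symmetry_point(0.0, 1.0, 2.0, 1.0)  # equal variances: c = (0 + 2) / 2 = 1.0
```

At this c both correct-classification probabilities coincide, which is exactly the quantity the paper's confidence intervals target.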

Predicting analysis time in events‐driven clinical trials using accumulating time‐to‐event surrogate information
For clinical trials with time‐to‐event endpoints, predicting the accrual of the events of interest with precision is critical in determining the timing of interim and final analyses. For example, overall survival (OS) is often chosen as the primary efficacy endpoint in oncology studies, with planned interim and final analyses at a pre‐specified number of deaths. Often, correlated surrogate information, such as time‐to‐progression (TTP) and progression‐free survival, are also collected as secondary efficacy endpoints. It would be appealing to borrow strength from the surrogate information to improve the precisio...
Source: Pharmaceutical Statistics - December 22, 2015 Category: Statistics Authors: Jianming Wang, Chunlei Ke, Zhinuan Yu, Lei Fu, Bruce Dornseif Tags: Main Paper Source Type: research
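
The basic event-prediction idea above can be sketched with a simple parametric model: given each patient's entry time, predict the calendar time at which the expected number of events reaches the analysis trigger. The exponential hazard and all numbers here are illustrative assumptions, not the authors' method, which additionally borrows strength from surrogate endpoints such as TTP:

```python
# Sketch: predict the calendar time of the final analysis (target number of
# deaths) under an assumed exponential event-time model.
import math
from scipy import optimize

def expected_events(entry_times, lam, t):
    """Expected events by calendar time t, exponential event hazard lam."""
    return sum(1.0 - math.exp(-lam * (t - e)) for e in entry_times if e < t)

def predicted_analysis_time(entry_times, lam, target):
    """Calendar time at which the expected event count reaches the target."""
    return optimize.brentq(
        lambda t: expected_events(entry_times, lam, t) - target,
        min(entry_times), max(entry_times) + 100.0 / lam)

# Illustration: 100 patients enrolled uniformly over 24 months,
# median survival of 20 months.
entries = [24.0 * i / 100 for i in range(100)]
lam = math.log(2) / 20.0
t_final = predicted_analysis_time(entries, lam, 60)  # time of the 60th death
```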

Testing multiple primary endpoints in clinical trials with sample size adaptation
In this paper, we propose a design that uses a short‐term endpoint for accelerated approval at interim analysis and a long‐term endpoint for full approval at final analysis with sample size adaptation based on the long‐term endpoint. Two sample size adaptation rules are compared: an adaptation rule to maintain the conditional power at a prespecified level and a step function type adaptation rule to better address the bias issue. Three testing procedures are proposed: alpha splitting between the two endpoints; alpha exhaustive between the endpoints; and alpha exhaustive with improved critical value based on correlatio...
Source: Pharmaceutical Statistics - November 26, 2015 Category: Statistics Authors: Yi Liu, Mingxiu Hu Tags: Main Paper Source Type: research
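
The conditional power driving the first adaptation rule can be sketched under the usual Brownian‐motion approximation with the drift estimated from the interim trend; this is a generic textbook form, assumed here for illustration, not the authors' exact rule:

```python
# Sketch: conditional power at information fraction t given the interim
# z-value z1, one-sided test, drift estimated from the current trend.
import math
from scipy.stats import norm

def conditional_power(z1, t, alpha=0.025):
    z_a = norm.ppf(1 - alpha)
    drift = z1 / math.sqrt(t)  # estimated standardized effect
    return norm.cdf((z1 * math.sqrt(t) - z_a) / math.sqrt(1 - t)
                    + drift * math.sqrt(1 - t))
```

An adaptation rule of the first kind compared in the paper would increase the second‐stage sample size until this quantity reaches the prespecified level.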

Nonparametric covariate adjustment in estimating hazard ratios
In randomized clinical trials with time‐to‐event outcomes, the hazard ratio is commonly used to quantify the treatment effect relative to a control. The Cox regression model is commonly used to adjust for relevant covariates to obtain more accurate estimates of the hazard ratio between treatment groups. However, it is well known that the treatment hazard ratio based on a covariate‐adjusted Cox regression model is conditional on the specific covariates and differs from the unconditional hazard ratio that is an average across the population. Therefore, covariate‐adjusted Cox models cannot be used when the uncondition...
Source: Pharmaceutical Statistics - November 26, 2015 Category: Statistics Authors: Honghua Jiang, Pandurang M Kulkarni, Yanping Wang, Craig H Mallinckrodt Tags: Main Paper Source Type: research

Practical guide to sample size calculations: superiority trials
A sample size justification is a vital part of any investigation. However, estimating the number of participants required to give meaningful results is not always straightforward. A number of components are required to facilitate a suitable sample size calculation. In this paper, the steps for conducting sample size calculations for superiority trials are summarised. Practical advice and examples are provided illustrating how to carry out the calculations by hand and using the app SampSize.
Source: Pharmaceutical Statistics - November 20, 2015 Category: Statistics Authors: Laura Flight, Steven A. Julious Tags: Teacher's Corner Source Type: research
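
The hand calculation summarised above, for the common case of comparing two means, follows the standard normal‐approximation formula; the effect size, standard deviation, and power below are illustrative numbers, not examples from the paper:

```python
# Sketch: per-arm sample size for a two-sided, two-sample superiority
# comparison of means (normal approximation).
import math
from scipy import stats

def n_per_arm(delta, sd, alpha=0.05, power=0.9):
    """n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2, rounded up."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

n_per_arm(0.5, 1.0)  # detect a 0.5 SD difference with 90% power: 85 per arm
```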

Issue Information
No abstract is available for this article.
Source: Pharmaceutical Statistics - November 5, 2015 Category: Statistics Tags: Issue Information Source Type: research

Optimal adaptive sequential designs for crossover bioequivalence studies
In prior works, this group demonstrated the feasibility of valid adaptive sequential designs for crossover bioequivalence studies. In this paper, we extend the prior work to optimize adaptive sequential designs over a range of geometric mean test/reference ratios (GMRs) of 70–143% within each of two ranges of intra‐subject coefficient of variation (10–30% and 30–55%). These designs also introduce a futility decision for stopping the study after the first stage if there is sufficiently low likelihood of meeting bioequivalence criteria if the second stage were completed, as well as an upper limit on total study size....
Source: Pharmaceutical Statistics - November 5, 2015 Category: Statistics Authors: Jialin Xu, Charles Audet, Charles E. DiLiberti, Walter W. Hauck, Timothy H Montague, Alan F. Parr, Diane Potvin, Donald J. Schuirmann Tags: Main Paper Source Type: research

On the need for increased rigour and care in the conduct and interpretation of network meta‐analyses in drug development
The rise over recent years in the use of network meta‐analyses (NMAs) in clinical research and health economic analysis is little short of meteoric, driven in part by a desire from decision makers to extend inferences beyond direct comparisons in controlled clinical trials. But is the increased use of and reliance on NMAs justified? Do such analyses provide a reliable basis for the relative effectiveness assessment of medicines and, in turn, for critical decisions relating to healthcare access and provisioning? And can such analyses also be used earlier, as part of the evidence base for licensure? Despite several important...
Source: Pharmaceutical Statistics - November 1, 2015 Category: Statistics Authors: Kevin Carroll, Robert Hemmings Tags: Viewpoint Source Type: research

Design optimisation for pharmacokinetic modeling of a cocktail of phenotyping drugs
Our paper proposes a methodological strategy for selecting optimal sampling designs for phenotyping studies involving a cocktail of drugs. A cocktail approach is of high interest for determining the simultaneous activity of enzymes responsible for drug metabolism and pharmacokinetics, and is therefore useful in anticipating drug–drug interactions and in personalized medicine. Phenotyping indexes, which are areas under the concentration‐time curve, can be derived from a few samples using nonlinear mixed effect models and maximum a posteriori estimation. Because of clinical constraints in phenotyping studies, the number of samples tha...
Source: Pharmaceutical Statistics - November 1, 2015 Category: Statistics Authors: Thu Thuy Nguyen, Henri Bénech, Marcel Delaforge, Natacha Lenuzza Tags: Main Paper Source Type: research

Robust exchangeability designs for early phase clinical trials with multiple strata
Clinical trials with multiple strata are increasingly used in drug development. They may sometimes be the only option to study a new treatment, for example in small populations and rare diseases. In early phase trials, where data are often sparse, good statistical inference and subsequent decision‐making can be challenging. Inferences from simple pooling or stratification are known to be inferior to hierarchical modeling methods, which build on exchangeable strata parameters and allow borrowing information across strata. However, the standard exchangeability (EX) assumption bears the risk of too much shrinkage and excess...
Source: Pharmaceutical Statistics - November 1, 2015 Category: Statistics Authors: Beat Neuenschwander, Simon Wandel, Satrajit Roychoudhury, Stuart Bailey Tags: Main Paper Source Type: research

Continuous event monitoring via a Bayesian predictive approach
In clinical trials, continuous monitoring of event incidence rate plays a critical role in making timely decisions affecting trial outcome. For example, continuous monitoring of adverse events protects the safety of trial participants, while continuous monitoring of efficacy events helps identify early signals of efficacy or futility. Because the endpoint of interest is often the event incidence associated with a given length of treatment duration (e.g., incidence proportion of an adverse event with 2 years of dosing), assessing the event proportion before reaching the intended treatment duration becomes challenging, espec...
Source: Pharmaceutical Statistics - November 1, 2015 Category: Statistics Authors: Jianing Di, Daniel Wang, H. Robert Brashear, Vladimir Dragalin, Michael Krams Tags: Main Paper Source Type: research
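
The predictive flavor of such monitoring can be illustrated with a simple conjugate sketch: with a Beta prior on the incidence, the number of future events is beta‐binomial, giving the predictive probability that the final event proportion crosses a threshold. This deliberately ignores the treatment‐duration complication emphasized above; the prior and all numbers are illustrative assumptions, not the authors' model:

```python
# Sketch: Bayesian predictive probability that the final event proportion
# exceeds a threshold, Beta prior + beta-binomial predictive distribution.
from scipy import stats

def predictive_prob_exceeds(events, n_interim, n_final, threshold,
                            a_prior=1.0, b_prior=1.0):
    """P(final proportion > threshold | interim data)."""
    a = a_prior + events                # posterior Beta parameters
    b = b_prior + n_interim - events
    n_rest = n_final - n_interim        # subjects still to be observed
    total = 0.0
    for k in range(n_rest + 1):         # k future events ~ BetaBinomial
        if events + k > threshold * n_final:
            total += stats.betabinom.pmf(k, n_rest, a, b)
    return total
```

A monitoring rule could flag the trial whenever this predictive probability crosses a prespecified decision boundary, well before the intended treatment duration is reached.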