This page lists journal articles and technical reports. You can also find hundreds of articles on applications of math, statistics, and computing on my blog.
Cristian Bologa, Vernon Shane Pankratz, Mark L Unruh, Maria Eleni Roumelioti, Vallabh Shah, Saeed Kamran Shaffi, Soraya Arzhan, John Cook, Christos Argyropoulos. Generalized mixed modeling in massive electronic health record databases: What is a healthy serum potassium? Submitted to SIAM Conference on Data Mining.
Abstract. Converting electronic health record (EHR) entries to useful clinical inferences requires one to address computational challenges due to the large number of repeated observations in individual patients. Unfortunately, the libraries of major statistical environments that implement generalized linear mixed models (GLMMs) for such analyses have been shown to scale poorly on big datasets. The major computational bottleneck is the numerical evaluation of multivariable integrals, which even for the simplest EHR analyses may involve hundreds of thousands or millions of dimensions (one for each patient). The Laplace approximation (LA) plays a major role in the development of the theory of GLMMs and can approximate integrals in high dimensions with acceptable accuracy. We thus examined the scalability of Laplace-based calculations for GLMMs.
Based on our experience with the HKBP, we conclude that the combination of the Laplace approximation and automatic differentiation offers a computationally efficient approach to the analysis of big repeated measures data with GLMMs.
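For readers unfamiliar with the technique, the Laplace approximation replaces an intractable integral of a sharply peaked integrand with the integral of a matching Gaussian centered at the integrand's mode. A minimal one-dimensional sketch in Python (illustrative only; the function name and the finite-difference curvature estimate are mine, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def laplace_approx(neg_log_f):
    """Approximate the integral of exp(-neg_log_f(u)) du over the real line.

    Finds the mode u_hat, estimates the curvature of -log f there by a
    finite difference, and integrates the matching Gaussian in closed form.
    """
    u_hat = minimize_scalar(neg_log_f).x
    h = 1e-4
    curv = (neg_log_f(u_hat + h) - 2 * neg_log_f(u_hat)
            + neg_log_f(u_hat - h)) / h**2
    return np.exp(-neg_log_f(u_hat)) * np.sqrt(2 * np.pi / curv)

# A sharply peaked integrand, checked against ordinary quadrature.
g = lambda u: 50 * (u - 0.3) ** 2 + np.log1p(u ** 2)
print(laplace_approx(g))                           # Laplace approximation
print(quad(lambda u: np.exp(-g(u)), -10, 10)[0])   # "exact" by quadrature
```

In the GLMM setting one such approximation is applied per patient-level random effect, which is why the method scales to integrals with millions of dimensions.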
Jay Schinfeld, Fady Sharara, Randy Morris, Gianpiero D. Palermo, Zev Rosenwaks, Eric Seaman, Steve Hirshberg, John Cook, Cristina Cardona, G. Charles Ostermeier, and Alexander J. Travis. Cap-Score™ Prospectively Predicts Probability of Pregnancy. Molecular Reproduction and Development, 2018.
Abstract. Semen analysis (SA) poorly predicts male fertility, because it does not assess sperm fertilizing ability. The percentage of capacitated sperm determined by GM1 localization (“Cap-Score™”) differs between cohorts of fertile and potentially infertile men, and retrospectively, between men conceiving or failing to conceive by intrauterine insemination (IUI). Here, we prospectively tested whether Cap-Score can predict male fertility with the outcome being clinical pregnancy within ≤3 IUI cycles. Cap-Score and SA were performed (n=208) with outcomes initially available for 91 men. Men were predicted to have either a low (n=47) or high (n=44) chance of generating pregnancy using previously defined Cap-Score reference ranges. Absolute and cumulative pregnancy rates were reduced in men predicted to have low pregnancy rates versus high [(absolute: 10.6% vs 29.5%, p=0.04); (cumulative: 4.3% vs 18.2%, 9.9% vs 29.1%, and 14.0% vs 32.8% for cycles 1-3, n=91, 64 and 41, p=0.02)]. Only Cap-Score, not male/female age or SA results, differed significantly between outcome groups. Logistic regression evaluated Cap-Score and SA results relative to the probability of generating pregnancy (PGP) for men who were successful in, or completed, 3 IUI cycles (n=57). Cap-Score was significantly related to PGP (p=0.01). The model fit was then tested with 67 additional patients (n=124; 5 clinics); the equation changed minimally, but the fit improved (p<0.001; margin of error: 4%). The Akaike Information Criterion found that the best model used Cap-Score as the only predictor. These data show that Cap-Score provides a practical, predictive assessment of male fertility, with applications in assisted reproduction and treatment of male infertility.
Yining Dua, John D. Cook, J. Jack Lee. Comparing Three Regularization Methods to Avoid Extreme Allocation Probability for Response Adaptive Randomization, Journal of Biopharmaceutical Studies, 2017.
Abstract. Under the Bayesian framework, the simple response adaptive randomization (SAR) scheme is to randomize patients to a treatment with probability p, which is a function of the probability that the treatment is better. We examine three variations on response-adaptive randomization (RAR) which are used to compromise between this scheme and equal randomization (ER) by varying the tuning parameter in the allocation probability. The first variation is to apply a power transformation (PT) to p to obtain randomization probabilities. The second is to clip p to live within specified lower and upper bounds. The last is to begin the trial with a burn-in period of equal randomization. In each method, both the mean proportion of patients treated on the superior arm and the mean response rate increased as one moved closer to SAR, while statistical power increased as one moved closer to ER. Without early stopping, the PT method put the most patients on the better arm given the same power. With the efficacy early stopping rule, the PT method provided both a higher mean proportion on the better treatment arm and a higher response rate, while the clip method had higher power. In terms of the mean number of patients enrolled in the trial, the PT method required the most patients and the burn-in method required the fewest.
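All three tunings are simple transformations of the posterior probability p. A hedged sketch of how they might be coded (parameter names are mine, not the paper's):

```python
def allocation_prob(p, method="power", lam=0.5, lo=0.2, hi=0.8,
                    n_enrolled=0, burn_in=20):
    """Randomization probability for the experimental arm, given the
    posterior probability p that it is the better treatment."""
    if method == "power":      # power transformation of p
        return p**lam / (p**lam + (1 - p)**lam)
    if method == "clip":       # restrict p to [lo, hi]
        return min(max(p, lo), hi)
    if method == "burn-in":    # equal randomization at first
        return 0.5 if n_enrolled < burn_in else p
    raise ValueError(method)
```

In the power transformation, lam = 0 recovers equal randomization and lam = 1 recovers SAR, which is exactly the compromise the abstract describes.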
John D. Cook, Robert Primmer, Ab de Kwant. Comparing cost and performance of replication and erasure coding. Hitachi Review, vol 63 (July 2014).
Abstract. Data storage systems are more reliable than their individual components. In order to build highly reliable systems out of less reliable parts, systems introduce redundancy. In replicated systems, objects are simply copied several times with each copy residing on a different physical device. While such an approach is simple and direct, more elaborate approaches such as erasure coding can achieve equivalent levels of data protection while using less redundancy. This report examines the trade-offs in cost and performance between replicated and erasure-coded storage systems.
John D. Cook. Approximating random inequalities with Edgeworth expansions (2012). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 78.
Abstract. Random inequalities of the form Prob(X > Y + δ) often appear as part of Bayesian clinical trial methods. Simulating trial designs could require calculating millions of random inequalities. When these inequalities require numerical integration, or worse, random sampling, the inequality calculations account for the large majority of the simulation time. In this paper we show how to approximate random inequalities using Edgeworth expansions. The calculations required to use these expansions can be done in closed form, as we will see below. Although the calculations are elementary, they are also somewhat tedious, and so we include Python code to illustrate how to use the approximations in practice. We make no distributional assumptions on the random variables X and Y other than requiring that the necessary moments exist. The accuracy of the approximation will depend on how well the densities of these random variables are approximated by the Edgeworth expansions.
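To convey the flavor of the approach (this is a one-term sketch, not the paper's code): write W = X − Y − δ, whose cumulants follow from those of X and Y by independence (mean E X − E Y − δ, variance Var X + Var Y, third central moment μ₃(X) − μ₃(Y)), and correct the normal tail probability for skewness.

```python
from scipy.stats import norm

def prob_positive_edgeworth(mean, var, skew):
    """One-term Edgeworth approximation to P(W > 0) given the mean,
    variance, and skewness of W = X - Y - delta."""
    z = -mean / var**0.5
    # Normal term plus the first Edgeworth (skewness) correction.
    return norm.sf(z) + norm.pdf(z) * (skew / 6.0) * (z**2 - 1.0)
```

Every quantity here is a closed-form function of moments, which is why the approximation is so much cheaper than numerical integration inside a simulation loop.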
John D. Cook. Fast approximation of gamma inequalities (2012). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 77.
Abstract. An approximation for computing P(X > Y + δ) for independent gamma random variables X and Y.
John D. Cook. Fast approximation of beta inequalities (2012). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 76.
Abstract. An approximation for computing P(X > Y + δ) for independent beta random variables X and Y.
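The two reports above give their own approximations; as a rough illustration of the genre (a generic moment-matching technique, not necessarily the reports' method), one can treat X − Y as approximately normal and reduce the inequality to a single CDF evaluation:

```python
from scipy.stats import norm

def prob_beta_gt_normal_approx(a1, b1, a2, b2, delta=0.0):
    """Crude approximation to P(X > Y + delta) for independent
    X ~ Beta(a1, b1) and Y ~ Beta(a2, b2): X - Y treated as normal
    with matched mean and variance."""
    m1, m2 = a1 / (a1 + b1), a2 / (a2 + b2)
    v1 = a1 * b1 / ((a1 + b1)**2 * (a1 + b1 + 1))
    v2 = a2 * b2 / ((a2 + b2)**2 * (a2 + b2 + 1))
    return norm.sf((delta - (m1 - m2)) / (v1 + v2)**0.5)
```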
John D. Cook. CRM: Prior means and medians (2012). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 73.
Abstract. Resolving the confusion between two ways of specifying a CRM dose-finding trial design.
John D. Cook. Random inequalities between survival and uniform distributions (2011). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 71.
Abstract. This note will look at ways of computing Prob(X > Y) where X follows a distribution used to model survival (gamma, inverse gamma, Weibull, log-normal) and Y has a uniform distribution. Each of these can be computed in closed form in terms of common statistical functions. We begin with analytical calculations and then include software implementations in R to make some of the details more explicit. Finally, we give a suggestion for using simulation to compute random inequalities that cannot be computed in closed form.
John D. Cook. Basic properties of the soft maximum (2011). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 70.
Abstract. This note presents the basic properties of the soft maximum, a smooth approximation to the maximum of two real variables. It concludes by looking at potential numerical difficulties with the soft maximum and how to avoid these difficulties.
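The soft maximum in question is log(exp(x) + exp(y)). Evaluated naively it overflows for moderately large arguments, which is the kind of numerical difficulty the note discusses; factoring out exp(max(x, y)) avoids it. A short sketch:

```python
import math

def soft_maximum(x, y):
    """log(exp(x) + exp(y)), computed without overflow by factoring
    exp(max(x, y)) out of the sum."""
    big, small = max(x, y), min(x, y)
    return big + math.log1p(math.exp(small - big))

print(soft_maximum(2.0, 3.0))       # ~3.313, a smoothed maximum
print(soft_maximum(1000.0, 999.0))  # the naive formula would overflow
```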
John D. Cook, Jairo Fúquene, Luis Pericchi. Skeptical and optimistic robust priors for clinical trials. Revista Colombiana de Estadística (2011) 34 no. 2, pp. 333–345.
Abstract. A useful technique from the subjective Bayesian viewpoint, suggested by Spiegelhalter et al. (1994), is to ask the subject matter researchers and other parties involved, such as pharmaceutical companies and regulatory bodies, for reasonable optimistic and pessimistic priors regarding the effectiveness of a new treatment. Up to now, the proposed skeptical and optimistic priors have been limited to conjugate priors, though there is no need for this limitation. The same reasonably adversarial points of view can be taken with robust priors. A recent reference in which robust priors are usefully applied to clinical trials is Fúquene, Cook, and Pericchi (2009). Our proposal in this paper is to use Cauchy and intrinsic robust priors for both skeptical and optimistic priors, leading to results that track the sampling data more closely when prior and data are in conflict. In other words, the use of robust priors removes the dogmatism implicit in conjugate priors.
John D. Cook. Block Adaptive Randomization (2011). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 63.
Abstract. This note proposes a block-adaptive randomization method to limit the length of runs in an outcome-adaptive randomized trial.
John D. Cook. Upper bounds on non-central chi-squared tails and truncated normal moments (2010). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 62.
Abstract. We show that moments of the truncated normal distribution provide upper bounds on the tails of the non-central chi-squared distribution, then develop upper bounds on these moments.
John D. Cook. Asymptotic results for Normal-Cauchy model (2010). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 61.
Abstract. This report proves asymptotic results for the posterior mean when sampling from a normal distribution with a Cauchy prior on the location parameter.
John D. Cook. How to test a random number generator (2009). Chapter 10 of Beautiful Testing: Leading Professionals Reveal How They Improve Software.
John D. Cook. Determining distribution parameters from quantiles.
Abstract. Bayesian statistics often requires eliciting prior probabilities from subject matter experts who are unfamiliar with statistics. While most people have an intuitive understanding of the mean of a probability distribution, fewer people understand variance as well, particularly in the context of asymmetric distributions. Prior beliefs may be more accurately captured by asking experts for quantiles rather than for means and variances. This note will explain how to solve for parameters so that common distributions satisfy two quantile conditions. We present algorithms for computing these parameters and point to corresponding software. The distributions discussed are normal, log-normal, Cauchy, Weibull, gamma, and inverse gamma. The method given for the normal and Cauchy distributions applies more generally to any location-scale family.
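For a location-scale family the two quantile conditions reduce to a 2×2 linear system. A sketch for the normal case, consistent with (but not copied from) the note:

```python
from scipy.stats import norm

def normal_from_quantiles(p1, x1, p2, x2):
    """Return (mu, sigma) so the normal CDF passes through (x1, p1)
    and (x2, p2): solve mu + sigma * z(p) = x for each pair."""
    z1, z2 = norm.ppf(p1), norm.ppf(p2)
    sigma = (x2 - x1) / (z2 - z1)
    return x1 - sigma * z1, sigma

# Expert: "the 10th percentile is 50 and the 90th percentile is 200."
mu, sigma = normal_from_quantiles(0.10, 50, 0.90, 200)
```

Non-location-scale families such as the gamma require solving the analogous pair of equations numerically rather than in closed form.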
John D. Cook. Exact calculation of inequality probabilities (2009). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 54.
Abstract. This note surveys results for computing the inequality probability Prob(X > Y) in closed form where X and Y are independent continuous random variables. Distribution families discussed include normal, Cauchy, gamma, inverse gamma, Lévy, folded normal, and beta. Mixture distributions are also discussed.
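The simplest of these closed forms is the normal case: X − Y is again normal, so the probability is a single CDF evaluation. A minimal sketch:

```python
from scipy.stats import norm

def prob_normal_gt(mu_x, sigma_x, mu_y, sigma_y):
    """P(X > Y) for independent X ~ N(mu_x, sigma_x^2) and
    Y ~ N(mu_y, sigma_y^2), using X - Y ~ N(mu_x - mu_y,
    sigma_x^2 + sigma_y^2)."""
    return norm.cdf((mu_x - mu_y) / (sigma_x**2 + sigma_y**2) ** 0.5)
```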
Jairo A. Fúquene P., John D. Cook, Luis Raúl Pericchi. A Case for Robust Bayesian priors with Applications to Binary Clinical Trials. Bayesian Analysis (2009) 4, Number 4, pp. 817–846.
Abstract. Bayesian analysis is frequently limited to conjugate Bayesian analysis, particularly in the analysis of clinical trial data. Even though conjugate analysis may be simpler computationally, the price to be paid is high: such analysis is not robust with respect to the prior, i.e., changing the prior may affect the conclusions without bound. Furthermore, conjugate Bayesian analysis is blind to the potential conflict between the prior and the data. On the other hand, robust priors have bounded influence. The prior is discounted automatically when there are conflicts between prior information and data. The original proposal of robust priors was made by de Finetti in the 1960s. However, the practice has not taken hold in important areas such as clinical trials, where conjugate priors are ubiquitous.
We show here how the Bayesian analysis of simple binary binomial data, after expressing it in exponential family form, is improved by employing Cauchy priors. Moreover, we also introduce into the analysis of clinical trials a robust prior originally developed by J.O. Berger that gives closed-form results when coupled with a normal log-odds likelihood. Berger's prior yields the superior robust analysis with no added computational complication compared to the conjugate analysis. We illustrate the results with famous textbook examples and with a data set and a prior from a previous trial.
On the formal side, we give here a theorem that we call the “Polynomial Tails Comparison Theorem.” This theorem establishes the analytical behavior of any likelihood function with tails bounded by a polynomial when used with priors with polynomial-order tails, such as Cauchy or Student's t. The likelihood does not have to be a location family or an exponential family distribution, and the conditions of the theorem are easily verifiable. For Berger's prior, robustness can be established directly since the exact expressions for posterior moments are known.
John D. Cook. Inequality Probabilities for Folded Normal Random Variables (2009). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 52.
Abstract. This note explains how to calculate the probability Prob(|X| > |Y|) for normal random variables X and Y. (A random variable formed by taking the absolute value of a normal random variable is known as a folded normal random variable.) When X and Y have equal variance, a simple expression is obtained. Otherwise the calculation reduces to a well-known problem.
Valen E. Johnson, John D. Cook. Bayesian Design of Single-Arm Phase II Clinical Trials with Continuous Monitoring. Clinical Trials 2009; 6(3):217–26.
Abstract. Many “Bayesian” clinical trial designs use posterior credible intervals as tools to define stopping boundaries for inferiority, futility, or superiority. However, the thresholds on posterior credible intervals that trigger termination of a trial are determined by frequentist operating characteristics. This practice can result in substantial overlap between the credible intervals associated with, say, stopping a trial for superiority and stopping a trial for inferiority, which severely limits the interpretation of posterior probability statements. In this article, we use formal Bayesian hypothesis tests to design single-arm phase II clinical trials. By using non-local prior densities to define null and alternative models, we obtain exponential convergence of Bayes factors under both null and alternative models. We show that, compared to other commonly used Bayesian and frequentist designs, our method provides better operating characteristics, uses fewer patients per correct decision, and provides more directly interpretable results. We also demonstrate that designs based on Bayesian hypothesis tests eliminate a potential source of bias often associated with Bayesian trial designs.
John D. Cook, Luis Raúl Pericchi. Information and Cross-Entropic Approaches, In: Lauretto MS; Pereira CAB; Stern JM (Org.), Bayesian Methods and Maximum Entropy Methods in Science and Engineering 28, Melville: AIP — American Institute of Physics, 2008, v 28, pp 278–285.
Abstract. In a recent working paper, Fúquene, Cook, and Pericchi make a comprehensive proposal putting forward robust, heavy-tailed priors over conjugate, light-tailed priors in Bayesian analysis. The paper focuses particularly on clinical trials, where information from previous trials should be used in a non-dogmatic fashion, suggesting the use of robust priors. Robust priors have bounded influence on the posterior distribution, and their influence is inversely related to the conflict between the data in the previous and current trials. Clearly, the likelihood has to be taken into consideration and not only the prior. We explore here a novel proposal based on a cross-entropy measure of comparison between different models, in which the expectations of the log ratio of the evidences of the contender models are taken with respect to each of the different models and then compared. We also compute the expected increase of information within each model. Both criteria seem to justify the use of robust priors.
John D. Cook. Exact operating characteristics for single-arm Phase II trials (2008). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 45.
Abstract. Simulation is so widely used in studying the operating characteristics of clinical trials that we may forget that simulation is not always necessary. This note gives an algorithm for computing the operating characteristics of a stopping rule for a single-arm phase II clinical trial exactly.
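The idea behind such an algorithm can be sketched as a dynamic program over (patients treated, responses seen): propagate the binomial state probabilities forward one patient at a time, diverting any mass that hits the stopping boundary. A hedged sketch (the boundary below is an illustrative stand-in, not the paper's rule):

```python
def stopping_probability(p, n_max, stop):
    """Exact probability that a single-arm trial stops early.

    p      -- true response probability
    n_max  -- maximum number of patients
    stop   -- stop(n, r) is True if the rule halts after seeing
              r responses among n patients
    """
    probs = {0: 1.0}    # P(r responses | trial still running)
    stopped = 0.0
    for n in range(1, n_max + 1):
        nxt = {}
        for r, q in probs.items():      # add one patient's outcome
            nxt[r] = nxt.get(r, 0.0) + q * (1 - p)
            nxt[r + 1] = nxt.get(r + 1, 0.0) + q * p
        probs = {}
        for r, q in nxt.items():        # divert mass at the boundary
            if stop(n, r):
                stopped += q
            else:
                probs[r] = q
    return stopped

# Illustrative rule: stop if under 20% responses after 10+ patients.
print(stopping_probability(0.3, 40, lambda n, r: n >= 10 and r < 0.2 * n))
```

Because the state space has at most n_max + 1 response counts at each step, the exact calculation is far cheaper than the simulation it replaces.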
Yisheng Li, Benjamin Bekele, Yuan Ji, and John D. Cook. Dose-Schedule Finding in Phase I/II Clinical Trials Using Bayesian Isotonic Transformation (2008). Statistics in Medicine 27:4895–4913
Abstract. The intent of most phase I oncology trials is to determine the maximum-tolerated dose (MTD) of an experimental treatment. One of the main considerations apart from determining the MTD is determining an appropriate schedule for administration of the treatment. Historically, schedules have been fixed prior to the start of dose finding. Recently, an increasing number of trials have been designed to determine the MTDs during a phase I component and subsequently determine a schedule during a phase II component. In this paper, we propose a Bayesian design for dose-schedule finding by jointly modeling binary toxicity and efficacy outcomes. Assuming the probability of toxicity follows an order constraint between schedules, we apply a Bayesian isotonic transformation approach to estimating the constrained parameters. We select a dose-schedule combination based on the joint posterior distribution of toxicity and efficacy. Using simulation studies for a hypothetical and a practical cancer clinical trial, we demonstrate that the proposed design performs well under different clinical scenarios.
John D. Cook. The Effect of Population Drift on Adaptively Randomized Trials (2007). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 39.
Abstract. Adaptively randomized trials aim to treat patients in clinical trials more effectively by increasing the probability of assigning treatments that appear to have a higher probability of response. Studies of adaptive randomization to date have assumed constant probabilities of response on each treatment. This paper examines the effect of response probabilities that change over time due to population drift.
Harry T. Whelan et al. Practical model-based dose-finding in early phase clinical trials: Optimizing tPA dose for treatment of ischemic stroke in children. Stroke 39 (2008) 2627–2636.
Abstract. A safe and effective tissue plasminogen activator (tPA) dose for childhood stroke has not been established. This paper describes a Bayesian outcome-adaptive method for determining the best dose of an experimental agent, and explains how this method was used to design a dose-finding trial for tPA in childhood acute ischemic stroke (AIS).
J. Kyle Wathen, Peter F. Thall, John D. Cook, Elihu H. Estey, Accounting for Patient Heterogeneity in Phase II Clinical Trials. Statistics in Medicine. 27 (2008) 2802–2815.
Abstract. Phase II clinical trials typically are single-arm studies conducted to decide whether an experimental treatment is sufficiently promising, relative to standard treatment, to warrant further investigation. Many methods exist for conducting phase II trials under the assumption that patients are homogeneous. In the presence of patient heterogeneity, however, these designs are likely to draw incorrect conclusions. We propose a class of model-based Bayesian designs for single-arm phase II trials with a binary or time-to-event outcome and two or more prognostic subgroups. The designs’ early stopping rules are subgroup specific and allow the possibility of terminating some subgroups while continuing others, thus providing superior results when compared with designs that ignore treatment-subgroup interactions. Because our formulation requires informative priors on standard treatment parameters and subgroup main effects, and non-informative priors on experimental treatment parameters and treatment-subgroup interactions, we provide an algorithm for computing prior hyperparameter values. A simulation study is presented and the method is illustrated by a chemotherapy trial in acute leukemia.
Marcos de Lima et al. Phase I/II study of gemtuzumab ozogamicin added to fludarabine, melphalan and allogeneic hematopoietic stem cell transplantation for high-risk CD33 positive myeloid leukemias and myelodysplastic syndrome. Leukemia 22 (2008) pp 258–264.
Abstract. We investigated the hypothesis that gemtuzumab ozogamicin (GO), an anti-CD33 immunotoxin, would improve the efficacy of fludarabine/melphalan as a preparative regimen for allogeneic hematopoietic stem cell transplantation (HSCT) in a phase I/II trial. Toxicity was defined as grades III–IV organ damage, engraftment failure or death within 30 days. ‘Response’ was engraftment and remission (CR) on day +30. We sought to determine the GO dose (2, 4 or 6 mg m-2) giving the best trade-off between toxicity and response. None of the patients were candidates for myeloablative regimens. Treatment plan: GO (day -12), fludarabine 30 mg m-2 (days -5 to -2), melphalan 140 mg m-2 (day -2) and HSCT (day 0). GVHD prophylaxis was tacrolimus and mini-methotrexate. Diagnoses were AML (n=47), MDS (n=4) or CML (n=1). Median age was 53 years (range, 13–72). All but three patients were not in CR. Donors were related (n=33) or unrelated (n=19). Toxicity and response rates at 4 mg m-2 were 50% (n=4) and 50% (n=4). GO dose was de-escalated to 2 mg m-2: 18% had toxicity (n=8) and 82% responded (n=36). 100-day TRM was 15%; one patient had reversible hepatic VOD. Median follow-up was 37 months. Median event-free and overall survival was 6 and 11 months. GO 2 mg m-2 can be safely added to fludarabine/melphalan, and this regimen merits further evaluation.
John D. Cook. Comparing Methods of Tuning Adaptively Randomized Trials (2007). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 32.
Abstract. The simplest Bayesian adaptive randomization scheme is to randomize patients to a treatment with probability equal to the probability p that the treatment is better. We examine three variations on adaptive randomization which are used to compromise between this scheme and equal randomization. The first variation is to apply a power transformation to p to obtain randomization probabilities. The second is to clip p to live within specified lower and upper bounds. The third is to begin the trial with a burn-in period of equal randomization. We illustrate how each approach affects statistical power and the number of patients assigned to each treatment. We conclude with recommendations for designing adaptively randomized clinical trials.
John D. Cook. Understanding the Exponential Tuning Parameter in Adaptively Randomized Trials (2006). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 27.
Abstract. We examine the effect of a parameter λ used to calibrate how responsive randomization probabilities are to observed data in an adaptively randomized clinical trial. We define and motivate the parameter λ and demonstrate how varying this parameter affects the operating characteristics of example clinical trial designs.
John D. Cook and Saralees Nadarajah. Stochastic Inequality Probabilities for Adaptively Randomized Clinical Trials. Biometrical Journal 48 (2006) pp 356–365.
Abstract. We examine stochastic inequality probabilities of the form Prob(X > Y) and Prob(X > max(Y, Z)) where X, Y, and Z are random variables with beta, gamma, or inverse gamma distributions. We discuss the applications of such inequality probabilities to adaptively randomized clinical trials as well as methods for calculating their values.
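Conditioning on X gives P(X > max(Y, Z)) = ∫ f_X(x) F_Y(x) F_Z(x) dx, which a quadrature routine handles directly. A sketch for the beta case (the specific parameters are illustrative):

```python
from scipy import stats
from scipy.integrate import quad

def prob_x_gt_max(x_dist, y_dist, z_dist):
    """P(X > max(Y, Z)) for independent continuous X, Y, Z, computed
    by conditioning on X: integrate f_X(x) * F_Y(x) * F_Z(x)."""
    integrand = lambda x: x_dist.pdf(x) * y_dist.cdf(x) * z_dist.cdf(x)
    lo, hi = x_dist.support()
    return quad(integrand, lo, hi)[0]

p = prob_x_gt_max(stats.beta(5, 5), stats.beta(4, 6), stats.beta(3, 7))
```

The same one-dimensional integral works for the gamma and inverse gamma cases by swapping in the corresponding scipy distributions.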
Peter F. Thall and John D. Cook. Using both efficacy and toxicity for dose-finding. In S. Chevret (ed), Statistical Methods for Dose Finding Experiments. New York: John Wiley & Sons, June 2006.
Peter F. Thall and John D. Cook. Adaptive dose-finding based on efficacy-toxicity trade-offs. Encyclopedia of Biopharmaceutical Statistics, 2nd edition, 2006, Shein-Chung Chow, editor.
Peter F. Thall, John D. Cook, and Elihu H. Estey. Adaptive dose selection using efficacy-toxicity trade-offs: illustrations and practical considerations. J Biopharmaceutical Stat 16:623–638 (2006).
Abstract. The purpose of this paper is to describe and illustrate an outcome-adaptive Bayesian procedure, proposed by Thall and Cook (2004), for assigning doses of an experimental treatment to successive cohorts of patients. The method uses elicited (efficacy, toxicity) probability pairs to construct a family of trade-off contours that are used to quantify the desirability of each dose. This provides a basis for determining a best dose for each cohort. The method combines the goals of conventional Phase I and Phase II trials, and thus may be called a “Phase I-II” design. We first give a general review of the probability model and dose-finding algorithm. We next describe an application to a trial of a biologic agent for treatment of acute myelogenous leukemia, including a computer simulation study to assess the design’s average behavior. To illustrate how the method may work in practice, we present a cohort-by-cohort example of a particular trial. We close with a discussion of some practical issues that may arise during implementation.
John D. Cook. Efficacy-toxicity trade-offs based on Lp norms (2006). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 29.
Abstract. This report examines in detail a family of efficacy-toxicity trade-off functions simpler and more general than those originally proposed in [1]. The new trade-off functions are based on distance in Lp norm to the ideal point and were first presented in [2]. We define and illustrate these functions and demonstrate how to compute their parameters based on elicited values.
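As a rough illustration of the construction (the parameterization below is my reading of the description above, not copied from the report): scale the efficacy shortfall and the toxicity probability by elicited trade-off points, then measure distance to the ideal point (efficacy 1, toxicity 0) in the Lp norm.

```python
def desirability(pi_e, pi_t, pi_e_star, pi_t_star, p=2.0):
    """Distance-based desirability of an (efficacy, toxicity) pair.

    pi_e_star -- elicited efficacy judged just acceptable at zero toxicity
    pi_t_star -- elicited toxicity judged just acceptable at full efficacy
    The contour r = 1 passes through both elicited points; smaller r
    (larger desirability) is better.  Assumes pi_e_star < 1.
    """
    r = (((1 - pi_e) / (1 - pi_e_star)) ** p
         + (pi_t / pi_t_star) ** p) ** (1 / p)
    return 1 - r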
J. Kyle Wathen and John D. Cook. Power and bias in adaptively randomized clinical trials (2006). Technical Report UTMDABTR-002-06.
Abstract. This report examines the operating characteristics of adaptively randomized trials relative to equally randomized trials in regard to power and bias. We also examine the number of patients in the trial assigned to the superior treatment. The effects of prior selection, sample size, and patient prognostic factors are investigated for both binary and time-to-event outcomes.
John D. Cook. Numerical evaluation of gamma inequalities (2006). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 30.
Abstract. This paper addresses the problem of numerically evaluating the probabilities P(X > Y), P(X > max(Y,Z)), and P(X < min(Y,Z)) where X, Y, and Z are independent gamma or inverse gamma random variables.
John D. Cook. Continuous safety monitoring in single-arm, time-to-event trials without software (2005). Technical Report UTMDABTR-006-05.
Abstract. This note concerns trial conduct for one-arm trials that monitor safety by comparing time-to-event outcomes of the experimental treatment to an historical treatment. To date, such trials have been conducted using software which evaluates the stopping rule as the trial progresses. We show that it is possible to pre-calculate the stopping conditions, simplifying trial conduct and opening up new possibilities.
John D. Cook. Exact calculation of beta inequalities (2005). Technical Report UTMDABTR-005-05.
Abstract. This paper addresses the problem of evaluating Prob(X > Y) where X and Y are independent beta random variables. We cast the problem in terms of a hypergeometric function and use hypergeometric identities to calculate the probability in closed form for certain values of the distribution parameters.
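For positive integer parameters, one convenient way to organize such closed forms is as recurrences built on the constant h(a, b, c, d) = B(a+c, b+d) / (B(a, b) B(c, d)): raising a or d adds h divided by that parameter to P(X > Y), while raising b or c subtracts it. A sketch (my own implementation of these recurrences, checked against direct integration on small cases) that walks up from the uniform-vs-uniform base case g(1,1,1,1) = 1/2:

```python
from math import exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def h(a, b, c, d):
    return exp(log_beta(a + c, b + d) - log_beta(a, b) - log_beta(c, d))

def prob_beta_gt(a, b, c, d):
    """P(X > Y) for independent X ~ Beta(a, b), Y ~ Beta(c, d) with
    positive integer parameters, built up by recurrence."""
    g, s = 0.5, [1, 1, 1, 1]
    # Effect of raising each parameter by one: (index, sign of h-term).
    for idx, sign in [(0, +1), (1, -1), (2, -1), (3, +1)]:
        target = (a, b, c, d)[idx]
        while s[idx] < target:
            g += sign * h(*s) / s[idx]
            s[idx] += 1
    return g

print(prob_beta_gt(2, 1, 1, 1))  # 2/3: Beta(2,1) beats a uniform
```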
Peter F. Thall and John D. Cook. Dose-finding based on efficacy-toxicity trade-offs (2004). Biometrics 60:684–693.
Abstract. We present an adaptive Bayesian method for dose-finding in phase I/II clinical trials based on trade-offs between the probabilities of treatment efficacy and toxicity. The method accommodates either trinary or bivariate binary outcomes, as well as efficacy probabilities that are potentially non-monotone in dose. Doses are selected for successive patient cohorts based on a set of efficacy-toxicity trade-off contours that partition the two-dimensional outcome probability domain. Priors are established by solving for hyperparameters which optimize the fit of the model to elicited mean outcome probabilities. For trinary outcomes, the new algorithm is compared to the method of Thall and Russell by application to a trial of rapid treatment for ischemic stroke. The bivariate binary outcome case is illustrated by a trial of graft-versus-host disease prophylaxis in allogeneic bone marrow transplantation. Computer simulations show that, under a wide range of dose-outcome scenarios, the new method has high probabilities of making correct decisions and treats most patients at doses with desirable efficacy-toxicity trade-offs.
John D. Cook. Simulation results for phase II clinical trial durations (2004). Technical Report UTMDABTR-014-04.
Abstract. This paper investigates the effect of cohort size on phase II clinical trial duration by doing a simulation study of a monitoring method of Thall and Simon. We challenge the assumptions that larger cohort sizes lead to shorter trials and that continuous monitoring is impractical.
John D. Cook. Numerical computation of stochastic inequality probabilities (2003). UT MD Anderson Cancer Center Department of Biostatistics Working Paper Series. Working Paper 46.
Abstract. This paper addresses the problem of numerically evaluating Prob(X > Y) for independent continuous random variables X and Y. This calculation arises in the design of clinical trials and as such appears in the inner loop of simulations of these trials. An early example of this is given in (Thompson 1933). More recent examples are given in (Giles et al 2003), (Berry 2003a, 2003b). It is worthwhile to optimize the calculation of these probabilities as they may be computed millions of times in the course of simulating a single trial. Techniques such as memoization (Orwant 2002) can eliminate redundant calculations of such probabilities over a simulation but the need for a large number of evaluations remains. After considering how to compute Prob(X > Y) in general, we present optimizations for important special cases in which X and Y both belong to one of the following families of classical distributions: exponential, gamma, inverse gamma, normal, Cauchy, beta, and Weibull.
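The memoization mentioned above is a one-line affair in modern Python: wrapping the inequality calculation in a cache removes redundant evaluations within a simulated trial. A sketch, with an arbitrary quadrature-based inner routine standing in for whichever method the special cases call for:

```python
from functools import lru_cache
from scipy import stats
from scipy.integrate import quad

@lru_cache(maxsize=None)
def prob_gt(a1, b1, a2, b2):
    """P(X > Y) for X ~ Beta(a1, b1), Y ~ Beta(a2, b2), cached so that
    a posterior state revisited during simulation is computed only once."""
    x, y = stats.beta(a1, b1), stats.beta(a2, b2)
    return quad(lambda t: y.pdf(t) * x.sf(t), 0.0, 1.0)[0]
```

In an adaptively randomized trial the posterior parameters take only a limited set of values (counts of successes and failures), so the cache hit rate is high.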
John D. Cook and Ralph E. Showalter. Microstructure Diffusion Models with Secondary Flux (1995). Journal of Mathematical Analysis and Applications, pp. 731–756.
Abstract. Totally fissured media in which the cells are isolated by the fissure system are effectively described by double porosity models with microstructure. These models contain the geometry of the individual cells or pores in the medium and the flux across their interface with the fissures which surround them. We extend these models to include the case of partially fissured media in which a secondary flux effect arises from cell-to-cell diffusion paths. These quasi-linear problems are formulated in appropriate spaces in which the cells respond to the local linearization of the fissure pressure. It is shown that they are well-posed and the solutions depend continuously on parameters that determine the models.
John D. Cook and Ralph E. Showalter. Distributed Systems of PDE in Hilbert Space (1993) Differential and Integral Equations, Vol 6 No 5, Sept 1993, pp. 981–994
Abstract. We present a system of two nonlinear evolution equations and a corresponding approximating system which provide a common framework for studying distributed microstructure models and a variety of other models for transport and diffusion in heterogeneous media. Existence and uniqueness are demonstrated using semigroup methods, and solutions to the approximating system are shown to converge strongly to the solution of the limiting system. In the microstructure case, new results are obtained, and additional PDE examples are provided to show that in general, certain hypotheses cannot be removed.
John D. Cook. A Stefan Problem on a Region and its Boundary (1993). Applicable Analysis, Vol 57, No 3–4 (1995), pp 367–381.
Abstract. This paper considers a system of equations consisting of a nonlinear evolution equation on an open set Ω in Rⁿ coupled to another nonlinear evolution equation on the boundary ∂Ω. Rather general assumptions are made concerning the operators involved and the coupling between the two problems. Existence and uniqueness are demonstrated via a semigroup of nonlinear operators on L¹(Ω) × L¹(∂Ω).
John D. Cook. Separation of convex sets in linear topological spaces (1988)
Abstract. This paper discusses under what conditions two disjoint convex subsets of a linear topological space can be separated by a continuous linear functional. The equivalence of several forms of the Hahn-Banach theorem is proven. The separation problem is considered in linear topological spaces, locally convex linear topological spaces, Banach spaces, and finally finite dimensional Banach spaces. A number of examples are included to show the necessity of the hypotheses of various theorems.