This R Markdown document provides example code for the definition of the most commonly used group sequential boundaries in rpact.
In rpact, sample size calculation for a group sequential trial proceeds in the same two steps regardless of whether the endpoint is continuous, binary, or time-to-event:

1. Definition of the group sequential design using getDesignGroupSequential().
2. Calculation of the sample size for that design using getSampleSizeMeans() (for continuous endpoints), getSampleSizeRates() (for binary endpoints), or getSampleSizeSurvival() (for survival endpoints).

The mathematical rationale for this two-step approach is that all group sequential trials, regardless of the chosen endpoint type, rely on the fact that the \(z\)-scores at different interim stages follow the same “canonical joint multivariate distribution” (at least asymptotically).

This document covers the more abstract first step. Step 2 is not covered here but in the separate endpoint-specific R Markdown files for continuous, binary, and time-to-event endpoints. Of note, step 1 can be omitted for trials without interim analyses.
These examples are not intended to replace the official rpact documentation and help pages but rather to supplement them.
In general, rpact supports both one-sided and two-sided group sequential designs. If futility boundaries are specified, however, only one-sided tests are permitted. For simplicity, it is often preferred to use one-sided tests for group sequential designs (typically, with \(\alpha = 0.025\)).
First, load the rpact package:
library(rpact)
packageVersion("rpact") # version should be 2.0.5 or later
## [1] '3.3.2'
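As noted above, rpact also supports two-sided designs. A minimal sketch of a two-sided three-stage design (the settings \(\alpha = 0.05\) and the information schedule are illustrative choices, not recommendations):

```r
library(rpact)

# Two-sided three-stage design with O'Brien & Fleming type alpha-spending;
# with sided = 2, the overall two-sided alpha (here 0.05) is split
# symmetrically between the two rejection regions
design <- getDesignGroupSequential(
  sided = 2, alpha = 0.05,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "asOF"
)

# Critical values (z-scale) decrease towards the final analysis
design$criticalValues
```

Two-sided designs cannot be combined with futility boundaries, which is one reason one-sided designs with \(\alpha = 0.025\) are often preferred in practice.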
Example: O'Brien & Fleming type \(\alpha\)-spending function (typeOfDesign = "asOF"), one-sided \(\alpha = 0.025\), with interim analyses after 33%, 67%, and 100% of the information (specified via informationRates = c(0.33, 0.67, 1)). [Note: For equally spaced interim analyses, one can also specify the maximum number of stages (kMax, including the final analysis) instead of the informationRates.]

design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "asOF"
)
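The resulting boundaries can be inspected directly on the design object. A sketch, repeating the design defined above:

```r
library(rpact)

# Three-stage design with O'Brien & Fleming type alpha-spending (as above)
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "asOF"
)

# Critical values (z-scale) and local one-sided significance levels per stage
design$criticalValues
design$stageLevels

# Human-readable overview of the design
summary(design)
```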
The originally published O'Brien & Fleming boundaries are obtained via typeOfDesign = "OF", which is also the default (therefore, if you do not specify typeOfDesign, this type is selected). Note that strict Type I error control is only guaranteed for standard boundaries without \(\alpha\)-spending if the pre-defined interim schedule (i.e., the information fractions at which interim analyses are conducted) is exactly adhered to.

design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "OF"
)
Pocock boundaries (typeOfDesign = "P" for constant boundaries over the stages, typeOfDesign = "asP" for the corresponding \(\alpha\)-spending version) or Haybittle & Peto boundaries (typeOfDesign = "HP"; reject at an interim if the \(z\)-value exceeds 3) are obtained with, for example,

design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "P"
)
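The Haybittle & Peto variant mentioned above is requested analogously. A minimal sketch:

```r
library(rpact)

# Haybittle & Peto boundaries: reject at an interim only if z > 3;
# the final critical value is adjusted so that the overall alpha is preserved
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "HP"
)
design$criticalValues
```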
Other supported boundary types include:

- Kim & DeMets \(\alpha\)-spending (typeOfDesign = "asKD") with parameter gammaA (power function: gammaA = 1 is linear spending, gammaA = 2 quadratic)
- Hwang, Shih & DeCani \(\alpha\)-spending (typeOfDesign = "asHSD") with parameter gammaA (for details, see Wassmer & Brannath 2016, p. 76)
- Wang & Tsiatis boundaries (typeOfDesign = "WT") and optimum Wang & Tsiatis boundaries (typeOfDesign = "WToptimum")

# Quadratic Kim & DeMets alpha-spending
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "asKD", gammaA = 2
)
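The asHSD and WT types listed above are specified analogously; in these sketches, gammaA = -4 and deltaWT = 0.25 are illustrative parameter choices:

```r
library(rpact)

# Hwang, Shih & DeCani alpha-spending; negative gammaA spends
# little alpha early (conservative interim boundaries)
designHSD <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "asHSD", gammaA = -4
)

# Wang & Tsiatis Delta-class boundaries; deltaWT = 0 corresponds to
# O'Brien & Fleming and deltaWT = 0.5 to Pocock boundaries
designWT <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "WT", deltaWT = 0.25
)

designHSD$criticalValues
designWT$criticalValues
```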
User-defined \(\alpha\)-spending
functions (typeOfDesign = "asUser"
) can be obtained via the
argument userAlphaSpending
which must contain a numeric
vector with elements \(0< \alpha_1 <
\ldots < \alpha_{kMax} = \alpha\) that define the values of
the cumulative alpha-spending function at each interim analysis.
# Example: User-defined alpha-spending function which is very conservative at
# first interim (spend alpha = 0.001), conservative at second (spend an additional
# alpha = 0.01, i.e., total cumulative alpha spent is 0.011 up to second interim),
# and spends the remaining alpha at the final analysis (i.e., cumulative
# alpha = 0.025)
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1),
  typeOfDesign = "asUser",
  userAlphaSpending = c(0.001, 0.01 + 0.001, 0.025)
)

# design$stageLevels extracts the local significance levels across the analyses.
# Note that the local significance level is exactly 0.001 at the first
# interim, but slightly >0.01 at the second interim because the design
# exploits the correlation between the interim test statistics.
design$stageLevels

## [1] 0.00100000 0.01052883 0.02004781
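Alongside the local levels, the cumulative \(\alpha\) spent up to each analysis can be read off the design object. A sketch, repeating the user-defined design above:

```r
library(rpact)

# Same user-defined alpha-spending design as above
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1),
  typeOfDesign = "asUser",
  userAlphaSpending = c(0.001, 0.011, 0.025)
)

# Cumulative alpha spent up to each analysis; for typeOfDesign = "asUser"
# this reproduces the userAlphaSpending vector
design$alphaSpent
```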
The argument futilityBounds contains a vector of futility bounds (on the \(z\)-value scale) for each interim (but not the final analysis). Futility bounds that are part of the design object are taken into account by the sample size calculation functions getSampleSizeMeans() (for continuous endpoints), getSampleSizeRates() (for binary endpoints), and getSampleSizeSurvival() (for survival endpoints); please see the R Markdown files for these endpoint types for further details. By default, futility bounds are non-binding (bindingFutility = FALSE). Binding futility boundaries (bindingFutility = TRUE) are not recommended, although they are provided for the sake of completeness.

# Example: non-binding futility boundary at each interim in case the
# estimated treatment effect is null or goes in "the wrong direction"
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025,
  informationRates = c(0.33, 0.67, 1), typeOfDesign = "asOF",
  futilityBounds = c(0, 0), bindingFutility = FALSE
)
Formal \(\beta\)-spending functions are defined in the same way as \(\alpha\)-spending functions; e.g., a Pocock type \(\beta\)-spending function can be specified as typeBetaSpending = "bsP". In addition, beta needs to be specified; the default is beta = 0.20.

# Example: beta-spending function approach with O'Brien & Fleming alpha-spending
# function and Pocock beta-spending function
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025, beta = 0.1,
  typeOfDesign = "asOF",
  typeBetaSpending = "bsP"
)
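The futility bounds implied by the \(\beta\)-spending function can be extracted from the design object. A sketch, repeating the design above (with the default of three equally spaced stages, so there are two interim futility bounds):

```r
library(rpact)

# Same beta-spending design as above (default kMax = 3 equally spaced stages)
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025, beta = 0.1,
  typeOfDesign = "asOF",
  typeBetaSpending = "bsP"
)

# Futility bounds (z-scale) at the two interims implied by the
# beta-spending function
design$futilityBounds
```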
Another way to formally derive futility bounds is the Pampallona & Tsiatis approach. It is selected by setting typeOfDesign = "PT" and specifying two parameters, deltaPT1 (shape of the decision regions for rejecting the null) and deltaPT0 (shape of the shifted decision regions for rejecting the alternative), for example:

# Example: Pampallona & Tsiatis approach with O'Brien & Fleming type boundaries
# for rejecting the null and Pocock type boundaries for rejecting the alternative
design <- getDesignGroupSequential(
  sided = 1, alpha = 0.025, beta = 0.1,
  typeOfDesign = "PT",
  deltaPT1 = 0, deltaPT0 = 0.5
)
Note that both the \(\beta\)-spending and the Pampallona & Tsiatis approaches can be one-sided or two-sided, and the bounds for rejecting the alternative can be either binding (bindingFutility = TRUE) or non-binding (bindingFutility = FALSE).
Designs with interim analyses for futility only can be implemented by using a user-defined \(\alpha\)-spending function which spends all of the Type I error at the final analysis. Note that such designs do not allow stopping for efficacy regardless of how persuasive the effect is.
# Example: non-binding futility boundary using an O'Brien & Fleming type
# beta spending function. No early stopping for efficacy (i.e., all alpha