Title: Confirmatory Adaptive Clinical Trial Design and Analysis
Description: Design and analysis of confirmatory adaptive clinical trials with continuous, binary, and survival endpoints according to the methods described in the monograph by Wassmer and Brannath (2016) <doi:10.1007/978-3-319-32562-0>. This includes classical group sequential as well as multi-stage adaptive hypotheses tests that are based on the combination testing principle.
Authors: Gernot Wassmer [aut]
Maintainer: Friedrich Pahlke <[email protected]>
License: LGPL-3
Version: 4.1.1.9283
Built: 2025-03-06 13:32:30 UTC
Source: https://github.com/rpact-com/rpact
Calculates the multivariate normal distribution with product correlation structure published by Charles Dunnett: Algorithm AS 251.1, Appl. Statist. (1989), Vol. 38, No. 3, doi:10.2307/2347754.
as251Normal( lower, upper, sigma, ..., eps = 1e-06, errorControl = c("strict", "halvingIntervals"), intervalSimpsonsRule = 0 )
lower: Lower limits of integration. Array of N dimensions.
upper: Upper limits of integration. Array of N dimensions.
sigma: Values defining the correlation structure. Array of N dimensions.
...: Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed.
eps: Desired accuracy. Defaults to 1e-06.
errorControl: Error control. If "strict", strict error control based on the fourth derivative is used; if "halvingIntervals", error control based on halving intervals is used.
intervalSimpsonsRule: Interval width for Simpson's rule. A value of zero causes the default 0.24 to be used.
For a multivariate normal vector with correlation structure defined by rho(i,j) = bpd(i) * bpd(j), this computes the probability that the vector falls in a rectangle in n-space with error less than eps. The function calculates the bpd values from sigma, determines the appropriate inf value, and calls mvnprd.
Calculates the multivariate normal distribution with product correlation structure published by Charles Dunnett: Algorithm AS 251.1, Appl. Statist. (1989), Vol. 38, No. 3, doi:10.2307/2347754.
as251StudentT( lower, upper, sigma, ..., df, eps = 1e-06, errorControl = c("strict", "halvingIntervals"), intervalSimpsonsRule = 0 )
lower: Lower limits of integration. Array of N dimensions.
upper: Upper limits of integration. Array of N dimensions.
sigma: Values defining the correlation structure. Array of N dimensions.
...: Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed.
df: Degrees of freedom. Use 0 for infinite degrees of freedom.
eps: Desired accuracy. Defaults to 1e-06.
errorControl: Error control. If "strict", strict error control based on the fourth derivative is used; if "halvingIntervals", error control based on halving intervals is used.
intervalSimpsonsRule: Interval width for Simpson's rule. A value of zero causes the default 0.24 to be used.
For a multivariate t vector (with df degrees of freedom; df = 0 corresponds to the normal case) with correlation structure defined by rho(i,j) = bpd(i) * bpd(j), this computes the probability that the vector falls in a rectangle in n-space with error less than eps. The function calculates the bpd values from sigma, determines the appropriate inf value, and calls mvstud.
Returns an AccrualTime object that contains the accrual time and the accrual intensity.
getAccrualTime( accrualTime = NA_real_, ..., accrualIntensity = NA_real_, accrualIntensityType = c("auto", "absolute", "relative"), maxNumberOfSubjects = NA_real_ )
accrualTime: The assumed accrual time intervals for the study (see details).
...: Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed.
accrualIntensity: A numeric vector of accrual intensities; the default is the relative intensity 0.1 (see details).
accrualIntensityType: A character value specifying the accrual intensity input type. Must be one of "auto", "absolute", or "relative"; default is "auto".
maxNumberOfSubjects: The maximum number of subjects.
Returns an AccrualTime object. The following generics (R generic functions) are available for this result object: names() to obtain the field names, print() to print the object, summary() to display a summary of the object, plot() to plot the object, as.data.frame() to coerce the object to a data.frame, and as.matrix() to coerce the object to a matrix.
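As a quick illustration of these generics, the following sketch inspects an AccrualTime object built with the values from the examples further below (output not shown; purely illustrative):
accrualTimeObject <- getAccrualTime(
    accrualTime = c(0, 6, 30), accrualIntensity = c(0.1, 0.2),
    maxNumberOfSubjects = 1000
)
names(accrualTimeObject)         # field names
summary(accrualTimeObject)       # formatted summary
as.data.frame(accrualTimeObject) # coerce to a data.frame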
accrualTime is the time period of subjects' accrual in a study. It can be a value that defines the end of accrual or a vector. In the latter case, accrualTime can be used to define a non-constant accrual over time: accrualTime is then a vector that defines the accrual intervals, its first element must be equal to 0, and, additionally, accrualIntensity needs to be specified. accrualIntensity itself is a value or a vector (depending on the length of accrualTime) that defines the intensity with which subjects enter the trial in the intervals defined through accrualTime.
accrualTime can also be a list that combines the definition of the accrual time and the accrual intensity (see below and the examples for details).
If the length of accrualTime and the length of accrualIntensity are the same (i.e., the end of accrual is undefined), maxNumberOfSubjects > 0 needs to be specified and the end of accrual is calculated. In that case, accrualIntensity is the number of subjects per time unit, i.e., the absolute accrual intensity.
If the length of accrualIntensity equals the length of accrualTime - 1 (i.e., the end of accrual is defined), maxNumberOfSubjects is calculated if the absolute accrual intensity is given.
If all elements in accrualIntensity are smaller than 1, accrualIntensity defines the relative intensity with which subjects enter the trial. For example, accrualIntensity = c(0.1, 0.2) specifies that in the second accrual interval the intensity is doubled as compared to the first accrual interval. The actual (absolute) accrual intensity is calculated for the calculated or given maxNumberOfSubjects. Note that the default is accrualIntensity = 0.1, meaning that the absolute accrual intensity will be calculated.
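A minimal sketch of the two length rules described above (all numbers are illustrative assumptions; see the examples below for complete calls and output):
# End of accrual undefined (equal lengths): maxNumberOfSubjects is required
# and the end of accrual is calculated from the absolute intensities.
getAccrualTime(
    accrualTime = c(0, 6), accrualIntensity = c(15, 30),
    maxNumberOfSubjects = 300
)
# End of accrual defined (length(accrualIntensity) == length(accrualTime) - 1):
# maxNumberOfSubjects is calculated from the absolute intensities.
getAccrualTime(accrualTime = c(0, 6, 30), accrualIntensity = c(15, 30))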
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic. Note that you can use the R function methods to get all the methods of a generic and to identify the object specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
getNumberOfSubjects() for calculating the number of subjects at given time points.
## Not run:
# Assume that in a trial the accrual after the first 6 months is doubled
# and the total accrual time is 30 months.
# Further assume that a total of 1000 subjects are entered in the trial.
# The number of subjects to be accrued in the first 6 months and afterwards
# is achieved through
getAccrualTime(
    accrualTime = c(0, 6, 30),
    accrualIntensity = c(0.1, 0.2),
    maxNumberOfSubjects = 1000
)
# The same result is obtained via the list based definition
getAccrualTime(
    list("0 - <6" = 0.1, "6 - <=30" = 0.2),
    maxNumberOfSubjects = 1000
)

# Calculate the end of accrual at given absolute intensity:
getAccrualTime(
    accrualTime = c(0, 6),
    accrualIntensity = c(18, 36),
    maxNumberOfSubjects = 1000
)
# Via the list based definition this is
getAccrualTime(
    list("0 - <6" = 18, ">=6" = 36),
    maxNumberOfSubjects = 1000
)

# You can use an accrual time object in getSampleSizeSurvival() or
# getPowerSurvival().
# For example, if the maximum number of subjects and the follow up
# time needs to be calculated for a given effect size:
accrualTime <- getAccrualTime(
    accrualTime = c(0, 6, 30),
    accrualIntensity = c(0.1, 0.2)
)
getSampleSizeSurvival(accrualTime = accrualTime, pi1 = 0.4, pi2 = 0.2)

# Or if the power and follow up time needs to be calculated for given
# number of events and subjects:
accrualTime <- getAccrualTime(
    accrualTime = c(0, 6, 30),
    accrualIntensity = c(0.1, 0.2),
    maxNumberOfSubjects = 110
)
getPowerSurvival(
    accrualTime = accrualTime,
    pi1 = 0.4, pi2 = 0.2,
    maxNumberOfEvents = 46
)

# How to show accrual time details
# You can use a sample size or power object as argument for the function
# getAccrualTime():
sampleSize <- getSampleSizeSurvival(
    accrualTime = c(0, 6),
    accrualIntensity = c(22, 53),
    lambda2 = 0.05,
    hazardRatio = 0.8,
    followUpTime = 6
)
sampleSize
accrualTime <- getAccrualTime(sampleSize)
accrualTime
## End(Not run)
Calculates and returns the analysis results for the specified design and data.
getAnalysisResults( design, dataInput, ..., directionUpper = NA, thetaH0 = NA_real_, nPlanned = NA_real_, allocationRatioPlanned = 1, stage = NA_integer_, maxInformation = NULL, informationEpsilon = NULL )
design: The trial design.
dataInput: The summary data used for calculating the test results. This is either an element of DatasetMeans, DatasetRates, or DatasetSurvival and can be created with getDataset().
...: Further arguments to be passed to methods (cf. separate functions in "See Also" below).
directionUpper: Logical. Specifies the direction of the alternative, only applicable for one-sided testing; default is NA.
thetaH0: The null hypothesis value; if not specified, 0 is used for testing means and rates and 1 for testing the hazard ratio (survival). For testing a rate in one sample, a value in (0, 1) has to be specified for defining the null hypothesis H0: pi = thetaH0.
nPlanned: The additional (i.e., "new" and not cumulative) sample size planned for each of the subsequent stages. The argument must be a vector with length equal to the number of remaining stages and contain the combined sample size from both treatment groups if two groups are considered. For survival outcomes, it should contain the planned number of additional events. For multi-arm designs, it is the per-comparison (combined) sample size. For enrichment designs, it is the (combined) sample size for the considered sub-population.
allocationRatioPlanned: The planned allocation ratio n1/n2 for a two-group design; default is 1.
stage: The stage number (optional). Default: total number of existing stages in the data input.
maxInformation: Positive value specifying the maximum information.
informationEpsilon: Positive integer value specifying the absolute information epsilon, which defines the maximum distance from the observed information to the maximum information that causes the final analysis. It is used to update the analysis at the final stage in case the observed information at the final analysis is smaller than the planned maximum information ("under-running").
Given a design and a dataset, the function calculates the test results at the given stage (effect sizes, stage-wise test statistics and p-values, overall p-values and test statistics, conditional rejection probability (CRP), conditional power, repeated confidence intervals (RCIs), repeated overall p-values, final stage p-values, median unbiased effect estimates, and final confidence intervals).
For designs with more than two treatment arms (multi-arm designs) or enrichment designs, a closed combination test is performed. That is, the statistics to be used in a closed testing procedure are additionally provided.
The conditional power is calculated if the planned sample size for the subsequent stages (nPlanned) is specified. It is calculated either under the assumption of the observed effect or under the assumption of an assumed effect that has to be specified (see above).
For testing rates in a two-armed trial, pi1 and pi2 typically refer to the rates in the treatment and the control group, respectively. This is not mandatory, however, and so pi1 and pi2 can be interchanged. In many-to-one multi-armed trials, piTreatments and piControl refer to the rates in the treatment arms and the one control arm, and so they cannot be interchanged. piTreatments and piControls in enrichment designs can in principle be interchanged, but we use the plural form to indicate that the rates can be specified differently for the sub-populations.
Median unbiased effect estimates and confidence intervals are calculated if a group sequential design or an inverse normal combination test design was chosen, i.e., it is not applicable for Fisher's p-value combination test design. For the inverse normal combination test design with more than two stages, a warning informs that the validity of the confidence interval is theoretically shown only if no sample size change was performed.
A final stage p-value for Fisher's combination test is calculated only if a two-stage design was chosen. For Fisher's combination test, the conditional power for more than one remaining stage is estimated via simulation.
Final stage p-values, median unbiased effect estimates, and final confidence intervals are not calculated for multi-arm and enrichment designs.
Returns an AnalysisResults object. The following generics (R generic functions) are available for this result object: names() to obtain the field names, print() to print the object, summary() to display a summary of the object, plot() to plot the object, as.data.frame() to coerce the object to a data.frame, and as.matrix() to coerce the object to a matrix.
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic. Note that you can use the R function methods to get all the methods of a generic and to identify the object specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
Other analysis functions: getClosedCombinationTestResults(), getClosedConditionalDunnettTestResults(), getConditionalPower(), getConditionalRejectionProbabilities(), getFinalConfidenceInterval(), getFinalPValue(), getRepeatedConfidenceIntervals(), getRepeatedPValues(), getStageResults(), getTestActions().
## Not run:
# Example 1 One-Sample t Test
# Perform an analysis within a three-stage group sequential design with
# O'Brien & Fleming boundaries and one-sample data with a continuous outcome
# where H0: mu = 1.2 is to be tested
dsnGS <- getDesignGroupSequential()
dataMeans <- getDataset(
    n = c(30, 30),
    means = c(1.96, 1.76),
    stDevs = c(1.92, 2.01)
)
getAnalysisResults(design = dsnGS, dataInput = dataMeans, thetaH0 = 1.2)

# You can obtain the results when performing an inverse normal combination test
# with these data by using the commands
dsnIN <- getDesignInverseNormal()
getAnalysisResults(design = dsnIN, dataInput = dataMeans, thetaH0 = 1.2)

# Example 2 Use Function Approach with Time to Event Data
# Perform an analysis within a use function approach according to an
# O'Brien & Fleming type use function and survival data where
# H0: hazard ratio = 1 is to be tested. The events were observed
# over time and maxInformation = 120, informationEpsilon = 5 specifies
# that 116 > 120 - 5 observed events defines the final analysis.
design <- getDesignGroupSequential(typeOfDesign = "asOF")
dataSurvival <- getDataset(
    cumulativeEvents = c(33, 72, 116),
    cumulativeLogRanks = c(1.33, 1.88, 1.902)
)
getAnalysisResults(design,
    dataInput = dataSurvival,
    maxInformation = 120,
    informationEpsilon = 5
)

# Example 3 Multi-Arm Design
# In a four-stage combination test design with O'Brien & Fleming boundaries
# at the first stage the second treatment arm was dropped. With the Bonferroni
# intersection test, the results together with the CRP, conditional power
# (assuming a total of 40 subjects for each comparison and effect sizes 0.5
# and 0.8 for treatment arm 1 and 3, respectively, and standard deviation 1.2),
# RCIs and p-values of a closed adaptive test procedure are
# obtained as follows with the given data (treatment arm 4 refers to the
# reference group; displayed with summary and plot commands):
data <- getDataset(
    n1 = c(22, 23), n2 = c(21, NA), n3 = c(20, 25), n4 = c(25, 27),
    means1 = c(1.63, 1.51), means2 = c(1.4, NA),
    means3 = c(0.91, 0.95), means4 = c(0.83, 0.75),
    stds1 = c(1.2, 1.4), stds2 = c(1.3, NA),
    stds3 = c(1.1, 1.14), stds4 = c(1.02, 1.18)
)
design <- getDesignInverseNormal(kMax = 4)
x <- getAnalysisResults(design,
    dataInput = data,
    intersectionTest = "Bonferroni",
    nPlanned = c(40, 40),
    thetaH1 = c(0.5, NA, 0.8),
    assumedStDevs = 1.2
)
summary(x)
if (require(ggplot2)) plot(x, thetaRange = c(0, 0.8))

design <- getDesignConditionalDunnett(secondStageConditioning = FALSE)
y <- getAnalysisResults(design,
    dataInput = data,
    nPlanned = 40,
    thetaH1 = c(0.5, NA, 0.8),
    assumedStDevs = 1.2,
    stage = 1
)
summary(y)
if (require(ggplot2)) plot(y, thetaRange = c(0, 0.4))

# Example 4 Enrichment Design
# Perform a two-stage enrichment design analysis with O'Brien & Fleming boundaries
# where one sub-population (S1) and a full population (F) are considered as primary
# analysis sets. At interim, S1 is selected for further analysis and the sample
# size is increased accordingly. With the Spiessens & Debois intersection test,
# the results of a closed adaptive test procedure together with the CRP, repeated
# RCIs and p-values are obtained as follows with the given data (displayed with
# summary and plot commands):
design <- getDesignInverseNormal(kMax = 2, typeOfDesign = "OF")
dataS1 <- getDataset(
    means1 = c(13.2, 12.8), means2 = c(11.1, 10.8),
    stDev1 = c(3.4, 3.3), stDev2 = c(2.9, 3.5),
    n1 = c(21, 42), n2 = c(19, 39)
)
dataNotS1 <- getDataset(
    means1 = c(11.8, NA), means2 = c(10.5, NA),
    stDev1 = c(3.6, NA), stDev2 = c(2.7, NA),
    n1 = c(15, NA), n2 = c(13, NA)
)
dataBoth <- getDataset(S1 = dataS1, R = dataNotS1)
x <- getAnalysisResults(design,
    dataInput = dataBoth,
    intersectionTest = "SpiessensDebois",
    varianceOption = "pooledFromFull",
    stratifiedAnalysis = TRUE
)
summary(x)
if (require(ggplot2)) plot(x, type = 2)
## End(Not run)
Calculates and returns the results from the closed combination test in multi-arm and population enrichment designs.
getClosedCombinationTestResults(stageResults)
stageResults: The results at a given stage, obtained from getStageResults().
Returns a ClosedCombinationTestResults object. The following generics (R generic functions) are available for this result object: names() to obtain the field names, print() to print the object, summary() to display a summary of the object, plot() to plot the object, as.data.frame() to coerce the object to a data.frame, and as.matrix() to coerce the object to a matrix.
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic. Note that you can use the R function methods to get all the methods of a generic and to identify the object specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
Other analysis functions: getAnalysisResults(), getClosedConditionalDunnettTestResults(), getConditionalPower(), getConditionalRejectionProbabilities(), getFinalConfidenceInterval(), getFinalPValue(), getRepeatedConfidenceIntervals(), getRepeatedPValues(), getStageResults(), getTestActions().
## Not run:
# In a four-stage combination test design with O'Brien & Fleming boundaries
# at the first stage the second treatment arm was dropped. With the Bonferroni
# intersection test, the results of a closed adaptive test procedure are
# obtained as follows with the given data (treatment arm 4 refers to the
# reference group):
data <- getDataset(
    n1 = c(22, 23), n2 = c(21, NA), n3 = c(20, 25), n4 = c(25, 27),
    means1 = c(1.63, 1.51), means2 = c(1.4, NA),
    means3 = c(0.91, 0.95), means4 = c(0.83, 0.75),
    stds1 = c(1.2, 1.4), stds2 = c(1.3, NA),
    stds3 = c(1.1, 1.14), stds4 = c(1.02, 1.18)
)
design <- getDesignInverseNormal(kMax = 4)
stageResults <- getStageResults(design,
    dataInput = data,
    intersectionTest = "Bonferroni"
)
getClosedCombinationTestResults(stageResults)
## End(Not run)
Calculates and returns the results from the closed conditional Dunnett test.
getClosedConditionalDunnettTestResults( stageResults, ..., stage = stageResults$stage )
stageResults: The results at a given stage, obtained from getStageResults().
...: Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed.
stage: The stage number (optional). Default: total number of existing stages in the data input.
For performing the conditional Dunnett test the design must be defined through the function getDesignConditionalDunnett(). See Koenig et al. (2008) and Wassmer & Brannath (2016), chapter 11, for details of the test procedure.
Returns a ClosedCombinationTestResults object. The following generics (R generic functions) are available for this result object: names() to obtain the field names, print() to print the object, summary() to display a summary of the object, plot() to plot the object, as.data.frame() to coerce the object to a data.frame, and as.matrix() to coerce the object to a matrix.
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic. Note that you can use the R function methods to get all the methods of a generic and to identify the object specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
Other analysis functions: getAnalysisResults(), getClosedCombinationTestResults(), getConditionalPower(), getConditionalRejectionProbabilities(), getFinalConfidenceInterval(), getFinalPValue(), getRepeatedConfidenceIntervals(), getRepeatedPValues(), getStageResults(), getTestActions().
## Not run:
# In a two-stage design a conditional Dunnett test should be performed
# where the unconditional second stage p-values should be used for the
# test decision.
# At the first stage the second treatment arm was dropped. The results of
# a closed conditional Dunnett test are obtained as follows with the given
# data (treatment arm 4 refers to the reference group):
data <- getDataset(
    n1 = c(22, 23), n2 = c(21, NA), n3 = c(20, 25), n4 = c(25, 27),
    means1 = c(1.63, 1.51), means2 = c(1.4, NA),
    means3 = c(0.91, 0.95), means4 = c(0.83, 0.75),
    stds1 = c(1.2, 1.4), stds2 = c(1.3, NA),
    stds3 = c(1.1, 1.14), stds4 = c(1.02, 1.18)
)

# For getting the results of the closed test procedure, use the following commands:
design <- getDesignConditionalDunnett(secondStageConditioning = FALSE)
stageResults <- getStageResults(design, dataInput = data)
getClosedConditionalDunnettTestResults(stageResults)
## End(Not run)
Calculates and returns the conditional power.
getConditionalPower(stageResults, ..., nPlanned, allocationRatioPlanned = 1)
stageResults: The results at a given stage, obtained from getStageResults().
...: Further (optional) arguments to be passed.
nPlanned: The additional (i.e., "new" and not cumulative) sample size planned for each of the subsequent stages. The argument must be a vector with length equal to the number of remaining stages and contain the combined sample size from both treatment groups if two groups are considered. For survival outcomes, it should contain the planned number of additional events. For multi-arm designs, it is the per-comparison (combined) sample size. For enrichment designs, it is the (combined) sample size for the considered sub-population.
allocationRatioPlanned: The planned allocation ratio n1/n2 for a two-group design; default is 1.
The conditional power is calculated if the planned sample size for the subsequent stages is specified.
For testing rates in a two-armed trial, pi1 and pi2 typically refer to the rates in the treatment and the control group, respectively. This is not mandatory, however, and so pi1 and pi2 can be interchanged. In many-to-one multi-armed trials, piTreatments and piControl refer to the rates in the treatment arms and the one control arm, and so they cannot be interchanged. piTreatments and piControls in enrichment designs can in principle be interchanged, but we use the plural form to indicate that the rates can be specified differently for the sub-populations.
For Fisher's combination test, the conditional power for more than one remaining stage is estimated via simulation.
Returns a ConditionalPowerResults object. The following generics (R generic functions) are available for this result object: names() to obtain the field names, print() to print the object, summary() to display a summary of the object, plot() to plot the object, as.data.frame() to coerce the object to a data.frame, and as.matrix() to coerce the object to a matrix.
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic. Note that you can use the R function methods to get all the methods of a generic and to identify the object specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
plot.StageResults() or plot.AnalysisResults() for plotting the conditional power.
Other analysis functions: getAnalysisResults(), getClosedCombinationTestResults(), getClosedConditionalDunnettTestResults(), getConditionalRejectionProbabilities(), getFinalConfidenceInterval(), getFinalPValue(), getRepeatedConfidenceIntervals(), getRepeatedPValues(), getStageResults(), getTestActions().
## Not run:
data <- getDataset(
    n1 = c(22, 13, 22, 13), n2 = c(22, 11, 22, 11),
    means1 = c(1, 1.1, 1, 1), means2 = c(1.4, 1.5, 1, 2.5),
    stds1 = c(1, 2, 2, 1.3), stds2 = c(1, 2, 2, 1.3)
)
stageResults <- getStageResults(
    getDesignGroupSequential(kMax = 4),
    dataInput = data,
    stage = 2,
    directionUpper = FALSE
)
getConditionalPower(stageResults,
    thetaH1 = -0.4,
    nPlanned = c(64, 64),
    assumedStDev = 1.5,
    allocationRatioPlanned = 3
)
## End(Not run)
Calculates the conditional rejection probabilities (CRP) for given test results.
getConditionalRejectionProbabilities(stageResults, ...)
stageResults: The results at a given stage, obtained from getStageResults().
...: Further (optional) arguments to be passed.
The conditional rejection probability is the probability, under H0 and given the results observed so far, of rejecting H0 in one of the subsequent (remaining) stages. The probability is calculated using the specified design. For testing rates and the survival design, the normal approximation is used, i.e., it is calculated with the use of the prototype case of testing a mean for normally distributed data with known variance.
The conditional rejection probabilities are provided up to the specified stage.
For Fisher's combination test, you can check the validity of the CRP calculation via simulation.
Returns a numeric vector of length kMax or, in the case of multi-arm stage results, a matrix (each column represents a stage, each row a comparison) containing the conditional rejection probabilities.
Other analysis functions: getAnalysisResults(), getClosedCombinationTestResults(), getClosedConditionalDunnettTestResults(), getConditionalPower(), getFinalConfidenceInterval(), getFinalPValue(), getRepeatedConfidenceIntervals(), getRepeatedPValues(), getStageResults(), getTestActions().
## Not run:
# Calculate CRP for a Fisher's combination test design with
# two remaining stages and check the results by simulation.
design <- getDesignFisher(
    kMax = 4, alpha = 0.01,
    informationRates = c(0.1, 0.3, 0.8, 1)
)
data <- getDataset(n = c(40, 40), events = c(20, 22))
sr <- getStageResults(design, data, thetaH0 = 0.4)
getConditionalRejectionProbabilities(sr)
getConditionalRejectionProbabilities(sr,
    simulateCRP = TRUE,
    seed = 12345, iterations = 10000
)
## End(Not run)
Returns the aggregated simulation data.
getData(x) getData.SimulationResults(x)
This function can be used to get the aggregated simulated data from a simulation results object, for example, obtained by getSimulationSurvival().
In this case, the data frame contains the following columns:
iterationNumber: The number of the simulation iteration.
stageNumber: The stage.
pi1: The assumed or derived event rate in the treatment group.
pi2: The assumed or derived event rate in the control group.
hazardRatio: The hazard ratio under consideration (if available).
analysisTime: The analysis time.
numberOfSubjects: The number of subjects under consideration when the (interim) analysis takes place.
eventsPerStage1: The observed number of events per stage in treatment group 1.
eventsPerStage2: The observed number of events per stage in treatment group 2.
eventsPerStage: The observed number of events per stage in both treatment groups.
rejectPerStage: 1 if the null hypothesis can be rejected, 0 otherwise.
eventsNotAchieved: 1 if the number of events could not be reached with the observed number of subjects, 0 otherwise.
futilityPerStage: 1 if the study should be stopped for futility, 0 otherwise.
testStatistic: The test statistic that is used for the test decision; depends on which design was chosen (group sequential, inverse normal, or Fisher combination test).
logRankStatistic: Z-score statistic corresponding to a one-sided log-rank test at the considered stage.
conditionalPowerAchieved: The conditional power for the subsequent stage of the trial for the selected sample size and effect. The effect is either estimated from the data or can be user defined with thetaH1 or pi1H1 and pi2H1.
trialStop: TRUE if the study should be stopped for efficacy or futility or the final stage has been reached, FALSE otherwise.
hazardRatioEstimateLR: The estimated hazard ratio, derived from the log-rank statistic.
A subset of these variables is provided for getSimulationMeans(), getSimulationRates(), getSimulationMultiArmMeans(), getSimulationMultiArmRates(), or getSimulationMultiArmSurvival().
Returns a data.frame.
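For instance, the per-stage rejection indicators can be aggregated into an overall rejection probability per assumed effect. This is only a sketch using base R and the column names documented above; 'results' refers to the simulation results object created in the example below:
simData <- getData(results)
# 0/1 per iteration: was H0 rejected at any stage?
rejected <- aggregate(rejectPerStage ~ pi1 + iterationNumber, data = simData, FUN = sum)
# proportion of iterations with a rejection, per assumed pi1
aggregate(rejectPerStage ~ pi1, data = rejected, FUN = mean)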
## Not run:
results <- getSimulationSurvival(
    pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3,
    eventTime = 12, accrualTime = 24,
    plannedEvents = 40, maxNumberOfSubjects = 200,
    maxNumberOfIterations = 50
)
data <- getData(results)
head(data)
dim(data)
## End(Not run)
Creates a dataset object and returns it.
getDataset(..., floatingPointNumbersEnabled = FALSE) getDataSet(..., floatingPointNumbersEnabled = FALSE)
...: A data.frame or the data vectors defining the dataset (see details and examples).
floatingPointNumbersEnabled: If TRUE, sample sizes and event numbers may be entered as floating-point (non-integer) values; default is FALSE.
The different dataset types DatasetMeans, DatasetRates, or DatasetSurvival can be created as follows:
An element of DatasetMeans for one sample is created by getDataset(sampleSizes =, means =, stDevs =) where sampleSizes, means, stDevs are vectors with stage-wise sample sizes, means, and standard deviations of length given by the number of available stages.
An element of DatasetMeans for two samples is created by getDataset(sampleSizes1 =, sampleSizes2 =, means1 =, means2 =, stDevs1 =, stDevs2 =) where sampleSizes1, sampleSizes2, means1, means2, stDevs1, stDevs2 are vectors with stage-wise sample sizes, means, and standard deviations for the two treatment groups of length given by the number of available stages.
An element of DatasetRates for one sample is created by getDataset(sampleSizes =, events =) where sampleSizes, events are vectors with stage-wise sample sizes and events of length given by the number of available stages.
An element of DatasetRates for two samples is created by getDataset(sampleSizes1 =, sampleSizes2 =, events1 =, events2 =) where sampleSizes1, sampleSizes2, events1, events2 are vectors with stage-wise sample sizes and events for the two treatment groups of length given by the number of available stages.
An element of DatasetSurvival is created by getDataset(events =, logRanks =, allocationRatios =) where events, logRanks, and allocationRatios are the stage-wise events, (one-sided) logrank statistics, and allocation ratios.
An element of DatasetMeans, DatasetRates, or DatasetSurvival for more than one comparison is created by adding subsequent digits to the variable names. The system can analyze these data in a multi-arm many-to-one comparison setting where the group with the highest index represents the control group.
The prefix overall[capitalized first letter of the variable name]... for the variable names enables entering the overall (cumulative) results and calculates stage-wise statistics. Since rpact version 3.2, the prefix cumulative[capitalized first letter of the variable name]... or cum[capitalized first letter of the variable name]... can alternatively be used for this. n can be used in place of sampleSizes.
Note that in survival designs usually the overall (cumulative) events and logrank test statistics are provided in the output, so getDataset(cumulativeEvents =, cumulativeLogRanks =, cumulativeAllocationRatios =) is the usual command for entering survival data. Note also that for cumulativeLogRanks the z-scores from a Cox regression can also be used.
For multi-arm designs, the index refers to the considered comparison. For example, getDataset(events1 = c(13, 33), logRanks1 = c(1.23, 1.55), events2 = c(16, NA), logRanks2 = c(1.55, NA)) refers to the case where one active arm (1) is considered at both stages whereas active arm 2 was dropped at interim. Numbers of events and logrank statistics are entered for the corresponding comparison to control (see Examples).
For enrichment designs, the comparison of two samples is provided for an unstratified (sub-population wise) or stratified data input.
For non-stratified (sub-population wise) data input the datasets are defined for the sub-populations S1, S2, ..., F, where F refers to the full population. Use of getDataset(S1 = , S2 = , ..., F = ) defines the dataset to be used in getAnalysisResults() (see examples).
For stratified data input the datasets are defined for the strata S1, S12, S2, ..., R, where R refers to the remainder of the strata such that the union of all sets is the full population. Use of getDataset(S1 = , S12 = , S2 = , ..., R = ) defines the dataset to be used in getAnalysisResults() (see examples).
For survival data in enrichment designs, the log-rank statistics can only be entered as stratified log-rank statistics in order to provide strong control of the Type I error rate. For stratified data input, the variables to be specified in getDataset() are cumEvents, cumExpectedEvents, cumVarianceEvents, and cumAllocationRatios, or overallEvents, overallExpectedEvents, overallVarianceEvents, and overallAllocationRatios. From these, (stratified) log-rank tests and the independent increments are calculated.
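A hedged sketch of such a stratified survival data input for an enrichment design with one sub-population S1 and remainder R follows; all numbers are made up for illustration, and only the variable names are taken from the description above:
S1 <- getDataset(
    cumEvents = c(16, 38),
    cumExpectedEvents = c(8.2, 19.4),
    cumVarianceEvents = c(3.9, 9.1),
    cumAllocationRatios = c(1, 1)
)
R <- getDataset(
    cumEvents = c(11, NA),
    cumExpectedEvents = c(5.6, NA),
    cumVarianceEvents = c(2.7, NA),
    cumAllocationRatios = c(1, NA)
)
dataStratifiedSurvival <- getDataset(S1 = S1, R = R)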
Returns a Dataset object. The following generics (R generic functions) are available for this result object: names() to obtain the field names, print() to print the object, summary() to display a summary of the object, plot() to plot the object, as.data.frame() to coerce the object to a data.frame, and as.matrix() to coerce the object to a matrix.
## Not run:
# Create a Dataset of Means (one group):
datasetOfMeans <- getDataset(
    n = c(22, 11, 22, 11),
    means = c(1, 1.1, 1, 1),
    stDevs = c(1, 2, 2, 1.3)
)
datasetOfMeans
datasetOfMeans$show(showType = 2)

datasetOfMeans2 <- getDataset(
    cumulativeSampleSizes = c(22, 33, 55, 66),
    cumulativeMeans = c(1.000, 1.033, 1.020, 1.017),
    cumulativeStDevs = c(1.00, 1.38, 1.64, 1.58)
)
datasetOfMeans2
datasetOfMeans2$show(showType = 2)
as.data.frame(datasetOfMeans2)

# Create a Dataset of Means (two groups):
datasetOfMeans3 <- getDataset(
    n1 = c(22, 11, 22, 11), n2 = c(22, 13, 22, 13),
    means1 = c(1, 1.1, 1, 1), means2 = c(1.4, 1.5, 3, 2.5),
    stDevs1 = c(1, 2, 2, 1.3), stDevs2 = c(1, 2, 2, 1.3)
)
datasetOfMeans3

datasetOfMeans4 <- getDataset(
    cumulativeSampleSizes1 = c(22, 33, 55, 66),
    cumulativeSampleSizes2 = c(22, 35, 57, 70),
    cumulativeMeans1 = c(1, 1.033, 1.020, 1.017),
    cumulativeMeans2 = c(1.4, 1.437, 2.040, 2.126),
    cumulativeStDevs1 = c(1, 1.38, 1.64, 1.58),
    cumulativeStDevs2 = c(1, 1.43, 1.82, 1.74)
)
datasetOfMeans4

df <- data.frame(
    stages = 1:4,
    n1 = c(22, 11, 22, 11), n2 = c(22, 13, 22, 13),
    means1 = c(1, 1.1, 1, 1), means2 = c(1.4, 1.5, 3, 2.5),
    stDevs1 = c(1, 2, 2, 1.3), stDevs2 = c(1, 2, 2, 1.3)
)
datasetOfMeans5 <- getDataset(df)
datasetOfMeans5

# Create a Dataset of Means (three groups) where the comparison of
# treatment arm 1 to control is dropped at the second interim stage:
datasetOfMeans6 <- getDataset(
    cumN1 = c(22, 33, NA), cumN2 = c(20, 34, 56), cumN3 = c(22, 31, 52),
    cumMeans1 = c(1.64, 1.54, NA), cumMeans2 = c(1.7, 1.5, 1.77), cumMeans3 = c(2.5, 2.06, 2.99),
    cumStDevs1 = c(1.5, 1.9, NA), cumStDevs2 = c(1.3, 1.3, 1.1), cumStDevs3 = c(1, 1.3, 1.8)
)
datasetOfMeans6

# Create a Dataset of Rates (one group):
datasetOfRates <- getDataset(
    n = c(8, 10, 9, 11),
    events = c(4, 5, 5, 6)
)
datasetOfRates

# Create a Dataset of Rates (two groups):
datasetOfRates2 <- getDataset(
    n2 = c(8, 10, 9, 11), n1 = c(11, 13, 12, 13),
    events2 = c(3, 5, 5, 6), events1 = c(10, 10, 12, 12)
)
datasetOfRates2

# Create a Dataset of Rates (three groups) where the comparison of
# treatment arm 2 to control is dropped at the first interim stage:
datasetOfRates3 <- getDataset(
    cumN1 = c(22, 33, 44), cumN2 = c(20, NA, NA), cumN3 = c(20, 34, 44),
    cumEvents1 = c(11, 14, 22), cumEvents2 = c(17, NA, NA), cumEvents3 = c(17, 19, 33)
)
datasetOfRates3

# Create a Survival Dataset
datasetSurvival <- getDataset(
    cumEvents = c(8, 15, 19, 31),
    cumAllocationRatios = c(1, 1, 1, 2),
    cumLogRanks = c(1.52, 1.98, 1.99, 2.11)
)
datasetSurvival

# Create a Survival Dataset with four comparisons where treatment
# arm 2 was dropped at the first interim stage, and treatment arm 4
# at the second.
datasetSurvival2 <- getDataset(
    cumEvents1 = c(18, 45, 56), cumEvents2 = c(22, NA, NA),
    cumEvents3 = c(12, 41, 56), cumEvents4 = c(27, 56, NA),
    cumLogRanks1 = c(1.52, 1.98, 1.99), cumLogRanks2 = c(3.43, NA, NA),
    cumLogRanks3 = c(1.45, 1.67, 1.87), cumLogRanks4 = c(1.12, 1.33, NA)
)
datasetSurvival2

# Enrichment: Stratified and unstratified data input
# The following data are from one study. Only the first
# (stratified) data input enables a stratified analysis.

# Stratified data input
S1 <- getDataset(
    sampleSize1 = c(18, 17), sampleSize2 = c(12, 33),
    mean1 = c(125.6, 111.1), mean2 = c(107.7, 77.7),
    stDev1 = c(120.1, 145.6), stDev2 = c(128.5, 133.3)
)
S2 <- getDataset(
    sampleSize1 = c(11, NA), sampleSize2 = c(14, NA),
    mean1 = c(100.1, NA), mean2 = c(68.3, NA),
    stDev1 = c(116.8, NA), stDev2 = c(124.0, NA)
)
S12 <- getDataset(
    sampleSize1 = c(21, 17), sampleSize2 = c(21, 12),
    mean1 = c(135.9, 117.7), mean2 = c(84.9, 107.7),
    stDev1 = c(185.0, 92.3), stDev2 = c(139.5, 107.7)
)
R <- getDataset(
    sampleSize1 = c(19, NA), sampleSize2 = c(33, NA),
    mean1 = c(142.4, NA), mean2 = c(77.1, NA),
    stDev1 = c(120.6, NA), stDev2 = c(163.5, NA)
)
dataEnrichment <- getDataset(S1 = S1, S2 = S2, S12 = S12, R = R)
dataEnrichment

# Unstratified data input
S1N <- getDataset(
    sampleSize1 = c(39, 34), sampleSize2 = c(33, 45),
    stDev1 = c(156.503, 120.084), stDev2 = c(134.025, 126.502),
    mean1 = c(131.146, 114.4), mean2 = c(93.191, 85.7)
)
S2N <- getDataset(
    sampleSize1 = c(32, NA), sampleSize2 = c(35, NA),
    stDev1 = c(163.645, NA), stDev2 = c(131.888, NA),
    mean1 = c(123.594, NA), mean2 = c(78.26, NA)
)
F <- getDataset(
    sampleSize1 = c(69, NA), sampleSize2 = c(80, NA),
    stDev1 = c(165.468, NA), stDev2 = c(143.979, NA),
    mean1 = c(129.296, NA), mean2 = c(82.187, NA)
)
dataEnrichmentN <- getDataset(S1 = S1N, S2 = S2N, F = F)
dataEnrichmentN
## End(Not run)
Calculates the characteristics of a design and returns it.
getDesignCharacteristics(design = NULL, ...)
design: The trial design.
...: Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed.
Calculates the inflation factor (IF), the expected reduction in sample size under H1, under H0, and under a value in between H0 and H1. Furthermore, absolute information values are calculated under the prototype case testing H0: mu = 0 against H1: mu = 1.
Returns a TrialDesignCharacteristics object. The following generics (R generic functions) are available for this result object: names() to obtain the field names, print() to print the object, summary() to display a summary of the object, plot() to plot the object, as.data.frame() to coerce the object to a data.frame, and as.matrix() to coerce the object to a matrix.
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic. Note that you can use the R function methods to get all the methods of a generic and to identify the object specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
Other design functions: getDesignConditionalDunnett(), getDesignFisher(), getDesignGroupSequential(), getDesignInverseNormal(), getGroupSequentialProbabilities(), getPowerAndAverageSampleNumber().
## Not run:
# Calculate design characteristics for a three-stage O'Brien & Fleming
# design at power 90% and compare it with Pocock's design.
getDesignCharacteristics(getDesignGroupSequential(beta = 0.1))
getDesignCharacteristics(getDesignGroupSequential(beta = 0.1, typeOfDesign = "P"))
## End(Not run)
Defines the design to perform an analysis with the conditional Dunnett test.
getDesignConditionalDunnett( alpha = 0.025, informationAtInterim = 0.5, ..., secondStageConditioning = TRUE, directionUpper = NA )
alpha: The significance level alpha; default is 0.025.
informationAtInterim: The information to be expected at interim; default is 0.5.
...: Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed.
secondStageConditioning: The way the second stage p-values are calculated within the closed system of hypotheses. If FALSE, the unconditional second stage p-values are used; if TRUE (the default), conditional second stage p-values are used.
directionUpper: Logical. Specifies the direction of the alternative, only applicable for one-sided testing; default is NA.
For performing the conditional Dunnett test the design must be defined through this function.
You can define the information fraction and the way of how to compute the second stage
p-values only in the design definition, and not in the analysis call.
See getClosedConditionalDunnettTestResults()
for an example and Koenig et al. (2008) and
Wassmer & Brannath (2016), chapter 11 for details of the test procedure.
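For illustration, a minimal design definition using the documented defaults stated explicitly (argument names and values taken from the usage above):

designDunnett <- getDesignConditionalDunnett(
    alpha = 0.025,
    informationAtInterim = 0.5,
    secondStageConditioning = TRUE)
designDunnett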
Returns a TrialDesign
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other design functions:
getDesignCharacteristics()
,
getDesignFisher()
,
getDesignGroupSequential()
,
getDesignInverseNormal()
,
getGroupSequentialProbabilities()
,
getPowerAndAverageSampleNumber()
Performs Fisher's combination test and returns critical values for this design.
getDesignFisher( ..., kMax = NA_integer_, alpha = NA_real_, method = c("equalAlpha", "fullAlpha", "noInteraction", "userDefinedAlpha"), userAlphaSpending = NA_real_, alpha0Vec = NA_real_, informationRates = NA_real_, sided = 1, bindingFutility = NA, directionUpper = NA, tolerance = 1e-14, iterations = 0, seed = NA_real_ )
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
kMax |
The maximum number of stages |
alpha |
The significance level alpha, default is |
method |
|
userAlphaSpending |
The user defined alpha spending.
Numeric vector of length |
alpha0Vec |
Stopping for futility bounds for stage-wise p-values. |
informationRates |
The information rates t_1, ..., t_kMax (that must be fixed prior to the trial),
default is |
sided |
Is the alternative one-sided ( |
bindingFutility |
If |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
tolerance |
The numerical tolerance, default is |
iterations |
The number of simulation iterations, e.g.,
|
seed |
Seed for simulating the power for Fisher's combination test. See above, default is a random seed. |
getDesignFisher()
calculates the critical values and stage levels for
Fisher's combination test as described in Bauer (1989), Bauer and Koehne (1994),
Bauer and Roehmel (1995), and Wassmer (1999) for equally and unequally sized stages.
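As a brief sketch, a three-stage design with the default "equalAlpha" method (the criticalValues and stageLevels fields accessed below are assumptions and should be checked with names() in your installation):

designFisher <- getDesignFisher(kMax = 3, alpha = 0.025, method = "equalAlpha")
designFisher
designFisher$criticalValues  # assumed field name; verify with names(designFisher)
designFisher$stageLevels     # assumed field name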
Returns a TrialDesign
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
getDesignSet()
for creating a set of designs to compare.
Other design functions:
getDesignCharacteristics()
,
getDesignConditionalDunnett()
,
getDesignGroupSequential()
,
getDesignInverseNormal()
,
getGroupSequentialProbabilities()
,
getPowerAndAverageSampleNumber()
## Not run: 
# Calculate critical values for a two-stage Fisher's combination test
# with full level alpha = 0.05 at the final stage and stopping for
# futility bound alpha0 = 0.50, as described in Bauer and Koehne (1994).
getDesignFisher(kMax = 2, method = "fullAlpha", alpha = 0.05, alpha0Vec = 0.50)

## End(Not run)
Provides adjusted boundaries and defines a group sequential design.
getDesignGroupSequential( ..., kMax = NA_integer_, alpha = NA_real_, beta = NA_real_, sided = 1L, informationRates = NA_real_, futilityBounds = NA_real_, typeOfDesign = c("OF", "P", "WT", "PT", "HP", "WToptimum", "asP", "asOF", "asKD", "asHSD", "asUser", "noEarlyEfficacy"), deltaWT = NA_real_, deltaPT1 = NA_real_, deltaPT0 = NA_real_, optimizationCriterion = c("ASNH1", "ASNIFH1", "ASNsum"), gammaA = NA_real_, typeBetaSpending = c("none", "bsP", "bsOF", "bsKD", "bsHSD", "bsUser"), userAlphaSpending = NA_real_, userBetaSpending = NA_real_, gammaB = NA_real_, bindingFutility = NA, directionUpper = NA, betaAdjustment = NA, constantBoundsHP = 3, twoSidedPower = NA, delayedInformation = NA_real_, tolerance = 1e-08 )
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
kMax |
The maximum number of stages |
alpha |
The significance level alpha, default is |
beta |
Type II error rate, necessary for providing sample size calculations
(e.g., |
sided |
Is the alternative one-sided ( |
informationRates |
The information rates t_1, ..., t_kMax (that must be fixed prior to the trial),
default is |
futilityBounds |
The futility bounds, defined on the test statistic z scale
(numeric vector of length |
typeOfDesign |
The type of design. Type of design is one of the following:
O'Brien & Fleming ( |
deltaWT |
Delta for Wang & Tsiatis Delta class. |
deltaPT1 |
Delta1 for Pampallona & Tsiatis class rejecting H0 boundaries. |
deltaPT0 |
Delta0 for Pampallona & Tsiatis class rejecting H1 boundaries. |
optimizationCriterion |
Optimization criterion for optimum design within
Wang & Tsiatis class ( |
gammaA |
Parameter for alpha spending function. |
typeBetaSpending |
Type of beta spending. The type of beta spending is one of the following:
O'Brien & Fleming type beta spending, Pocock type beta spending,
Kim & DeMets beta spending, Hwang, Shi & DeCani beta spending, user defined
beta spending ( |
userAlphaSpending |
The user defined alpha spending.
Numeric vector of length |
userBetaSpending |
The user defined beta spending. Vector of length |
gammaB |
Parameter for beta spending function. |
bindingFutility |
Logical. If |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
betaAdjustment |
For two-sided beta spending designs, if |
constantBoundsHP |
The constant bounds up to stage |
twoSidedPower |
For two-sided testing, if |
delayedInformation |
Delay of information for delayed response designs. Can be a numeric value or a
numeric vector of length |
tolerance |
The numerical tolerance, default is |
Depending on typeOfDesign, some parameters must be specified while others are not applicable.
For example, gammaA needs to be specified only if typeOfDesign "asHSD" is selected.
If an alpha spending approach was specified ("asOF"
, "asP"
, "asKD"
, "asHSD"
, or "asUser"
)
additionally a beta spending function can be specified to produce futility bounds.
For optimum designs, "ASNH1"
minimizes the expected sample size under H1,
"ASNIFH1"
minimizes the sum of the maximum sample and the expected sample size under H1,
and "ASNsum"
minimizes the sum of the maximum sample size, the expected sample size under a value midway H0 and H1,
and the expected sample size under H1.
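For example, a sketch of an optimum design within the Wang & Tsiatis class that minimizes the expected sample size under H1 (argument names and options taken from the usage above; kMax, alpha, and beta are chosen for illustration):

getDesignGroupSequential(
    kMax = 3,
    alpha = 0.025,
    beta = 0.2,
    typeOfDesign = "WToptimum",
    optimizationCriterion = "ASNH1")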
Returns a TrialDesign
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
getDesignSet()
for creating a set of designs to compare different designs.
Other design functions:
getDesignCharacteristics()
,
getDesignConditionalDunnett()
,
getDesignFisher()
,
getDesignInverseNormal()
,
getGroupSequentialProbabilities()
,
getPowerAndAverageSampleNumber()
## Not run: 
# Calculate two-sided critical values for a four-stage
# Wang & Tsiatis design with Delta = 0.25 at level alpha = 0.05
getDesignGroupSequential(kMax = 4, alpha = 0.05, sided = 2,
    typeOfDesign = "WT", deltaWT = 0.25)

# Calculate one-sided critical values and binding futility bounds for a three-stage
# design with alpha- and beta-spending functions according to Kim & DeMets with gamma = 2.5
# (planned informationRates as specified, default alpha = 0.025 and beta = 0.2)
getDesignGroupSequential(kMax = 3, informationRates = c(0.3, 0.75, 1),
    typeOfDesign = "asKD", gammaA = 2.5, typeBetaSpending = "bsKD",
    gammaB = 2.5, bindingFutility = TRUE)

# Calculate the Pocock type alpha spending critical values if the first
# interim analysis was performed after 40% of the maximum information was observed
# and the second after 70% of the maximum information was observed (default alpha = 0.025)
getDesignGroupSequential(informationRates = c(0.4, 0.7), typeOfDesign = "asP")

## End(Not run)
Provides adjusted boundaries and defines a group sequential design for its use in the inverse normal combination test.
getDesignInverseNormal( ..., kMax = NA_integer_, alpha = NA_real_, beta = NA_real_, sided = 1L, informationRates = NA_real_, futilityBounds = NA_real_, typeOfDesign = c("OF", "P", "WT", "PT", "HP", "WToptimum", "asP", "asOF", "asKD", "asHSD", "asUser", "noEarlyEfficacy"), deltaWT = NA_real_, deltaPT1 = NA_real_, deltaPT0 = NA_real_, optimizationCriterion = c("ASNH1", "ASNIFH1", "ASNsum"), gammaA = NA_real_, typeBetaSpending = c("none", "bsP", "bsOF", "bsKD", "bsHSD", "bsUser"), userAlphaSpending = NA_real_, userBetaSpending = NA_real_, gammaB = NA_real_, bindingFutility = NA, directionUpper = NA, betaAdjustment = NA, constantBoundsHP = 3, twoSidedPower = NA, tolerance = 1e-08 )
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
kMax |
The maximum number of stages |
alpha |
The significance level alpha, default is |
beta |
Type II error rate, necessary for providing sample size calculations
(e.g., |
sided |
Is the alternative one-sided ( |
informationRates |
The information rates t_1, ..., t_kMax (that must be fixed prior to the trial),
default is |
futilityBounds |
The futility bounds, defined on the test statistic z scale
(numeric vector of length |
typeOfDesign |
The type of design. Type of design is one of the following:
O'Brien & Fleming ( |
deltaWT |
Delta for Wang & Tsiatis Delta class. |
deltaPT1 |
Delta1 for Pampallona & Tsiatis class rejecting H0 boundaries. |
deltaPT0 |
Delta0 for Pampallona & Tsiatis class rejecting H1 boundaries. |
optimizationCriterion |
Optimization criterion for optimum design within
Wang & Tsiatis class ( |
gammaA |
Parameter for alpha spending function. |
typeBetaSpending |
Type of beta spending. The type of beta spending is one of the following:
O'Brien & Fleming type beta spending, Pocock type beta spending,
Kim & DeMets beta spending, Hwang, Shi & DeCani beta spending, user defined
beta spending ( |
userAlphaSpending |
The user defined alpha spending.
Numeric vector of length |
userBetaSpending |
The user defined beta spending. Vector of length |
gammaB |
Parameter for beta spending function. |
bindingFutility |
Logical. If |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
betaAdjustment |
For two-sided beta spending designs, if |
constantBoundsHP |
The constant bounds up to stage |
twoSidedPower |
For two-sided testing, if |
tolerance |
The numerical tolerance, default is |
Depending on typeOfDesign, some parameters must be specified while others are not applicable.
For example, gammaA needs to be specified only if typeOfDesign "asHSD" is selected.
If an alpha spending approach was specified ("asOF"
, "asP"
, "asKD"
, "asHSD"
, or "asUser"
)
additionally a beta spending function can be specified to produce futility bounds.
For optimum designs, "ASNH1"
minimizes the expected sample size under H1,
"ASNIFH1"
minimizes the sum of the maximum sample and the expected sample size under H1,
and "ASNsum"
minimizes the sum of the maximum sample size, the expected sample size under a value midway H0 and H1,
and the expected sample size under H1.
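As a sketch, an alpha spending design combined with a beta spending function to produce futility bounds (option names taken from the usage above; the non-binding choice is made explicit for illustration):

getDesignInverseNormal(
    kMax = 3,
    alpha = 0.025,
    beta = 0.2,
    typeOfDesign = "asOF",
    typeBetaSpending = "bsOF",
    bindingFutility = FALSE)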
Returns a TrialDesign
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
getDesignSet()
for creating a set of designs to compare different designs.
Other design functions:
getDesignCharacteristics()
,
getDesignConditionalDunnett()
,
getDesignFisher()
,
getDesignGroupSequential()
,
getGroupSequentialProbabilities()
,
getPowerAndAverageSampleNumber()
## Not run: 
# Calculate two-sided critical values for a four-stage
# Wang & Tsiatis design with Delta = 0.25 at level alpha = 0.05
getDesignInverseNormal(kMax = 4, alpha = 0.05, sided = 2,
    typeOfDesign = "WT", deltaWT = 0.25)

# Defines a two-stage design at one-sided alpha = 0.025 with provision of early stopping
# if the one-sided p-value exceeds 0.5 at interim and no early stopping for efficacy.
# The futility bound is non-binding.
getDesignInverseNormal(kMax = 2, typeOfDesign = "noEarlyEfficacy", futilityBounds = 0)

# Calculate one-sided critical values and binding futility bounds for a three-stage
# design with alpha- and beta-spending functions according to Kim & DeMets with gamma = 2.5
# (planned informationRates as specified, default alpha = 0.025 and beta = 0.2)
getDesignInverseNormal(kMax = 3, informationRates = c(0.3, 0.75, 1),
    typeOfDesign = "asKD", gammaA = 2.5, typeBetaSpending = "bsKD",
    gammaB = 2.5, bindingFutility = TRUE)

## End(Not run)
Creates a trial design set object and returns it.
getDesignSet(...)
... |
|
Specify a master design and one or more design parameters or a list of designs.
Returns a TrialDesignSet
object.
The following generics (R generic functions) are available for this result object:
names
to obtain the field names,
length
to obtain the number of designs,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
# Example 1
design <- getDesignGroupSequential(
    alpha = 0.05, kMax = 6, sided = 2,
    typeOfDesign = "WT", deltaWT = 0.1
)
designSet <- getDesignSet()
designSet$add(design = design, deltaWT = c(0.3, 0.4))
if (require(ggplot2)) plot(designSet, type = 1)

# Example 2 (shorter script)
design <- getDesignGroupSequential(
    alpha = 0.05, kMax = 6, sided = 2,
    typeOfDesign = "WT", deltaWT = 0.1
)
designSet <- getDesignSet(design = design, deltaWT = c(0.3, 0.4))
if (require(ggplot2)) plot(designSet, type = 1)

# Example 3 (use of designs instead of design)
d1 <- getDesignGroupSequential(
    alpha = 0.05, kMax = 2, sided = 1, beta = 0.2,
    typeOfDesign = "asHSD", gammaA = 0.5,
    typeBetaSpending = "bsHSD", gammaB = 0.5
)
d2 <- getDesignGroupSequential(
    alpha = 0.05, kMax = 4, sided = 1, beta = 0.2,
    typeOfDesign = "asP", typeBetaSpending = "bsP"
)
designSet <- getDesignSet(
    designs = c(d1, d2),
    variedParameters = c("typeOfDesign", "kMax")
)
if (require(ggplot2)) plot(designSet, type = 8, nMax = 20)

## End(Not run)
Returns the event probabilities for specified parameters at given time vector.
getEventProbabilities( time, ..., accrualTime = c(0, 12), accrualIntensity = 0.1, accrualIntensityType = c("auto", "absolute", "relative"), kappa = 1, piecewiseSurvivalTime = NA_real_, lambda2 = NA_real_, lambda1 = NA_real_, allocationRatioPlanned = 1, hazardRatio = NA_real_, dropoutRate1 = 0, dropoutRate2 = 0, dropoutTime = 12, maxNumberOfSubjects = NA_real_ )
time |
A numeric vector with time values. |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
accrualTime |
The assumed accrual time intervals for the study, default is
|
accrualIntensity |
A numeric vector of accrual intensities, default is the relative
intensity |
accrualIntensityType |
A character value specifying the accrual intensity input type.
Must be one of |
kappa |
A numeric value > 0. A |
piecewiseSurvivalTime |
A vector that specifies the time intervals for the piecewise
definition of the exponential survival time cumulative distribution function |
lambda2 |
The assumed hazard rate in the reference group, there is no default.
|
lambda1 |
The assumed hazard rate in the treatment group, there is no default.
|
allocationRatioPlanned |
The planned allocation ratio |
hazardRatio |
The vector of hazard ratios under consideration. If the event or hazard rates in both treatment groups are defined, the hazard ratio need not be specified as it is calculated; there is no default. Must be a positive numeric of length 1. |
dropoutRate1 |
The assumed drop-out rate in the treatment group, default is |
dropoutRate2 |
The assumed drop-out rate in the control group, default is |
dropoutTime |
The assumed time for drop-out rates in the control and the
treatment group, default is |
maxNumberOfSubjects |
If |
The function computes the overall event probabilities in a design with two treatment groups.
For details of the parameters see getSampleSizeSurvival()
.
Returns a EventProbabilities
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
# Calculate event probabilities for staggered subjects' entry, piecewisely defined
# survival time and hazards, and plot it.
timeVector <- seq(0, 100, 1)
y <- getEventProbabilities(timeVector,
    accrualTime = c(0, 20, 60),
    accrualIntensity = c(5, 20),
    piecewiseSurvivalTime = c(0, 20, 80),
    lambda2 = c(0.02, 0.06, 0.1),
    hazardRatio = 2
)
plot(timeVector, y$cumulativeEventProbabilities, type = 'l')

## End(Not run)
Returns the final confidence interval for the parameter of interest. It is based on the prototype case, i.e., the test for testing a mean for normally distributed variables.
getFinalConfidenceInterval( design, dataInput, ..., directionUpper = NA, thetaH0 = NA_real_, tolerance = 1e-06, stage = NA_integer_ )
design |
The trial design. |
dataInput |
The summary data used for calculating the test results.
This is either an element of |
... |
Further (optional) arguments to be passed:
|
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
tolerance |
The numerical tolerance, default is |
stage |
The stage number (optional). Default: total number of existing stages in the data input. |
Depending on design
and dataInput
the final confidence interval and median unbiased estimate
that is based on the stage-wise ordering of the sample space will be calculated and returned.
Additionally, a non-standardized ("general") version is provided for which
the estimated standard deviation must be used to obtain
the confidence interval for the parameter of interest.
For the inverse normal combination test design with more than two stages, a warning informs that the validity of the confidence interval is theoretically shown only if no sample size change was performed.
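A minimal sketch of accessing the returned list elements (element names as listed in the Value section below; the dataset is purely illustrative):

design <- getDesignInverseNormal(kMax = 2)
data <- getDataset(
    n = c(20, 30),
    means = c(50, 51),
    stDevs = c(130, 140))
result <- getFinalConfidenceInterval(design, dataInput = data)
result$finalStage               # stage of the final analysis
result$medianUnbiased           # median unbiased estimate
result$finalConfidenceInterval  # final confidence interval (standardized version)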
Returns a list
containing
finalStage
,
medianUnbiased
,
finalConfidenceInterval
,
medianUnbiasedGeneral
, and
finalConfidenceIntervalGeneral
.
Other analysis functions:
getAnalysisResults()
,
getClosedCombinationTestResults()
,
getClosedConditionalDunnettTestResults()
,
getConditionalPower()
,
getConditionalRejectionProbabilities()
,
getFinalPValue()
,
getRepeatedConfidenceIntervals()
,
getRepeatedPValues()
,
getStageResults()
,
getTestActions()
## Not run: 
design <- getDesignInverseNormal(kMax = 2)
data <- getDataset(
    n = c(20, 30),
    means = c(50, 51),
    stDevs = c(130, 140)
)
getFinalConfidenceInterval(design, dataInput = data)

## End(Not run)
Returns the final p-value for given stage results.
getFinalPValue(stageResults, ...)
stageResults |
The results at given stage, obtained from |
... |
Only available for backward compatibility. |
The calculation of the final p-value is based on the stage-wise ordering of the sample space.
This enables the calculation for both the non-adaptive and the adaptive case.
For Fisher's combination test, it is available for kMax = 2
only.
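For instance, a sketch for Fisher's combination test with kMax = 2 (the dataset is illustrative and mirrors the example below):

designFisher <- getDesignFisher(kMax = 2)
data <- getDataset(
    n = c(20, 30),
    means = c(50, 51),
    stDevs = c(130, 140))
getFinalPValue(getStageResults(designFisher, dataInput = data))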
Returns a list
containing
finalStage
,
pFinal
.
Other analysis functions:
getAnalysisResults()
,
getClosedCombinationTestResults()
,
getClosedConditionalDunnettTestResults()
,
getConditionalPower()
,
getConditionalRejectionProbabilities()
,
getFinalConfidenceInterval()
,
getRepeatedConfidenceIntervals()
,
getRepeatedPValues()
,
getStageResults()
,
getTestActions()
## Not run: 
design <- getDesignInverseNormal(kMax = 2)
data <- getDataset(
    n = c(20, 30),
    means = c(50, 51),
    stDevs = c(130, 140)
)
getFinalPValue(getStageResults(design, dataInput = data))

## End(Not run)
Calculates probabilities in the group sequential setting.
getGroupSequentialProbabilities(decisionMatrix, informationRates)
decisionMatrix |
A matrix with either 2 or 4 rows and kMax = length(informationRates) columns, see details. |
informationRates |
The information rates t_1, ..., t_kMax (that must be fixed prior to the trial),
default is |
Given a sequence of information rates (fixing the correlation structure), and
decisionMatrix with either 2 or 4 rows and kMax = length(informationRates) columns,
this function calculates a probability matrix. For a decision matrix with two rows, it contains the probabilities:
P(Z_1 < l_1), P(l_1 < Z_1 < u_1, Z_2 < l_2), ..., P(l_kMax-1 < Z_kMax-1 < u_kMax-1, Z_kMax < l_kMax)
P(Z_1 < u_1), P(l_1 < Z_1 < u_1, Z_2 < u_2), ..., P(l_kMax-1 < Z_kMax-1 < u_kMax-1, Z_kMax < u_kMax)
P(Z_1 < Inf), P(l_1 < Z_1 < u_1, Z_2 < Inf), ..., P(l_kMax-1 < Z_kMax-1 < u_kMax-1, Z_kMax < Inf)
with continuation matrix
l_1,...,l_kMax
u_1,...,u_kMax
That is, for each stage (column) the output matrix provides the cumulative probabilities
of the test statistic falling below the values specified in decisionMatrix and below Inf,
jointly with reaching that stage, i.e., the test statistic lying in the continuation region at all preceding stages.
For a decision matrix with four rows, the continuation region consists of two regions and the probability matrix is
obtained analogously (cf. Wassmer and Brannath, 2016).
Returns a numeric matrix containing the probabilities described in the details section.
Other design functions:
getDesignCharacteristics()
,
getDesignConditionalDunnett()
,
getDesignFisher()
,
getDesignGroupSequential()
,
getDesignInverseNormal()
,
getPowerAndAverageSampleNumber()
## Not run: 
# Calculate Type I error rates in the two-sided group sequential setting when
# performing kMax stages with constant critical boundaries at level alpha:
alpha <- 0.05
kMax <- 10
decisionMatrix <- matrix(c(
    rep(-qnorm(1 - alpha / 2), kMax),
    rep(qnorm(1 - alpha / 2), kMax)
), nrow = 2, byrow = TRUE)
informationRates <- (1:kMax) / kMax
probs <- getGroupSequentialProbabilities(decisionMatrix, informationRates)
cumsum(probs[3, ] - probs[2, ] + probs[1, ])

# Do the same for a one-sided design without futility boundaries:
decisionMatrix <- matrix(c(
    rep(-Inf, kMax),
    rep(qnorm(1 - alpha), kMax)
), nrow = 2, byrow = TRUE)
informationRates <- (1:kMax) / kMax
probs <- getGroupSequentialProbabilities(decisionMatrix, informationRates)
cumsum(probs[3, ] - probs[2, ])

# Check that two-sided Pampallona and Tsiatis boundaries with binding
# futility bounds obtain Type I error probabilities equal to alpha:
x <- getDesignGroupSequential(
    alpha = 0.05, beta = 0.1, kMax = 3, typeOfDesign = "PT",
    deltaPT0 = 0, deltaPT1 = 0.4, sided = 2, bindingFutility = TRUE
)
dm <- matrix(c(
    -x$criticalValues, -x$futilityBounds, 0,
    x$futilityBounds, 0, x$criticalValues
), nrow = 4, byrow = TRUE)
dm[is.na(dm)] <- 0
probs <- getGroupSequentialProbabilities(
    decisionMatrix = dm, informationRates = (1:3) / 3
)
sum(probs[5, ] - probs[4, ] + probs[1, ])

# Check the Type I error rate decrease when using non-binding futility bounds:
x <- getDesignGroupSequential(
    alpha = 0.05, beta = 0.1, kMax = 3, typeOfDesign = "PT",
    deltaPT0 = 0, deltaPT1 = 0.4, sided = 2, bindingFutility = FALSE
)
dm <- matrix(c(
    -x$criticalValues, -x$futilityBounds, 0,
    x$futilityBounds, 0, x$criticalValues
), nrow = 4, byrow = TRUE)
dm[is.na(dm)] <- 0
probs <- getGroupSequentialProbabilities(
    decisionMatrix = dm, informationRates = (1:3) / 3
)
sum(probs[5, ] - probs[4, ] + probs[1, ])

## End(Not run)
Returns the number of recruited subjects at given time vector.
getNumberOfSubjects( time, ..., accrualTime = c(0, 12), accrualIntensity = 0.1, accrualIntensityType = c("auto", "absolute", "relative"), maxNumberOfSubjects = NA_real_ )
time |
A numeric vector with time values. |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
accrualTime |
The assumed accrual time intervals for the study, default is
|
accrualIntensity |
A numeric vector of accrual intensities, default is the relative
intensity |
accrualIntensityType |
A character value specifying the accrual intensity input type.
Must be one of |
maxNumberOfSubjects |
If |
Calculates the number of subjects over the given time range at the specified accrual time vector
and accrual intensity. The intensity can be defined either in absolute or
relative terms (for the latter, maxNumberOfSubjects
needs to be defined).
The function is used by getSampleSizeSurvival()
.
Returns a NumberOfSubjects
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
AccrualTime
for defining the accrual time.
## Not run: 
getNumberOfSubjects(time = seq(10, 70, 10), accrualTime = c(0, 20, 60),
    accrualIntensity = c(5, 20))
getNumberOfSubjects(time = seq(10, 70, 10), accrualTime = c(0, 20, 60),
    accrualIntensity = c(0.1, 0.4), maxNumberOfSubjects = 900)

## End(Not run)
Recalculates the observed information rates from the specified dataset.
getObservedInformationRates( dataInput, ..., maxInformation = NULL, informationEpsilon = NULL, stage = NA_integer_ )
dataInput |
The dataset for which the information rates shall be recalculated. |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
maxInformation |
Positive value specifying the maximum information. |
informationEpsilon |
Positive integer value specifying the absolute information epsilon, which
defines the maximum distance from the observed information to the maximum information that causes the final analysis.
Updates at the final analysis in case the observed information at the final
analysis is smaller ("under-running") than the planned maximum information |
stage |
The stage number (optional). Default: total number of existing stages in the data input. |
For means and rates the maximum information is the maximum number of subjects
or the relative proportion if informationEpsilon
< 1;
for survival data it is the maximum number of events
or the relative proportion if informationEpsilon
< 1.
Returns a list that summarizes the observed information rates.
getAnalysisResults()
for using
getObservedInformationRates()
implicit,
www.rpact.org/vignettes/planning/rpact_boundary_update_example
## Not run: 
# Absolute information epsilon:
# decision rule 45 >= 46 - 1, i.e., under-running
data <- getDataset(
    overallN = c(22, 45),
    overallEvents = c(11, 28)
)
getObservedInformationRates(data,
    maxInformation = 46, informationEpsilon = 1
)

# Relative information epsilon:
# last information rate = 45/46 = 0.9783,
# is > 1 - 0.03 = 0.97, i.e., under-running
data <- getDataset(
    overallN = c(22, 45),
    overallEvents = c(11, 28)
)
getObservedInformationRates(data,
    maxInformation = 46, informationEpsilon = 0.03
)

## End(Not run)
With this function the format of the standard outputs of all rpact
objects can be shown and written to a file.
getOutputFormat( parameterName = NA_character_, ..., file = NA_character_, default = FALSE, fields = TRUE )
parameterName |
The name of the parameter whose output format shall be returned.
Leave the default |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
file |
An optional file name where to write the output formats (see Details for more information). |
default |
If |
fields |
If |
Output formats can be written to a text file by specifying a file.
See setOutputFormat()
to learn how to read a formerly saved file.
Note that the parameterName
need not match exactly, e.g., for p-values the
following parameter names will be recognized amongst others:
p value
p.values
p-value
pValue
rpact.output.format.p.value
A named list of output formats.
Other output formats:
setOutputFormat()
## Not run: 
# show output format of p values
getOutputFormat("p.value")

# set new p value output format
setOutputFormat("p.value", digits = 5, nsmall = 5)

# show sample sizes as smallest integers not less than the not rounded values
setOutputFormat("sample size", digits = 0, nsmall = 0, roundFunction = "ceiling")
getSampleSizeMeans()

# show sample sizes as smallest integers not greater than the not rounded values
setOutputFormat("sample size", digits = 0, nsmall = 0, roundFunction = "floor")
getSampleSizeMeans()

# set new sample size output format without round function
setOutputFormat("sample size", digits = 2, nsmall = 2)
getSampleSizeMeans()

# reset sample size output format to default
setOutputFormat("sample size")
getSampleSizeMeans()
getOutputFormat("sample size")

## End(Not run)
Calculates the conditional performance score, its sub-scores and components according to (Herrmann et al. (2020), doi:10.1002/sim.8534) and (Bokelmann et al. (2024), doi:10.1186/s12874-024-02150-4) for a given simulation result from a two-stage design with continuous or binary endpoint. Larger (sub-)score and component values refer to a better performance.
getPerformanceScore(simulationResult)
simulationResult |
A simulation result. |
The conditional performance score consists of two sub-scores, one for the sample size (subscoreSampleSize) and one for the conditional power (subscoreConditionalPower). Each of these is composed of a location component (locationSampleSize, locationConditionalPower) and a variation component (variationSampleSize, variationConditionalPower). The term conditional refers to an evaluation perspective where the interim results suggest a trial continuation with a second stage. The score can take values between 0 and 1. More details on the performance score can be found in Herrmann et al. (2020), doi:10.1002/sim.8534, and Bokelmann et al. (2024), doi:10.1186/s12874-024-02150-4.
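A compact sketch of accessing the sub-scores and components named above (a small simulation analogous to the example below, but with fewer iterations; field access via $ is an assumption and should be verified with names(score)):

design <- getDesignGroupSequential(kMax = 2, typeOfDesign = "P",
    futilityBounds = 0, bindingFutility = TRUE)
simulationResult <- getSimulationMeans(
    design = design, normalApproximation = TRUE, thetaH0 = 0, alternative = 0.5,
    plannedSubjects = c(100, 200),
    minNumberOfSubjectsPerStage = c(NA_real_, 1),
    maxNumberOfSubjectsPerStage = c(NA_real_, 300),
    conditionalPower = 0.8, directionUpper = TRUE,
    maxNumberOfIterations = 1000, seed = 123)
score <- getPerformanceScore(simulationResult)
score$subscoreSampleSize        # assumed field name; see names(score)
score$subscoreConditionalPower  # assumed field name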
Stephen Schueuerhuis
## Not run: 
# Example from Table 3 in "A new conditional performance score for
# the evaluation of adaptive group sequential designs with samplesize
# recalculation from Herrmann et al 2023", p. 2097 for
# Observed Conditional Power approach and Delta = 0.5

# Create two-stage Pocock design with binding futility boundary at 0
design <- getDesignGroupSequential(
    kMax = 2, typeOfDesign = "P",
    futilityBounds = 0, bindingFutility = TRUE)

# Initialize sample sizes and effect;
# Sample sizes are referring to overall stage-wise sample sizes
n1 <- 100
n2 <- 100
nMax <- n1 + n2
alternative <- 0.5

# Perform Simulation; nMax * 1.5 defines the maximum
# sample size for the additional stage
simulationResult <- getSimulationMeans(
    design = design,
    normalApproximation = TRUE,
    thetaH0 = 0,
    alternative = alternative,
    plannedSubjects = c(n1, nMax),
    minNumberOfSubjectsPerStage = c(NA_real_, 1),
    maxNumberOfSubjectsPerStage = c(NA_real_, nMax * 1.5),
    conditionalPower = 0.8,
    directionUpper = TRUE,
    maxNumberOfIterations = 1e05,
    seed = 140
)

# Calculate performance score
getPerformanceScore(simulationResult)

## End(Not run)
Returns a PiecewiseSurvivalTime
object that contains all relevant parameters
of an exponential survival time cumulative distribution function.
Use names
to obtain the field names.
getPiecewiseSurvivalTime( piecewiseSurvivalTime = NA_real_, ..., lambda1 = NA_real_, lambda2 = NA_real_, hazardRatio = NA_real_, pi1 = NA_real_, pi2 = NA_real_, median1 = NA_real_, median2 = NA_real_, eventTime = 12, kappa = 1, delayedResponseAllowed = FALSE )
piecewiseSurvivalTime |
A vector that specifies the time intervals for the piecewise definition of the exponential survival time cumulative distribution function (see details). |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
lambda1 |
The assumed hazard rate in the treatment group, there is no default.
|
lambda2 |
The assumed hazard rate in the reference group, there is no default.
|
hazardRatio |
The vector of hazard ratios under consideration. If the event or hazard rates in both treatment groups are defined, the hazard ratio need not be specified as it is calculated; there is no default. Must be a positive numeric of length 1. |
pi1 |
A numeric value or vector that represents the assumed event rate in the treatment group,
default is |
pi2 |
A numeric value that represents the assumed event rate in the control group, default is |
median1 |
The assumed median survival time in the treatment group, there is no default. |
median2 |
The assumed median survival time in the reference group, there is no default. Must be a positive numeric of length 1. |
eventTime |
The assumed time under which the event rates are calculated, default is |
kappa |
A numeric value > 0. A |
delayedResponseAllowed |
If |
Returns a PiecewiseSurvivalTime
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
The first element of the vector piecewiseSurvivalTime
must be equal to 0
.
piecewiseSurvivalTime
can also be a list that combines the definition of the
time intervals and hazard rates in the reference group.
The definition of the survival time in the treatment group is obtained by the specification
of the hazard ratio (see examples for details).
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
getPiecewiseSurvivalTime(lambda2 = 0.5, hazardRatio = 0.8)

getPiecewiseSurvivalTime(lambda2 = 0.5, lambda1 = 0.4)

getPiecewiseSurvivalTime(pi2 = 0.5, hazardRatio = 0.8)

getPiecewiseSurvivalTime(pi2 = 0.5, pi1 = 0.4)

getPiecewiseSurvivalTime(pi1 = 0.3)

getPiecewiseSurvivalTime(hazardRatio = c(0.6, 0.8), lambda2 = 0.4)

getPiecewiseSurvivalTime(piecewiseSurvivalTime = c(0, 6, 9),
    lambda2 = c(0.025, 0.04, 0.015), hazardRatio = 0.8)

getPiecewiseSurvivalTime(piecewiseSurvivalTime = c(0, 6, 9),
    lambda2 = c(0.025, 0.04, 0.015),
    lambda1 = c(0.025, 0.04, 0.015) * 0.8)

pwst <- getPiecewiseSurvivalTime(list(
    "0 - <6"   = 0.025,
    "6 - <9"   = 0.04,
    "9 - <15"  = 0.015,
    "15 - <21" = 0.01,
    ">=21"     = 0.007), hazardRatio = 0.75)
pwst

# The object created by getPiecewiseSurvivalTime() can be used directly in
# getSampleSizeSurvival():
getSampleSizeSurvival(piecewiseSurvivalTime = pwst)

# The object created by getPiecewiseSurvivalTime() can be used directly in
# getPowerSurvival():
getPowerSurvival(piecewiseSurvivalTime = pwst,
    maxNumberOfEvents = 40, maxNumberOfSubjects = 100)

## End(Not run)
Returns the power and average sample number of the specified design.
getPowerAndAverageSampleNumber(design, theta = seq(-1, 1, 0.02), nMax = 100)
design |
The trial design. |
theta |
A vector of standardized effect sizes (theta values), default is a sequence from -1 to 1. |
nMax |
The maximum sample size. Must be a positive integer of length 1. |
This function returns the power and average sample number (ASN) of the specified design for the prototype case, i.e., testing H0: mu = mu0 in a one-sample design. theta represents the standardized effect (mu - mu0) / sigma, and power and ASN are calculated for the maximum sample size nMax. For designs other than the one-sample test of a mean, the standardized effect needs to be adjusted accordingly.
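For instance, a common adjustment (stated here as an assumption, not taken from this help page) for a two-sample comparison of means with equal allocation is theta = (mu1 - mu2) / (2 * sigma), with nMax denoting the total sample size over both groups:

mu1 <- 1.2
mu2 <- 1.0
sigma <- 0.8
# Sketch: power and ASN for the two-sample setting via the one-sample prototype case
getPowerAndAverageSampleNumber(
    getDesignGroupSequential(),
    theta = (mu1 - mu2) / (2 * sigma),
    nMax = 200)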
Returns a PowerAndAverageSampleNumberResult
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other design functions:
getDesignCharacteristics()
,
getDesignConditionalDunnett()
,
getDesignFisher()
,
getDesignGroupSequential()
,
getDesignInverseNormal()
,
getGroupSequentialProbabilities()
## Not run: 
# Calculate power, stopping probabilities, and expected sample
# size for the default design with specified theta and nMax
getPowerAndAverageSampleNumber(
    getDesignGroupSequential(),
    theta = seq(-1, 1, 0.5), nMax = 100)

## End(Not run)
Returns the power, stopping probabilities, and expected sample size for testing mean rates for negative binomial distributed event numbers in two samples at given sample sizes.
getPowerCounts( design = NULL, ..., directionUpper = NA, maxNumberOfSubjects = NA_real_, lambda1 = NA_real_, lambda2 = NA_real_, lambda = NA_real_, theta = NA_real_, thetaH0 = 1, overdispersion = 0, fixedExposureTime = NA_real_, accrualTime = NA_real_, accrualIntensity = NA_real_, followUpTime = NA_real_, allocationRatioPlanned = NA_real_ )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
maxNumberOfSubjects |
|
lambda1 |
A numeric value or vector that represents the assumed rate of a homogeneous Poisson process in the active treatment group, there is no default. |
lambda2 |
A numeric value that represents the assumed rate of a homogeneous Poisson process in the control group, there is no default. |
lambda |
A numeric value or vector that represents the assumed rate of a homogeneous Poisson process in the pooled treatment groups, there is no default. |
theta |
A numeric value or vector that represents the assumed mean ratios lambda1/lambda2 of a homogeneous Poisson process, there is no default. |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
overdispersion |
A numeric value that represents the assumed overdispersion of the negative binomial distribution,
default is |
fixedExposureTime |
If specified, the fixed time of exposure per subject for count data, there is no default. |
accrualTime |
If specified, the assumed accrual time interval(s) for the study, there is no default. |
accrualIntensity |
If specified, the assumed accrual intensities for the study, there is no default. |
followUpTime |
If specified, the assumed (additional) follow-up time for the study, there is no default.
The total study duration is |
allocationRatioPlanned |
The planned allocation ratio |
At given design the function calculates the power, stopping probabilities, and expected sample size
for testing the ratio of two mean rates of negative binomial distributed event numbers in two samples
at given maximum sample size and effect.
The power calculation is performed either for a fixed exposure time or a variable exposure time with fixed follow-up
where the information over the stages is calculated according to the specified information rate in the design.
Additionally, an allocation ratio = n1 / n2
can be specified where n1
and n2
are the number
of subjects in the two treatment groups. A null hypothesis value thetaH0
can also be specified.
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
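For illustration, a result returned by getPowerCounts() can be inspected with these generics. The following is a minimal sketch with illustrative parameter values (not a prescribed workflow):

power <- getPowerCounts(alpha = 0.025, lambda1 = 1.05, lambda2 = 1.4,
    maxNumberOfSubjects = 180, overdispersion = 0.5, fixedExposureTime = 1)
names(power)                       # field names of the TrialDesignPlan object
summary(power)                     # formatted summary of the power calculation
powerData <- as.data.frame(power)  # coerce to a data.frame for further processing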
Other power functions:
getPowerMeans()
,
getPowerRates()
,
getPowerSurvival()
## Not run: 
# Fixed sample size trial where a therapy is assumed to decrease the
# exacerbation rate from 1.4 to 1.05 (25% decrease) within an
# observation period of 1 year, i.e., each subject has an equal
# follow-up of 1 year.
# The power at significance level 0.025 at the given sample size = 180
# for a range of lambda1 values, if the overdispersion is assumed to be
# equal to 0.5, is obtained by
getPowerCounts(alpha = 0.025, lambda1 = seq(1, 1.4, 0.05),
    lambda2 = 1.4, maxNumberOfSubjects = 180, overdispersion = 0.5,
    fixedExposureTime = 1)

# Group sequential alpha and beta spending function design with O'Brien and
# Fleming type boundaries: Power and test characteristics for N = 286,
# under the assumption of a fixed exposure time, and for a range of
# lambda1 values:
getPowerCounts(design = getDesignGroupSequential(
        kMax = 3, alpha = 0.025, beta = 0.2,
        typeOfDesign = "asOF", typeBetaSpending = "bsOF"),
    lambda1 = seq(0.17, 0.23, 0.01), lambda2 = 0.3,
    directionUpper = FALSE, overdispersion = 1, maxNumberOfSubjects = 286,
    fixedExposureTime = 12, accrualTime = 6)

# Group sequential alpha spending function design with O'Brien and
# Fleming type boundaries: Power and test characteristics for N = 1976,
# under variable exposure time with uniform recruitment over 1.25 months,
# study time (accrual + followup) = 4 (lambda1, lambda2, and overdispersion
# as specified, no futility stopping):
getPowerCounts(design = getDesignGroupSequential(
        kMax = 3, alpha = 0.025, beta = 0.2, typeOfDesign = "asOF"),
    lambda1 = seq(0.08, 0.09, 0.0025), lambda2 = 0.125, overdispersion = 5,
    directionUpper = FALSE, maxNumberOfSubjects = 1976,
    followUpTime = 2.75, accrualTime = 1.25)

## End(Not run)
Returns the power, stopping probabilities, and expected sample size for testing means in one or two samples at given maximum sample size.
getPowerMeans( design = NULL, ..., groups = 2L, normalApproximation = FALSE, meanRatio = FALSE, thetaH0 = ifelse(meanRatio, 1, 0), alternative = seq(0, 1, 0.2), stDev = 1, directionUpper = NA, maxNumberOfSubjects = NA_real_, allocationRatioPlanned = NA_real_ )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
groups |
The number of treatment groups (1 or 2), default is |
normalApproximation |
The type of computation of the p-values. If |
meanRatio |
If |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
alternative |
The alternative hypothesis value for testing means. This can be a vector of assumed
alternatives, default is |
stDev |
The standard deviation under which the sample size or power
calculation is performed, default is |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
maxNumberOfSubjects |
|
allocationRatioPlanned |
The planned allocation ratio |
At given design the function calculates the power, stopping probabilities,
and expected sample size for testing means at given sample size.
In a two treatment groups design, additionally, an allocation ratio = n1 / n2
can be specified where n1
and n2
are the number
of subjects in the two treatment groups.
A null hypothesis value thetaH0 != 0 for testing the difference of two means
or thetaH0 != 1
for testing the ratio of two means can be specified.
For the specified sample size, critical bounds and stopping for futility
bounds are provided at the effect scale (mean, mean difference, or
mean ratio, respectively).
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other power functions:
getPowerCounts()
,
getPowerRates()
,
getPowerSurvival()
## Not run: 
# Calculate the power, stopping probabilities, and expected sample size
# for testing H0: mu1 - mu2 = 0 in a two-armed design against a range of
# alternatives H1: mu1 - mu2 = delta, delta = (0, 1, 2, 3, 4, 5),
# standard deviation sigma = 8, maximum sample size N = 80 (both treatment
# arms), and an allocation ratio n1/n2 = 2. The design is a three stage
# O'Brien & Fleming design with non-binding futility bounds (-0.5, 0.5)
# for the two interims. The computation takes into account that the t test
# is used (normalApproximation = FALSE).
getPowerMeans(getDesignGroupSequential(alpha = 0.025, sided = 1,
        futilityBounds = c(-0.5, 0.5)),
    groups = 2, alternative = c(0:5), stDev = 8,
    normalApproximation = FALSE, maxNumberOfSubjects = 80,
    allocationRatioPlanned = 2)

## End(Not run)
Returns the power, stopping probabilities, and expected sample size for testing rates in one or two samples at given maximum sample size.
getPowerRates( design = NULL, ..., groups = 2L, riskRatio = FALSE, thetaH0 = ifelse(riskRatio, 1, 0), pi1 = seq(0.2, 0.5, 0.1), pi2 = 0.2, directionUpper = NA, maxNumberOfSubjects = NA_real_, allocationRatioPlanned = NA_real_ )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
groups |
The number of treatment groups (1 or 2), default is |
riskRatio |
If |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
pi1 |
A numeric value or vector that represents the assumed probability in
the active treatment group if two treatment groups
are considered, or the alternative probability for a one treatment group design,
default is |
pi2 |
A numeric value that represents the assumed probability in the reference group if two treatment
groups are considered, default is |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
maxNumberOfSubjects |
|
allocationRatioPlanned |
The planned allocation ratio |
At given design the function calculates the power, stopping probabilities, and expected sample size
for testing rates at given maximum sample size.
The sample sizes over the stages are calculated according to the specified information rate in the design.
In a two treatment groups design, additionally, an allocation ratio = n1 / n2
can be specified
where n1
and n2
are the number of subjects in the two treatment groups.
If a null hypothesis value thetaH0 != 0 for testing the difference of two rates
or thetaH0 != 1
for testing the risk ratio is specified, the
formulas according to Farrington & Manning (Statistics in Medicine, 1990) are used (only one-sided testing).
Critical bounds and stopping for futility bounds are provided at the effect scale
(rate, rate difference, or rate ratio, respectively).
For the two-sample case, the calculation is performed for the fixed pi2 given as an argument to the function.
Note that the power calculation for rates is always based on the normal approximation.
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other power functions:
getPowerCounts()
,
getPowerMeans()
,
getPowerSurvival()
## Not run: 
# Calculate the power, stopping probabilities, and expected sample size in a
# two-armed design at given maximum sample size N = 200 in a three-stage
# O'Brien & Fleming design with information rate vector (0.2, 0.5, 1),
# non-binding futility boundaries (0, 0), i.e., the study stops for futility
# if the p-value exceeds 0.5 at interim, and allocation ratio = 2 for a range
# of pi1 values when testing H0: pi1 - pi2 = -0.1:
getPowerRates(getDesignGroupSequential(informationRates = c(0.2, 0.5, 1),
        futilityBounds = c(0, 0)),
    groups = 2, thetaH0 = -0.1, pi1 = seq(0.3, 0.6, 0.1),
    directionUpper = FALSE, pi2 = 0.7, allocationRatioPlanned = 2,
    maxNumberOfSubjects = 200)

# Calculate the power, stopping probabilities, and expected sample size in a single
# arm design at given maximum sample size N = 60 in a three-stage two-sided
# O'Brien & Fleming design with information rate vector (0.2, 0.5, 1)
# for a range of pi1 values when testing H0: pi = 0.3:
getPowerRates(getDesignGroupSequential(informationRates = c(0.2, 0.5, 1),
        sided = 2),
    groups = 1, thetaH0 = 0.3, pi1 = seq(0.3, 0.5, 0.05),
    maxNumberOfSubjects = 60)

## End(Not run)
Returns the power, stopping probabilities, and expected sample size for testing the hazard ratio in a two treatment groups survival design.
getPowerSurvival( design = NULL, ..., typeOfComputation = c("Schoenfeld", "Freedman", "HsiehFreedman"), thetaH0 = 1, directionUpper = NA, pi1 = NA_real_, pi2 = NA_real_, lambda1 = NA_real_, lambda2 = NA_real_, median1 = NA_real_, median2 = NA_real_, kappa = 1, hazardRatio = NA_real_, piecewiseSurvivalTime = NA_real_, allocationRatioPlanned = 1, eventTime = 12, accrualTime = c(0, 12), accrualIntensity = 0.1, accrualIntensityType = c("auto", "absolute", "relative"), maxNumberOfSubjects = NA_real_, maxNumberOfEvents = NA_real_, dropoutRate1 = 0, dropoutRate2 = 0, dropoutTime = 12 )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
typeOfComputation |
Three options are available: |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
pi1 |
A numeric value or vector that represents the assumed event rate in the treatment group,
default is |
pi2 |
A numeric value that represents the assumed event rate in the control group, default is |
lambda1 |
The assumed hazard rate in the treatment group, there is no default.
|
lambda2 |
The assumed hazard rate in the reference group, there is no default.
|
median1 |
The assumed median survival time in the treatment group, there is no default. |
median2 |
The assumed median survival time in the reference group, there is no default. Must be a positive numeric of length 1. |
kappa |
A numeric value > 0. A |
hazardRatio |
The vector of hazard ratios under consideration. If the event or hazard rates in both treatment groups are defined, the hazard ratio need not be specified, as it is calculated; there is no default. Must be a positive numeric of length 1. |
piecewiseSurvivalTime |
A vector that specifies the time intervals for the piecewise
definition of the exponential survival time cumulative distribution function |
allocationRatioPlanned |
The planned allocation ratio |
eventTime |
The assumed time under which the event rates are calculated, default is |
accrualTime |
The assumed accrual time intervals for the study, default is
|
accrualIntensity |
A numeric vector of accrual intensities, default is the relative
intensity |
accrualIntensityType |
A character value specifying the accrual intensity input type.
Must be one of |
maxNumberOfSubjects |
|
maxNumberOfEvents |
|
dropoutRate1 |
The assumed drop-out rate in the treatment group, default is |
dropoutRate2 |
The assumed drop-out rate in the control group, default is |
dropoutTime |
The assumed time for drop-out rates in the control and the
treatment group, default is |
At given design the function calculates the power, stopping probabilities, and expected
sample size at given number of events and number of subjects.
It also calculates the time when the required events are expected under the given
assumptions (exponentially, piecewise exponentially, or Weibull distributed survival times
and constant or non-constant piecewise accrual).
Additionally, an allocation ratio = n1 / n2
can be specified where n1
and n2
are the number
of subjects in the two treatment groups.
The formula of Kim & Tsiatis (Biometrics, 1990)
is used to calculate the expected number of events under the alternative
(see also Lakatos & Lan, Statistics in Medicine, 1992). These formulas are generalized to piecewise survival times and
non-constant piecewise accrual over time.
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
The first element of the vector piecewiseSurvivalTime
must be equal to 0
.
piecewiseSurvivalTime
can also be a list that combines the definition of the
time intervals and hazard rates in the reference group.
The definition of the survival time in the treatment group is obtained by the specification
of the hazard ratio (see examples for details).
accrualTime
is the time period of subjects' accrual in a study.
It can be a value that defines the end of accrual or a vector.
In this case, accrualTime
can be used to define a non-constant accrual over time.
For this, accrualTime
is a vector that defines the accrual intervals.
The first element of accrualTime
must be equal to 0
and, additionally,
accrualIntensity
needs to be specified.
accrualIntensity itself is a value or a vector (depending on the length of accrualTime) that defines the intensity with which subjects enter the trial in the intervals defined through accrualTime.
accrualTime
can also be a list that combines the definition of the accrual time and
accrual intensity (see below and examples for details).
If the length of accrualTime
and the length of accrualIntensity
are the same
(i.e., the end of accrual is undefined), maxNumberOfSubjects > 0
needs to be specified
and the end of accrual is calculated.
In that case, accrualIntensity
is the number of subjects per time unit, i.e., the absolute accrual intensity.
If the length of accrualIntensity equals the length of accrualTime - 1
(i.e., the end of accrual is defined), maxNumberOfSubjects
is calculated if the absolute accrual intensity is given.
If all elements in accrualIntensity are smaller than 1, accrualIntensity defines
the relative intensity with which subjects enter the trial.
For example, accrualIntensity = c(0.1, 0.2) specifies that in the second accrual interval
the intensity is doubled as compared to the first accrual interval. The actual (absolute) accrual intensity
is calculated for the calculated or given maxNumberOfSubjects.
Note that the default is accrualIntensity = 0.1, meaning that the absolute accrual intensity
will be calculated.
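The following minimal sketch contrasts the two ways of specifying the accrual intensity described above; the parameter values are illustrative only (see also the examples at the end of this section):

# absolute intensity: 20 subjects per time unit in [0, 6), then 30; the end of
# accrual is left open, so it is calculated from maxNumberOfSubjects = 200
getPowerSurvival(maxNumberOfEvents = 40, accrualTime = c(0, 6),
    accrualIntensity = c(20, 30), maxNumberOfSubjects = 200)

# relative intensity (all values < 1): the second interval recruits twice as fast
# as the first; with the end of accrual fixed at 12, the absolute intensity is
# derived from the given maxNumberOfSubjects
getPowerSurvival(maxNumberOfEvents = 40, accrualTime = c(0, 6, 12),
    accrualIntensity = c(0.1, 0.2), maxNumberOfSubjects = 200)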
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other power functions:
getPowerCounts()
,
getPowerMeans()
,
getPowerRates()
## Not run: 
# Fixed sample size with minimum required definitions, pi1 = c(0.2, 0.3, 0.4, 0.5) and
# pi2 = 0.2 at event time 12, accrual time 12 and follow-up time 6 as default
getPowerSurvival(maxNumberOfEvents = 40, maxNumberOfSubjects = 200)

# Four stage O'Brien & Fleming group sequential design with minimum required
# definitions, pi1 = c(0.2, 0.3, 0.4, 0.5) and pi2 = 0.2 at event time 12,
# accrual time 12 and follow-up time 6 as default
getPowerSurvival(design = getDesignGroupSequential(kMax = 4),
    maxNumberOfEvents = 40, maxNumberOfSubjects = 200)

# For fixed sample design, determine necessary accrual time if 200 subjects and
# 30 subjects per time unit can be recruited
getPowerSurvival(maxNumberOfEvents = 40, accrualTime = c(0),
    accrualIntensity = 30, maxNumberOfSubjects = 200)

# Determine necessary accrual time if 200 subjects and if the first 6 time units
# 20 subjects per time unit can be recruited, then 30 subjects per time unit
getPowerSurvival(maxNumberOfEvents = 40, accrualTime = c(0, 6),
    accrualIntensity = c(20, 30), maxNumberOfSubjects = 200)

# Determine maximum number of subjects if in the first 6 time units 20 subjects per
# time unit can be recruited, and after 10 time units 30 subjects per time unit
getPowerSurvival(maxNumberOfEvents = 40, accrualTime = c(0, 6, 10),
    accrualIntensity = c(20, 30))

# Specify accrual time as a list
at <- list(
    "0 - <6"  = 20,
    "6 - Inf" = 30)
getPowerSurvival(maxNumberOfEvents = 40, accrualTime = at,
    maxNumberOfSubjects = 200)

# Specify accrual time as a list, if the maximum number of subjects needs to be calculated
at <- list(
    "0 - <6"   = 20,
    "6 - <=10" = 30)
getPowerSurvival(maxNumberOfEvents = 40, accrualTime = at)

# Specify effect size for a two-stage group design with O'Brien & Fleming boundaries.
# Effect size is based on event rates at specified event time, directionUpper = FALSE
# needs to be specified because it should be shown that hazard ratio < 1
getPowerSurvival(design = getDesignGroupSequential(kMax = 2), pi1 = 0.2, pi2 = 0.3,
    eventTime = 24, maxNumberOfEvents = 40, maxNumberOfSubjects = 200,
    directionUpper = FALSE)

# Effect size is based on event rate at specified event time for the reference group
# and hazard ratio, directionUpper = FALSE needs to be specified
# because it should be shown that hazard ratio < 1
getPowerSurvival(design = getDesignGroupSequential(kMax = 2), hazardRatio = 0.5,
    pi2 = 0.3, eventTime = 24, maxNumberOfEvents = 40, maxNumberOfSubjects = 200,
    directionUpper = FALSE)

# Effect size is based on hazard rate for the reference group and hazard ratio,
# directionUpper = FALSE needs to be specified because it should be shown that
# hazard ratio < 1
getPowerSurvival(design = getDesignGroupSequential(kMax = 2), hazardRatio = 0.5,
    lambda2 = 0.02, maxNumberOfEvents = 40, maxNumberOfSubjects = 200,
    directionUpper = FALSE)

# Specification of piecewise exponential survival time and hazard ratios
getPowerSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = c(0, 5, 10), lambda2 = c(0.01, 0.02, 0.04),
    hazardRatio = c(1.5, 1.8, 2), maxNumberOfEvents = 40, maxNumberOfSubjects = 200)

# Specification of piecewise exponential survival time as list and hazard ratios
pws <- list(
    "0 - <5"  = 0.01,
    "5 - <10" = 0.02,
    ">=10"    = 0.04)
getPowerSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = pws, hazardRatio = c(1.5, 1.8, 2),
    maxNumberOfEvents = 40, maxNumberOfSubjects = 200)

# Specification of piecewise exponential survival time for both treatment arms
getPowerSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = c(0, 5, 10), lambda2 = c(0.01, 0.02, 0.04),
    lambda1 = c(0.015, 0.03, 0.06), maxNumberOfEvents = 40,
    maxNumberOfSubjects = 200)

# Specification of piecewise exponential survival time as a list
pws <- list(
    "0 - <5"  = 0.01,
    "5 - <10" = 0.02,
    ">=10"    = 0.04)
getPowerSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = pws, hazardRatio = c(1.5, 1.8, 2),
    maxNumberOfEvents = 40, maxNumberOfSubjects = 200)

# Specify effect size based on median survival times
getPowerSurvival(median1 = 5, median2 = 3,
    maxNumberOfEvents = 40, maxNumberOfSubjects = 200, directionUpper = FALSE)

# Specify effect size based on median survival times of
# Weibull distribution with kappa = 2
getPowerSurvival(median1 = 5, median2 = 3, kappa = 2,
    maxNumberOfEvents = 40, maxNumberOfSubjects = 200, directionUpper = FALSE)

## End(Not run)
Returns the raw survival data which was generated for simulation.
getRawData(x, aggregate = FALSE)
x |
A |
aggregate |
Logical. If |
This function works only if getSimulationSurvival()
was called with maxNumberOfRawDatasetsPerStage > 0 (the default is 0).
This function can be used to get the simulated raw data from a simulation results
object obtained by getSimulationSurvival()
.
Note that getSimulationSurvival() must be called beforehand with maxNumberOfRawDatasetsPerStage > 0.
The data frame contains the following columns:
iterationNumber
: The number of the simulation iteration.
stopStage
: The stage of stopping.
subjectId
: The subject id (increasing number 1, 2, 3, ...)
accrualTime
: The accrual time, i.e., the time when the subject entered the trial.
treatmentGroup
: The treatment group number (1 or 2).
survivalTime
: The survival time of the subject.
dropoutTime
: The dropout time of the subject (may be NA
).
lastObservationTime
: The specific observation time.
timeUnderObservation
: The time under observation is defined as follows:
if (event == TRUE) {
    timeUnderObservation <- survivalTime
} else if (dropoutEvent == TRUE) {
    timeUnderObservation <- dropoutTime
} else {
    timeUnderObservation <- lastObservationTime - accrualTime
}
event
: TRUE
if an event occurred; FALSE
otherwise.
dropoutEvent
: TRUE
if a dropout event occurred; FALSE
otherwise.
Returns a data.frame
.
## Not run: 
results <- getSimulationSurvival(
    pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3, eventTime = 12,
    accrualTime = 24, plannedEvents = 40, maxNumberOfSubjects = 200,
    maxNumberOfIterations = 50, maxNumberOfRawDatasetsPerStage = 5
)
rawData <- getRawData(results)
head(rawData)
dim(rawData)

## End(Not run)
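As an illustration of the columns described above, the following minimal sketch summarizes the simulated raw data (it assumes rawData was created as in the example above):

# observed events by treatment group and stage of stopping
with(rawData, table(treatmentGroup, stopStage, event))

# time under observation for subjects with an observed event
summary(rawData$timeUnderObservation[rawData$event])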
Calculates and returns the lower and upper limits of the repeated confidence intervals of the trial.
getRepeatedConfidenceIntervals( design, dataInput, ..., directionUpper = NA, tolerance = 1e-06, stage = NA_integer_ )
design |
The trial design. |
dataInput |
The summary data used for calculating the test results.
This is either an element of |
... |
Further arguments to be passed to methods (cf., separate functions in "See Also" below), e.g.,
|
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
tolerance |
The numerical tolerance, default is |
stage |
The stage number (optional). Default: total number of existing stages in the data input. |
The repeated confidence interval at a given stage of the trial contains the parameter values that are not rejected using the specified sequential design. It can be calculated at each stage of the trial and can thus be used as a monitoring tool.
The repeated confidence intervals are provided up to the specified stage.
Returns a matrix
with 2
rows
and kMax
columns containing the lower RCI limits in the first row and
the upper RCI limits in the second row, where each column represents a stage.
Other analysis functions:
getAnalysisResults()
,
getClosedCombinationTestResults()
,
getClosedConditionalDunnettTestResults()
,
getConditionalPower()
,
getConditionalRejectionProbabilities()
,
getFinalConfidenceInterval()
,
getFinalPValue()
,
getRepeatedPValues()
,
getStageResults()
,
getTestActions()
## Not run: 
design <- getDesignInverseNormal(kMax = 2)
data <- getDataset(
    n      = c(20, 30),
    means  = c(50, 51),
    stDevs = c(130, 140)
)
getRepeatedConfidenceIntervals(design, dataInput = data)

## End(Not run)
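Because the result is a 2 x kMax matrix, the limits for a particular stage can be read off by column; a minimal sketch continuing the example above:

rci <- getRepeatedConfidenceIntervals(design, dataInput = data)
rci[1, 2]  # lower repeated confidence limit at stage 2
rci[2, 2]  # upper repeated confidence limit at stage 2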
Calculates the repeated p-values for given test results.
getRepeatedPValues(stageResults, ..., tolerance = 1e-06)
stageResults |
The results at given stage, obtained from |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
tolerance |
The numerical tolerance, default is |
The repeated p-value at a given stage of the trial is defined as the smallest significance level under which at given test design the test results obtain rejection of the null hypothesis. It can be calculated at each stage of the trial and can thus be used as a monitoring tool.
The repeated p-values are provided up to the specified stage.
In multi-arm trials, the repeated p-values are defined separately for each treatment comparison within the closed testing procedure.
Returns a numeric
vector of length kMax
or in case of multi-arm stage results
a matrix
(each column represents a stage, each row a comparison)
containing the repeated p values.
Other analysis functions:
getAnalysisResults()
,
getClosedCombinationTestResults()
,
getClosedConditionalDunnettTestResults()
,
getConditionalPower()
,
getConditionalRejectionProbabilities()
,
getFinalConfidenceInterval()
,
getFinalPValue()
,
getRepeatedConfidenceIntervals()
,
getStageResults()
,
getTestActions()
## Not run: 
design <- getDesignInverseNormal(kMax = 2)
data <- getDataset(
    n      = c(20, 30),
    means  = c(50, 51),
    stDevs = c(130, 140)
)
getRepeatedPValues(getStageResults(design, dataInput = data))

## End(Not run)
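In the single-comparison case the result is a numeric vector with one entry per stage; a minimal sketch continuing the example above:

repeatedPValues <- getRepeatedPValues(getStageResults(design, dataInput = data))
repeatedPValues[2]  # repeated p-value at the second stage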
Returns the sample size for testing the ratio of mean rates of negative binomial distributed event numbers in two samples at given effect.
getSampleSizeCounts( design = NULL, ..., lambda1 = NA_real_, lambda2 = NA_real_, lambda = NA_real_, theta = NA_real_, thetaH0 = 1, overdispersion = 0, fixedExposureTime = NA_real_, accrualTime = NA_real_, accrualIntensity = NA_real_, followUpTime = NA_real_, maxNumberOfSubjects = NA_integer_, allocationRatioPlanned = NA_real_ )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
lambda1 |
A numeric value or vector that represents the assumed rate of a homogeneous Poisson process in the active treatment group, there is no default. |
lambda2 |
A numeric value that represents the assumed rate of a homogeneous Poisson process in the control group, there is no default. |
lambda |
A numeric value or vector that represents the assumed rate of a homogeneous Poisson process in the pooled treatment groups, there is no default. |
theta |
A numeric value or vector that represents the assumed mean ratios lambda1/lambda2 of a homogeneous Poisson process, there is no default. |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
overdispersion |
A numeric value that represents the assumed overdispersion of the negative binomial distribution,
default is |
fixedExposureTime |
If specified, the fixed time of exposure per subject for count data, there is no default. |
accrualTime |
If specified, the assumed accrual time interval(s) for the study, there is no default. |
accrualIntensity |
If specified, the assumed accrual intensities for the study, there is no default. |
followUpTime |
If specified, the assumed (additional) follow-up time for the study, there is no default.
The total study duration is |
maxNumberOfSubjects |
|
allocationRatioPlanned |
The planned allocation ratio |
At given design the function calculates the information, and stage-wise and maximum sample size for testing mean rates
of negative binomial distributed event numbers in two samples at given effect.
The sample size calculation is performed either for a fixed exposure time or a variable exposure time with fixed follow-up.
For the variable exposure time case, at given maximum sample size the necessary follow-up time is calculated.
The planned calendar time of interim stages is calculated if an accrual time is defined.
Additionally, an allocation ratio = n1 / n2
can be specified where n1
and n2
are the number
of subjects in the two treatment groups. A null hypothesis value thetaH0
can also be specified.
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other sample size functions:
getSampleSizeMeans()
,
getSampleSizeRates()
,
getSampleSizeSurvival()
## Not run: 
# Fixed sample size trial where a therapy is assumed to decrease the
# exacerbation rate from 1.4 to 1.05 (25% decrease) within an observation
# period of 1 year, i.e., each subject has an equal follow-up of 1 year.
# The sample size that yields 90% power at significance level 0.025 for
# detecting such a difference, if the overdispersion is assumed to be
# equal to 0.5, is obtained by
getSampleSizeCounts(alpha = 0.025, beta = 0.1, lambda2 = 1.4,
    theta = 0.75, overdispersion = 0.5, fixedExposureTime = 1)

# Noninferiority test with blinded sample size reassessment to reproduce
# Table 2 from Friede and Schmidli (2010):
getSampleSizeCounts(alpha = 0.025, beta = 0.2, lambda = 1, theta = 1,
    thetaH0 = 1.15, overdispersion = 0.4, fixedExposureTime = 1)

# Group sequential alpha and beta spending function design with O'Brien and
# Fleming type boundaries: Estimate observation time under uniform
# recruitment of patients over 6 months and a fixed exposure time of 12
# months (lambda1, lambda2, and overdispersion as specified):
getSampleSizeCounts(design = getDesignGroupSequential(
        kMax = 3, alpha = 0.025, beta = 0.2,
        typeOfDesign = "asOF", typeBetaSpending = "bsOF"),
    lambda1 = 0.2, lambda2 = 0.3, overdispersion = 1,
    fixedExposureTime = 12, accrualTime = 6)

# Group sequential alpha spending function design with O'Brien and Fleming
# type boundaries: Sample size for variable exposure time with uniform
# recruitment over 1.25 months and study time (accrual + followup) = 4
# (lambda1, lambda2, and overdispersion as specified, no futility stopping):
getSampleSizeCounts(design = getDesignGroupSequential(
        kMax = 3, alpha = 0.025, beta = 0.2, typeOfDesign = "asOF"),
    lambda1 = 0.0875, lambda2 = 0.125, overdispersion = 5,
    followUpTime = 2.75, accrualTime = 1.25)

## End(Not run)
Returns the sample size for testing means in one or two samples.
getSampleSizeMeans( design = NULL, ..., groups = 2L, normalApproximation = FALSE, meanRatio = FALSE, thetaH0 = ifelse(meanRatio, 1, 0), alternative = seq(0.2, 1, 0.2), stDev = 1, allocationRatioPlanned = NA_real_ )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
groups |
The number of treatment groups (1 or 2), default is |
normalApproximation |
The type of computation of the p-values. If |
meanRatio |
If |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
alternative |
The alternative hypothesis value for testing means. This can be a vector of assumed
alternatives, default is |
stDev |
The standard deviation under which the sample size or power
calculation is performed, default is |
allocationRatioPlanned |
The planned allocation ratio |
At given design the function calculates the stage-wise and maximum sample size for testing means.
In a two treatment groups design, additionally, an allocation ratio = n1 / n2
can be specified where n1
and n2
are the number of subjects in the two treatment groups.
A null hypothesis value thetaH0 != 0 for testing the difference of two means or
thetaH0 != 1 for testing the ratio of two means can be specified.
Critical bounds and stopping for futility bounds are provided at the effect scale
(mean, mean difference, or mean ratio, respectively) for each sample size calculation separately.
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other sample size functions:
getSampleSizeCounts()
,
getSampleSizeRates()
,
getSampleSizeSurvival()
## Not run: 
# Calculate sample sizes in a fixed sample size parallel group design
# with allocation ratio n1 / n2 = 2 for a range of
# alternative values 1, ..., 5 with assumed standard deviation = 3.5;
# two-sided alpha = 0.05, power 1 - beta = 90%:
getSampleSizeMeans(alpha = 0.05, beta = 0.1, sided = 2, groups = 2,
    alternative = seq(1, 5, 1), stDev = 3.5, allocationRatioPlanned = 2)

# Calculate sample sizes in a three-stage Pocock paired comparison design testing
# H0: mu = 2 for a range of alternative values 3,4,5 with assumed standard
# deviation = 3.5; one-sided alpha = 0.05, power 1 - beta = 90%:
getSampleSizeMeans(getDesignGroupSequential(typeOfDesign = "P",
        alpha = 0.05, sided = 1, beta = 0.1),
    groups = 1, thetaH0 = 2, alternative = seq(3, 5, 1), stDev = 3.5)

# Calculate sample sizes in a three-stage Pocock two-armed design testing
# H0: mu = 2 for a range of alternative values 3,4,5 with assumed standard
# deviations = 3 and 4, respectively, in the two groups of observations;
# one-sided alpha = 0.05, power 1 - beta = 90%:
getSampleSizeMeans(getDesignGroupSequential(typeOfDesign = "P",
        alpha = 0.05, sided = 1, beta = 0.1),
    groups = 2, alternative = seq(3, 5, 1), stDev = c(3, 4))

## End(Not run)
Returns the sample size for testing rates in one or two samples.
getSampleSizeRates( design = NULL, ..., groups = 2L, normalApproximation = TRUE, riskRatio = FALSE, thetaH0 = ifelse(riskRatio, 1, 0), pi1 = c(0.4, 0.5, 0.6), pi2 = 0.2, allocationRatioPlanned = NA_real_ )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
groups |
The number of treatment groups (1 or 2), default is |
normalApproximation |
If |
riskRatio |
If |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
pi1 |
A numeric value or vector that represents the assumed probability in
the active treatment group if two treatment groups
are considered, or the alternative probability for a one treatment group design,
default is |
pi2 |
A numeric value that represents the assumed probability in the reference group if two treatment
groups are considered, default is |
allocationRatioPlanned |
The planned allocation ratio |
At given design the function calculates the stage-wise and maximum sample size for testing rates.
In a two treatment groups design, additionally, an allocation ratio = n1 / n2
can be specified
where n1
and n2
are the number of subjects in the two treatment groups.
If a null hypothesis value thetaH0 != 0 for testing the difference of two rates or
thetaH0 != 1 for testing the risk ratio is specified, the sample size
formula according to Farrington & Manning (Statistics in Medicine, 1990) is used.
Critical bounds and stopping for futility bounds are provided at the effect scale
(rate, rate difference, or rate ratio, respectively) for each sample size calculation separately.
For the two-sample case, the calculation is performed for the fixed pi2 given as an argument to the function.
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other sample size functions:
getSampleSizeCounts()
,
getSampleSizeMeans()
,
getSampleSizeSurvival()
## Not run: 
# Calculate the stage-wise sample sizes, maximum sample sizes, and the optimum
# allocation ratios for a range of pi1 values when testing
# H0: pi1 - pi2 = -0.1 within a two-stage O'Brien & Fleming design;
# alpha = 0.05 one-sided, power 1 - beta = 90%:
getSampleSizeRates(getDesignGroupSequential(kMax = 2, alpha = 0.05,
        beta = 0.1),
    groups = 2, thetaH0 = -0.1, pi1 = seq(0.4, 0.55, 0.025),
    pi2 = 0.4, allocationRatioPlanned = 0)

# Calculate the stage-wise sample sizes, maximum sample sizes, and the optimum
# allocation ratios for a range of pi1 values when testing
# H0: pi1 / pi2 = 0.80 within a three-stage O'Brien & Fleming design;
# alpha = 0.025 one-sided, power 1 - beta = 90%:
getSampleSizeRates(getDesignGroupSequential(kMax = 3, alpha = 0.025,
        beta = 0.1),
    groups = 2, riskRatio = TRUE, thetaH0 = 0.80,
    pi1 = seq(0.3, 0.5, 0.025), pi2 = 0.3, allocationRatioPlanned = 0)

## End(Not run)
Returns the sample size for testing the hazard ratio in a two treatment groups survival design.
getSampleSizeSurvival( design = NULL, ..., typeOfComputation = c("Schoenfeld", "Freedman", "HsiehFreedman"), thetaH0 = 1, pi1 = NA_real_, pi2 = NA_real_, lambda1 = NA_real_, lambda2 = NA_real_, median1 = NA_real_, median2 = NA_real_, kappa = 1, hazardRatio = NA_real_, piecewiseSurvivalTime = NA_real_, allocationRatioPlanned = NA_real_, eventTime = 12, accrualTime = c(0, 12), accrualIntensity = 0.1, accrualIntensityType = c("auto", "absolute", "relative"), followUpTime = NA_real_, maxNumberOfSubjects = NA_real_, dropoutRate1 = 0, dropoutRate2 = 0, dropoutTime = 12 )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
typeOfComputation |
Three options are available: |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
pi1 |
A numeric value or vector that represents the assumed event rate in the treatment group,
default is |
pi2 |
A numeric value that represents the assumed event rate in the control group, default is |
lambda1 |
The assumed hazard rate in the treatment group, there is no default.
|
lambda2 |
The assumed hazard rate in the reference group, there is no default.
|
median1 |
The assumed median survival time in the treatment group, there is no default. |
median2 |
The assumed median survival time in the reference group, there is no default. Must be a positive numeric of length 1. |
kappa |
A numeric value > 0. A |
hazardRatio |
The vector of hazard ratios under consideration. If the event or hazard rates in both treatment groups are defined, the hazard ratio need not be specified as it is calculated; there is no default. Must be a positive numeric of length 1. |
piecewiseSurvivalTime |
A vector that specifies the time intervals for the piecewise
definition of the exponential survival time cumulative distribution function |
allocationRatioPlanned |
The planned allocation ratio |
eventTime |
The assumed time under which the event rates are calculated, default is |
accrualTime |
The assumed accrual time intervals for the study, default is
|
accrualIntensity |
A numeric vector of accrual intensities, default is the relative
intensity |
accrualIntensityType |
A character value specifying the accrual intensity input type.
Must be one of |
followUpTime |
The assumed (additional) follow-up time for the study, default is |
maxNumberOfSubjects |
If |
dropoutRate1 |
The assumed drop-out rate in the treatment group, default is |
dropoutRate2 |
The assumed drop-out rate in the control group, default is |
dropoutTime |
The assumed time for drop-out rates in the control and the
treatment group, default is |
For a given design, the function calculates the number of events and an estimate of the
necessary number of subjects for testing the hazard ratio in a survival design.
It also calculates the time when the required events are expected under the given
assumptions (exponentially, piecewise exponentially, or Weibull distributed survival times
and constant or non-constant piecewise accrual).
Additionally, an allocation ratio = n1 / n2
can be specified where n1
and n2
are the number
of subjects in the two treatment groups.
Optional argument accountForObservationTimes
: if accountForObservationTimes = TRUE
, the number of
subjects is calculated assuming specific accrual and follow-up time, default is TRUE
.
The formula of Kim & Tsiatis (Biometrics, 1990)
is used to calculate the expected number of events under the alternative
(see also Lakatos & Lan, Statistics in Medicine, 1992). These formulas are generalized
to piecewise survival times and non-constant piecewise accrual over time.
Optional argument accountForObservationTimes
: if accountForObservationTimes = FALSE
,
only the event rates are used for the calculation of the maximum number of subjects.
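For illustration, a minimal sketch of passing accountForObservationTimes through the "..." argument (the event rates are arbitrary assumptions, not defaults):

# Maximum number of subjects derived from the event rates only
# (illustrative values; accountForObservationTimes is passed via "...")
getSampleSizeSurvival(pi1 = 0.4, pi2 = 0.2, accountForObservationTimes = FALSE)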
Returns a TrialDesignPlan
object.
The following generics (R generic functions) are available for this result object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
The first element of the vector piecewiseSurvivalTime
must be equal to 0
.
piecewiseSurvivalTime
can also be a list that combines the definition of the
time intervals and hazard rates in the reference group.
The definition of the survival time in the treatment group is obtained by the specification
of the hazard ratio (see examples for details).
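For illustration, a minimal sketch of the list-based specification (the time intervals and hazard rates below are arbitrary assumptions, not defaults):

# Hypothetical piecewise definition of the control-arm hazard rates;
# the treatment-arm survival follows from the specified hazard ratio
pws <- list(
    "0 - <6" = 0.025,
    "6 - <12" = 0.04,
    ">=12" = 0.06)
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = pws, hazardRatio = 0.7)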
accrualTime
is the time period of subjects' accrual in a study.
It can be a value that defines the end of accrual or a vector.
In this case, accrualTime
can be used to define a non-constant accrual over time.
For this, accrualTime
is a vector that defines the accrual intervals.
The first element of accrualTime
must be equal to 0
and, additionally,
accrualIntensity
needs to be specified.
accrualIntensity
itself is a value or a vector (depending on the
length of accrualTime
) that defines the intensity with which subjects
enter the trial in the intervals defined through accrualTime
.
accrualTime
can also be a list that combines the definition of the accrual time and
accrual intensity (see below and examples for details).
If the length of accrualTime
and the length of accrualIntensity
are the same
(i.e., the end of accrual is undefined), maxNumberOfSubjects > 0
needs to be specified
and the end of accrual is calculated.
In that case, accrualIntensity
is the number of subjects per time unit, i.e., the absolute accrual intensity.
If the length of accrualTime
equals the length of accrualIntensity + 1
(i.e., the end of accrual is defined), maxNumberOfSubjects
is calculated if the absolute accrual intensity is given.
If all elements in accrualIntensity
are smaller than 1, accrualIntensity
defines
the relative intensity with which subjects enter the trial.
For example, accrualIntensity = c(0.1, 0.2)
specifies that the intensity in the second accrual interval
is twice that of the first accrual interval. The actual (absolute) accrual intensity
is calculated for the calculated or given maxNumberOfSubjects
.
Note that the default is accrualIntensity = 0.1
meaning that the absolute accrual intensity
will be calculated.
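As a minimal sketch (with arbitrary values): relative accrual intensities combined with a defined end of accrual lead to a calculated maximum number of subjects, from which the absolute intensities are derived.

# Relative intensities (< 1) over two accrual intervals ending at time 12;
# the absolute intensities follow from the calculated maximum number of subjects
getSampleSizeSurvival(accrualTime = c(0, 6, 12), accrualIntensity = c(0.1, 0.2),
    pi1 = 0.4, pi2 = 0.2)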
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
Other sample size functions:
getSampleSizeCounts()
,
getSampleSizeMeans()
,
getSampleSizeRates()
## Not run:
# Fixed sample size trial with median survival 20 vs. 30 months in treatment and
# reference group, respectively, alpha = 0.05 (two-sided), and power 1 - beta = 90%.
# 20 subjects will be recruited per month up to 400 subjects, i.e., accrual time
# is 20 months.
getSampleSizeSurvival(alpha = 0.05, sided = 2, beta = 0.1,
    lambda1 = log(2) / 20, lambda2 = log(2) / 30,
    accrualTime = c(0, 20), accrualIntensity = 20)

# Fixed sample size with minimum required definitions, pi1 = c(0.4, 0.5, 0.6) and
# pi2 = 0.2 at event time 12, accrual time 12 and follow-up time 6 as default,
# only alpha = 0.01 is specified
getSampleSizeSurvival(alpha = 0.01)

# Four stage O'Brien & Fleming group sequential design with minimum required
# definitions, pi1 = c(0.4, 0.5, 0.6) and pi2 = 0.2 at event time 12,
# accrual time 12 and follow-up time 6 as default
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 4))

# For fixed sample design, determine necessary accrual time if 200 subjects and
# 30 subjects per time unit can be recruited
getSampleSizeSurvival(accrualTime = c(0), accrualIntensity = c(30),
    maxNumberOfSubjects = 200)

# Determine necessary accrual time if 200 subjects and if the first 6 time units
# 20 subjects per time unit can be recruited, then 30 subjects per time unit
getSampleSizeSurvival(accrualTime = c(0, 6), accrualIntensity = c(20, 30),
    maxNumberOfSubjects = 200)

# Determine maximum number of subjects if the first 6 time units 20 subjects
# per time unit can be recruited, and after 10 time units 30 subjects per time unit
getSampleSizeSurvival(accrualTime = c(0, 6, 10), accrualIntensity = c(20, 30))

# Specify accrual time as a list
at <- list(
    "0 - <6" = 20,
    "6 - Inf" = 30)
getSampleSizeSurvival(accrualTime = at, maxNumberOfSubjects = 200)

# Specify accrual time as a list, if maximum number of subjects need to be calculated
at <- list(
    "0 - <6" = 20,
    "6 - <=10" = 30)
getSampleSizeSurvival(accrualTime = at)

# Specify effect size for a two-stage group design with O'Brien & Fleming boundaries.
# Effect size is based on event rates at specified event time;
# needs to be specified because it should be shown that hazard ratio < 1
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    pi1 = 0.2, pi2 = 0.3, eventTime = 24)

# Effect size is based on event rate at specified event
# time for the reference group and hazard ratio
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    hazardRatio = 0.5, pi2 = 0.3, eventTime = 24)

# Effect size is based on hazard rate for the reference group and hazard ratio
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    hazardRatio = 0.5, lambda2 = 0.02)

# Specification of piecewise exponential survival time and hazard ratios
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = c(0, 5, 10), lambda2 = c(0.01, 0.02, 0.04),
    hazardRatio = c(1.5, 1.8, 2))

# Specification of piecewise exponential survival time as a list and hazard ratios
pws <- list(
    "0 - <5" = 0.01,
    "5 - <10" = 0.02,
    ">=10" = 0.04)
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = pws, hazardRatio = c(1.5, 1.8, 2))

# Specification of piecewise exponential survival time for both treatment arms
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = c(0, 5, 10), lambda2 = c(0.01, 0.02, 0.04),
    lambda1 = c(0.015, 0.03, 0.06))

# Specification of piecewise exponential survival time as a list
pws <- list(
    "0 - <5" = 0.01,
    "5 - <10" = 0.02,
    ">=10" = 0.04)
getSampleSizeSurvival(design = getDesignGroupSequential(kMax = 2),
    piecewiseSurvivalTime = pws, hazardRatio = c(1.5, 1.8, 2))

# Specify effect size based on median survival times
getSampleSizeSurvival(median1 = 5, median2 = 3)

# Specify effect size based on median survival times of Weibull distribution with
# kappa = 2
getSampleSizeSurvival(median1 = 5, median2 = 3, kappa = 2)

# Identify minimal and maximal required subjects to
# reach the required events in spite of dropouts
getSampleSizeSurvival(accrualTime = c(0, 18), accrualIntensity = c(20, 30),
    lambda2 = 0.4, lambda1 = 0.3, followUpTime = Inf,
    dropoutRate1 = 0.001, dropoutRate2 = 0.005)
getSampleSizeSurvival(accrualTime = c(0, 18), accrualIntensity = c(20, 30),
    lambda2 = 0.4, lambda1 = 0.3, followUpTime = 0,
    dropoutRate1 = 0.001, dropoutRate2 = 0.005)

## End(Not run)
Returns the simulated power, stopping probabilities, conditional power, and expected sample size for testing mean rates for negative binomial distributed event numbers in the two treatment groups testing situation.
getSimulationCounts( design = NULL, ..., plannedCalendarTime = NA_real_, maxNumberOfSubjects = NA_real_, lambda1 = NA_real_, lambda2 = NA_real_, lambda = NA_real_, theta = NA_real_, directionUpper = NA, thetaH0 = 1, overdispersion = 0, fixedExposureTime = NA_real_, accrualTime = NA_real_, accrualIntensity = NA_real_, followUpTime = NA_real_, allocationRatioPlanned = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
plannedCalendarTime |
For simulating count data, the time points where an analysis is planned to be performed.
Should be a vector of length |
maxNumberOfSubjects |
|
lambda1 |
A numeric value or vector that represents the assumed rate of a homogeneous Poisson process in the active treatment group, there is no default. |
lambda2 |
A numeric value that represents the assumed rate of a homogeneous Poisson process in the control group, there is no default. |
lambda |
A numeric value or vector that represents the assumed rate of a homogeneous Poisson process in the pooled treatment groups, there is no default. |
theta |
A numeric value or vector that represents the assumed mean ratios lambda1/lambda2 of a homogeneous Poisson process, there is no default. |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
overdispersion |
A numeric value that represents the assumed overdispersion of the negative binomial distribution,
default is |
fixedExposureTime |
If specified, the fixed time of exposure per subject for count data, there is no default. |
accrualTime |
If specified, the assumed accrual time interval(s) for the study, there is no default. |
accrualIntensity |
If specified, the assumed accrual intensities for the study, there is no default. |
followUpTime |
If specified, the assumed (additional) follow-up time for the study, there is no default.
The total study duration is |
allocationRatioPlanned |
The planned allocation ratio |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
showStatistics |
Logical. If |
For a given design, the function simulates the power, stopping probabilities, conditional power, and expected
sample size for a given number of subjects and parameter configuration.
Additionally, an allocation ratio = n1/n2
and a null hypothesis value thetaH0
can be specified.
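A minimal sketch (with illustrative values, not defaults) of specifying a non-default null hypothesis value and allocation ratio for the count data simulation:

getSimulationCounts(
    lambda1 = 0.3, lambda2 = 0.3,
    thetaH0 = 1.25, directionUpper = FALSE, # non-inferiority-type null (assumed)
    allocationRatioPlanned = 2,
    maxNumberOfSubjects = 200, plannedCalendarTime = 8,
    fixedExposureTime = 6, accrualTime = 3,
    maxNumberOfIterations = 100)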
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
The summary statistics "Simulated data" contain the following parameters: median [range]; mean +/- sd.
$show(showStatistics = FALSE)
or $setShowStatistics(FALSE)
can be used to disable
the output of the aggregated simulated data.
getData()
can be used to get the aggregated simulated data from the
object as data.frame
. The data frame contains the following columns:
iterationNumber
: The number of the simulation iteration.
stageNumber
: The stage.
lambda1
: The assumed or derived event rate in the treatment group.
lambda2
: The assumed or derived event rate in the control group.
accrualTime
: The assumed accrualTime.
followUpTime
: The assumed followUpTime.
overdispersion
: The assumed overdispersion.
fixedFollowUp
: The assumed fixedFollowUp.
numberOfSubjects
: The number of subjects under consideration when the (interim) analysis takes place.
rejectPerStage
: 1 if null hypothesis can be rejected, 0 otherwise.
futilityPerStage
: 1 if study should be stopped for futility, 0 otherwise.
testStatistic
: The test statistic that is used for the test decision.
estimatedLambda1
: The estimated rate in treatment group 1.
estimatedLambda2
: The estimated rate in treatment group 2.
estimatedOverdispersion
: The estimated overdispersion.
infoAnalysis
: The Fisher information at interim stage.
trialStop
: TRUE
if study should be stopped for efficacy or futility or final stage, FALSE
otherwise.
conditionalPowerAchieved
: Not yet available
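For illustration, a minimal sketch of retrieving the aggregated simulated data with getData() (the argument values mirror the example below and are illustrative only):

results <- getSimulationCounts(
    theta = 1.8, lambda2 = 0.2,
    maxNumberOfSubjects = 200, plannedCalendarTime = 8,
    fixedExposureTime = 6, accrualTime = 3, overdispersion = 2,
    maxNumberOfIterations = 100)
simData <- getData(results)
head(simData)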
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run:
# Fixed sample size design with two groups, fixed exposure time
getSimulationCounts(
    theta = 1.8,
    lambda2 = 0.2,
    maxNumberOfSubjects = 200,
    plannedCalendarTime = 8,
    maxNumberOfIterations = 1000,
    fixedExposureTime = 6,
    accrualTime = 3,
    overdispersion = 2)

# Group sequential alpha spending function design with O'Brien and
# Fleming type boundaries: Power and test characteristics for N = 264,
# under variable exposure time with uniform recruitment over 1.25 months,
# study time (accrual + follow-up) = 4, interim analyses take place after
# equidistant time points (lambda1, lambda2, and overdispersion as specified,
# no futility stopping):
dOF <- getDesignGroupSequential(
    kMax = 3,
    alpha = 0.025,
    beta = 0.2,
    typeOfDesign = "asOF")
getSimulationCounts(design = dOF,
    lambda1 = seq(0.04, 0.12, 0.02),
    lambda2 = 0.12,
    directionUpper = FALSE,
    overdispersion = 5,
    plannedCalendarTime = (1:3) / 3 * 4,
    maxNumberOfSubjects = 264,
    followUpTime = 2.75,
    accrualTime = 1.25,
    maxNumberOfIterations = 1000)

## End(Not run)
Returns the simulated power, stopping and selection probabilities, conditional power, and expected sample size for testing means in an enrichment design testing situation.
getSimulationEnrichmentMeans( design = NULL, ..., effectList = NULL, intersectionTest = c("Simes", "SpiessensDebois", "Bonferroni", "Sidak"), stratifiedAnalysis = TRUE, adaptations = NA, typeOfSelection = c("best", "rBest", "epsilon", "all", "userDefined"), effectMeasure = c("effectEstimate", "testStatistic"), successCriterion = c("all", "atLeastOne"), epsilonValue = NA_real_, rValue = NA_real_, threshold = -Inf, plannedSubjects = NA_integer_, allocationRatioPlanned = NA_real_, minNumberOfSubjectsPerStage = NA_real_, maxNumberOfSubjectsPerStage = NA_real_, conditionalPower = NA_real_, thetaH1 = NA_real_, stDevH1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcSubjectsFunction = NULL, selectPopulationsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
effectList |
List of subsets, prevalences, and effect sizes with columns and number of rows reflecting the different situations to consider (see examples). |
intersectionTest |
Defines the multiple test for the intersection
hypotheses in the closed system of hypotheses.
Four options are available in enrichment designs: |
stratifiedAnalysis |
Logical. For enrichment designs, typically a stratified analysis should be chosen.
For testing rates, also a non-stratified analysis based on overall data can be performed.
For survival data, only a stratified analysis is possible (see Brannath et al., 2009),
default is |
adaptations |
A logical vector of length |
typeOfSelection |
The way the treatment arms or populations are selected at interim.
Five options are available: |
effectMeasure |
Criterion for treatment arm/population selection, either based on test statistic
( |
successCriterion |
Defines when the study is stopped for efficacy at interim.
Two options are available: |
epsilonValue |
For |
rValue |
For |
threshold |
Selection criterion: treatment arm / population is selected only if |
plannedSubjects |
|
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
thetaH1 |
If specified, the value of the alternative under which the conditional power or sample size recalculation calculation is performed. Must be a numeric of length 1. |
stDevH1 |
If specified, the value of the standard deviation under which
the conditional power or sample size recalculation calculation is performed,
default is the value of |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcSubjectsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, sample size recalculation is performed with conditional power and specified
|
selectPopulationsFunction |
Optionally, a function can be entered that defines the way of how populations
are selected. This function is allowed to depend on |
showStatistics |
Logical. If |
For a given design, the function simulates the power, stopping probabilities, selection probabilities, and expected sample size for a given number of subjects, parameter configuration, and population selection rule in the enrichment situation. An allocation ratio can be specified, referring to the ratio of the number of subjects in the active treatment groups as compared to the control group.
The definition of thetaH1
and/or stDevH1
only makes sense if kMax
> 1
and if conditionalPower
, minNumberOfSubjectsPerStage
, and
maxNumberOfSubjectsPerStage
(or calcSubjectsFunction
) are defined.
calcSubjectsFunction
This function returns the number of subjects at a given conditional power and conditional
critical value for the specified testing situation. The function might depend on the variables
stage
,
selectedPopulations
,
plannedSubjects
,
allocationRatioPlanned
,
minNumberOfSubjectsPerStage
,
maxNumberOfSubjectsPerStage
,
conditionalPower
,
conditionalCriticalValue
,
overallEffects
, and
stDevH1
.
The function has to contain the three-dots argument '...' (see examples).
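A minimal sketch of a user-defined calcSubjectsFunction (the choice of arguments and the doubling rule are illustrative assumptions, not a recommended recalculation strategy):

myCalcSubjectsFunction <- function(..., stage, plannedSubjects,
        minNumberOfSubjectsPerStage, maxNumberOfSubjectsPerStage) {
    # illustrative rule: double the planned stage-wise increment,
    # restricted to the allowed per-stage range
    stageSubjects <- 2 * (plannedSubjects[stage] - plannedSubjects[stage - 1])
    min(max(minNumberOfSubjectsPerStage[stage], stageSubjects),
        maxNumberOfSubjectsPerStage[stage])
}

Such a function could then be passed via calcSubjectsFunction = myCalcSubjectsFunction in getSimulationEnrichmentMeans().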
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run:
# Assess a population selection strategy with one subset population.
# If the subset is better than the full population, then the subset
# is selected for the second stage, otherwise the full. Print and plot
# design characteristics.

# Define design
designIN <- getDesignInverseNormal(kMax = 2)

# Define subgroups and their prevalences
subGroups <- c("S", "R") # fixed names!
prevalences <- c(0.2, 0.8)

# Define effect matrix and variability
effectR <- 0.2
m <- c()
for (effectS in seq(0, 0.5, 0.25)) {
    m <- c(m, effectS, effectR)
}
effects <- matrix(m, byrow = TRUE, ncol = 2)
stDev <- c(0.4, 0.8)

# Define effect list
effectList <- list(subGroups = subGroups, prevalences = prevalences,
    stDevs = stDev, effects = effects)

# Perform simulation
simResultsPE <- getSimulationEnrichmentMeans(design = designIN,
    effectList = effectList, plannedSubjects = c(50, 100),
    maxNumberOfIterations = 100)
print(simResultsPE)

# Assess the design characteristics of a user defined selection
# strategy in a three-stage design with no interim efficacy stop
# using the inverse normal method for combining the stages.
# Only the second interim is used for selecting a study
# population. There is a small probability for stopping the trial
# at the first interim.

# Define design
designIN2 <- getDesignInverseNormal(typeOfDesign = "noEarlyEfficacy", kMax = 3)

# Define selection function
mySelection <- function(effectVector, stage) {
    selectedPopulations <- rep(TRUE, 3)
    if (stage == 2) {
        selectedPopulations <- (effectVector >= c(1, 2, 3))
    }
    return(selectedPopulations)
}

# Define subgroups and their prevalences
subGroups <- c("S1", "S12", "S2", "R") # fixed names!
prevalences <- c(0.2, 0.3, 0.4, 0.1)

effectR <- 1.5
effectS12 <- 5
m <- c()
for (effectS1 in seq(0, 5, 1)) {
    for (effectS2 in seq(0, 5, 1)) {
        m <- c(m, effectS1, effectS12, effectS2, effectR)
    }
}
effects <- matrix(m, byrow = TRUE, ncol = 4)
stDev <- 10

# Define effect list
effectList <- list(subGroups = subGroups, prevalences = prevalences,
    stDevs = stDev, effects = effects)

# Perform simulation
simResultsPE <- getSimulationEnrichmentMeans(
    design = designIN2,
    effectList = effectList,
    typeOfSelection = "userDefined",
    selectPopulationsFunction = mySelection,
    intersectionTest = "Simes",
    plannedSubjects = c(50, 100, 150),
    maxNumberOfIterations = 100)
print(simResultsPE)
if (require(ggplot2)) plot(simResultsPE, type = 3)

## End(Not run)
Returns the simulated power, stopping and selection probabilities, conditional power, and expected sample size for testing rates in an enrichment design testing situation.
getSimulationEnrichmentRates( design = NULL, ..., effectList = NULL, intersectionTest = c("Simes", "SpiessensDebois", "Bonferroni", "Sidak"), stratifiedAnalysis = TRUE, directionUpper = NA, adaptations = NA, typeOfSelection = c("best", "rBest", "epsilon", "all", "userDefined"), effectMeasure = c("effectEstimate", "testStatistic"), successCriterion = c("all", "atLeastOne"), epsilonValue = NA_real_, rValue = NA_real_, threshold = -Inf, plannedSubjects = NA_real_, allocationRatioPlanned = NA_real_, minNumberOfSubjectsPerStage = NA_real_, maxNumberOfSubjectsPerStage = NA_real_, conditionalPower = NA_real_, piTreatmentH1 = NA_real_, piControlH1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcSubjectsFunction = NULL, selectPopulationsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
effectList |
List of subsets, prevalences, and effect sizes with columns and number of rows reflecting the different situations to consider (see examples). |
intersectionTest |
Defines the multiple test for the intersection
hypotheses in the closed system of hypotheses.
Four options are available in enrichment designs: |
stratifiedAnalysis |
Logical. For enrichment designs, typically a stratified analysis should be chosen.
For testing rates, also a non-stratified analysis based on overall data can be performed.
For survival data, only a stratified analysis is possible (see Brannath et al., 2009),
default is |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
adaptations |
A logical vector of length |
typeOfSelection |
The way the treatment arms or populations are selected at interim.
Five options are available: |
effectMeasure |
Criterion for treatment arm/population selection, either based on test statistic
( |
successCriterion |
Defines when the study is stopped for efficacy at interim.
Two options are available: |
epsilonValue |
For |
rValue |
For |
threshold |
Selection criterion: treatment arm / population is selected only if |
plannedSubjects |
|
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
piTreatmentH1 |
If specified, the assumed probabilities in the active arm under which the sample size recalculation was performed and the conditional power was calculated. |
piControlH1 |
If specified, the assumed probabilities in the control arm under which the sample size recalculation was performed and the conditional power was calculated. |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcSubjectsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, sample size recalculation is performed with conditional power and specified
|
selectPopulationsFunction |
Optionally, a function can be entered that defines the way of how populations
are selected. This function is allowed to depend on |
showStatistics |
Logical. If |
For a given design, the function simulates the power, stopping probabilities, selection probabilities, and expected sample size for a given number of subjects, parameter configuration, and population selection rule in the enrichment situation. An allocation ratio can be specified, referring to the ratio of the number of subjects in the active treatment groups as compared to the control group.
The definition of piTreatmentH1
and/or piControlH1
only makes sense if kMax
> 1
and if conditionalPower
, minNumberOfSubjectsPerStage
, and
maxNumberOfSubjectsPerStage
(or calcSubjectsFunction
) are defined.
calcSubjectsFunction
This function returns the number of subjects at a given conditional power and
conditional critical value for the specified testing situation.
The function might depend on the variables
stage
,
selectedPopulations
,
directionUpper
,
plannedSubjects
,
allocationRatioPlanned
,
minNumberOfSubjectsPerStage
,
maxNumberOfSubjectsPerStage
,
conditionalPower
,
conditionalCriticalValue
,
overallRatesTreatment
,
overallRatesControl
,
piTreatmentH1
, and
piControlH1
.
The function has to contain the three-dots argument '...' (see examples).
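A minimal sketch of a user-defined calcSubjectsFunction for rates (the normal-approximation formula, the 1:1 allocation, and the assumption that piTreatmentH1 and piControlH1 are specified are illustrative choices, not a recommended strategy):

myCalcSubjectsFunctionRates <- function(..., stage, conditionalPower,
        conditionalCriticalValue, piTreatmentH1, piControlH1,
        minNumberOfSubjectsPerStage, maxNumberOfSubjectsPerStage) {
    # approximate total stage-wise sample size for a difference in rates,
    # assuming 1:1 allocation (illustrative only)
    delta <- max(1e-12, abs(piTreatmentH1 - piControlH1))
    sigma2 <- piTreatmentH1 * (1 - piTreatmentH1) +
        piControlH1 * (1 - piControlH1)
    stageSubjects <- 2 * sigma2 *
        (max(0, conditionalCriticalValue + stats::qnorm(conditionalPower)))^2 / delta^2
    min(max(minNumberOfSubjectsPerStage[stage], stageSubjects),
        maxNumberOfSubjectsPerStage[stage])
}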
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run:
# Assess a population selection strategy with two subset populations and
# a binary endpoint using a stratified analysis. No early efficacy stop,
# weighted inverse normal method with weight sqrt(0.4).
pi2 <- c(0.3, 0.4, 0.3, 0.55)
pi1Seq <- seq(0.0, 0.2, 0.2)
pi1 <- matrix(rep(pi1Seq, length(pi2)), ncol = length(pi1Seq), byrow = TRUE) + pi2
effectList <- list(
    subGroups = c("S1", "S2", "S12", "R"),
    prevalences = c(0.1, 0.4, 0.2, 0.3),
    piControl = pi2,
    piTreatments = expand.grid(pi1[1, ], pi1[2, ], pi1[3, ], pi1[4, ])
)
design <- getDesignInverseNormal(informationRates = c(0.4, 1),
    typeOfDesign = "noEarlyEfficacy")
simResultsPE <- getSimulationEnrichmentRates(design,
    plannedSubjects = c(150, 300),
    allocationRatioPlanned = 1.5, directionUpper = TRUE,
    effectList = effectList, stratifiedAnalysis = TRUE,
    intersectionTest = "Sidak",
    typeOfSelection = "epsilon", epsilonValue = 0.025,
    maxNumberOfIterations = 100)
print(simResultsPE)

## End(Not run)
Returns the simulated power, stopping and selection probabilities, conditional power,
and expected sample size for testing hazard ratios in an enrichment design testing situation.
In contrast to getSimulationSurvival()
(where survival times are simulated), normally
distributed logrank test statistics are simulated.
getSimulationEnrichmentSurvival( design = NULL, ..., effectList = NULL, intersectionTest = c("Simes", "SpiessensDebois", "Bonferroni", "Sidak"), stratifiedAnalysis = TRUE, directionUpper = NA, adaptations = NA, typeOfSelection = c("best", "rBest", "epsilon", "all", "userDefined"), effectMeasure = c("effectEstimate", "testStatistic"), successCriterion = c("all", "atLeastOne"), epsilonValue = NA_real_, rValue = NA_real_, threshold = -Inf, plannedEvents = NA_real_, allocationRatioPlanned = NA_real_, minNumberOfEventsPerStage = NA_real_, maxNumberOfEventsPerStage = NA_real_, conditionalPower = NA_real_, thetaH1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcEventsFunction = NULL, selectPopulationsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
effectList |
List of subsets, prevalences, and effect sizes with columns and number of rows reflecting the different situations to consider (see examples). |
intersectionTest |
Defines the multiple test for the intersection
hypotheses in the closed system of hypotheses.
Four options are available in enrichment designs: |
stratifiedAnalysis |
Logical. For enrichment designs, typically a stratified analysis should be chosen.
For testing rates, also a non-stratified analysis based on overall data can be performed.
For survival data, only a stratified analysis is possible (see Brannath et al., 2009),
default is |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
adaptations |
A logical vector of length |
typeOfSelection |
The way the treatment arms or populations are selected at interim.
Five options are available: |
effectMeasure |
Criterion for treatment arm/population selection, either based on test statistic
( |
successCriterion |
Defines when the study is stopped for efficacy at interim.
Two options are available: |
epsilonValue |
For |
rValue |
For |
threshold |
Selection criterion: treatment arm / population is selected only if |
plannedEvents |
|
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfEventsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfEventsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
thetaH1 |
If specified, the value of the alternative under which the conditional power or sample size recalculation calculation is performed. Must be a numeric of length 1. |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcEventsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, event number recalculation is performed with conditional power and specified
|
selectPopulationsFunction |
Optionally, a function can be entered that defines the way of how populations
are selected. This function is allowed to depend on |
showStatistics |
Logical. If |
For a given design, the function simulates the power, stopping probabilities, selection probabilities, and expected number of events for a given number of events, parameter configuration, and population selection rule in the enrichment situation. An allocation ratio can be specified, referring to the ratio of the number of subjects in the active treatment group as compared to the control group.
The definition of thetaH1
only makes sense if kMax
> 1
and if conditionalPower
, minNumberOfEventsPerStage
, and
maxNumberOfEventsPerStage
(or calcEventsFunction
) are defined.
calcEventsFunction
This function returns the number of events at a given conditional power
and conditional critical value for the specified testing situation.
The function might depend on the variables
stage
,
selectedPopulations
,
plannedEvents
,
directionUpper
,
allocationRatioPlanned
,
minNumberOfEventsPerStage
,
maxNumberOfEventsPerStage
,
conditionalPower
,
conditionalCriticalValue
, and
overallEffects
.
The function has to contain the three-dots argument '...' (see examples).
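A minimal sketch of a user-defined calcEventsFunction (the 50% increase of the planned increment is an arbitrary illustrative rule, not a recommended strategy):

myCalcEventsFunction <- function(..., stage, plannedEvents,
        minNumberOfEventsPerStage, maxNumberOfEventsPerStage) {
    # illustrative rule: increase the planned stage-wise event increment by 50%,
    # restricted to the allowed per-stage range
    stageEvents <- 1.5 * (plannedEvents[stage] - plannedEvents[stage - 1])
    min(max(minNumberOfEventsPerStage[stage], stageEvents),
        maxNumberOfEventsPerStage[stage])
}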
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run:
# Assess a population selection strategy with one subset population and
# a survival endpoint. The considered situations are defined through the
# event rates yielding a range of hazard ratios in the subsets. Design
# with O'Brien and Fleming alpha spending and a reassessment of event
# number in the first interim based on conditional power and assumed
# hazard ratio using weighted inverse normal combination test.

subGroups <- c("S", "R")
prevalences <- c(0.40, 0.60)

p2 <- c(0.3, 0.4)
range1 <- p2[1] + seq(0, 0.3, 0.05)

p1 <- c()
for (x1 in range1) {
    p1 <- c(p1, x1, p2[2] + 0.1)
}
hazardRatios <- log(matrix(1 - p1, byrow = TRUE, ncol = 2)) /
    matrix(log(1 - p2), byrow = TRUE, ncol = 2, nrow = length(range1))

effectList <- list(subGroups = subGroups, prevalences = prevalences,
    hazardRatios = hazardRatios)

design <- getDesignInverseNormal(informationRates = c(0.3, 0.7, 1),
    typeOfDesign = "asOF")
simResultsPE <- getSimulationEnrichmentSurvival(design,
    plannedEvents = c(40, 90, 120),
    effectList = effectList,
    typeOfSelection = "rbest", rValue = 2,
    conditionalPower = 0.8,
    minNumberOfEventsPerStage = c(NA, 50, 30),
    maxNumberOfEventsPerStage = c(NA, 150, 30),
    thetaH1 = 4 / 3,
    maxNumberOfIterations = 100)
print(simResultsPE)

## End(Not run)
Returns the simulated power, stopping probabilities, conditional power, and expected sample size for testing means in a one or two treatment groups testing situation.
getSimulationMeans( design = NULL, ..., groups = 2L, normalApproximation = TRUE, meanRatio = FALSE, thetaH0 = ifelse(meanRatio, 1, 0), alternative = seq(0, 1, 0.2), stDev = 1, plannedSubjects = NA_real_, directionUpper = NA, allocationRatioPlanned = NA_real_, minNumberOfSubjectsPerStage = NA_real_, maxNumberOfSubjectsPerStage = NA_real_, conditionalPower = NA_real_, thetaH1 = NA_real_, stDevH1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcSubjectsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
groups |
The number of treatment groups (1 or 2), default is |
normalApproximation |
The type of computation of the p-values. Default is |
meanRatio |
If |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
alternative |
The alternative hypothesis value for testing means under which the data is simulated.
This can be a vector of assumed alternatives, default is |
stDev |
The standard deviation under which the data is simulated,
default is |
plannedSubjects |
|
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
thetaH1 |
If specified, the value of the alternative under which the conditional power or sample size recalculation calculation is performed. Must be a numeric of length 1. |
stDevH1 |
If specified, the value of the standard deviation under which
the conditional power or sample size recalculation calculation is performed,
default is the value of |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcSubjectsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, sample size recalculation is performed with conditional power and specified
|
showStatistics |
Logical. If |
For a given design, the function simulates the power, stopping probabilities, conditional power, and expected sample size for a given number of subjects and parameter configuration. Additionally, an allocation ratio = n1/n2 can be specified, where n1 and n2 are the numbers of subjects in the two treatment groups.
The definition of thetaH1
only makes sense if kMax
> 1
and if conditionalPower
, minNumberOfSubjectsPerStage
, and
maxNumberOfSubjectsPerStage
(or calcSubjectsFunction
) are defined.
calcSubjectsFunction
This function returns the number of subjects at a given conditional power and conditional critical value for the specified
testing situation. The function might depend on the variables
stage
,
meanRatio
,
thetaH0
,
groups
,
plannedSubjects
,
sampleSizesPerStage
,
directionUpper
,
allocationRatioPlanned
,
minNumberOfSubjectsPerStage
,
maxNumberOfSubjectsPerStage
,
conditionalPower
,
conditionalCriticalValue
,
thetaH1
, and
stDevH1
.
The function has to contain the three-dots argument '...' (see examples).
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
The summary statistics "Simulated data" contain the following parameters: median [range]; mean +/- sd.
$show(showStatistics = FALSE)
or $setShowStatistics(FALSE)
can be used to disable
the output of the aggregated simulated data.
Example 1:
simulationResults <- getSimulationMeans(plannedSubjects = 40)
simulationResults$show(showStatistics = FALSE)
Example 2:
simulationResults <- getSimulationMeans(plannedSubjects = 40)
simulationResults$setShowStatistics(FALSE)
simulationResults
getData()
can be used to get the aggregated simulated data from the
object as data.frame
. The data frame contains the following columns:
iterationNumber
: The number of the simulation iteration.
stageNumber
: The stage.
alternative
: The alternative hypothesis value.
numberOfSubjects
: The number of subjects under consideration when the
(interim) analysis takes place.
rejectPerStage
: 1 if null hypothesis can be rejected, 0 otherwise.
futilityPerStage
: 1 if study should be stopped for futility, 0 otherwise.
testStatistic
: The test statistic that is used for the test decision,
depends on which design was chosen (group sequential, inverse normal, or Fisher's combination test).
testStatisticsPerStage
: The test statistic for each stage if only data from
the considered stage is taken into account.
effectEstimate
: Overall simulated standardized effect estimate.
trialStop
: TRUE
if study should be stopped for efficacy or futility or final stage, FALSE
otherwise.
conditionalPowerAchieved
: The conditional power for the subsequent stage of the trial for
selected sample size and effect. The effect is either estimated from the data or can be
user defined with thetaH1
.
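A minimal sketch of how the aggregated data might be inspected (the design and parameter values below are illustrative only):

simulationResults <- getSimulationMeans(
    getDesignGroupSequential(kMax = 2),
    alternative = c(0, 0.5), plannedSubjects = c(20, 40),
    maxNumberOfIterations = 100
)
simData <- getData(simulationResults)
# count rejections by stage and alternative over all iterations
aggregate(rejectPerStage ~ stageNumber + alternative, data = simData, FUN = sum)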
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
# Fixed sample size design with two groups, total sample size 40,
# alternative = c(0, 0.2, 0.4, 0.8, 1), and standard deviation = 1 (the default)
getSimulationMeans(plannedSubjects = 40, maxNumberOfIterations = 10)

# Increase number of simulation iterations and compare results
# with power calculator using normal approximation
getSimulationMeans(
    alternative = 0:4, stDev = 5,
    plannedSubjects = 40, maxNumberOfIterations = 1000
)
getPowerMeans(
    alternative = 0:4, stDev = 5,
    maxNumberOfSubjects = 40, normalApproximation = TRUE
)

# Do the same for a three-stage O'Brien&Fleming inverse
# normal group sequential design with non-binding futility stops
designIN <- getDesignInverseNormal(typeOfDesign = "OF", futilityBounds = c(0, 0))
x <- getSimulationMeans(designIN,
    alternative = c(0:4), stDev = 5,
    plannedSubjects = c(20, 40, 60), maxNumberOfIterations = 1000
)
getPowerMeans(designIN,
    alternative = 0:4, stDev = 5,
    maxNumberOfSubjects = 60, normalApproximation = TRUE
)

# Assess power and average sample size if a sample size increase is foreseen
# at conditional power 80% for each subsequent stage based on observed overall
# effect and specified minNumberOfSubjectsPerStage and
# maxNumberOfSubjectsPerStage
getSimulationMeans(designIN,
    alternative = 0:4, stDev = 5,
    plannedSubjects = c(20, 40, 60),
    minNumberOfSubjectsPerStage = c(NA, 20, 20),
    maxNumberOfSubjectsPerStage = c(NA, 80, 80),
    conditionalPower = 0.8, maxNumberOfIterations = 50
)

# Do the same under the assumption that a sample size increase only takes
# place at the first interim. The sample size for the third stage is set equal
# to the second stage sample size.
mySampleSizeCalculationFunction <- function(..., stage,
        minNumberOfSubjectsPerStage,
        maxNumberOfSubjectsPerStage,
        sampleSizesPerStage,
        conditionalPower,
        conditionalCriticalValue,
        allocationRatioPlanned,
        thetaH1,
        stDevH1) {
    if (stage <= 2) {
        # Note that allocationRatioPlanned is a vector of length kMax
        stageSubjects <- (1 + allocationRatioPlanned[stage])^2 /
            allocationRatioPlanned[stage] *
            (max(0, conditionalCriticalValue + stats::qnorm(conditionalPower)))^2 /
            (max(1e-12, thetaH1 / stDevH1))^2
        stageSubjects <- min(max(
            minNumberOfSubjectsPerStage[stage],
            stageSubjects
        ), maxNumberOfSubjectsPerStage[stage])
    } else {
        stageSubjects <- sampleSizesPerStage[stage - 1]
    }
    return(stageSubjects)
}
getSimulationMeans(designIN,
    alternative = 0:4, stDev = 5,
    plannedSubjects = c(20, 40, 60),
    minNumberOfSubjectsPerStage = c(NA, 20, 20),
    maxNumberOfSubjectsPerStage = c(NA, 80, 80),
    conditionalPower = 0.8,
    calcSubjectsFunction = mySampleSizeCalculationFunction,
    maxNumberOfIterations = 50
)

## End(Not run)
Returns the simulated power, stopping and selection probabilities, conditional power, and expected sample size for testing means in a multi-arm treatment groups testing situation.
getSimulationMultiArmMeans( design = NULL, ..., activeArms = 3L, effectMatrix = NULL, typeOfShape = c("linear", "sigmoidEmax", "userDefined"), muMaxVector = seq(0, 1, 0.2), gED50 = NA_real_, slope = 1, doseLevels = NA_real_, intersectionTest = c("Dunnett", "Bonferroni", "Simes", "Sidak", "Hierarchical"), stDev = 1, adaptations = NA, typeOfSelection = c("best", "rBest", "epsilon", "all", "userDefined"), effectMeasure = c("effectEstimate", "testStatistic"), successCriterion = c("all", "atLeastOne"), epsilonValue = NA_real_, rValue = NA_real_, threshold = -Inf, plannedSubjects = NA_integer_, allocationRatioPlanned = NA_real_, minNumberOfSubjectsPerStage = NA_real_, maxNumberOfSubjectsPerStage = NA_real_, conditionalPower = NA_real_, thetaH1 = NA_real_, stDevH1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcSubjectsFunction = NULL, selectArmsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
activeArms |
The number of active treatment arms to be compared with control, default is |
effectMatrix |
Matrix of effect sizes with |
typeOfShape |
The shape of the dose-response relationship over the treatment groups.
This can be either |
muMaxVector |
Range of effect sizes for the treatment group with highest response
for |
gED50 |
If |
slope |
If |
doseLevels |
The dose levels for the dose response relationship.
If not specified, these dose levels are |
intersectionTest |
Defines the multiple test for the intersection
hypotheses in the closed system of hypotheses.
Five options are available in multi-arm designs: |
stDev |
The standard deviation under which the data is simulated,
default is |
adaptations |
A logical vector of length |
typeOfSelection |
The way the treatment arms or populations are selected at interim.
Five options are available: |
effectMeasure |
Criterion for treatment arm/population selection, either based on test statistic
( |
successCriterion |
Defines when the study is stopped for efficacy at interim.
Two options are available: |
epsilonValue |
For |
rValue |
For |
threshold |
Selection criterion: treatment arm / population is selected only if |
plannedSubjects |
|
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
thetaH1 |
If specified, the value of the alternative under which the conditional power calculation or sample size recalculation is performed. Must be a numeric of length 1. |
stDevH1 |
If specified, the value of the standard deviation under which
the conditional power calculation or sample size recalculation is performed,
default is the value of |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcSubjectsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, sample size recalculation is performed with conditional power and specified
|
selectArmsFunction |
Optionally, a function can be entered that defines how treatment arms
are selected. This function is allowed to depend on |
showStatistics |
Logical. If |
At a given design, the function simulates the power, stopping probabilities, selection probabilities, and expected sample size at a given number of subjects, parameter configuration, and treatment arm selection rule in the multi-arm situation. An allocation ratio can be specified, referring to the ratio of the number of subjects in the active treatment groups as compared to the control group.
The definition of thetaH1 and/or stDevH1 only makes sense if kMax > 1 and if conditionalPower, minNumberOfSubjectsPerStage, and maxNumberOfSubjectsPerStage (or calcSubjectsFunction) are defined.
calcSubjectsFunction
This function returns the number of subjects at given conditional power and conditional critical value for the specified testing situation. The function might depend on the variables stage, selectedArms, plannedSubjects, allocationRatioPlanned, minNumberOfSubjectsPerStage, maxNumberOfSubjectsPerStage, conditionalPower, conditionalCriticalValue, overallEffects, and stDevH1.
The function has to contain the three-dots argument '...' (see examples).
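Similarly, for typeOfSelection = "userDefined" a selection rule can be supplied via selectArmsFunction; it receives the vector of observed effects and returns a logical vector of selected arms. A sketch with an arbitrary rule (compare mySelection in the examples below):

mySelectArmsFunction <- function(effectVector) {
    # select all arms whose observed effect is within 0.2 of the best arm
    effectVector >= max(effectVector) - 0.2
}

It would be passed via selectArmsFunction = mySelectArmsFunction together with typeOfSelection = "userDefined".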
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
# Assess a treatment-arm selection strategy with three active arms,
# if the better of the arms is selected for the second stage, and
# compare it with the no-selection case.
# Assume a linear dose-response relationship
maxNumberOfIterations <- 100
designIN <- getDesignInverseNormal(typeOfDesign = "OF", kMax = 2)
sim <- getSimulationMultiArmMeans(design = designIN, activeArms = 3,
    typeOfShape = "linear", muMaxVector = seq(0, 0.8, 0.2),
    intersectionTest = "Simes", typeOfSelection = "best",
    plannedSubjects = c(30, 60),
    maxNumberOfIterations = maxNumberOfIterations)

sim0 <- getSimulationMultiArmMeans(design = designIN, activeArms = 3,
    typeOfShape = "linear", muMaxVector = seq(0, 0.8, 0.2),
    intersectionTest = "Simes", typeOfSelection = "all",
    plannedSubjects = c(30, 60),
    maxNumberOfIterations = maxNumberOfIterations)

sim$rejectAtLeastOne
sim$expectedNumberOfSubjects

sim0$rejectAtLeastOne
sim0$expectedNumberOfSubjects

# Compare the power of the conditional Dunnett test with the power of the
# combination test using Dunnett's intersection tests if no treatment arm
# selection takes place. Assume a linear dose-response relationship.
maxNumberOfIterations <- 100
designIN <- getDesignInverseNormal(typeOfDesign = "asUser",
    userAlphaSpending = c(0, 0.025))
designCD <- getDesignConditionalDunnett(secondStageConditioning = TRUE)

index <- 1
for (design in c(designIN, designCD)) {
    results <- getSimulationMultiArmMeans(design, activeArms = 3,
        muMaxVector = seq(0, 1, 0.2), typeOfShape = "linear",
        plannedSubjects = cumsum(rep(20, 2)),
        intersectionTest = "Dunnett",
        typeOfSelection = "all",
        maxNumberOfIterations = maxNumberOfIterations)
    if (index == 1) {
        drift <- results$effectMatrix[nrow(results$effectMatrix), ]
        plot(drift, results$rejectAtLeastOne, type = "l", lty = 1,
            lwd = 3, col = "black", ylab = "Power")
    } else {
        lines(drift, results$rejectAtLeastOne, type = "l",
            lty = index, lwd = 3, col = "red")
    }
    index <- index + 1
}
legend("topleft", legend = c("Combination Dunnett", "Conditional Dunnett"),
    col = c("black", "red"), lty = (1:2), cex = 0.8)

# Assess the design characteristics of a user defined selection
# strategy in a two-stage design using the inverse normal method
# with constant bounds. Stopping for futility due to
# de-selection of all treatment arms.
designIN <- getDesignInverseNormal(typeOfDesign = "P", kMax = 2)

mySelection <- function(effectVector) {
    selectedArms <- (effectVector >= c(0, 0.1, 0.3))
    return(selectedArms)
}

results <- getSimulationMultiArmMeans(designIN, activeArms = 3,
    muMaxVector = seq(0, 1, 0.2), typeOfShape = "linear",
    plannedSubjects = c(30, 60),
    intersectionTest = "Dunnett",
    typeOfSelection = "userDefined",
    selectArmsFunction = mySelection,
    maxNumberOfIterations = 100)

options(rpact.summary.output.size = "medium")
summary(results)
if (require(ggplot2)) plot(results, type = c(5, 3, 9), grid = 4)

## End(Not run)
Returns the simulated power, stopping and selection probabilities, conditional power, and expected sample size for testing rates in a multi-arm treatment groups testing situation.
getSimulationMultiArmRates( design = NULL, ..., activeArms = 3L, effectMatrix = NULL, typeOfShape = c("linear", "sigmoidEmax", "userDefined"), piMaxVector = seq(0.2, 0.5, 0.1), piControl = 0.2, gED50 = NA_real_, slope = 1, doseLevels = NA_real_, intersectionTest = c("Dunnett", "Bonferroni", "Simes", "Sidak", "Hierarchical"), directionUpper = NA, adaptations = NA, typeOfSelection = c("best", "rBest", "epsilon", "all", "userDefined"), effectMeasure = c("effectEstimate", "testStatistic"), successCriterion = c("all", "atLeastOne"), epsilonValue = NA_real_, rValue = NA_real_, threshold = -Inf, plannedSubjects = NA_real_, allocationRatioPlanned = NA_real_, minNumberOfSubjectsPerStage = NA_real_, maxNumberOfSubjectsPerStage = NA_real_, conditionalPower = NA_real_, piTreatmentsH1 = NA_real_, piControlH1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcSubjectsFunction = NULL, selectArmsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
activeArms |
The number of active treatment arms to be compared with control, default is |
effectMatrix |
Matrix of effect sizes with |
typeOfShape |
The shape of the dose-response relationship over the treatment groups.
This can be either |
piMaxVector |
Range of assumed probabilities for the treatment group with
highest response for |
piControl |
If specified, the assumed probability in the control arm for simulation and under which the sample size recalculation is performed. |
gED50 |
If |
slope |
If |
doseLevels |
The dose levels for the dose response relationship.
If not specified, these dose levels are |
intersectionTest |
Defines the multiple test for the intersection
hypotheses in the closed system of hypotheses.
Five options are available in multi-arm designs: |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
adaptations |
A logical vector of length |
typeOfSelection |
The way the treatment arms or populations are selected at interim.
Five options are available: |
effectMeasure |
Criterion for treatment arm/population selection, either based on test statistic
( |
successCriterion |
Defines when the study is stopped for efficacy at interim.
Two options are available: |
epsilonValue |
For |
rValue |
For |
threshold |
Selection criterion: treatment arm / population is selected only if |
plannedSubjects |
|
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
piTreatmentsH1 |
If specified, the assumed probability in the active treatment arm(s) under which the sample size recalculation is performed. |
piControlH1 |
If specified, the assumed probability in the reference group
(if different from |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcSubjectsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, sample size recalculation is performed with conditional power and specified
|
selectArmsFunction |
Optionally, a function can be entered that defines how treatment arms
are selected. This function is allowed to depend on |
showStatistics |
Logical. If |
At a given design, the function simulates the power, stopping probabilities, selection probabilities, and expected sample size at a given number of subjects, parameter configuration, and treatment arm selection rule in the multi-arm situation. An allocation ratio can be specified, referring to the ratio of the number of subjects in the active treatment groups as compared to the control group.
The definition of piTreatmentsH1 and/or piControlH1 only makes sense if kMax > 1 and if conditionalPower, minNumberOfSubjectsPerStage, and maxNumberOfSubjectsPerStage (or calcSubjectsFunction) are defined.
calcSubjectsFunction
This function returns the number of subjects at given conditional power and conditional critical value for the specified testing situation. The function might depend on the variables stage, selectedArms, directionUpper, plannedSubjects, allocationRatioPlanned, minNumberOfSubjectsPerStage, maxNumberOfSubjectsPerStage, conditionalPower, conditionalCriticalValue, overallRates, overallRatesControl, piTreatmentsH1, and piControlH1.
The function has to contain the three-dots argument '...' (see examples).
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
# Simulate the power of the combination test with two interim stages and
# O'Brien & Fleming boundaries using Dunnett's intersection tests if the
# best treatment arm is selected at first interim. Selection only takes
# place if a non-negative treatment effect is observed (threshold = 0);
# 20 subjects per stage and treatment arm, simulation is performed for
# four parameter configurations.
design <- getDesignInverseNormal(typeOfDesign = "OF")
effectMatrix <- matrix(c(0.2, 0.2, 0.2,
    0.4, 0.4, 0.4,
    0.4, 0.5, 0.5,
    0.4, 0.5, 0.6),
    byrow = TRUE, nrow = 4, ncol = 3)
x <- getSimulationMultiArmRates(design = design, typeOfShape = "userDefined",
    effectMatrix = effectMatrix, piControl = 0.2,
    typeOfSelection = "best", threshold = 0, intersectionTest = "Dunnett",
    plannedSubjects = c(20, 40, 60),
    maxNumberOfIterations = 50)
summary(x)

## End(Not run)
Returns the simulated power, stopping and selection probabilities, conditional power, and
expected sample size for testing hazard ratios in a multi-arm treatment groups testing situation.
In contrast to getSimulationSurvival()
(where survival times are simulated), normally
distributed logrank test statistics are simulated.
getSimulationMultiArmSurvival( design = NULL, ..., activeArms = 3L, effectMatrix = NULL, typeOfShape = c("linear", "sigmoidEmax", "userDefined"), omegaMaxVector = seq(1, 2.6, 0.4), gED50 = NA_real_, slope = 1, doseLevels = NA_real_, intersectionTest = c("Dunnett", "Bonferroni", "Simes", "Sidak", "Hierarchical"), directionUpper = NA, adaptations = NA, typeOfSelection = c("best", "rBest", "epsilon", "all", "userDefined"), effectMeasure = c("effectEstimate", "testStatistic"), successCriterion = c("all", "atLeastOne"), correlationComputation = c("alternative", "null"), epsilonValue = NA_real_, rValue = NA_real_, threshold = -Inf, plannedEvents = NA_real_, allocationRatioPlanned = NA_real_, minNumberOfEventsPerStage = NA_real_, maxNumberOfEventsPerStage = NA_real_, conditionalPower = NA_real_, thetaH1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcEventsFunction = NULL, selectArmsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
activeArms |
The number of active treatment arms to be compared with control, default is |
effectMatrix |
Matrix of effect sizes with |
typeOfShape |
The shape of the dose-response relationship over the treatment groups.
This can be either |
omegaMaxVector |
Range of hazard ratios with highest response for |
gED50 |
If |
slope |
If |
doseLevels |
The dose levels for the dose response relationship.
If not specified, these dose levels are |
intersectionTest |
Defines the multiple test for the intersection
hypotheses in the closed system of hypotheses.
Five options are available in multi-arm designs: |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
adaptations |
A logical vector of length |
typeOfSelection |
The way the treatment arms or populations are selected at interim.
Five options are available: |
effectMeasure |
Criterion for treatment arm/population selection, either based on test statistic
( |
successCriterion |
Defines when the study is stopped for efficacy at interim.
Two options are available: |
correlationComputation |
If |
epsilonValue |
For |
rValue |
For |
threshold |
Selection criterion: treatment arm / population is selected only if |
plannedEvents |
|
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfEventsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfEventsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
thetaH1 |
If specified, the value of the alternative under which the conditional power calculation or sample size recalculation is performed. Must be a numeric of length 1. |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcEventsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, event number recalculation is performed with conditional power and specified
|
selectArmsFunction |
Optionally, a function can be entered that defines how treatment arms
are selected. This function is allowed to depend on |
showStatistics |
Logical. If |
At a given design, the function simulates the power, stopping probabilities, selection probabilities, and expected sample size at a given number of subjects, parameter configuration, and treatment arm selection rule in the multi-arm situation. An allocation ratio can be specified, referring to the ratio of the number of subjects in the active treatment groups as compared to the control group.
The definition of thetaH1 only makes sense if kMax > 1 and if conditionalPower, minNumberOfEventsPerStage, and maxNumberOfEventsPerStage (or calcEventsFunction) are defined.
calcEventsFunction
This function returns the number of events at given conditional power and conditional critical value for the specified testing situation. The function might depend on the variables stage, selectedArms, plannedEvents, directionUpper, allocationRatioPlanned, minNumberOfEventsPerStage, maxNumberOfEventsPerStage, conditionalPower, conditionalCriticalValue, and overallEffects.
The function has to contain the three-dots argument '...' (see examples).
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
# Assess different selection rules for a two-stage survival design with
# O'Brien & Fleming alpha spending boundaries and (non-binding) stopping
# for futility if the test statistic is negative.
# Number of events at the second stage is adjusted based on conditional
# power 80% and specified minimum and maximum number of events.
design <- getDesignInverseNormal(typeOfDesign = "asOF", futilityBounds = 0)

y1 <- getSimulationMultiArmSurvival(design = design, activeArms = 4,
    intersectionTest = "Simes", typeOfShape = "sigmoidEmax",
    omegaMaxVector = seq(1, 2, 0.5), gED50 = 2, slope = 4,
    typeOfSelection = "best", conditionalPower = 0.8,
    minNumberOfEventsPerStage = c(NA_real_, 30),
    maxNumberOfEventsPerStage = c(NA_real_, 90),
    maxNumberOfIterations = 50,
    plannedEvents = c(75, 120))

y2 <- getSimulationMultiArmSurvival(design = design, activeArms = 4,
    intersectionTest = "Simes", typeOfShape = "sigmoidEmax",
    omegaMaxVector = seq(1, 2, 0.5), gED50 = 2, slope = 4,
    typeOfSelection = "epsilon", epsilonValue = 0.2,
    effectMeasure = "effectEstimate", conditionalPower = 0.8,
    minNumberOfEventsPerStage = c(NA_real_, 30),
    maxNumberOfEventsPerStage = c(NA_real_, 90),
    maxNumberOfIterations = 50,
    plannedEvents = c(75, 120))

y1$effectMatrix

y1$rejectAtLeastOne
y2$rejectAtLeastOne

y1$selectedArms
y2$selectedArms

## End(Not run)
Returns the simulated power, stopping probabilities, conditional power, and expected sample size for testing rates in a one or two treatment groups testing situation.
getSimulationRates( design = NULL, ..., groups = 2L, normalApproximation = TRUE, riskRatio = FALSE, thetaH0 = ifelse(riskRatio, 1, 0), pi1 = seq(0.2, 0.5, 0.1), pi2 = NA_real_, plannedSubjects = NA_real_, directionUpper = NA, allocationRatioPlanned = NA_real_, minNumberOfSubjectsPerStage = NA_real_, maxNumberOfSubjectsPerStage = NA_real_, conditionalPower = NA_real_, pi1H1 = NA_real_, pi2H1 = NA_real_, maxNumberOfIterations = 1000L, seed = NA_real_, calcSubjectsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
groups |
The number of treatment groups (1 or 2), default is |
normalApproximation |
The type of computation of the p-values. Default is |
riskRatio |
If |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
pi1 |
A numeric value or vector that represents the assumed probability in
the active treatment group if two treatment groups
are considered, or the alternative probability for a one treatment group design,
default is |
pi2 |
A numeric value that represents the assumed probability in the reference group if two treatment
groups are considered, default is |
plannedSubjects |
|
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
allocationRatioPlanned |
The planned allocation ratio |
minNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfSubjectsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
pi1H1 |
If specified, the assumed probability in the active treatment group if two treatment groups are considered, or the assumed probability for a one treatment group design, for which the conditional power was calculated. |
pi2H1 |
If specified, the assumed probability in the reference group if two treatment groups are considered, for which the conditional power was calculated. |
maxNumberOfIterations |
The number of simulation iterations, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcSubjectsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, sample size recalculation is performed with conditional power and specified
|
showStatistics |
Logical. If |
At a given design, the function simulates the power, stopping probabilities, conditional power, and expected sample size at a given number of subjects and parameter configuration. Additionally, an allocation ratio = n1/n2 can be specified, where n1 and n2 are the numbers of subjects in the two treatment groups.
The definition of pi1H1 and/or pi2H1 only makes sense if kMax > 1 and if conditionalPower, minNumberOfSubjectsPerStage, and maxNumberOfSubjectsPerStage (or calcSubjectsFunction) are defined.
calcSubjectsFunction
This function returns the number of subjects at given conditional power and conditional critical value for the specified testing situation. The function might depend on the variables stage, riskRatio, thetaH0, groups, plannedSubjects, sampleSizesPerStage, directionUpper, allocationRatioPlanned, minNumberOfSubjectsPerStage, maxNumberOfSubjectsPerStage, conditionalPower, conditionalCriticalValue, overallRate, farringtonManningValue1, and farringtonManningValue2.
The function has to contain the three-dots argument '...' (see examples).
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
The summary statistics "Simulated data" contain the following parameters: median [range]; mean +/- sd.
$show(showStatistics = FALSE)
or $setShowStatistics(FALSE)
can be used to disable
the output of the aggregated simulated data.
Example 1: simulationResults <- getSimulationRates(plannedSubjects = 40)
simulationResults$show(showStatistics = FALSE)
Example 2: simulationResults <- getSimulationRates(plannedSubjects = 40)
simulationResults$setShowStatistics(FALSE)
simulationResults
getData()
can be used to get the aggregated simulated data from the
object as data.frame
. The data frame contains the following columns:
iterationNumber
: The number of the simulation iteration.
stageNumber
: The stage.
pi1
: The assumed or derived event rate in the treatment group (if available).
pi2
: The assumed or derived event rate in the control group (if available).
numberOfSubjects
: The number of subjects under consideration when the
(interim) analysis takes place.
rejectPerStage
: 1 if null hypothesis can be rejected, 0 otherwise.
futilityPerStage
: 1 if study should be stopped for futility, 0 otherwise.
testStatistic
: The test statistic that is used for the test decision;
depends on which design was chosen (group sequential, inverse normal,
or Fisher's combination test).
testStatisticsPerStage
: The test statistic for each stage if only data from
the considered stage is taken into account.
overallRate1
: The cumulative rate in treatment group 1.
overallRate2
: The cumulative rate in treatment group 2.
stagewiseRates1
: The stage-wise rate in treatment group 1.
stagewiseRates2
: The stage-wise rate in treatment group 2.
sampleSizesPerStage1
: The stage-wise sample size in treatment group 1.
sampleSizesPerStage2
: The stage-wise sample size in treatment group 2.
trialStop
: TRUE
if study should be stopped for efficacy or futility or final stage, FALSE
otherwise.
conditionalPowerAchieved
: The conditional power for the subsequent stage of the trial for
selected sample size and effect. The effect is either estimated from the data or can be
user defined with pi1H1
and pi2H1
.
Click on the link of a generic in the list above to go directly to the help documentation of
the rpact
specific implementation of the generic.
Note that you can use the R function methods
to get all the methods of a generic and
to identify the object specific name of it, e.g.,
use methods("plot")
to get all the methods for the plot
generic.
There you can find, e.g., plot.AnalysisResults
and
obtain the specific help documentation linked above by typing ?plot.AnalysisResults
.
## Not run: 
# Fixed sample size design (two groups) with total sample
# size 120, pi1 = (0.3,0.4,0.5,0.6) and pi2 = 0.3
getSimulationRates(pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3,
    plannedSubjects = 120, maxNumberOfIterations = 10)

# Increase number of simulation iterations and compare results with power calculator
getSimulationRates(pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3,
    plannedSubjects = 120, maxNumberOfIterations = 50)
getPowerRates(pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3, maxNumberOfSubjects = 120)

# Do the same for a two-stage Pocock inverse normal group sequential
# design with non-binding futility stops
designIN <- getDesignInverseNormal(typeOfDesign = "P", futilityBounds = c(0))
getSimulationRates(designIN, pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3,
    plannedSubjects = c(40, 80), maxNumberOfIterations = 50)
getPowerRates(designIN, pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3, maxNumberOfSubjects = 80)

# Assess power and average sample size if a sample size reassessment is
# foreseen at conditional power 80% for the subsequent stage (decrease and increase)
# based on observed overall (cumulative) rates and specified minNumberOfSubjectsPerStage
# and maxNumberOfSubjectsPerStage

# Do the same under the assumption that a sample size increase only takes place
# if the rate difference exceeds the value 0.1 at interim. For this, the sample
# size recalculation method needs to be redefined:
mySampleSizeCalculationFunction <- function(..., stage,
        plannedSubjects,
        minNumberOfSubjectsPerStage,
        maxNumberOfSubjectsPerStage,
        conditionalPower,
        conditionalCriticalValue,
        overallRate) {
    if (overallRate[1] - overallRate[2] < 0.1) {
        return(plannedSubjects[stage] - plannedSubjects[stage - 1])
    } else {
        rateUnderH0 <- (overallRate[1] + overallRate[2]) / 2
        stageSubjects <- 2 * (max(0, conditionalCriticalValue *
            sqrt(2 * rateUnderH0 * (1 - rateUnderH0)) +
            stats::qnorm(conditionalPower) * sqrt(overallRate[1] *
            (1 - overallRate[1]) + overallRate[2] * (1 - overallRate[2]))))^2 /
            (max(1e-12, (overallRate[1] - overallRate[2])))^2
        stageSubjects <- ceiling(min(max(
            minNumberOfSubjectsPerStage[stage], stageSubjects),
            maxNumberOfSubjectsPerStage[stage]))
        return(stageSubjects)
    }
}
getSimulationRates(designIN, pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3,
    plannedSubjects = c(40, 80),
    minNumberOfSubjectsPerStage = c(40, 20),
    maxNumberOfSubjectsPerStage = c(40, 160),
    conditionalPower = 0.8,
    calcSubjectsFunction = mySampleSizeCalculationFunction,
    maxNumberOfIterations = 50)

## End(Not run)
Returns the analysis times, power, stopping probabilities, conditional power, and expected sample size for testing the hazard ratio in a two treatment groups survival design.
getSimulationSurvival( design = NULL, ..., thetaH0 = 1, directionUpper = NA, pi1 = NA_real_, pi2 = NA_real_, lambda1 = NA_real_, lambda2 = NA_real_, median1 = NA_real_, median2 = NA_real_, hazardRatio = NA_real_, kappa = 1, piecewiseSurvivalTime = NA_real_, allocation1 = 1, allocation2 = 1, eventTime = 12, accrualTime = c(0, 12), accrualIntensity = 0.1, accrualIntensityType = c("auto", "absolute", "relative"), dropoutRate1 = 0, dropoutRate2 = 0, dropoutTime = 12, maxNumberOfSubjects = NA_real_, plannedEvents = NA_real_, minNumberOfEventsPerStage = NA_real_, maxNumberOfEventsPerStage = NA_real_, conditionalPower = NA_real_, thetaH1 = NA_real_, maxNumberOfIterations = 1000L, maxNumberOfRawDatasetsPerStage = 0, longTimeSimulationAllowed = FALSE, seed = NA_real_, calcEventsFunction = NULL, showStatistics = FALSE )
design |
The trial design. If no trial design is specified, a fixed sample size design is used.
In this case, Type I error rate |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
thetaH0 |
The null hypothesis value,
default is
For testing a rate in one sample, a value |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
pi1 |
A numeric value or vector that represents the assumed event rate in the treatment group,
default is |
pi2 |
A numeric value that represents the assumed event rate in the control group, default is |
lambda1 |
The assumed hazard rate in the treatment group, there is no default.
|
lambda2 |
The assumed hazard rate in the reference group, there is no default.
|
median1 |
The assumed median survival time in the treatment group, there is no default. |
median2 |
The assumed median survival time in the reference group, there is no default. Must be a positive numeric of length 1. |
hazardRatio |
The vector of hazard ratios under consideration. If the event or hazard rates in both treatment groups are defined, the hazard ratio need not be specified as it is calculated; there is no default. Must be a positive numeric of length 1. |
kappa |
A numeric value > 0. A |
piecewiseSurvivalTime |
A vector that specifies the time intervals for the piecewise
definition of the exponential survival time cumulative distribution function |
allocation1 |
The number of subjects that are consecutively assigned to treatment 1
before allocation switches to the other group, default is |
allocation2 |
The number of subjects that are consecutively assigned to treatment 2
before allocation switches to the other group, default is |
eventTime |
The assumed time under which the event rates are calculated, default is |
accrualTime |
The assumed accrual time intervals for the study, default is
|
accrualIntensity |
A numeric vector of accrual intensities, default is the relative
intensity |
accrualIntensityType |
A character value specifying the accrual intensity input type.
Must be one of |
dropoutRate1 |
The assumed drop-out rate in the treatment group, default is |
dropoutRate2 |
The assumed drop-out rate in the control group, default is |
dropoutTime |
The assumed time for drop-out rates in the control and the
treatment group, default is |
maxNumberOfSubjects |
|
plannedEvents |
|
minNumberOfEventsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
maxNumberOfEventsPerStage |
When performing a data driven sample size recalculation,
the numeric vector |
conditionalPower |
If |
thetaH1 |
If specified, the value of the alternative under which the conditional power calculation or sample size recalculation is performed. Must be a numeric of length 1. |
maxNumberOfIterations |
The number of simulation iterations, default is |
maxNumberOfRawDatasetsPerStage |
The number of raw datasets per stage that shall
be extracted and saved as |
longTimeSimulationAllowed |
Logical that indicates whether long-running simulations
that consume more than 30 seconds are allowed or not, default is |
seed |
The seed to reproduce the simulation, default is a random seed. |
calcEventsFunction |
Optionally, a function can be entered that defines the way of performing the sample size
recalculation. By default, event number recalculation is performed with conditional power and specified
|
showStatistics |
Logical. If |
At a given design, the function simulates the power, stopping probabilities, conditional power, and expected
sample size at a given number of events, number of subjects, and parameter configuration.
It also simulates the time when the required events are expected under the given
assumptions (exponentially, piecewise exponentially, or Weibull distributed survival times
and constant or non-constant piecewise accrual).
Additionally, integers allocation1
and allocation2
can be specified that determine the number allocated
to treatment group 1 and treatment group 2, respectively.
More precisely, unequal randomization ratios must be specified via the two integer arguments allocation1
and
allocation2
which describe how many subjects are consecutively enrolled in each group, respectively, before a
subject is assigned to the other group. For example, the arguments allocation1 = 2
, allocation2 = 1
,
maxNumberOfSubjects = 300
specify 2:1 randomization with 200 subjects randomized to intervention and 100 to
control. (Caveat: Do not use allocation1 = 200
, allocation2 = 100
, maxNumberOfSubjects = 300
as this would imply that the 200 intervention subjects are enrolled prior to enrollment of any control subjects.)
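A minimal sketch of such a call (all parameter values below are illustrative only):

# 2:1 randomization: repeatedly enroll two subjects in the intervention
# group followed by one subject in the control group
getSimulationSurvival(
    allocation1 = 2, allocation2 = 1, maxNumberOfSubjects = 300,
    lambda2 = 0.02, hazardRatio = 0.75, plannedEvents = 150,
    maxNumberOfIterations = 100
)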
conditionalPower
The definition of thetaH1 only makes sense if kMax > 1 and if conditionalPower, minNumberOfEventsPerStage, and maxNumberOfEventsPerStage are defined.
Note that numberOfSubjects
, numberOfSubjects1
, and numberOfSubjects2
in the output
are the expected number of subjects.
calcEventsFunction
This function returns the number of events at given conditional power and conditional critical value for the specified testing situation. The function might depend on the variables stage, conditionalPower, thetaH0, plannedEvents, singleEventsPerStage, minNumberOfEventsPerStage, maxNumberOfEventsPerStage, allocationRatioPlanned, and conditionalCriticalValue.
The function has to contain the three-dots argument '...' (see examples).
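As a rough sketch (not the package default), such a function could recalculate the stage-wise number of events from the conditional power using the usual log-rank approximation; the assumed hazard ratio of 1.5 and the 1:1 allocation are hypothetical choices made here only for illustration:

myCalcEventsFunction <- function(..., stage, conditionalPower,
        conditionalCriticalValue,
        minNumberOfEventsPerStage, maxNumberOfEventsPerStage) {
    assumedHazardRatio <- 1.5  # hypothetical planning value
    # stage-wise events under 1:1 allocation (assumption), log-rank approximation
    stageEvents <- 4 * (max(0, conditionalCriticalValue +
        stats::qnorm(conditionalPower)))^2 / log(assumedHazardRatio)^2
    # respect the specified minimum and maximum per-stage bounds
    min(max(minNumberOfEventsPerStage[stage], stageEvents),
        maxNumberOfEventsPerStage[stage])
}

It would be passed via calcEventsFunction = myCalcEventsFunction.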
Returns a SimulationResults
object.
The following generics (R generic functions) are available for this object:
names()
to obtain the field names,
print()
to print the object,
summary()
to display a summary of the object,
plot()
to plot the object,
as.data.frame()
to coerce the object to a data.frame
,
as.matrix()
to coerce the object to a matrix
.
The first element of the vector piecewiseSurvivalTime
must be equal to 0
.
piecewiseSurvivalTime
can also be a list that combines the definition of the
time intervals and hazard rates in the reference group.
The definition of the survival time in the treatment group is obtained by the specification
of the hazard ratio (see examples for details).
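For instance, a piecewise exponential survival model for the reference group might be specified as follows (a sketch; interval bounds and hazard rates are illustrative only):

piecewiseSurvivalTime <- list(
    "0 - <6"  = 0.025,
    "6 - <9"  = 0.04,
    "9 - <15" = 0.015,
    ">=15"    = 0.01
)
getSimulationSurvival(
    piecewiseSurvivalTime = piecewiseSurvivalTime, hazardRatio = 0.8,
    maxNumberOfSubjects = 400, plannedEvents = 100,
    maxNumberOfIterations = 50
)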
accrualTime is the time period of subjects' accrual in a study. It can be a value that defines the end of accrual or a vector. In the latter case, accrualTime can be used to define a non-constant accrual over time. For this, accrualTime is a vector that defines the accrual intervals. The first element of accrualTime must be equal to 0 and, additionally, accrualIntensity needs to be specified.
accrualIntensity itself is a value or a vector (depending on the length of accrualTime) that defines the intensity at which subjects enter the trial in the intervals defined through accrualTime.
accrualTime can also be a list that combines the definition of the accrual time and accrual intensity (see below and examples for details).
If the length of accrualTime and the length of accrualIntensity are the same (i.e., the end of accrual is undefined), maxNumberOfSubjects > 0 needs to be specified and the end of accrual is calculated. In that case, accrualIntensity is the number of subjects per time unit, i.e., the absolute accrual intensity.
If the length of accrualTime exceeds the length of accrualIntensity by one (i.e., the end of accrual is defined), maxNumberOfSubjects is calculated if the absolute accrual intensity is given.
If all elements in accrualIntensity are smaller than 1, accrualIntensity defines the relative intensity at which subjects enter the trial. For example, accrualIntensity = c(0.1, 0.2) specifies that in the second accrual interval the intensity is doubled compared to the first accrual interval. The actual (absolute) accrual intensity is calculated for the calculated or given maxNumberOfSubjects.
Note that the default is accrualIntensity = 0.1, meaning that the absolute accrual intensity will be calculated.
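Two minimal sketches of the accrual specification (the values are illustrative and taken from the examples below):
# Absolute intensity, end of accrual open: the accrual duration is calculated
getSimulationSurvival(plannedEvents = 40, accrualTime = c(0, 6),
    accrualIntensity = c(20, 30), maxNumberOfSubjects = 200)
# Absolute intensity, end of accrual at time 10: maxNumberOfSubjects is calculated
getSimulationSurvival(plannedEvents = 40, accrualTime = c(0, 6, 10),
    accrualIntensity = c(20, 30))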
The summary statistics "Simulated data" contain the following parameters: median [range]; mean +/- sd.
$show(showStatistics = FALSE) or $setShowStatistics(FALSE) can be used to disable the output of the aggregated simulated data.
Example 1: simulationResults <- getSimulationSurvival(maxNumberOfSubjects = 100, plannedEvents = 30)
simulationResults$show(showStatistics = FALSE)
Example 2: simulationResults <- getSimulationSurvival(maxNumberOfSubjects = 100, plannedEvents = 30)
simulationResults$setShowStatistics(FALSE)
simulationResults
getData() can be used to get the aggregated simulated data from the object as data.frame. The data frame contains the following columns:
iterationNumber: The number of the simulation iteration.
stageNumber: The stage.
pi1: The assumed or derived event rate in the treatment group.
pi2: The assumed or derived event rate in the control group.
hazardRatio: The hazard ratio under consideration (if available).
analysisTime: The analysis time.
numberOfSubjects: The number of subjects under consideration when the (interim) analysis takes place.
eventsPerStage1: The observed number of events per stage in treatment group 1.
eventsPerStage2: The observed number of events per stage in treatment group 2.
singleEventsPerStage: The observed number of events per stage in both treatment groups.
rejectPerStage: 1 if the null hypothesis can be rejected, 0 otherwise.
futilityPerStage: 1 if the study should be stopped for futility, 0 otherwise.
eventsNotAchieved: 1 if the number of events could not be reached with the observed number of subjects, 0 otherwise.
testStatistic: The test statistic that is used for the test decision; depends on which design was chosen (group sequential, inverse normal, or Fisher combination test).
logRankStatistic: Z-score statistic which corresponds to a one-sided log-rank test at the considered stage.
hazardRatioEstimateLR: The estimated hazard ratio, derived from the log-rank statistic.
trialStop: TRUE if the study is stopped for efficacy or futility or the final stage has been reached, FALSE otherwise.
conditionalPowerAchieved: The conditional power for the subsequent stage of the trial for the selected sample size and effect. The effect is either estimated from the data or can be user defined with thetaH1.
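For example (a minimal sketch; the selected columns are a subset of those listed above):
simulationResults <- getSimulationSurvival(plannedEvents = 40,
    maxNumberOfSubjects = 200, maxNumberOfIterations = 50)
aggregatedData <- getData(simulationResults)
head(aggregatedData[, c("iterationNumber", "stageNumber", "analysisTime")])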
getRawData() can be used to get the simulated raw data from the object as data.frame. Note that getSimulationSurvival() must be called with maxNumberOfRawDatasetsPerStage > 0 beforehand.
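A minimal sketch (the argument values are illustrative):
simulationResults <- getSimulationSurvival(plannedEvents = 40,
    maxNumberOfSubjects = 200, maxNumberOfIterations = 10,
    maxNumberOfRawDatasetsPerStage = 1)
rawData <- getRawData(simulationResults)
head(rawData)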
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic.
Note that you can use the R function methods to get all the methods of a generic and to identify the object-specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
## Not run: # Fixed sample size with minimum required definitions, pi1 = (0.3,0.4,0.5,0.6) and # pi2 = 0.3 at event time 12, and accrual time 24 getSimulationSurvival( pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3, eventTime = 12, accrualTime = 24, plannedEvents = 40, maxNumberOfSubjects = 200, maxNumberOfIterations = 10 ) # Increase number of simulation iterations getSimulationSurvival( pi1 = seq(0.3, 0.6, 0.1), pi2 = 0.3, eventTime = 12, accrualTime = 24, plannedEvents = 40, maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Determine necessary accrual time with default settings if 200 subjects and # 30 subjects per time unit can be recruited getSimulationSurvival( plannedEvents = 40, accrualTime = 0, accrualIntensity = 30, maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Determine necessary accrual time with default settings if 200 subjects and # if the first 6 time units 20 subjects per time unit can be recruited, # then 30 subjects per time unit getSimulationSurvival( plannedEvents = 40, accrualTime = c(0, 6), accrualIntensity = c(20, 30), maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Determine maximum number of Subjects with default settings if the first # 6 time units 20 subjects per time unit can be recruited, and after # 10 time units 30 subjects per time unit getSimulationSurvival( plannedEvents = 40, accrualTime = c(0, 6, 10), accrualIntensity = c(20, 30), maxNumberOfIterations = 50 ) # Specify accrual time as a list at <- list( "0 - <6" = 20, "6 - Inf" = 30 ) getSimulationSurvival( plannedEvents = 40, accrualTime = at, maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Specify accrual time as a list, if maximum number of subjects need to be calculated at <- list( "0 - <6" = 20, "6 - <=10" = 30 ) getSimulationSurvival(plannedEvents = 40, accrualTime = at, maxNumberOfIterations = 50) # Specify effect size for a two-stage group sequential design with # O'Brien & Fleming boundaries. 
Effect size is based on event rates # at specified event time, directionUpper = FALSE needs to be specified # because it should be shown that hazard ratio < 1 designGS <- getDesignGroupSequential(kMax = 2) getSimulationSurvival( design = designGS, pi1 = 0.2, pi2 = 0.3, eventTime = 24, plannedEvents = c(20, 40), maxNumberOfSubjects = 200, directionUpper = FALSE, maxNumberOfIterations = 50 ) # As above, but with a three-stage O'Brien and Fleming design with # specified information rates, note that planned events consists of integer values designGS2 <- getDesignGroupSequential(informationRates = c(0.4, 0.7, 1)) getSimulationSurvival( design = designGS2, pi1 = 0.2, pi2 = 0.3, eventTime = 24, plannedEvents = round(designGS2$informationRates * 40), maxNumberOfSubjects = 200, directionUpper = FALSE, maxNumberOfIterations = 50 ) # Effect size is based on event rate at specified event time for the reference # group and hazard ratio, directionUpper = FALSE needs to be specified because # it should be shown that hazard ratio < 1 getSimulationSurvival( design = designGS, hazardRatio = 0.5, pi2 = 0.3, eventTime = 24, plannedEvents = c(20, 40), maxNumberOfSubjects = 200, directionUpper = FALSE, maxNumberOfIterations = 50 ) # Effect size is based on hazard rate for the reference group and # hazard ratio, directionUpper = FALSE needs to be specified because # it should be shown that hazard ratio < 1 getSimulationSurvival( design = designGS, hazardRatio = 0.5, lambda2 = 0.02, plannedEvents = c(20, 40), maxNumberOfSubjects = 200, directionUpper = FALSE, maxNumberOfIterations = 50 ) # Specification of piecewise exponential survival time and hazard ratios, # note that in getSimulationSurvival only on hazard ratio is used # in the case that the survival time is piecewise expoential getSimulationSurvival( design = designGS, piecewiseSurvivalTime = c(0, 5, 10), lambda2 = c(0.01, 0.02, 0.04), hazardRatio = 1.5, plannedEvents = c(20, 40), maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) pws <- list( "0 - <5" = 0.01, "5 - <10" = 0.02, ">=10" = 0.04 ) getSimulationSurvival( design = designGS, piecewiseSurvivalTime = pws, hazardRatio = c(1.5), plannedEvents = c(20, 40), maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Specification of piecewise exponential survival time for both treatment arms getSimulationSurvival( design = designGS, piecewiseSurvivalTime = c(0, 5, 10), lambda2 = c(0.01, 0.02, 0.04), lambda1 = c(0.015, 0.03, 0.06), plannedEvents = c(20, 40), maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Specification of piecewise exponential survival time as a list, # note that in getSimulationSurvival only on hazard ratio # (not a vector) can be used pws <- list( "0 - <5" = 0.01, "5 - <10" = 0.02, ">=10" = 0.04 ) getSimulationSurvival( design = designGS, piecewiseSurvivalTime = pws, hazardRatio = 1.5, plannedEvents = c(20, 40), maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Specification of piecewise exponential survival time and delayed effect # (response after 5 time units) getSimulationSurvival( design = designGS, piecewiseSurvivalTime = c(0, 5, 10), lambda2 = c(0.01, 0.02, 0.04), lambda1 = c(0.01, 0.02, 0.06), plannedEvents = c(20, 40), maxNumberOfSubjects = 200, maxNumberOfIterations = 50 ) # Specify effect size based on median survival times getSimulationSurvival( median1 = 5, median2 = 3, plannedEvents = 40, maxNumberOfSubjects = 200, directionUpper = FALSE, maxNumberOfIterations = 50 ) # Specify effect size based on median survival # times of Weibull distribtion with 
kappa = 2 getSimulationSurvival( median1 = 5, median2 = 3, kappa = 2, plannedEvents = 40, maxNumberOfSubjects = 200, directionUpper = FALSE, maxNumberOfIterations = 50 ) # Perform recalculation of number of events based on conditional power for a # three-stage design with inverse normal combination test, where the conditional power # is calculated under the specified effect size thetaH1 = 1.3 and up to a four-fold # increase in originally planned sample size (number of events) is allowed. # Note that the first value in minNumberOfEventsPerStage and # maxNumberOfEventsPerStage is arbitrary, i.e., it has no effect. designIN <- getDesignInverseNormal(informationRates = c(0.4, 0.7, 1)) resultsWithSSR1 <- getSimulationSurvival( design = designIN, hazardRatio = seq(1, 1.6, 0.1), pi2 = 0.3, conditionalPower = 0.8, thetaH1 = 1.3, plannedEvents = c(58, 102, 146), minNumberOfEventsPerStage = c(NA, 44, 44), maxNumberOfEventsPerStage = 4 * c(NA, 44, 44), maxNumberOfSubjects = 800, maxNumberOfIterations = 50 ) resultsWithSSR1 # If thetaH1 is unspecified, the observed hazard ratio estimate # (calculated from the log-rank statistic) is used for performing the # recalculation of the number of events resultsWithSSR2 <- getSimulationSurvival( design = designIN, hazardRatio = seq(1, 1.6, 0.1), pi2 = 0.3, conditionalPower = 0.8, plannedEvents = c(58, 102, 146), minNumberOfEventsPerStage = c(NA, 44, 44), maxNumberOfEventsPerStage = 4 * c(NA, 44, 44), maxNumberOfSubjects = 800, maxNumberOfIterations = 50 ) resultsWithSSR2 # Compare it with design without event size recalculation resultsWithoutSSR <- getSimulationSurvival( design = designIN, hazardRatio = seq(1, 1.6, 0.1), pi2 = 0.3, plannedEvents = c(58, 102, 145), maxNumberOfSubjects = 800, maxNumberOfIterations = 50 ) resultsWithoutSSR$overallReject resultsWithSSR1$overallReject resultsWithSSR2$overallReject # Confirm that event size racalcuation increases the Type I error rate, # i.e., you have to use the combination test resultsWithSSRGS <- getSimulationSurvival( design = designGS2, hazardRatio = seq(1), pi2 = 0.3, conditionalPower = 0.8, plannedEvents = c(58, 102, 145), minNumberOfEventsPerStage = c(NA, 44, 44), maxNumberOfEventsPerStage = 4 * c(NA, 44, 44), maxNumberOfSubjects = 800, maxNumberOfIterations = 50 ) resultsWithSSRGS$overallReject # Set seed to get reproducable results identical( getSimulationSurvival( plannedEvents = 40, maxNumberOfSubjects = 200, seed = 99 )$analysisTime, getSimulationSurvival( plannedEvents = 40, maxNumberOfSubjects = 200, seed = 99 )$analysisTime ) # Perform recalculation of number of events based on conditional power as above. # The number of events is recalculated only in the first interim, the recalculated number # is also used for the final stage. 
Here, we use the user defind calcEventsFunction as # follows (note that the last stage value in minNumberOfEventsPerStage and maxNumberOfEventsPerStage # has no effect): myCalcEventsFunction <- function(..., stage, conditionalPower, estimatedTheta, plannedEvents, eventsOverStages, minNumberOfEventsPerStage, maxNumberOfEventsPerStage, conditionalCriticalValue) { theta <- max(1 + 1e-12, estimatedTheta) if (stage == 2) { requiredStageEvents <- max(0, conditionalCriticalValue + qnorm(conditionalPower))^2 * 4 / log(theta)^2 requiredOverallStageEvents <- min( max(minNumberOfEventsPerStage[stage], requiredStageEvents), maxNumberOfEventsPerStage[stage] ) + eventsOverStages[stage - 1] } else { requiredOverallStageEvents <- 2 * eventsOverStages[stage - 1] - eventsOverStages[1] } return(requiredOverallStageEvents) } resultsWithSSR <- getSimulationSurvival( design = designIN, hazardRatio = seq(1, 2.6, 0.5), pi2 = 0.3, conditionalPower = 0.8, plannedEvents = c(58, 102, 146), minNumberOfEventsPerStage = c(NA, 44, 4), maxNumberOfEventsPerStage = 4 * c(NA, 44, 4), maxNumberOfSubjects = 800, calcEventsFunction = myCalcEventsFunction, seed = 1234, maxNumberOfIterations = 50 ) ## End(Not run)
Returns summary statistics and p-values for a given data set and a given design.
getStageResults( design, dataInput, ..., stage = NA_integer_, directionUpper = NA )
design |
The trial design. |
dataInput |
The summary data used for calculating the test results.
This is either an element of |
... |
Further (optional) arguments to be passed:
|
stage |
The stage number (optional). Default: total number of existing stages in the data input. |
directionUpper |
Logical. Specifies the direction of the alternative,
only applicable for one-sided testing; default is |
Calculates and returns the stage results of the specified design and data input at the specified stage.
Returns a StageResults object.
The following generics (R generic functions) are available for this object:
names() to obtain the field names,
print() to print the object,
summary() to display a summary of the object,
plot() to plot the object,
as.data.frame() to coerce the object to a data.frame,
as.matrix() to coerce the object to a matrix.
Click on the link of a generic in the list above to go directly to the help documentation of the rpact specific implementation of the generic.
Note that you can use the R function methods to get all the methods of a generic and to identify the object-specific name of it, e.g., use methods("plot") to get all the methods for the plot generic. There you can find, e.g., plot.AnalysisResults and obtain the specific help documentation linked above by typing ?plot.AnalysisResults.
Other analysis functions: getAnalysisResults(), getClosedCombinationTestResults(), getClosedConditionalDunnettTestResults(), getConditionalPower(), getConditionalRejectionProbabilities(), getFinalConfidenceInterval(), getFinalPValue(), getRepeatedConfidenceIntervals(), getRepeatedPValues(), getTestActions()
## Not run: design <- getDesignInverseNormal() dataRates <- getDataset( n1 = c(10, 10), n2 = c(20, 20), events1 = c( 8, 10), events2 = c(10, 16)) getStageResults(design, dataRates) ## End(Not run)
Returns test actions.
getTestActions(stageResults, ...)
stageResults |
The results at given stage, obtained from |
... |
Only available for backward compatibility. |
Returns the test actions of the specified design and stage results at the specified stage.
Returns a character vector of length kMax containing the test actions of each stage.
Other analysis functions: getAnalysisResults(), getClosedCombinationTestResults(), getClosedConditionalDunnettTestResults(), getConditionalPower(), getConditionalRejectionProbabilities(), getFinalConfidenceInterval(), getFinalPValue(), getRepeatedConfidenceIntervals(), getRepeatedPValues(), getStageResults()
## Not run: design <- getDesignInverseNormal(kMax = 2) data <- getDataset( n = c( 20, 30), means = c( 50, 51), stDevs = c(130, 140) ) getTestActions(getStageResults(design, dataInput = data)) ## End(Not run)
The function knit_print.SummaryFactory
is the default
printing function for rpact summary objects in knitr.
The chunk option render
uses this function by default.
To fall back to the normal printing behavior, set the chunk option render = normal_print.
For more information see knit_print
.
## S3 method for class 'SummaryFactory' knit_print(x, ...)
x |
A |
... |
Other arguments (see |
Generic function to print a summary object in Markdown.
Use options("rpact.print.heading.base.number" = NUMBER)
(where NUMBER
is an integer value >= -2) to
specify the heading level.
NUMBER = 1 results in the heading prefix #
, NUMBER = 2 results in ##
, ...
The default is options("rpact.print.heading.base.number" = -2), i.e., the top headings will be written in italics but are not explicitly defined as headers.
options("rpact.print.heading.base.number" = -1) means that all headings will be written in bold but are not explicitly defined as headers.
Furthermore, the following options can be set globally:
rpact.auto.markdown.all
: if TRUE
, all output types will be rendered in Markdown format automatically.
rpact.auto.markdown.print
: if TRUE
, all print outputs will be rendered in Markdown format automatically.
rpact.auto.markdown.summary
: if TRUE
, all summary outputs will be rendered in Markdown format automatically.
rpact.auto.markdown.plot
: if TRUE
, all plot outputs will be rendered in Markdown format automatically.
Example: options("rpact.auto.markdown.plot" = FALSE)
disables the automatic knitting of plots inside Markdown documents.
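For example, the following calls (a sketch restricted to the options named above) set the heading level and enable automatic Markdown rendering of summaries:
options("rpact.print.heading.base.number" = 1)  # headings start with '#'
options("rpact.auto.markdown.summary" = TRUE)   # render summary outputs as Markdown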
Calculates the Multivariate Normal Distribution with Product Correlation Structure published by Charles Dunnett, Algorithm AS 251.1 Appl.Statist. (1989), Vol.38, No.3, doi:10.2307/2347754.
mvnprd(..., A, B, BPD, EPS = 1e-06, INF, IERC = 1, HINC = 0)
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
A |
Upper limits of integration. Array of N dimensions |
B |
Lower limits of integration. Array of N dimensions |
BPD |
Values defining correlation structure. Array of N dimensions |
EPS |
desired accuracy. Defaults to 1e-06 |
INF |
Determines where integration is done to infinity. Array of N dimensions. Valid values for INF(I): 0 = c(B(I), Inf), 1 = c(-Inf, A(I)), 2 = c(B(I), A(I)) |
IERC |
error control. If set to 1, strict error control based on fourth derivative is used. If set to zero, error control based on halving intervals is used |
HINC |
Interval width for Simpson's rule. A value of zero causes a default of 0.24 to be used |
This is a wrapper function for the original Fortran 77 code. For a multivariate normal vector with correlation structure defined by RHO(I,J) = BPD(I) * BPD(J), computes the probability that the vector falls in a rectangle in n-space with error less than eps.
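A minimal sketch, mirroring the mvstud() example later in this document (trivariate normal probability with constant correlation RHO = 0.5 over the cube [-5, 5]^3):
N <- 3
RHO <- 0.5
B <- rep(-5.0, N)
A <- rep(5.0, N)
INF <- rep(2, N)          # integrate from B(I) to A(I) in every dimension
BPD <- rep(sqrt(RHO), N)  # product correlation: RHO(I,J) = BPD(I) * BPD(J)
mvnprd(A = A, B = B, BPD = BPD, INF = INF)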
Calculates the Multivariate Normal Distribution with Product Correlation Structure published by Charles Dunnett, Algorithm AS 251.1 Appl.Statist. (1989), Vol.38, No.3, doi:10.2307/2347754.
mvstud(..., NDF, A, B, BPD, D, EPS = 1e-06, INF, IERC = 1, HINC = 0)
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
NDF |
Degrees of Freedom. Use 0 for infinite D.F. |
A |
Upper limits of integration. Array of N dimensions |
B |
Lower limits of integration. Array of N dimensions |
BPD |
Values defining correlation structure. Array of N dimensions |
D |
Non-Centrality Vector |
EPS |
desired accuracy. Defaults to 1e-06 |
INF |
Determines where integration is done to infinity. Array of N dimensions. Valid values for INF(I): 0 = c(B(I), Inf), 1 = c(-Inf, A(I)), 2 = c(B(I), A(I)) |
IERC |
error control. If set to 1, strict error control based on fourth derivative is used. If set to zero, error control based on halving intervals is used |
HINC |
Interval width for Simpson's rule. A value of zero causes a default of 0.24 to be used |
This is a wrapper function for the original Fortran 77 code. For a multivariate normal vector with correlation structure defined by RHO(I,J) = BPD(I) * BPD(J), computes the probability that the vector falls in a rectangle in n-space with error less than eps.
## Not run: N <- 3 RHO <- 0.5 B <- rep(-5.0, length = N) A <- rep(5.0, length = N) INF <- rep(2, length = N) BPD <- rep(sqrt(RHO), length = N) D <- rep(0.0, length = N) result <- mvstud(NDF = 0, A = A, B = B, BPD = BPD, INF = INF, D = D) result ## End(Not run)
Fetch a parameter from a parameter set.
obtain(x, ..., output) ## S3 method for class 'ParameterSet' obtain(x, ..., output = c("named", "labeled", "value", "list")) fetch(x, ..., output) ## S3 method for class 'ParameterSet' fetch(x, ..., output = c("named", "labeled", "value", "list"))
x |
The |
... |
One or more variables specified as:
|
output |
A character defining the output type as follows:
|
## Not run: getDesignInverseNormal() |> fetch(kMax) getDesignInverseNormal() |> fetch(kMax, output = "list") ## End(Not run)
Plots the conditional power together with the likelihood function.
## S3 method for class 'AnalysisResults' plot( x, y, ..., type = 1L, nPlanned = NA_real_, allocationRatioPlanned = NA_real_, main = NA_character_, xlab = NA_character_, ylab = NA_character_, legendTitle = NA_character_, palette = "Set1", legendPosition = NA_integer_, showSource = FALSE, grid = 1, plotSettings = NULL )
x |
The analysis results at given stage, obtained from |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. Furthermore the following arguments can be defined:
|
type |
The plot type (default = 1). Note that at the moment only one type (the conditional power plot) is available. |
nPlanned |
The additional (i.e., "new" and not cumulative) sample size planned for each of the subsequent stages. The argument must be a vector with length equal to the number of remaining stages and contain the combined sample size from both treatment groups if two groups are considered. For survival outcomes, it should contain the planned number of additional events. For multi-arm designs, it is the per-comparison (combined) sample size. For enrichment designs, it is the (combined) sample size for the considered sub-population. |
allocationRatioPlanned |
The planned allocation ratio |
main |
The main title, default is |
xlab |
The x-axis label, default is |
ylab |
The y-axis label. |
legendTitle |
The legend title, default is |
palette |
The palette, default is |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
grid |
An integer value specifying the output of multiple plots.
By default ( |
plotSettings |
An object of class |
The conditional power is calculated only if effect size and sample size are specified.
Returns a ggplot2
object.
## Not run: design <- getDesignGroupSequential(kMax = 2) dataExample <- getDataset( n = c(20, 30), means = c(50, 51), stDevs = c(130, 140) ) result <- getAnalysisResults(design = design, dataInput = dataExample, thetaH0 = 20, nPlanned = c(30), thetaH1 = 1.5, stage = 1) if (require(ggplot2)) plot(result, thetaRange = c(0, 100)) ## End(Not run)
Plots a dataset.
## S3 method for class 'Dataset' plot( x, y, ..., main = "Dataset", xlab = "Stage", ylab = NA_character_, legendTitle = "Group", palette = "Set1", showSource = FALSE, plotSettings = NULL )
x |
The |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. At the moment |
main |
The main title, default is |
xlab |
The x-axis label, default is |
ylab |
The y-axis label. |
legendTitle |
The legend title, default is |
palette |
The palette, default is |
showSource |
Logical. If
Note: no plot object will be returned if |
plotSettings |
An object of class |
Generic function to plot all kinds of datasets.
Returns a ggplot2
object.
## Not run: # Plot a dataset of means dataExample <- getDataset( n1 = c(22, 11, 22, 11), n2 = c(22, 13, 22, 13), means1 = c(1, 1.1, 1, 1), means2 = c(1.4, 1.5, 3, 2.5), stDevs1 = c(1, 2, 2, 1.3), stDevs2 = c(1, 2, 2, 1.3) ) if (require(ggplot2)) plot(dataExample, main = "Comparison of Means") # Plot a dataset of rates dataExample <- getDataset( n1 = c(8, 10, 9, 11), n2 = c(11, 13, 12, 13), events1 = c(3, 5, 5, 6), events2 = c(8, 10, 12, 12) ) if (require(ggplot2)) plot(dataExample, main = "Comparison of Rates") ## End(Not run)
Plots an object that inherits from class EventProbabilities
.
## S3 method for class 'EventProbabilities' plot( x, y, ..., allocationRatioPlanned = x$allocationRatioPlanned, main = NA_character_, xlab = NA_character_, ylab = NA_character_, type = 1L, legendTitle = NA_character_, palette = "Set1", plotPointsEnabled = NA, legendPosition = NA_integer_, showSource = FALSE, plotSettings = NULL )
x |
The object that inherits from |
y |
An optional object that inherits from |
... |
Optional plot arguments. At the moment |
allocationRatioPlanned |
The planned allocation ratio |
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
type |
The plot type (default = 1). Note that at the moment only one type is available. |
legendTitle |
The legend title, default is |
palette |
The palette, default is |
plotPointsEnabled |
Logical. If |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
plotSettings |
An object of class |
Generic function to plot an event probabilities object.
Returns a ggplot2
object.
Plots an object that inherits from class NumberOfSubjects
.
## S3 method for class 'NumberOfSubjects' plot( x, y, ..., allocationRatioPlanned = NA_real_, main = NA_character_, xlab = NA_character_, ylab = NA_character_, type = 1L, legendTitle = NA_character_, palette = "Set1", plotPointsEnabled = NA, legendPosition = NA_integer_, showSource = FALSE, plotSettings = NULL )
x |
The object that inherits from |
y |
An optional object that inherits from |
... |
Optional plot arguments. At the moment |
allocationRatioPlanned |
The planned allocation ratio |
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
type |
The plot type (default = 1). Note that at the moment only one type is available. |
legendTitle |
The legend title, default is |
palette |
The palette, default is |
plotPointsEnabled |
Logical. If |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
plotSettings |
An object of class |
Generic function to plot an "number of subjects" object.
Generic function to plot a "number of subjects" object.
Returns a ggplot2
object.
Plots an object that inherits from class ParameterSet
.
## S3 method for class 'ParameterSet' plot( x, y, ..., main = NA_character_, xlab = NA_character_, ylab = NA_character_, type = 1L, palette = "Set1", legendPosition = NA_integer_, showSource = FALSE, plotSettings = NULL )
x |
The object that inherits from |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. At the moment |
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
type |
The plot type (default = 1). |
palette |
The palette, default is |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
plotSettings |
An object of class |
Generic function to plot a parameter set.
Returns a ggplot2
object.
Plots simulation results.
## S3 method for class 'SimulationResults' plot( x, y, ..., main = NA_character_, xlab = NA_character_, ylab = NA_character_, type = NA_integer_, palette = "Set1", theta = seq(-1, 1, 0.01), plotPointsEnabled = NA, legendPosition = NA_integer_, showSource = FALSE, grid = 1, plotSettings = NULL )
x |
The simulation results, obtained from |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. At the moment |
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
type |
The plot type (default =
|
palette |
The palette, default is |
theta |
A vector of standardized effect sizes (theta values), default is a sequence from -1 to 1. |
plotPointsEnabled |
Logical. If |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
grid |
An integer value specifying the output of multiple plots.
By default ( |
plotSettings |
An object of class |
Generic function to plot all kinds of simulation results.
Returns a ggplot2
object.
## Not run: results <- getSimulationMeans( alternative = 0:4, stDev = 5, plannedSubjects = 40, maxNumberOfIterations = 1000 ) plot(results, type = 5) ## End(Not run)
Plots the conditional power together with the likelihood function.
## S3 method for class 'StageResults' plot( x, y, ..., type = 1L, nPlanned, allocationRatioPlanned = 1, main = NA_character_, xlab = NA_character_, ylab = NA_character_, legendTitle = NA_character_, palette = "Set1", legendPosition = NA_integer_, showSource = FALSE, plotSettings = NULL )
x |
The stage results at given stage, obtained from |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. Furthermore the following arguments can be defined:
|
type |
The plot type (default = 1). Note that at the moment only one type (the conditional power plot) is available. |
nPlanned |
The additional (i.e., "new" and not cumulative) sample size planned for each of the subsequent stages. The argument must be a vector with length equal to the number of remaining stages and contain the combined sample size from both treatment groups if two groups are considered. For survival outcomes, it should contain the planned number of additional events. For multi-arm designs, it is the per-comparison (combined) sample size. For enrichment designs, it is the (combined) sample size for the considered sub-population. |
allocationRatioPlanned |
The planned allocation ratio |
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
legendTitle |
The legend title. |
palette |
The palette, default is |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
plotSettings |
An object of class |
Generic function to plot all kinds of stage results. The conditional power is calculated only if effect size and sample size are specified.
Returns a ggplot2
object.
## Not run: design <- getDesignGroupSequential( kMax = 4, alpha = 0.025, informationRates = c(0.2, 0.5, 0.8, 1), typeOfDesign = "WT", deltaWT = 0.25 ) dataExample <- getDataset( n = c(20, 30, 30), means = c(50, 51, 55), stDevs = c(130, 140, 120) ) stageResults <- getStageResults(design, dataExample, thetaH0 = 20) if (require(ggplot2)) plot(stageResults, nPlanned = c(30), thetaRange = c(0, 100)) ## End(Not run)
Plots a summary factory.
## S3 method for class 'SummaryFactory' plot(x, y, ..., showSummary = FALSE)
x |
The summary factory object. |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. At the moment |
showSummary |
Show the summary before creating the
plot output, default is |
Generic function to plot all kinds of summary factories.
Returns a ggplot2
object.
Plots a trial design.
## S3 method for class 'TrialDesign' plot( x, y, ..., main = NA_character_, xlab = NA_character_, ylab = NA_character_, type = 1L, palette = "Set1", theta = seq(-1, 1, 0.01), nMax = NA_integer_, plotPointsEnabled = NA, legendPosition = NA_integer_, showSource = FALSE, grid = 1, plotSettings = NULL ) ## S3 method for class 'TrialDesignCharacteristics' plot(x, y, ..., type = 1L, grid = 1)
x |
The trial design, obtained from |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. At the moment |
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
type |
The plot type (default =
|
palette |
The palette, default is |
theta |
A vector of standardized effect sizes (theta values), default is a sequence from -1 to 1. |
nMax |
The maximum sample size. Must be a positive integer of length 1. |
plotPointsEnabled |
Logical. If |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
grid |
An integer value specifying the output of multiple plots.
By default ( |
plotSettings |
An object of class |
Generic function to plot a trial design.
Note that nMax is not an argument that is passed to ggplot2. Rather, the underlying calculations (e.g., power for different values of theta or the average sample size) are based on calls to the function getPowerAndAverageSampleNumber(), which has the argument nMax. That is, nMax is not an argument to ggplot2 but to getPowerAndAverageSampleNumber(), which is called prior to plotting.
Returns a ggplot2
object.
plot() to compare different designs or design parameters visually.
## Not run: design <- getDesignInverseNormal( kMax = 3, alpha = 0.025, typeOfDesign = "asKD", gammaA = 2, informationRates = c(0.2, 0.7, 1), typeBetaSpending = "bsOF" ) if (require(ggplot2)) { plot(design) # default: type = 1 } ## End(Not run)
Plots a trial design plan.
## S3 method for class 'TrialDesignPlan' plot( x, y, ..., main = NA_character_, xlab = NA_character_, ylab = NA_character_, type = NA_integer_, palette = "Set1", theta = NA_real_, plotPointsEnabled = NA, legendPosition = NA_integer_, showSource = FALSE, grid = 1, plotSettings = NULL )
x |
The trial design plan, obtained from |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. At the moment |
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
type |
The plot type (default =
|
palette |
The palette, default is |
theta |
A vector of standardized effect sizes (theta values), default is a sequence from -1 to 1. |
plotPointsEnabled |
Logical. If |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
grid |
An integer value specifying the output of multiple plots.
By default ( |
plotSettings |
An object of class |
Generic function to plot all kinds of trial design plans.
Returns a ggplot2
object.
## Not run: if (require(ggplot2)) plot(getSampleSizeMeans()) ## End(Not run)
Plots a trial design set.
## S3 method for class 'TrialDesignSet' plot( x, y, ..., type = 1L, main = NA_character_, xlab = NA_character_, ylab = NA_character_, palette = "Set1", theta = seq(-1, 1, 0.02), nMax = NA_integer_, plotPointsEnabled = NA, legendPosition = NA_integer_, showSource = FALSE, grid = 1, plotSettings = NULL )
x |
The trial design set, obtained from |
y |
Not available for this kind of plot (it is only defined to be compatible with the generic plot function). |
... |
Optional plot arguments. At the moment |
type |
The plot type (default =
|
main |
The main title. |
xlab |
The x-axis label. |
ylab |
The y-axis label. |
palette |
The palette, default is |
theta |
A vector of standardized effect sizes (theta values), default is a sequence from -1 to 1. |
nMax |
The maximum sample size. Must be a positive integer of length 1. |
plotPointsEnabled |
Logical. If |
legendPosition |
The position of the legend.
By default (
|
showSource |
Logical. If
Note: no plot object will be returned if |
grid |
An integer value specifying the output of multiple plots.
By default ( |
plotSettings |
An object of class |
Generic function to plot a trial design set. It is useful, e.g., to compare different designs or design parameters visually.
Returns a ggplot2
object.
## Not run: design <- getDesignInverseNormal( kMax = 3, alpha = 0.025, typeOfDesign = "asKD", gammaA = 2, informationRates = c(0.2, 0.7, 1), typeBetaSpending = "bsOF" ) # Create a set of designs based on the master design defined above # and varied parameter 'gammaA' designSet <- getDesignSet(design = design, gammaA = 4) if (require(ggplot2)) plot(designSet, type = 1, legendPosition = 6) ## End(Not run)
Generic function to plot a TrialDesignSummaries
object.
## S3 method for class 'TrialDesignSummaries' plot(x, ..., type = 1L, grid = 1)
x |
a |
... |
further arguments passed to or from other methods. |
type |
The plot type (default =
|
grid |
An integer value specifying the output of multiple plots.
By default ( |
Function to identify the available plot types of an object.
plotTypes( obj, output = c("numeric", "caption", "numcap", "capnum"), numberInCaptionEnabled = FALSE ) getAvailablePlotTypes( obj, output = c("numeric", "caption", "numcap", "capnum"), numberInCaptionEnabled = FALSE )
obj |
The object for which the plot types shall be identified, e.g. produced by
|
output |
The output type. Can be one of |
numberInCaptionEnabled |
If |
plotTypes and getAvailablePlotTypes() are equivalent, i.e., plotTypes is a short form of getAvailablePlotTypes().
output:
numeric: numeric output
caption: caption as character output
numcap: list with number and caption
capnum: list with caption and number
Returns a list if output is either capnum or numcap, or returns a vector that is of character type for output = "caption" or of numeric type for output = "numeric".
## Not run: design <- getDesignInverseNormal(kMax = 2) getAvailablePlotTypes(design, "numeric") plotTypes(design, "caption") getAvailablePlotTypes(design, "numcap") plotTypes(design, "capnum") ## End(Not run)
Prints the result object stored inside a summary factory.
## S3 method for class 'SummaryFactory' print(x, ..., markdown = NA, sep = NA_character_)
x |
The summary factory object. |
... |
Optional plot arguments. At the moment |
markdown |
If |
sep |
The separator line between the summary and the print output, default is |
Generic function to print all kinds of summary factories.
Prints the design characteristics object.
## S3 method for class 'TrialDesignCharacteristics' print(x, ..., markdown = NA, showDesign = TRUE)
x |
The trial design characteristics object. |
... |
Optional plot arguments. At the moment |
markdown |
If |
showDesign |
Show the design print output above the design characteristics, default is |
Generic function to print all kinds of design characteristics.
Generic function to print a TrialDesignSummaries
object.
## S3 method for class 'TrialDesignSummaries' print(x, ...)
x |
a |
... |
further arguments passed to or from other methods. |
Returns the R source command of a result object.
rcmd( obj, ..., leadingArguments = NULL, includeDefaultParameters = FALSE, stringWrapParagraphWidth = 90, prefix = "", postfix = "", stringWrapPrefix = "", newArgumentValues = list(), tolerance = 1e-07, pipeOperator = c("auto", "none", "magrittr", "R"), output = c("vector", "cat", "test", "markdown", "internal"), explicitPrint = FALSE ) getObjectRCode( obj, ..., leadingArguments = NULL, includeDefaultParameters = FALSE, stringWrapParagraphWidth = 90, prefix = "", postfix = "", stringWrapPrefix = "", newArgumentValues = list(), tolerance = 1e-07, pipeOperator = c("auto", "none", "magrittr", "R"), output = c("vector", "cat", "test", "markdown", "internal"), explicitPrint = FALSE )
obj |
The result object. |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
leadingArguments |
A character vector with arguments that shall be inserted at the beginning of the function command,
e.g., |
includeDefaultParameters |
If |
stringWrapParagraphWidth |
An integer value defining the number of characters after which a line break shall be inserted;
set to |
prefix |
A character string that shall be added to the beginning of the R command. |
postfix |
A character string that shall be added to the end of the R command. |
stringWrapPrefix |
A prefix character string that shall be added to each new line, typically some spaces. |
newArgumentValues |
A named list with arguments that shall be renewed in the R command, e.g.,
|
tolerance |
The tolerance for defining a value as default. |
pipeOperator |
The pipe operator to use in the R code, default is "none". |
output |
The output format, default is a character "vector". |
explicitPrint |
Show an explicit |
getObjectRCode() (short: rcmd()) recreates the R commands that result in the specified object obj. obj must be an instance of class ParameterSet.
A character value or vector will be returned.
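A minimal sketch of typical usage (illustrative, not part of the original documentation):

# Recreate the R command that produced a design object
design <- getDesignInverseNormal(kMax = 2, alpha = 0.025)
getObjectRCode(design)                  # returns the command as a character vector
getObjectRCode(design, output = "cat")  # prints the command to the console
rcmd(design)                            # short form, equivalent to getObjectRCode(design)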
Reads a data file and returns it as dataset object.
readDataset( file, ..., header = TRUE, sep = ",", quote = "\"", dec = ".", fill = TRUE, comment.char = "", fileEncoding = "UTF-8" )
file |
A CSV file (see |
... |
Further arguments to be passed to |
header |
A logical value indicating whether the file contains the names of the variables as its first line. |
sep |
The field separator character. Values on each line of the file are separated
by this character. If sep = "," (the default for |
quote |
The set of quoting characters. To disable quoting altogether, use
quote = "". See scan for the behavior on quotes embedded in quotes. Quoting is only
considered for columns read as character, which is all of them unless |
dec |
The character used in the file for decimal points. |
fill |
logical. If |
comment.char |
character: a character vector of length one containing a single character or an empty string. Use "" to turn off the interpretation of comments altogether. |
fileEncoding |
character string: if non-empty declares the encoding used on a file (not a connection) so the character data can be re-encoded. See the 'Encoding' section of the help for file, the 'R Data Import/Export Manual' and 'Note'. |
readDataset() is a wrapper function that uses read.table() to read the CSV file into a data frame, transfers it from long to wide format with reshape(), and passes the data to getDataset().
Returns a Dataset object.
The following generics (R generic functions) are available for this result object:
names() to obtain the field names,
print() to print the object,
summary() to display a summary of the object,
plot() to plot the object,
as.data.frame() to coerce the object to a data.frame,
as.matrix() to coerce the object to a matrix.

readDatasets() for reading multiple datasets,
writeDataset() for writing a single dataset,
writeDatasets() for writing multiple datasets.
## Not run: 
dataFileRates <- system.file("extdata",
    "dataset_rates.csv",
    package = "rpact"
)
if (dataFileRates != "") {
    datasetRates <- readDataset(dataFileRates)
    datasetRates
}
dataFileMeansMultiArm <- system.file("extdata",
    "dataset_means_multi-arm.csv",
    package = "rpact"
)
if (dataFileMeansMultiArm != "") {
    datasetMeansMultiArm <- readDataset(dataFileMeansMultiArm)
    datasetMeansMultiArm
}
dataFileRatesMultiArm <- system.file("extdata",
    "dataset_rates_multi-arm.csv",
    package = "rpact"
)
if (dataFileRatesMultiArm != "") {
    datasetRatesMultiArm <- readDataset(dataFileRatesMultiArm)
    datasetRatesMultiArm
}
dataFileSurvivalMultiArm <- system.file("extdata",
    "dataset_survival_multi-arm.csv",
    package = "rpact"
)
if (dataFileSurvivalMultiArm != "") {
    datasetSurvivalMultiArm <- readDataset(dataFileSurvivalMultiArm)
    datasetSurvivalMultiArm
}
## End(Not run)
Reads a data file and returns it as a list of dataset objects.
readDatasets( file, ..., header = TRUE, sep = ",", quote = "\"", dec = ".", fill = TRUE, comment.char = "", fileEncoding = "UTF-8" )
file |
A CSV file (see |
... |
Further arguments to be passed to |
header |
A logical value indicating whether the file contains the names of the variables as its first line. |
sep |
The field separator character. Values on each line of the file are separated
by this character. If sep = "," (the default for |
quote |
The set of quoting characters. To disable quoting altogether, use
quote = "". See scan for the behavior on quotes embedded in quotes. Quoting is only
considered for columns read as character, which is all of them unless |
dec |
The character used in the file for decimal points. |
fill |
logical. If |
comment.char |
character: a character vector of length one containing a single character or an empty string. Use "" to turn off the interpretation of comments altogether. |
fileEncoding |
character string: if non-empty declares the encoding used on a file (not a connection) so the character data can be re-encoded. See the 'Encoding' section of the help for file, the 'R Data Import/Export Manual' and 'Note'. |
Reads a file that was written by writeDatasets() before.
Returns a list of Dataset objects.

readDataset() for reading a single dataset,
writeDatasets() for writing multiple datasets,
writeDataset() for writing a single dataset.
## Not run: 
dataFile <- system.file("extdata", "datasets_rates.csv", package = "rpact")
if (dataFile != "") {
    datasets <- readDatasets(dataFile)
    datasets
}
## End(Not run)
rpact (R Package for Adaptive Clinical Trials) is a comprehensive package that enables the design, simulation, and analysis of confirmatory adaptive group sequential designs. In particular, the methods described in the monograph by Wassmer and Brannath (published by Springer, 2016) are implemented. It also comprises advanced methods for sample size calculations in fixed sample size designs, including, e.g., sample size calculation for survival trials with piecewise exponentially distributed survival times and staggered patient entry.
rpact includes the classical group sequential designs (including user-defined spending function approaches) where the sample sizes per stage (or the time points of interim analysis) cannot be changed in a data-driven way. Confirmatory adaptive designs explicitly allow for such changes while controlling the Type I error rate. They are based on either the combination testing principle or the conditional rejection probability (CRP) principle. Both are available; for the former, the inverse normal combination test and Fisher's combination test can be used.
Specific techniques of the adaptive methodology are also available, e.g., overall confidence intervals, overall p-values, and conditional and predictive power assessments. Simulations can be performed to assess the design characteristics of a (user-defined) sample size recalculation strategy. Designs are available for trials with continuous, binary, and survival endpoints.
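A minimal workflow sketch (an illustration added here, not part of the original text; the parameter values are arbitrary): define a classical group sequential design and derive the sample size for a comparison of two means.

# Three-stage O'Brien-Fleming design and sample size for two means (illustrative values)
design <- getDesignGroupSequential(
    kMax = 3, alpha = 0.025, beta = 0.2,
    typeOfDesign = "OF"
)
sampleSize <- getSampleSizeMeans(design, alternative = 0.5, stDev = 1)
summary(sampleSize)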
For more information please visit www.rpact.org. If you are interested in professional services related to the package or need comprehensive validation documentation to fulfill regulatory requirements, please visit www.rpact.com.
rpact is developed by
Gernot Wassmer ([email protected]) and
Friedrich Pahlke ([email protected]).
Gernot Wassmer, Friedrich Pahlke
Wassmer, G., Brannath, W. (2016) Group Sequential and Confirmatory Adaptive Designs in Clinical Trials (Springer Series in Pharmaceutical Statistics; doi:10.1007/978-3-319-32562-0)
Useful links:
Report bugs at https://github.com/rpact-com/rpact/issues
With this function, the format of the standard output of all rpact objects can be changed and set to user-defined values.
setOutputFormat( parameterName = NA_character_, ..., digits = NA_integer_, nsmall = NA_integer_, trimSingleZeros = NA, futilityProbabilityEnabled = NA, file = NA_character_, resetToDefault = FALSE, roundFunction = NA_character_, persist = TRUE )
parameterName |
The name of the parameter whose output format shall be edited.
Leave the default |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
digits |
How many significant digits are to be used for a numeric value.
The default, |
nsmall |
The minimum number of digits to the right of the decimal point in
formatting real numbers in non-scientific formats.
Allowed values are |
trimSingleZeros |
If |
futilityProbabilityEnabled |
If |
file |
An optional file name of an existing text file that contains output format definitions (see Details for more information). |
resetToDefault |
If |
roundFunction |
A character value that specifies the R base round function
to use, default is |
persist |
A logical value indicating whether the output format settings
should be saved persistently. Default is |
Output formats can be written to a text file (see getOutputFormat()). To load your personal output formats, read a previously saved file at the beginning of your work with rpact, e.g., execute setOutputFormat(file = "my_rpact_output_formats.txt").
Note that the parameterName need not match exactly; e.g., for p-values the following parameter names will be recognized, amongst others:
p value
p.values
p-value
pValue
rpact.output.format.p.value
See format() for details on the function used internally to format the values.
Other output formats:
getOutputFormat()
## Not run: 
# show output format of p values
getOutputFormat("p.value")

# set new p value output format
setOutputFormat("p.value", digits = 5, nsmall = 5)

# show sample sizes as smallest integers not less than the not rounded values
setOutputFormat("sample size", digits = 0, nsmall = 0, roundFunction = "ceiling")
getSampleSizeMeans()

# show sample sizes as smallest integers not greater than the not rounded values
setOutputFormat("sample size", digits = 0, nsmall = 0, roundFunction = "floor")
getSampleSizeMeans()

# set new sample size output format without round function
setOutputFormat("sample size", digits = 2, nsmall = 2)
getSampleSizeMeans()

# reset sample size output format to default
setOutputFormat("sample size")
getSampleSizeMeans()
getOutputFormat("sample size")
## End(Not run)
This function ensures the correct installation of the rpact
package by performing
various tests. It supports a comprehensive validation process, essential for GxP compliance
and other regulatory requirements.
testPackage( outDir = ".", ..., completeUnitTestSetEnabled = TRUE, connection = list(token = NULL, secret = NULL), testFileDirectory = NA_character_, downloadTestsOnly = FALSE, addWarningDetailsToReport = TRUE, reportType = c("compact", "detailed", "Rout"), testInstalledBasicPackages = TRUE, scope = c("basic", "devel", "both", "internet", "all"), openHtmlReport = TRUE, keepSourceFiles = FALSE )
outDir |
The absolute path to the output directory where all test results will be saved. By default, the current working directory is used. |
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
completeUnitTestSetEnabled |
If |
connection |
A |
testFileDirectory |
An optional path pointing to a local directory containing test files. |
downloadTestsOnly |
If |
addWarningDetailsToReport |
If |
reportType |
The type of report to generate.
Can be |
testInstalledBasicPackages |
If |
scope |
The scope of the basic R package tests to run. Can be |
openHtmlReport |
If |
keepSourceFiles |
If |
This function is integral to the installation qualification (IQ) process of the rpact package, ensuring it meets quality standards and functions as expected. A directory named rpact-tests is created within the specified output directory, where all test files are downloaded from a secure resource and executed. Results are saved in the file testthat.Rout, located in the rpact-tests directory.
Installation qualification is a critical step in the validation process. Without successful IQ, the package cannot be considered fully validated. To gain access to the full set of unit tests, users must provide token and secret credentials, which are distributed to members of the rpact user group as part of the validation documentation.
For more information, see vignette rpact_installation_qualification.
Invisibly returns the value of completeUnitTestSetEnabled.
For more information, please visit: https://www.rpact.org/iq
## Not run: 
# Set the output directory
setwd("/path/to/output")

# Basic usage
testPackage()

# Perform all unit tests with access credentials
testPackage(
    connection = list(
        token = "your_token_here",
        secret = "your_secret_here"
    )
)

# Download test files without executing them
testPackage(downloadTestsOnly = TRUE)
## End(Not run)
Distribution function, quantile function and random number generation for the piecewise exponential distribution.
getPiecewiseExponentialDistribution( time, ..., piecewiseSurvivalTime = NA_real_, piecewiseLambda = NA_real_, kappa = 1 ) ppwexp(t, ..., s = NA_real_, lambda = NA_real_, kappa = 1) getPiecewiseExponentialQuantile( quantile, ..., piecewiseSurvivalTime = NA_real_, piecewiseLambda = NA_real_, kappa = 1 ) qpwexp(q, ..., s = NA_real_, lambda = NA_real_, kappa = 1) getPiecewiseExponentialRandomNumbers( n, ..., piecewiseSurvivalTime = NA_real_, piecewiseLambda = NA_real_, kappa = 1 ) rpwexp(n, ..., s = NA_real_, lambda = NA_real_, kappa = 1)
... |
Ensures that all arguments (starting from the "...") are to be named and that a warning will be displayed if unknown arguments are passed. |
kappa |
A numeric value > 0. A |
t , time
|
Vector of time values. |
s , piecewiseSurvivalTime
|
Vector of start times defining the "time pieces". |
lambda , piecewiseLambda
|
Vector of lambda values (hazard rates) corresponding to the start times. |
q , quantile
|
Vector of quantiles. |
n |
Number of observations. |
getPiecewiseExponentialDistribution() (short: ppwexp()), getPiecewiseExponentialQuantile() (short: qpwexp()), and getPiecewiseExponentialRandomNumbers() (short: rpwexp()) provide probabilities, quantiles, and random numbers according to a piecewise exponential or a Weibull distribution.
The piecewise definition is performed through a vector of starting times (piecewiseSurvivalTime) and a vector of hazard rates (piecewiseLambda).
You can also use a list that defines the starting times and piecewise lambdas together and define piecewiseSurvivalTime as this list. The list needs to have the form, e.g.,
piecewiseSurvivalTime <- list(
    "0 - <6"  = 0.025,
    "6 - <9"  = 0.04,
    "9 - <15" = 0.015,
    ">=15"    = 0.007
)
For the Weibull case, you can also specify a shape parameter kappa in order to calculate probabilities, quantiles, or random numbers. In this case, no piecewise definition is possible, i.e., only piecewiseLambda (as a single value) and kappa need to be specified.
A numeric value or vector will be returned.
## Not run: 
# Calculate probabilities for a range of time values for a
# piecewise exponential distribution with hazard rates
# 0.025, 0.04, 0.015, and 0.01 in the intervals
# [0, 6), [6, 9), [9, 15), [15, Inf), respectively,
# and re-return the time values:
piecewiseSurvivalTime <- list(
    "0 - <6"  = 0.025,
    "6 - <9"  = 0.04,
    "9 - <15" = 0.015,
    ">=15"    = 0.01
)
y <- getPiecewiseExponentialDistribution(seq(0, 150, 15),
    piecewiseSurvivalTime = piecewiseSurvivalTime
)
getPiecewiseExponentialQuantile(y,
    piecewiseSurvivalTime = piecewiseSurvivalTime
)
## End(Not run)
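The Weibull case mentioned in the details above is not covered by the example; the following sketch (an assumption based on the signatures above, not part of the original examples) uses a single hazard rate together with the shape parameter kappa:

# Weibull case: a single lambda combined with kappa; no piecewise definition
qpwexp(0.5, lambda = 0.02, kappa = 2)            # time at which the cumulative probability is 0.5
ppwexp(c(12, 24, 36), lambda = 0.02, kappa = 2)  # cumulative probabilities at the given times
rpwexp(10, lambda = 0.02, kappa = 2)             # 10 random event times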
Functions to convert pi, lambda and median values into each other.
getLambdaByPi(piValue, eventTime = 12, kappa = 1) getLambdaByMedian(median, kappa = 1) getHazardRatioByPi(pi1, pi2, eventTime = 12, kappa = 1) getPiByLambda(lambda, eventTime = 12, kappa = 1) getPiByMedian(median, eventTime = 12, kappa = 1) getMedianByLambda(lambda, kappa = 1) getMedianByPi(piValue, eventTime = 12, kappa = 1)
piValue , pi1 , pi2 , lambda , median
|
Value that shall be converted. |
eventTime |
The assumed time under which the event rates are calculated, default is |
kappa |
A numeric value > 0. A |
Can be used, e.g., to convert median values into pi or lambda values for usage in getSampleSizeSurvival() or getPowerSurvival().
Returns a numeric value or vector.
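A minimal sketch of these conversions (illustrative values, not part of the original text):

# Convert a median survival time of 14 (time units) into a hazard rate,
# transform back, derive the event probability by eventTime = 12, and
# compute a hazard ratio from two event rates
lambda1 <- getLambdaByMedian(14)
getMedianByLambda(lambda1)                      # returns 14 again
getPiByLambda(lambda1, eventTime = 12)          # event probability up to month 12
getHazardRatioByPi(0.3, 0.45, eventTime = 12)   # hazard ratio implied by the two rates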
Writes a dataset to a CSV file.
writeDataset( dataset, file, ..., append = FALSE, quote = TRUE, sep = ",", eol = "\n", na = "NA", dec = ".", row.names = TRUE, col.names = NA, qmethod = "double", fileEncoding = "UTF-8" )
dataset |
A dataset. |
file |
The target CSV file. |
... |
Further arguments to be passed to |
append |
Logical. Only relevant if file is a character string.
If |
quote |
The set of quoting characters. To disable quoting altogether, use
quote = "". See scan for the behavior on quotes embedded in quotes. Quoting is only
considered for columns read as character, which is all of them unless |
sep |
The field separator character. Values on each line of the file are separated
by this character. If sep = "," (the default for |
eol |
The character(s) to print at the end of each line (row). |
na |
The string to use for missing values in the data. |
dec |
The character used in the file for decimal points. |
row.names |
Either a logical value indicating whether the row names of |
col.names |
Either a logical value indicating whether the column names of |
qmethod |
A character string specifying how to deal with embedded double quote characters
when quoting strings. Must be one of "double" (default in |
fileEncoding |
Character string: if non-empty declares the encoding used on a file (not a connection) so the character data can be re-encoded. See the 'Encoding' section of the help for file, the 'R Data Import/Export Manual' and 'Note'. |
writeDataset() is a wrapper function that coerces the dataset to a data frame and uses write.table() to write it to a CSV file.

writeDatasets() for writing multiple datasets,
readDataset() for reading a single dataset,
readDatasets() for reading multiple datasets.
## Not run: 
datasetOfRates <- getDataset(
    n1 = c(11, 13, 12, 13),
    n2 = c(8, 10, 9, 11),
    events1 = c(10, 10, 12, 12),
    events2 = c(3, 5, 5, 6)
)
writeDataset(datasetOfRates, "dataset_rates.csv")
## End(Not run)
Writes a list of datasets to a CSV file.
writeDatasets( datasets, file, ..., append = FALSE, quote = TRUE, sep = ",", eol = "\n", na = "NA", dec = ".", row.names = TRUE, col.names = NA, qmethod = "double", fileEncoding = "UTF-8" )
datasets |
A list of datasets. |
file |
The target CSV file. |
... |
Further arguments to be passed to |
append |
Logical. Only relevant if file is a character string.
If |
quote |
The set of quoting characters. To disable quoting altogether, use
quote = "". See scan for the behavior on quotes embedded in quotes. Quoting is only
considered for columns read as character, which is all of them unless |
sep |
The field separator character. Values on each line of the file are separated
by this character. If sep = "," (the default for |
eol |
The character(s) to print at the end of each line (row). |
na |
The string to use for missing values in the data. |
dec |
The character used in the file for decimal points. |
row.names |
Either a logical value indicating whether the row names of |
col.names |
Either a logical value indicating whether the column names of |
qmethod |
A character string specifying how to deal with embedded double quote characters
when quoting strings. Must be one of "double" (default in |
fileEncoding |
Character string: if non-empty declares the encoding used on a file (not a connection) so the character data can be re-encoded. See the 'Encoding' section of the help for file, the 'R Data Import/Export Manual' and 'Note'. |
The format of the CSV file is optimized for usage of readDatasets().

writeDataset() for writing a single dataset,
readDatasets() for reading multiple datasets,
readDataset() for reading a single dataset.
## Not run: 
d1 <- getDataset(
    n1 = c(11, 13, 12, 13),
    n2 = c(8, 10, 9, 11),
    events1 = c(10, 10, 12, 12),
    events2 = c(3, 5, 5, 6)
)
d2 <- getDataset(
    n1 = c(9, 13, 12, 13),
    n2 = c(6, 10, 9, 11),
    events1 = c(10, 10, 12, 12),
    events2 = c(4, 5, 5, 6)
)
datasets <- list(d1, d2)
writeDatasets(datasets, "datasets_rates.csv")
## End(Not run)