|Title||Interactive Q-learning for Quantiles.|
|Publication Type||Journal Article|
|Year of Publication||2017|
|Authors||Linn, Kristin A., Eric B. Laber, and Leonard A. Stefanski|
|Journal||J Am Stat Assoc|
A dynamic treatment regime is a sequence of decision rules, each of which recommends treatment based on features of patient medical history such as past treatments and outcomes. Existing methods for estimating optimal dynamic treatment regimes from data optimize the mean of a response variable. However, the mean may not always be the most appropriate summary of performance. We derive estimators of decision rules for optimizing probabilities and quantiles computed with respect to the response distribution for two-stage, binary treatment settings. This enables estimation of dynamic treatment regimes that optimize the cumulative distribution function of the response at a prespecified point or a prespecified quantile of the response distribution such as the median. The proposed methods perform favorably in simulation experiments. We illustrate our approach with data from a sequentially randomized trial where the primary outcome is remission of depression symptoms.
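The abstract's central point, that the mean may not be the most appropriate summary of the response distribution, can be illustrated with a toy single-stage comparison. This sketch uses invented distributions and is not the paper's estimator or data; it only shows that a mean-optimal rule and a median-optimal rule can select different treatments when one treatment has a subgroup of poor responders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration (invented distributions, not the authors' method):
# treatment 0 gives responses tightly concentrated around 1.0, while
# treatment 1 has a higher mean but a 60% subgroup of poor responders.
n = 100_000
y0 = rng.normal(loc=1.0, scale=0.5, size=n)
y1 = np.where(rng.random(n) < 0.6,
              rng.normal(-1.0, 0.5, n),   # poor responders (60%)
              rng.normal(8.0, 0.5, n))    # strong responders (40%)

# A rule optimizing the mean picks treatment 1 (mean ~2.6 vs ~1.0),
# while a rule optimizing the median picks treatment 0 (median ~1.0 vs ~-0.5).
mean_best = int(y1.mean() > y0.mean())
median_best = int(np.median(y1) > np.median(y0))
print(mean_best, median_best)
```

The disagreement between the two rules is the kind of situation in which quantile-optimal regimes, as estimated in the paper, can be preferable to mean-optimal ones.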
|Alternate Journal||J Am Stat Assoc|
|Original Publication||Interactive Q-learning for quantiles.|
|PubMed Central ID||PMC5586239|
|Grant List||P01 CA142538 / CA / NCI NIH HHS / United States; R01 CA085848 / CA / NCI NIH HHS / United States; DMS-0906421 / National Science Foundation / International; DMS-1406456 / National Science Foundation / International|