Deep advantage learning for optimal dynamic treatment regime.

Title: Deep advantage learning for optimal dynamic treatment regime.
Publication Type: Journal Article
Year of Publication: 2018
Authors: Liang, Shuhan, Wenbin Lu, and Rui Song
Journal: Stat Theory Relat Fields
Volume: 2
Issue: 1
Pagination: 80-88
Date Published: 2018
ISSN: 2475-4277
Abstract:

Recently, deep learning has achieved state-of-the-art performance on many difficult tasks. Deep neural networks outperform many existing popular methods in the field of reinforcement learning, and they can identify important covariates automatically. Parameter sharing in convolutional neural networks (CNNs) greatly reduces the number of parameters in the network, which allows for high scalability. However, little research has been done on deep advantage learning (A-learning). In this paper, we present a deep A-learning approach to estimating the optimal dynamic treatment regime. A-learning models the advantage function, which is of direct relevance to the goal. We use an inverse probability weighting (IPW) method to estimate the difference between potential outcomes, which does not require any model assumptions on the baseline mean function. We implement different architectures of deep CNNs and convexified convolutional neural networks (CCNNs). The proposed deep A-learning methods are applied to data from the STAR*D trial and are shown to outperform the penalized least squares estimator with a linear decision rule.
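The record does not include an implementation, so the following is only a minimal illustrative sketch of the IPW idea the abstract describes, restricted to a single decision point with a binary treatment and a known propensity score; the simulated data, network architecture, and all variable names are hypothetical, and the paper itself uses CNN/CCNN advantage models rather than this small feed-forward network.

# Minimal single-stage sketch of IPW-based advantage (A-)learning (PyTorch).
# Assumptions not from the paper: binary treatment A in {0, 1}, known
# propensity pi(x) = P(A = 1 | X = x), one decision point, simulated data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Simulated data: covariates X, treatment A, outcome Y (larger is better).
n, p = 500, 10
X = torch.randn(n, p)
pi = torch.full((n,), 0.5)             # randomized assignment: pi(x) = 0.5
A = torch.bernoulli(pi)
true_adv = X[:, 0] - 0.5 * X[:, 1]     # hypothetical advantage signal
Y = X.sum(dim=1) + A * true_adv + 0.5 * torch.randn(n)

# IPW pseudo-outcome: E[(A - pi) / (pi * (1 - pi)) * Y | X] equals the
# advantage E[Y(1) - Y(0) | X], with no model for the baseline mean.
W = (A - pi) / (pi * (1.0 - pi))
pseudo = W * Y

# Small feed-forward advantage model standing in for the deep CNN/CCNN.
adv_net = nn.Sequential(nn.Linear(p, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(adv_net.parameters(), lr=1e-2)

# Fit the advantage model by least squares against the IPW pseudo-outcome.
for _ in range(300):
    opt.zero_grad()
    loss = ((adv_net(X).squeeze(1) - pseudo) ** 2).mean()
    loss.backward()
    opt.step()

# Estimated regime: treat (A = 1) when the estimated advantage is positive.
d_hat = (adv_net(X).squeeze(1) > 0).float()
print("agreement with true advantage sign:",
      ((adv_net(X).squeeze(1) > 0) == (true_adv > 0)).float().mean().item())

Regressing the IPW pseudo-outcome on the covariates targets the conditional treatment effect directly, which is why no model for the baseline mean function is needed; the estimated rule simply treats whenever the fitted advantage is positive.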

DOI: 10.1080/24754269.2018.1466096
Alternate Journal: Stat Theory Relat Fields
Original Publication: Deep advantage learning for optimal dynamic treatment regime.
PubMed ID: 30420972
PubMed Central ID: PMC6226036
Grant List: P01 CA142538 / CA / NCI NIH HHS / United States