Research

Daniel’s current research focuses on bridging the theoretical gap between provably optimal RL algorithms, which are often of limited practical use, and algorithms with strong empirical performance but weak, if any, theoretical guarantees.

Publications

Wesley Cowan, Michael N. Katehakis, and Daniel Pirutinsky (2019c). Sufficient conditions for asymptotic logarithmic regret for reinforcement learning. Manuscript in preparation.

Debopriya Ghosh, Odysseas Kanavetas, Michael N. Katehakis, Spiros Papadimitriou, and Daniel Pirutinsky (2019). New Reinforcement Learning Models and Algorithms for Healthcare Treatment Choices. Manuscript in preparation.

Wesley Cowan, Michael N. Katehakis, and Daniel Pirutinsky (2019b). Accelerating the Computation of UCB and Related Indices for Reinforcement Learning. Manuscript under review. [arxiv]

Wesley Cowan, Michael N. Katehakis, and Daniel Pirutinsky (2019a). Reinforcement learning: a comparison of UCB versus alternative adaptive policies. Proceedings of the First Congress of Greek Mathematicians, De Gruyter Proceedings in Mathematics, 2019. [arxiv]

Lisa Hellerstein, Thomas Lidbetter, and Daniel Pirutinsky (2019). Solving zero-sum games using best-response oracles with applications to search games. Operations Research, 67(3):731–743, 2019. [arxiv]