TY - JOUR
T1 - Exploratory Control with Tsallis Entropy for Latent Factor Models
AU - Donnelly, Ryan
AU - Jaimungal, Sebastian
N1 - Funding Information:
Received by the editors November 15, 2022; accepted for publication (in revised form) November 12, 2023; published electronically February 5, 2024. https://doi.org/10.1137/22M153505X Funding: The work of the second author was supported by the Natural Sciences and Engineering Research Council of Canada (grants RGPIN-2018-05705 and RGPAS-2018-522715). Department of Mathematics, King's College London, Strand, London, WC2R 2LS, UK ([email protected]). Department of Statistical Sciences, University of Toronto, Toronto, Ontario, M5G 1Z5, Canada ([email protected]).
Publisher Copyright:
© 2024 Society for Industrial and Applied Mathematics Publications. All rights reserved.
PY - 2024/3
Y1 - 2024/3
N2 - We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than the actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis entropy and derive the optimal distribution over states, which we prove is q-Gaussian distributed with location characterized through the solution of a BSΔE and a BSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft Q-learning. The approach may be applied in, e.g., developing more robust statistical arbitrage trading strategies.
AB - We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than the actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis entropy and derive the optimal distribution over states, which we prove is q-Gaussian distributed with location characterized through the solution of a BSΔE and a BSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft Q-learning. The approach may be applied in, e.g., developing more robust statistical arbitrage trading strategies.
UR - http://www.scopus.com/inward/record.url?scp=85190401841&partnerID=8YFLogxK
U2 - 10.1137/22M153505X
DO - 10.1137/22M153505X
M3 - Article
SN - 1945-497X
VL - 15
SP - 26
EP - 53
JO - SIAM Journal on Financial Mathematics
JF - SIAM Journal on Financial Mathematics
IS - 1
ER -