Exploratory Control with Tsallis Entropy for Latent Factor Models

Ryan Donnelly*, Sebastian Jaimungal

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than the actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis entropy and derive the optimal distribution over states, which we prove is q-Gaussian distributed with location characterized through the solution of a BSΔE and a BSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft Q-learning. The approach may be applied, for example, to developing more robust statistical arbitrage trading strategies.
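For reference, a minimal sketch of the two objects named in the abstract, written in conventions common in the Tsallis-entropy literature (the paper's normalization, sign, and parameter conventions may differ). The Tsallis entropy of a density \pi with entropic index q is

S_q(\pi) = \frac{1}{q-1}\Big(1 - \int \pi(a)^q \, da\Big),

which recovers the Shannon entropy -\int \pi(a)\log\pi(a)\,da in the limit q \to 1. A q-Gaussian density with location \mu and scale \beta > 0 takes the form

\pi(a) \propto \big[1 - (1-q)\,\beta\,(a-\mu)^2\big]_+^{\frac{1}{1-q}},

which reduces to a Gaussian centred at \mu as q \to 1.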

Original language: English
Pages (from-to): 26-53
Number of pages: 28
Journal: SIAM Journal on Financial Mathematics
Volume: 15
Issue number: 1
Early online date: 5 Feb 2024
DOIs
Publication status: Published - Mar 2024
