TY - CHAP
T1 - Bounding Causal Effects with Leaky Instruments
AU - Watson, David S.
AU - Penn, Jordan
AU - Gunderson, Lee M.
AU - Bravo-Hermsdorff, Gecia
AU - Mastouri, Afsaneh
AU - Silva, Ricardo
N1 - Publisher Copyright:
© 2024 Proceedings of Machine Learning Research. All rights reserved.
PY - 2024
Y1 - 2024
N2 - Instrumental variables (IVs) are a popular and powerful tool for estimating causal effects in the presence of unobserved confounding. However, classical approaches rely on strong assumptions such as the exclusion criterion, which states that instrumental effects must be entirely mediated by treatments. This assumption often fails in practice. When IV methods are improperly applied to data that do not meet the exclusion criterion, estimated causal effects may be badly biased. In this work, we propose a novel solution that provides partial identification in linear systems given a set of leaky instruments, which are allowed to violate the exclusion criterion to some limited degree. We derive a convex optimization objective that provides provably sharp bounds on the average treatment effect under some common forms of information leakage, and implement inference procedures to quantify the uncertainty of resulting estimates. We demonstrate our method in a set of experiments with simulated data, where it performs favorably against the state of the art. An accompanying R package, leakyIV, is available from CRAN.
UR - http://www.scopus.com/inward/record.url?scp=85212174857&partnerID=8YFLogxK
M3 - Conference paper
AN - SCOPUS:85212174857
VL - 244
T3 - Proceedings of Machine Learning Research
SP - 3689
EP - 3710
BT - Proceedings of the 40th Conference on Uncertainty in Artificial Intelligence
T2 - 40th Conference on Uncertainty in Artificial Intelligence, UAI 2024
Y2 - 15 July 2024 through 19 July 2024
ER -