King's College London Research Portal

Reward Prediction Error Signals are Meta-Representational

Research output: Contribution to journal › Article › peer-review

Standard

Reward Prediction Error Signals are Meta-Representational. / Shea, Nicholas.

In: NOUS, Vol. 48, No. 2, 06.2014, p. 314-341.

Research output: Contribution to journal › Article › peer-review

Harvard

Shea, N 2014, 'Reward Prediction Error Signals are Meta-Representational', NOUS, vol. 48, no. 2, pp. 314-341. https://doi.org/10.1111/j.1468-0068.2012.00863.x

APA

Shea, N. (2014). Reward Prediction Error Signals are Meta-Representational. NOUS, 48(2), 314-341. https://doi.org/10.1111/j.1468-0068.2012.00863.x

Vancouver

Shea N. Reward Prediction Error Signals are Meta-Representational. NOUS. 2014 Jun;48(2):314-341. https://doi.org/10.1111/j.1468-0068.2012.00863.x

Author

Shea, Nicholas. / Reward Prediction Error Signals are Meta-Representational. In: NOUS. 2014 ; Vol. 48, No. 2. pp. 314-341.

BibTeX

@article{196234796e8741e6a67868c892636ea8,
title = "Reward Prediction Error Signals are Meta-Representational",
abstract = "Contents: 1. Introduction. 2. Reward-Guided Decision Making. 3. Content in the Model. 4. How to Deflate a Metarepresentational Reading. Proust and Carruthers on metacognitive feelings. 5. A Deflationary Treatment of RPEs? 5.1 Dispensing with prediction errors. 5.2 What is use of the RPE focused on? 5.3 Alternative explanations—worldly correlates. 5.4 Contrast cases. 6. Conclusion. Appendix: Temporal Difference Learning Algorithms",
author = "Nicholas Shea",
year = "2014",
month = jun,
doi = "10.1111/j.1468-0068.2012.00863.x",
language = "English",
volume = "48",
pages = "314--341",
journal = "NOUS",
issn = "0029-4624",
publisher = "Wiley-Blackwell",
number = "2",
}

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Reward Prediction Error Signals are Meta-Representational

AU - Shea, Nicholas

PY - 2014/6

Y1 - 2014/6

N2 - Contents: 1. Introduction. 2. Reward-Guided Decision Making. 3. Content in the Model. 4. How to Deflate a Metarepresentational Reading. Proust and Carruthers on metacognitive feelings. 5. A Deflationary Treatment of RPEs? 5.1 Dispensing with prediction errors. 5.2 What is use of the RPE focused on? 5.3 Alternative explanations—worldly correlates. 5.4 Contrast cases. 6. Conclusion. Appendix: Temporal Difference Learning Algorithms

AB - Contents: 1. Introduction. 2. Reward-Guided Decision Making. 3. Content in the Model. 4. How to Deflate a Metarepresentational Reading. Proust and Carruthers on metacognitive feelings. 5. A Deflationary Treatment of RPEs? 5.1 Dispensing with prediction errors. 5.2 What is use of the RPE focused on? 5.3 Alternative explanations—worldly correlates. 5.4 Contrast cases. 6. Conclusion. Appendix: Temporal Difference Learning Algorithms

U2 - 10.1111/j.1468-0068.2012.00863.x

DO - 10.1111/j.1468-0068.2012.00863.x

M3 - Article

VL - 48

SP - 314

EP - 341

JO - NOUS

JF - NOUS

SN - 0029-4624

IS - 2

ER -
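
The contents listed in the record's abstract end with an appendix on temporal difference (TD) learning algorithms, the standard setting in which reward prediction error (RPE) signals are defined. As a rough illustration only, and not code from the paper, the sketch below shows the textbook TD(0) update: the RPE is the received reward plus the discounted value prediction for the next state, minus the current prediction, and it drives the value update. The function name and parameter values are illustrative assumptions.

# Minimal TD(0) sketch (illustrative; not drawn from the paper).
# Reward prediction error: delta = r + gamma * V(s_next) - V(s).
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    rpe = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)  # prediction error
    V[s] = V.get(s, 0.0) + alpha * rpe                    # nudge the value prediction
    return rpe

# Example: one update after a rewarded transition from "cue" to "outcome".
V = {}
print(td0_update(V, "cue", 1.0, "outcome"))  # 1.0 (fully unexpected reward)
print(V["cue"])                              # 0.1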

