Improving Activation Steering in Language Models with Mean-Centring

Ole Jorgensen, Dylan Cope, Nandi Schoots, Murray Shanahan

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review



Recent work in activation steering has demonstrated the potential to better control the outputs of Large Language Models (LLMs), but it requires finding steering vectors. This is difficult because engineers do not typically know how features are represented in these models. We seek to address this issue by applying the idea of mean-centring to steering vectors. We find that taking the average of activations associated with a target dataset, and then subtracting the mean of all training activations, results in effective steering vectors. We test this method on a variety of models on natural language tasks, steering away from generating toxic text and steering story completions towards a target genre. We also apply mean-centring to extract function vectors, triggering the execution of a range of natural language tasks significantly more effectively than previous baselines. This suggests that mean-centring can be used to easily improve the effectiveness of activation steering in a wide range of contexts.
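The mean-centring recipe described in the abstract reduces to a simple vector operation: average the activations collected on the target dataset, then subtract the average of activations over a broad training corpus. A minimal NumPy sketch is below; the function name and the random activation matrices are hypothetical stand-ins for activations extracted from an actual model layer.

```python
import numpy as np

def mean_centred_steering_vector(target_acts, train_acts):
    """Mean-centred steering vector, as sketched here (not the authors' exact code).

    target_acts: (n_target, d) activations collected on the target dataset
    train_acts:  (n_train, d) activations over the broad training corpus
    Returns a (d,) vector: mean target activation minus mean training activation.
    """
    return target_acts.mean(axis=0) - train_acts.mean(axis=0)

# Toy example with random stand-in activations at hidden size d = 8.
rng = np.random.default_rng(0)
target_acts = rng.normal(loc=1.0, size=(32, 8))   # hypothetical target-set activations
train_acts = rng.normal(loc=0.0, size=(256, 8))   # hypothetical corpus activations
v = mean_centred_steering_vector(target_acts, train_acts)
print(v.shape)  # (8,)
```

In use, the resulting vector would be added (possibly scaled) to the residual-stream activations at a chosen layer during generation; subtracting the corpus mean removes the component shared by all activations, leaving the direction specific to the target dataset.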
Original language: English
Title of host publication: Responsible Language Models Workshop (ReLM) at AAAI-24
Publication status: Published - 26 Feb 2024
Event: Responsible Language Models Workshop at AAAI-24 - Vancouver, Canada
Duration: 26 Feb 2024 - 26 Feb 2024


Workshop: Responsible Language Models Workshop at AAAI-24
Abbreviated title: ReLM@AAAI-24


  • cs.CL
  • cs.AI
  • cs.LG


