Large language models respond to emotionally charged inputs with contextually appropriate outputs, but the mechanism by which they represent, propagate, and modulate emotional tone through their internal layers remains poorly understood. Do emotions “live” in specific layers? Is the signal carried by the attention mechanism, the MLP, or the residual stream itself? And when a model is instructed to be a “helpful assistant,” does its internal representation remain emotionally neutral, or does it mirror the user’s emotional state?
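One standard way to operationalize questions like "which layer carries the signal?" is layer-wise linear probing: train a simple classifier on each layer's activations and see where the label becomes decodable. The sketch below is a minimal, self-contained illustration on fabricated data, not the paper's method: we synthesize per-layer "residual-stream activations" in which a binary emotion label is linearly encoded only from layer 2 onward, then check at which layers a logistic-regression probe recovers it. The dimensions, labels, and injected direction are all hypothetical.

```python
# Minimal sketch of layer-wise linear probing on SYNTHETIC activations.
# In real experiments, `acts[layer]` would be residual-stream vectors
# captured from a model; here we fabricate them so the example is runnable.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_layers, d_model = 400, 4, 32

# Hypothetical binary emotion label (e.g., positive vs. negative valence).
labels = rng.integers(0, 2, size=n_samples)

# Synthetic activations: pure noise in layers 0-1, then a label-aligned
# direction injected from layer 2 onward ("the signal lives here").
direction = rng.normal(size=d_model)
acts = rng.normal(size=(n_layers, n_samples, d_model))
for layer in range(2, n_layers):
    acts[layer] += np.outer(2 * labels - 1, direction) * 2.0

def probe_accuracy(X, y, epochs=200, lr=0.1):
    """Fit a logistic-regression probe by gradient descent; return train accuracy."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
        grad = p - y                     # dL/dlogits for cross-entropy loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return float(((X @ w + b > 0).astype(int) == y).mean())

# Accuracy should jump at the layer where the emotion direction appears.
for layer in range(n_layers):
    print(f"layer {layer}: probe accuracy = {probe_accuracy(acts[layer], labels):.2f}")
```

The same loop, pointed at real hidden states (e.g., collected with forward hooks), turns "do emotions live in specific layers?" into a measurable accuracy-vs-depth curve; comparing probes on attention outputs versus MLP outputs addresses the component question the same way.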