Small LLM Performance Benchmark – Research Report
Category: Research Report
This report presents the results of a systematic evaluation of 22 quantized open-source language models on description-generation tasks, measuring output quality, JSON reliability, and inference efficiency.
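One of the measured dimensions, JSON reliability, can be operationalized as the fraction of model completions that parse as a valid top-level JSON object. The sketch below is a hypothetical metric definition for illustration only; the function name `json_validity_rate` and the object-only counting rule are assumptions, and the report's actual scoring may differ.

```python
import json

def json_validity_rate(outputs):
    """Fraction of raw model completions that parse as valid JSON objects.

    Hypothetical metric: counts only top-level objects, since
    structured-output tasks typically require an object rather
    than a bare value or array. The real benchmark's rule may differ.
    """
    valid = 0
    for text in outputs:
        try:
            parsed = json.loads(text)
        except (json.JSONDecodeError, TypeError):
            continue  # unparseable output counts as a failure
        if isinstance(parsed, dict):
            valid += 1
    return valid / len(outputs) if outputs else 0.0

# Example: two valid object outputs out of four completions
samples = ['{"name": "widget"}', 'not json', '[1, 2]', '{"ok": true}']
print(json_validity_rate(samples))  # → 0.5
```

A stricter variant could additionally validate the parsed object against a task schema, which would lower the reported rate for models that emit well-formed but incomplete JSON.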