Large language models respond to emotionally charged inputs with contextually appropriate outputs, but the mechanism by which they represent, propagate, and modulate emotional tone through their internal layers remains poorly understood. Do emotions “live” in specific layers? Is the signal carried by the attention mechanism, the MLP, or the residual stream itself? And when a model is instructed to be a “helpful assistant,” does its internal representation remain emotionally neutral, or does it mirror the user’s emotional state?
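One standard way to attack these questions is a layer-wise linear probe: train a small classifier on each layer's hidden states and see at which depth emotional tone becomes linearly decodable. A minimal sketch below uses synthetic hidden states with an injected "tone direction" whose strength grows with depth (in a real experiment the states would come from the model itself, e.g. via `output_hidden_states=True`; the array shapes and the injected signal here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_samples, d_model = 12, 200, 64

# Labels: 0 = emotionally neutral prompt, 1 = emotionally charged prompt.
labels = rng.integers(0, 2, size=n_samples)
signs = 2 * labels - 1  # map to -1 / +1 for a least-squares probe

# Synthetic stand-in for per-layer hidden states.
hidden = rng.normal(size=(n_layers, n_samples, d_model))

# Inject a "tone direction" whose strength grows with depth, so a probe
# should decode tone more accurately in later layers.
direction = rng.normal(size=d_model)
for layer in range(n_layers):
    hidden[layer] += (layer / n_layers) * np.outer(signs, direction)

split = n_samples // 2
accuracies = []
for layer in range(n_layers):
    X_tr, X_te = hidden[layer][:split], hidden[layer][split:]
    y_tr, y_te = signs[:split], signs[split:]
    # Least-squares linear probe: w minimizes ||X_tr @ w - y_tr||^2.
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    accuracies.append(float(np.mean(np.sign(X_te @ w) == y_te)))
```

A rising accuracy curve across `accuracies` would indicate the signal is built up gradually in the residual stream rather than living in a single layer; attributing it to attention versus MLP requires ablating those sublayers separately.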
Month: April 2026
The Memory That Stays. Part 2
The tool is never the bottleneck. The bottleneck is everything the tool cannot see, cannot access, and does not know it should ask about.

The Illusion of the Ready-Made Agent

In recent months Anthropic’s Cowork graduated from research preview to general availability across all paid plans, bringing desktop-native agentic capabilities to marketing, finance, legal, […]
The Memory That Stays. Part 1
An organisation that cannot remember what it decided, or why, is condemned to decide the same things over and over again, each time believing it is the first.
Small LLM Performance Benchmark – Research Report
This report presents the results of a systematic evaluation of 22 quantized open-source language models across description generation tasks, measuring quality, JSON reliability, and inference efficiency.
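A JSON-reliability metric of this kind typically reduces to the fraction of model outputs that parse as a valid JSON object. A minimal sketch of such a check (the helper name and the sample outputs are illustrative assumptions, not taken from the report):

```python
import json

def json_validity_rate(outputs):
    """Fraction of outputs that parse as a valid JSON object (dict)."""
    valid = 0
    for text in outputs:
        try:
            parsed = json.loads(text)
        except (json.JSONDecodeError, TypeError):
            continue  # unparseable output counts as a failure
        if isinstance(parsed, dict):
            valid += 1
    return valid / len(outputs) if outputs else 0.0

# Hypothetical model outputs showing common failure modes.
samples = [
    '{"name": "widget", "description": "a small part"}',  # valid object
    '{"name": "widget", "description": "a small part"',   # truncated JSON
    'Sure! Here is the JSON: {"name": "widget"}',          # wrapped in prose
    '["not", "an", "object"]',                             # valid JSON, wrong shape
]
print(json_validity_rate(samples))  # → 0.25
```

Stricter variants additionally validate the parsed object against an expected schema, which penalizes outputs that are syntactically valid but structurally wrong.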