As AI’s Impact on Human Behavior Grows, Most Models Still Don’t Understand People

New research highlights how AI models misrepresent human behavior. Thetafan explores why behavioral intelligence, not alignment, is the next frontier.

AI has become exceptionally good at producing answers. What it still struggles to do is understand people. Recent research from Google on behavioral alignment in large language models highlights a growing gap between how AI responds and how humans actually behave. While most of the focus has been on making models more accurate or more “aligned,” the deeper issue is this:

“Humans don’t operate on a single answer. We operate across a spectrum of behaviors, opinions, and contradictions.”

Excerpt from Google Research: “As part of our ongoing exploration of model behavior and alignment, we introduce a systematic evaluation framework that transforms established assessments into large-scale situational judgment tests for large language models. This approach, an attempt to understand and map model alignment, allows for the quantification of model behavioral tendencies relative to human social inclinations, identifying measurable alignment and deviations between model outputs and aggregated human consensus.

As LLMs integrate into our daily lives, understanding their behavior becomes essential. In our ongoing efforts to study model behavior and alignment, we present this work as an early step in that direction. We focus on behavioral dispositions — the underlying tendencies that shape responses in social contexts — and introduce a framework to study how closely the dispositions expressed by LLMs align with those of humans.

Behavioral dispositions are typically quantified via self-report questionnaires under different traits (e.g., empathy, assertiveness), where individuals rate their agreement with preference statements, such as "I am quick to express an opinion." The questionnaires used in this study are standardized, scientifically validated measures widely used for assessing personality traits in international research and psychology, such as the IRI (empathy), the ERQ (emotion regulation), and more. Each instrument is grounded in peer-reviewed literature that establishes its psychometric validity and reliability using different strategies. We chose the most widely used instruments for our research.

Our objective is to build upon such psychological questionnaires, but directly applying them to LLMs presents technical challenges, as LLM outputs are sensitive to prompt phrasing and distribution shifts. Consequently, dispositions “claimed” by LLMs within a self-report format are not guaranteed to successfully transfer to behavior in realistic, open-ended settings.

To address these challenges, in “Evaluating Alignment of Behavioral Dispositions in LLMs,” our framework evaluates LLMs’ behavioral dispositions in realistic user-assistant scenarios where their advisory role can lead to tangible impact. This study is an early step in evaluating the alignment between human consensus and model behavior across realistic, practical scenarios, focusing on everyday human-to-human interactions and workplace situations. We ensure that these scenarios remain grounded in established psychological questionnaires to capture the essence of core behavioral traits. Tested scenarios included professional composure, conflict resolution, practical tasks such as booking a trip, and lifestyle or daily decision-making, highlighting model behavior in settings representative of typical human day-to-day experiences. Our large-scale analysis of 25 LLMs reveals two kinds of gaps: one where model dispositions deviate from consensus among human annotators, and another where model dispositions do not capture the range of human opinions when consensus is absent. These early results highlight the opportunity for better behavioral alignment to ensure that models can more appropriately navigate the nuances of social dynamics, results we expect future research to build on.”
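The two gap types described in the excerpt can be made concrete with a small illustration. The sketch below is hypothetical and not the paper's actual methodology: it assumes 1–5 Likert ratings, treats "consensus" as low spread among human annotators, and measures coverage as range overlap between repeated model samples and human opinions. The function names, threshold, and metrics are all illustrative assumptions.

```python
# Hypothetical sketch of the two behavioral-alignment gaps described above.
# NOT the paper's actual metrics: the threshold, consensus definition, and
# coverage measure here are simplifying assumptions for illustration only.
from statistics import median, pstdev

def consensus_gap(human_ratings, model_rating, agreement_threshold=1.0):
    """Gap type 1: when humans agree (low rating spread), return how far
    the model's rating deviates from the human consensus. Returns None
    when there is no consensus (spread exceeds the threshold)."""
    spread = pstdev(human_ratings)
    if spread <= agreement_threshold:
        return abs(model_rating - median(human_ratings))
    return None

def coverage_gap(human_ratings, model_ratings):
    """Gap type 2: when consensus is absent, check whether repeated model
    samples span the range of human opinions (1.0 = full coverage)."""
    h_lo, h_hi = min(human_ratings), max(human_ratings)
    m_lo, m_hi = min(model_ratings), max(model_ratings)
    overlap = max(0, min(h_hi, m_hi) - max(h_lo, m_lo))
    human_range = h_hi - h_lo
    return overlap / human_range if human_range else 1.0

# Humans agree on roughly 4; a model answering 2 deviates by 2 points.
print(consensus_gap([4, 4, 5, 4], 2))         # 2
# Humans split between 1 and 5; model samples only cover 3-4 (25% of range).
print(coverage_gap([1, 2, 5, 5], [3, 4, 3]))  # 0.25
```

In this toy framing, a well-aligned model would show small consensus gaps where humans agree and high coverage where they do not, matching the article's point that people operate across a spectrum of behaviors rather than on a single answer.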

READ RESEARCH: Full Cornell University Paper

Google Research / Amir Taubenfeld, Research Engineer / Zorik Gekhman, Research Scientist / Lior Nezry, Psychology Researcher
