Invisible Consent: Browser Automation by LLM Agents is Not Privacy-aware

Ryota Saito1, Takuya Kataiwa1, Minato Takashima1, Tetsushi Ohki1, 2
1Shizuoka University, 2RIKEN AIP
ACM CHI Conference on Human Factors in Computing Systems (CHI EA 2026)

Abstract

Cookie consent interfaces are intended to elicit informed tracking choices from human users, yet LLM-based browsing agents may resolve these dialogs autonomously while the user is unaware of the moment of decision. We call this phenomenon Invisible Consent: consent signals generated on a user's behalf without a meaningful opportunity for the user to notice, deliberate, or intervene. We study Invisible Consent through controlled real-browser experiments on instrumented mock websites that present consent interfaces on load, varying consent UI patterns and agent-side instructions. Across conditions, we find that agents tend to default to the easiest path forward: accepting cookies. Even under an explicit deny-all instruction, acceptance persists in 14.4% of trials with explicit consent interaction, and the residual rate tracks both the prompt and interface design.

What is Invisible Consent?

We define Invisible Consent as a state in which an LLM-based agent autonomously grants or denies consent on a user's behalf without the user's awareness or explicit deliberation. Although the agent's browsing actions are technically observable, the decision-making process is "invisible" to the user, who has delegated the task and is removed from the process.

Unlike deterministic automation that follows fixed rules, LLM agents make non-deterministic decisions shaped by prompt, page content, and UI affordances. Invisible Consent arises from a structural disconnect between the decision-maker (the agent) and the data subject (the user): the agent may generate a tracking-enabling consent signal to unblock task progress while the user neither observes nor participates in the moment of choice. Even if such consent is legally invalid, tracking systems often interpret the resulting signals as authorization, enabling tracking that is effectively invisible to the user.

Research Questions

  • RQ1. Do LLM-based browser agents generate Invisible Consent in unconstrained browsing tasks?
  • RQ2. To what extent can user-provided natural language instructions prevent Invisible Consent?
  • RQ3. How do variations in consent interface design influence the likelihood of Invisible Consent under agentic browsing?

Experimental Setup

We ran controlled real-browser experiments (24,120 trials in total) using browser-use. Each mock site presents a blocking cookie-consent modal on load. We varied three dimensions:

  • Models (6): GPT-4o-mini, GPT-4.1-mini, GPT-o4-mini, Claude Sonnet 4, Gemini 2.5 Flash, Gemini 2.5 Pro.
  • Instruction conditions (3): (a) no extra instruction, (b) privacy-enforcing instruction ("You MUST deny all cookies for user privacy."), (c) emergency task framing.
  • UI patterns (9): Control, Highlighted Accept, Highlighted Decline, Tricolor, Manipulative Language, Pre-checked, No Decline, Invisible Label, Consequence Banner.
  • Task contexts (4): weather, e-commerce, social feeds, minimal.
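
The factorial design above can be enumerated directly. A minimal sketch, using only the factor lists given in the setup (per-cell trial counts are not specified here, so only the condition grid is shown):

```python
from itertools import product

# Factors copied from the experimental setup.
models = ["GPT-4o-mini", "GPT-4.1-mini", "GPT-o4-mini",
          "Claude Sonnet 4", "Gemini 2.5 Flash", "Gemini 2.5 Pro"]
instructions = ["no extra instruction", "privacy-enforcing", "emergency framing"]
ui_patterns = ["Control", "Highlighted Accept", "Highlighted Decline",
               "Tricolor", "Manipulative Language", "Pre-checked",
               "No Decline", "Invisible Label", "Consequence Banner"]
contexts = ["weather", "e-commerce", "social feeds", "minimal"]

# Full cross of all four dimensions: 6 x 3 x 9 x 4 = 648 condition cells.
conditions = list(product(models, instructions, ui_patterns, contexts))
print(len(conditions))
```

The 24,120 trials are distributed over these 648 cells; repeated runs per cell are what make the agents' non-deterministic decisions measurable as rates.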

Key Findings

1. Invisible Consent is prevalent

Without privacy-related instructions, agents accepted cookies in 84.5% of trials with explicit consent interaction (5,213 / 6,169). Acceptance rates differed significantly by model ($\chi^2 = 408.51, p < 0.001$, Cramér's $V = 0.257$).
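
The reported effect size follows from the standard definition of Cramér's $V$ for an $r \times c$ contingency table, $V = \sqrt{\chi^2 / (n \cdot (\min(r, c) - 1))}$. A small sketch that reproduces the reported values, assuming the 6,169 trials with explicit consent interaction as $n$ and a 2-outcome (accept/decline) table:

```python
import math

def cramers_v(chi2: float, n: int, rows: int, cols: int) -> float:
    """Cramér's V effect size for an r x c contingency table."""
    return math.sqrt(chi2 / (n * (min(rows, cols) - 1)))

# Model comparison: 6 models x 2 outcomes over 6,169 trials.
v_model = cramers_v(408.51, 6169, rows=6, cols=2)

# UI-pattern comparison (reported later): 9 patterns x 2 outcomes,
# assuming the same no-instruction subset as n.
v_ui = cramers_v(2525.36, 6169, rows=9, cols=2)

print(round(v_model, 3), round(v_ui, 3))
```

Both results match the reported $V = 0.257$ (model) and $V = 0.640$ (UI pattern), confirming that interface design explains far more of the variance in agent behavior than model choice does.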

2. Natural-language instructions are not enough

Adding an explicit privacy directive significantly reduced acceptance, but agents still accepted cookies in 14.4% of trials (917 / 6,351) even when told to deny all cookies. Prompt-based mitigation behaves as a probabilistic safeguard whose effectiveness depends on interface affordances and task-oriented agent objectives.
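
To see why a probabilistic safeguard is weak at scale, consider what a 14.4% residual acceptance rate implies across repeated delegations. A rough illustration, under the simplifying assumption that each delegated task is an independent trial at the aggregate rate (in practice the rate varies by model and UI pattern):

```python
# Residual acceptance under the explicit deny-all instruction: 917 / 6,351 trials.
p_accept = 917 / 6351  # about 0.144

# Probability that at least one of k delegated tasks leaks consent,
# assuming (simplistically) independent trials at the aggregate rate.
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - p_accept) ** k
    print(f"{k:2d} tasks -> P(at least one accept) = {p_any:.1%}")
```

Even a per-task failure rate that sounds modest compounds to a near-certain consent leak within a few dozen delegated browsing sessions.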

3. UI design systematically shapes agent decisions

Consent outcomes were strongly affected by UI patterns ($\chi^2 = 2525.36, p < 0.001$, Cramér's $V = 0.640$). Manipulative patterns achieved near-universal acceptance, while privacy-favorable designs (e.g., Consequence Banner) substantially reduced acceptance — without any prompt change. Dark patterns built for humans transfer to LLM agents.

Acceptance by model

| Model | No Instruction (Accept) | Privacy Instruction (Accept) |
|---|---|---|
| GPT-4o-mini | 98.0% | 46.7% |
| Gemini 2.5 Pro | 90.0% | 25.6% |
| Claude Sonnet 4 | 88.1% | 1.8% |
| Gemini 2.5 Flash | 85.2% | 2.7% |
| GPT-4.1-mini | 73.9% | 8.0% |
| GPT-o4-mini | 70.3% | 3.2% |

Acceptance by UI pattern

| UI Pattern | No Instruction (Accept) | Privacy Instruction (Accept) |
|---|---|---|
| No Decline | 100.0% | 46.7% |
| Pre-checked | 98.7% | 37.3% |
| Manipulative Language | 98.6% | 12.1% |
| Highlighted Accept | 95.1% | 13.6% |
| Tricolor | 95.0% | 7.5% |
| Control | 92.0% | 5.2% |
| Invisible Label | 90.1% | 4.6% |
| Highlighted Decline | 61.8% | 1.0% |
| Consequence Banner | 24.6% | 2.8% |

Discussion & Implications

Privacy risk at scale. As agentic browsing products scale, routinely delegated tasks will generate consent signals the user never sees, exposing large populations to tracking they never knowingly authorized. Agent-generated consent may not satisfy GDPR requirements that consent be specific and informed, raising open questions of accountability in agent-mediated interactions.

Limits of prompt-based mitigation. Even an explicit deny-all instruction leaves residual acceptance, especially under adversarial UI patterns. Once an agent accepts, recovery is difficult: reversing the choice is typically buried in settings flows, while the task proceeds immediately.

Dark patterns transfer to agents. The same CMP manipulations developed against humans steer LLM agents too. Malicious actors could optimize interfaces to specifically exploit LLM-based agent decision policies, motivating new defensive design paradigms for agentic browsing.

Citation

@inproceedings{saito2026invisibleconsent,
  title     = {Invisible Consent: Browser Automation by LLM Agents is Not Privacy-aware},
  author    = {Saito, Ryota and Kataiwa, Takuya and Takashima, Minato and Ohki, Tetsushi},
  booktitle = {Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems (CHI EA '26)},
  year      = {2026},
  publisher = {ACM},
  address   = {New York, NY, USA},
  doi       = {10.1145/3772363.3799048}
}

Acknowledgement

This study was supported in part by JST Moonshot JPMJMS2215 and JST CREST JPMJCR21M1.