Reasoning Shift: How Context Silently Shortens LLM Reasoning
TL;DR Highlight
When irrelevant context is present, reasoning models skip self-verification and cut reasoning tokens by up to 50%.
Who Should Read
Developers building LLM agents or multi-turn chatbots who wonder why accuracy drops on complex reasoning tasks. ML engineers designing systems that use reasoning models in long-context environments.
Core Mechanics
- When irrelevant context is present (e.g., long documents, previous conversation history), reasoning models generate up to 50% fewer reasoning tokens for the same problem. Confirmed across all four models: Qwen3.5-27B, GPT-OSS-120B, Gemini 3 Flash Preview, and Kimi K2 Thinking.
- The shortened reasoning is not due to the model misunderstanding the problem. In fact, the model immediately recognizes irrelevant context as 'not relevant' and moves on — yet reasoning still becomes shorter.
- The key difference lies not in when the first candidate answer is found but in the self-verification behavior that follows. In the baseline, models keep double-checking after finding an answer, whereas under long-input conditions they far more often terminate reasoning immediately after producing one (46% vs. 21% immediate termination).
- Inserting as few as 128 tokens of irrelevant prefix reduces average reasoning length by about 18%, and inserting 64k tokens reduces it by up to 53%.
- For easy problems, this phenomenon acts as a reduction in overthinking and does not affect accuracy, but for difficult math problems (by IMOAnswerBench standards), it causes a 9–15% accuracy drop.
- This phenomenon is far more pronounced in thinking mode. Non-thinking mode sees a 19% reduction in response length, while thinking mode sees a 53% reduction.
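The size of the collapse is easy to state as arithmetic. A minimal sketch using the MATH500 figures from the Evidence section (the helper function is illustrative, not part of the paper's code):

```python
def reasoning_reduction(baseline_tokens: int, long_input_tokens: int) -> float:
    """Percent reduction in reasoning length relative to the baseline run."""
    return 100.0 * (baseline_tokens - long_input_tokens) / baseline_tokens

# Qwen3.5-27B on MATH500: 8,003 reasoning tokens in isolation
# vs. 3,762 with a 64k irrelevant prefix prepended
print(round(reasoning_reduction(8003, 3762)))  # -> 53
```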
Evidence
- Accuracy drops under Subtask/Long-input scenarios on IMOAnswerBench: Qwen3.5-27B 12%, GPT-OSS-120B 9%, Gemini 3 Flash Preview 15%, Kimi K2 Thinking 9%.
- Average reasoning length for Qwen3.5-27B on MATH500: Baseline 8,003 tokens → Long input (64k prefix) 3,762 tokens, a 53% reduction.
- In resampling experiments starting from the same reasoning prefix, 46% of Long-input runs terminated reasoning immediately vs. only 21% of Baseline runs; frequencies of self-verification keywords such as "Wait", "Alternatively", and "But" also decreased.
- Inserting as few as 128 tokens reduces Qwen3.5-27B's average reasoning length by approximately 18%, with the reduction increasing monotonically as more tokens are inserted.
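The keyword analysis above can be approximated by counting self-verification markers in a reasoning trace. A minimal sketch; the whole-word, case-insensitive matching heuristic is an assumption, not necessarily the paper's exact method:

```python
import re

# Self-verification / uncertainty-management markers named in the Evidence
VERIFICATION_MARKERS = ("wait", "alternatively", "but")

def count_self_verification(trace: str) -> dict[str, int]:
    """Count whole-word, case-insensitive occurrences of each marker
    in a model's reasoning trace."""
    words = re.findall(r"[a-z]+", trace.lower())
    return {m: words.count(m) for m in VERIFICATION_MARKERS}

trace = "The answer is 12. Wait, let me double-check. Alternatively, try 13. But 12 holds."
print(count_self_verification(trace))  # -> {'wait': 1, 'alternatively': 1, 'but': 1}
```

Comparing these counts between baseline and long-input traces is one cheap way to monitor whether your own prompts are suppressing double-checking.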
How to Apply
- When building multi-turn agents, be aware that longer conversation histories can degrade reasoning quality on the current subtask. Consider architectures that process subtasks in isolated fresh contexts (separate API calls), or apply context compaction.
- When using reasoning models for difficult reasoning tasks, avoid prepending unnecessary background information or prior outputs to the prompt. Extracting only the necessary information and constructing a minimal context helps preserve reasoning quality.
- When using reasoning models in a RAG pipeline, inserting retrieved documents wholesale can shorten reasoning. Consider precisely extracting only the relevant chunks, or decomposing the problem into independent subtasks and calling the model separately with a short context for each.
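The RAG advice above can be sketched as a pre-filtering step before the model call. This minimal version scores chunks by word overlap with the question; a real pipeline would use embedding similarity, and `select_relevant_chunks` / `top_k` are illustrative names, not from the paper:

```python
import re

def select_relevant_chunks(question: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Score each retrieved chunk by word overlap with the question and keep
    only the top_k, rather than pasting every document into the prompt."""
    tokenize = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    q = tokenize(question)
    return sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)[:top_k]

chunks = [
    "Company cafeteria menu for the first week of March.",
    "The area of a triangle equals half its base times its height.",
    "The triangle in the figure has base 6 and height 4.",
]
print(select_relevant_chunks("What is the area of the triangle in the figure?", chunks))
```

Only the surviving chunks are then placed in the prompt, keeping the context close to minimal for the reasoning step.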
Code Example
# Example patterns for maintaining reasoning quality in long-context environments

# Bad example: putting irrelevant context and the problem together
bad_prompt = """
[Previous conversation history or thousands of tokens of long document...]
Based on the above, please solve the following math problem:
{problem}
"""

# Good example 1: call the subtask in an isolated, fresh context
def solve_subtask_isolated(problem: str, client) -> str:
    """Isolate subtasks in a complex agent workflow to maintain reasoning quality."""
    response = client.chat.completions.create(
        model="qwen/qwen3.5-27b",  # or another reasoning model
        messages=[
            {
                "role": "user",
                "content": f"Please reason step-by-step and put the final answer within \\boxed{{}}\n\n{problem}",
            }
        ],
        # Pass only the problem; do not carry over the long agent context
    )
    return response.choices[0].message.content

# Good example 2: if long context is unavoidable, explicitly instruct the model to ignore it
def solve_with_context_hint(problem: str, context: str, client) -> str:
    response = client.chat.completions.create(
        model="qwen/qwen3.5-27b",
        messages=[
            {
                "role": "user",
                "content": (
                    f"[Background context - you may ignore this]:\n{context}\n\n"
                    "[Task - focus entirely on this]:\n"
                    "Please reason step-by-step carefully, verify your answer, "
                    f"and put the final answer within \\boxed{{}}\n\n{problem}"
                ),
            }
        ],
    )
    return response.choices[0].message.content
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to seven parallel sub-agents, each reviewing a PR from a different perspective, and can even apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has raised skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that has Claude Code parse raw IP packets and construct ICMP echo replies so that it actually answers pings, an entertaining case of pushing the "Markdown is the code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and even supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of principles for designing CLI tools that AI agents can use well; as agents rely on CLIs as tools more and more often, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for orchestrating multiple AI agents that divide up roles and collaborate, letting you assemble a multi-agent pipeline quickly with no configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that provides an isolated sandbox where AI agents can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.
Related Resources
Original Abstract
Large language models (LLMs) exhibiting test-time scaling behavior, such as extended reasoning traces and self-verification, have demonstrated remarkable performance on complex, long-term reasoning tasks. However, the robustness of these reasoning behaviors remains underexplored. To investigate this, we conduct a systematic evaluation of multiple reasoning models across three scenarios: (1) problems augmented with lengthy, irrelevant context; (2) multi-turn conversational settings with independent tasks; and (3) problems presented as a subtask within a complex task. We observe an interesting phenomenon: reasoning models tend to produce much shorter reasoning traces (up to 50%) for the same problem under different context conditions compared to the traces produced when the problem is presented in isolation. A finer-grained analysis reveals that this compression is associated with a decrease in self-verification and uncertainty management behaviors, such as double-checking. While this behavioral shift does not compromise performance on straightforward problems, it might affect performance on more challenging tasks. We hope our findings draw additional attention to both the robustness of reasoning models and the problem of context management for LLMs and LLM-based agents.