[ICLR 2026] LLM2VEC-GEN: Why Encoding the "Response" is Better than Encoding the "Query"
Abstract

The paper introduces LLM2VEC-GEN, a self-supervised framework that transforms Large Language Models (LLMs) into text embedders by encoding a model's potential response rather than the input itself. Applying this to Qwen and Llama families, it achieves a new self-supervised SOTA on MTEB (score of 62.1 with Qwen-3-8B).

TL;DR

LLM2VEC-GEN flips the script on text embeddings. Instead of training a model to understand what a query is, it trains it to predict what the model's answer would look like. By freezing the LLM and training only a few special tokens, this method achieves state-of-the-art (SOTA) self-supervised performance while inheriting safety alignment and reasoning capabilities—features usually lost in standard embedding models.

The Problem: The Semantics of "Input" vs. "Output"

Most embedding models (like BERT or standard LLM-based encoders) focus on the input-centric paradigm. They try to make the embedding of "How do I steal a car?" similar to other "car" or "stealing" related texts.

The authors argue this is fundamentally flawed for two reasons:

  1. The Input-Output Gap: In retrieval, we often want a query to match an answer, which might have different wordings or conceptual structures.
  2. Capability Loss: When we turn an LLM into a standard encoder, we often "strip away" its safety training and its ability to reason step-by-step.

Figure 1: Traditional encoders (yellow) keep semantically distinct queries far apart, while LLM2VEC-GEN (green) maps them to their similar potential responses.

Methodology: Distilling the Future into Latent Tokens

The core of LLM2VEC-GEN is a self-supervised recipe that requires zero labeled data.

1. The Architecture

The system uses a frozen LLM (e.g., Qwen-3 or Llama-3). They inject two types of special trainable tokens:

  • Thought Tokens: An intermediate computational buffer (working memory).
  • Compression Tokens: The final latent representation that acts as the "bottleneck."
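The trainable-token setup above can be sketched in a few lines. This is a toy NumPy illustration under stated assumptions (the dimensions, token counts, and initialization scale are hypothetical; a real implementation would use the frozen LLM's actual embedding layer): the frozen model's input embeddings stay fixed, and only the appended special-token embeddings are parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16               # hidden size (toy scale; real models use e.g. 4096)
n_thought, n_comp = 4, 2   # hypothetical counts of thought / compression tokens

# Stand-in for the frozen LLM's embeddings of a 10-token input (never updated).
input_embeds = rng.normal(size=(10, d_model))

# The only trainable parameters: the special token embeddings.
thought_tokens = 0.02 * rng.normal(size=(n_thought, d_model))
compression_tokens = 0.02 * rng.normal(size=(n_comp, d_model))

# The sequence fed to the frozen LLM: [input ; thought ; compression].
# After the forward pass, the hidden states at the compression positions
# would serve as the text embedding.
sequence = np.concatenate([input_embeds, thought_tokens, compression_tokens])
print(sequence.shape)  # (16, 16)
```

The key design point is parameter efficiency: the LLM's billions of weights receive no gradient, so only `n_thought + n_comp` embedding vectors are ever optimized.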

Figure 2: The LLM2VEC-GEN training flow involving reconstruction and alignment.

2. The Dual-Loss Objective

  • Reconstruction Loss: The model must take the compression tokens and successfully reconstruct the full text of the LLM's own generated response. This ensures the tokens are grounded in the LLM's natural language manifold.
  • Alignment Loss: The system minimizes the distance between the compression token's embedding and a teacher's embedding of the actual response. This is essentially a JEPA-style (Joint Embedding Predictive Architecture) task.
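The two objectives above combine into one training signal. The following is a minimal NumPy sketch, not the paper's implementation: it assumes token-level cross-entropy for reconstruction, cosine distance for alignment, and a hypothetical weighting `lam` (the paper's actual loss weighting is not given here).

```python
import numpy as np

def reconstruction_loss(logits, targets):
    # Average negative log-likelihood of the response tokens under the
    # decoder's logits (stabilized log-softmax).
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def alignment_loss(z, teacher_z):
    # JEPA-style alignment: 1 - cosine similarity between the compression
    # token's embedding and the teacher's embedding of the response.
    cos = z @ teacher_z / (np.linalg.norm(z) * np.linalg.norm(teacher_z))
    return 1.0 - cos

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 100))        # 5 response tokens, vocab of 100
targets = rng.integers(0, 100, size=5)    # the LLM's own generated response
z = rng.normal(size=64)                   # compression-token embedding
teacher_z = rng.normal(size=64)           # teacher embedding of the response

lam = 0.5  # hypothetical weight balancing the two terms
total = reconstruction_loss(logits, targets) + lam * alignment_loss(z, teacher_z)
print(float(total))
```

Note that both targets are derived from the model's own generated response, which is what makes the recipe self-supervised: no labeled query-document pairs are needed.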

Experimental Breakthroughs

The results on the Massive Text Embedding Benchmark (MTEB) and others are striking.

SOTA Self-Supervised Performance

LLM2VEC-GEN doesn't just improve; it dominates the unsupervised category.

  • MTEB Avg: Reached 62.1 with Qwen-3-8B, a 9.3% improvement over the best unsupervised teachers.
  • Clustering & STS: Saw massive gains (+23.9% in clustering), proving that the model captures deeper semantic groupings.

Inherited Safety and Reasoning

This is perhaps the most significant "hidden" benefit.

  • Safety (AdvBench-IR): Because the model encodes the potential response, a harmful query is mapped to a "refusal" response (e.g., "I cannot assist with that"). This led to a 43.2% reduction in harmful content retrieval.
  • Reasoning (BRIGHT): On tasks requiring logical deduction, the model outperformed standard encoders by 29.3%, proving that "thinking" through thought tokens before embedding helps retrieval.

Table 1: Performance on Safety (AdvBench) and Reasoning (BRIGHT) across model scales.

Interpretability: The "Logit Lens"

Unlike typical "black box" embeddings, LLM2VEC-GEN embeddings are decodable. By projecting the hidden states back onto the vocabulary (Logit Lens), we can see what the model is "thinking." For example, a query about committing fraud results in latent tokens like "illegal," "ethical," and "security"—proving the model is indeed representing its refusal before it even "speaks."
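The logit-lens readout described above amounts to multiplying a hidden state by the frozen unembedding matrix and reading off the top-scoring vocabulary entries. Here is a deterministic toy sketch (the vocabulary, dimensions, and one-hot unembedding rows are contrived for illustration, not taken from the paper):

```python
import numpy as np

vocab = ["illegal", "ethical", "security", "the", "cloud"]  # toy vocabulary

# Toy unembedding matrix: orthogonal one-hot rows make the projection exact.
unembed = np.eye(len(vocab), 8)

# Pretend the latent token's hidden state mixes a few refusal-flavored
# directions, dominated by "illegal".
hidden = 0.9 * unembed[0] + 0.4 * unembed[1] + 0.2 * unembed[2]

# Logit lens: project the hidden state onto the vocabulary and take top-k.
logits = unembed @ hidden
top = [vocab[i] for i in np.argsort(logits)[::-1][:3]]
print(top)  # ['illegal', 'ethical', 'security']
```

In a real model the same operation uses the LLM's own LM head on the compression token's hidden state, which is what makes the embedding decodable rather than a black box.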

Conclusion & Future Outlook

LLM2VEC-GEN represents a paradigm shift from "understanding the document" to "predicting the conversation." By keeping the LLM frozen, it offers a highly parameter-efficient way to build powerful, safe, and reasoning-capable retrievers.

Future Frontiers: The authors suggest this could lead to "Hyper-speed inference" via latent chaining, where agents reason entirely in compressed latent space, bypassing the slow autoregressive bottleneck of standard text generation.


Paper: LLM2VEC-GEN: Generative Embeddings from Large Language Models

Key Takeaway: To find the right answer, don't look at the question—look at what the answer should be.
