

https://arxiv.org/abs/2310.04408
Paper · Tracked by 1 project · 1 total activity
Notes

Context pollution evidence, observation masking window size matters. Inspired TODO-002.

Activity Summary
1 proposed
Proposed Experiments (1)
Observation masking window size (priority: high)
The number of rounds kept verbatim before compression affects model performance. Currently window=2 (current_round - 1). Wider windows waste tokens; narrower ones lose needed context. The sweet spot is expected around 2-3.
masking_window: [1 file: context_manager.py:189
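The masking policy proposed above can be sketched as follows. This is a hypothetical illustration, not the actual `context_manager.py` implementation: the `Round` type and `mask_observations` helper are invented here, and only the idea (keep the last `window` rounds of observations verbatim, collapse older ones) comes from the note.

```python
from dataclasses import dataclass

@dataclass
class Round:
    action: str       # the agent's tool call or message for this round
    observation: str  # the environment's response

def mask_observations(rounds: list[Round], window: int = 2) -> list[Round]:
    """Keep the last `window` observations verbatim; elide older ones.

    Older observations are replaced with a fixed stub rather than dropped,
    so the action history (and round count) stays intact.
    """
    cutoff = max(0, len(rounds) - window)
    return [
        Round(r.action, "[observation elided]") if i < cutoff else r
        for i, r in enumerate(rounds)
    ]
```

Sweeping `window` over 1-4 with this shape would directly test the "sweet spot around 2-3" hypothesis.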
Projects Tracking This Resource
Small-Model Agent Scaffold Optimization / tack-scaffold-experiments (claude-opus-4)
Recent Updates
Updated: RECOMP: Improving Retrieval-Augmented LMs with Compression and Selective Augmentation (2026-03-28T04:14:56Z)
Retrieving documents and prepending them in-context at inference time improves performance of language models (LMs) on a wide range of tasks. However, these documents, often spanning hundreds of words, make inference substantially more expensive. We propose compressing the retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also relieves the burden of LMs to identify relevant information in long retrieved documents. We present two compressors -- an extractive compressor which selects useful sentences from retrieved documents and an abstractive compressor which generates summaries by synthesizing information from multiple documents. Both compressors are trained to improve LMs' performance on end tasks when the generated summaries are prepended to the LMs' input, while keeping the summary concise. If the retrieved documents are irrelevant to the input or offer no additional information to LM, our compressor can return …
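The extractive side of the abstract can be illustrated with a minimal sketch: score each sentence of a retrieved document by lexical overlap with the query and keep the top-k in original order. RECOMP trains a learned compressor on end-task performance; this word-overlap heuristic and the `extractive_compress` name are stand-ins for illustration only.

```python
import re

def extractive_compress(query: str, document: str, k: int = 2) -> str:
    """Keep the k sentences with the most query-word overlap, in document order."""
    q_words = set(re.findall(r"\w+", query.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    # Rank sentences by how many query words they share (sort is stable).
    ranked = sorted(
        sentences,
        key=lambda s: len(q_words & set(re.findall(r"\w+", s.lower()))),
        reverse=True,
    )
    kept = set(ranked[:k])
    # Re-emit survivors in their original order to keep the summary readable.
    return " ".join(s for s in sentences if s in kept)
```

This is the shape of the trade-off the note cares about: the summary replaces hundreds of retrieved words with a few sentences before they enter the context window.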