Hypothesis: storing the first N tokens at fp16 improves PPL (sink tokens receive disproportionate attention)
Result: all variants are within error bars. turbo3 quality is already high enough that sink protection provides no measurable benefit. The AnTKV paper showed gains at 1-bit (PPL 6.32 vs. 7.25), but turbo3's 3-bit quantization error is too small for sink amplification to matter. 16 sinks is actually slightly WORSE (more fp16 tokens means more norm-correction boundary effects). NOT RECOMMENDED for turbo3/turbo4.
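For reference, a minimal sketch of the mechanism under test: quantize the KV cache but pass the first `num_sinks` tokens through at full precision. This assumes a simple per-token symmetric quantizer; function names, shapes, and the quantizer itself are illustrative, not turbo3's actual implementation.

```python
import torch

def quantize_dequantize(x: torch.Tensor, bits: int = 3) -> torch.Tensor:
    """Round-trip per-token symmetric uniform quantization (simulated)."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 3 for 3-bit -> levels in [-4, 3]
    xf = x.float()
    scale = xf.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / qmax
    return ((xf / scale).round().clamp(-qmax - 1, qmax) * scale).to(x.dtype)

def quantize_kv_with_sinks(kv: torch.Tensor, bits: int = 3,
                           num_sinks: int = 4) -> torch.Tensor:
    """Quantize a KV cache of shape [seq_len, head_dim], keeping the
    first `num_sinks` (attention-sink) tokens untouched at fp16."""
    out = kv.clone()
    out[num_sinks:] = quantize_dequantize(kv[num_sinks:], bits=bits)
    return out

# Toy comparison: raw reconstruction error with 0 / 4 / 16 protected sinks.
# (Real evaluation would measure PPL, not MSE.)
kv = torch.randn(128, 64, dtype=torch.float16)
for sinks in (0, 4, 16):
    err = (quantize_kv_with_sinks(kv, bits=3, num_sinks=sinks) - kv).float().pow(2).mean()
    print(f"sinks={sinks:2d}  MSE={err:.6f}")
```

Note this toy MSE always improves with more fp16 tokens; the finding above is that at 3-bit the downstream PPL effect disappears into the noise, and boundary effects from norm correction can make extra fp16 tokens a net negative.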