1 Comment
Anton

The “Few Tokens Are All You Need” paper is particularly wild. Fine-tuning on just 8-32 tokens? That’s a massive efficiency gain, and it makes me wonder how much bloat we’ve been dealing with in traditional fine-tuning. Tencent’s approach could redefine lightweight AI training.
