A really nice technical study was done called "the mismanagement of high performance."
The basic idea is that the models are already capable enough to do a lot of harm.
The main thing holding us back is the surrounding technology, because we don't quite understand how the whole thing works.
Sometimes we wonder: where is everybody? What are they doing?
That's one somewhat arbitrary way of viewing it.
We should let the agents write their own scaffolds.
All the scaffolds you use today are absolutely vital.
It's very obvious that the models can already write good scaffolds themselves.
They should just write their scaffolds dynamically, as they're doing the inference.
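To make the idea concrete, here's a minimal sketch of what "dynamically written scaffolds" could look like: instead of running the model inside a fixed, hand-written loop, you ask the model to emit its own control logic at inference time and then execute what it wrote. Everything here is hypothetical — `call_model` is a stand-in for a real inference API, and the returned scaffold is stubbed so the example runs on its own.

```python
# Sketch: the host asks the model for a scaffold, then hands control
# to the model-authored code. `call_model` is a hypothetical stub.

def call_model(prompt: str) -> str:
    # Stubbed model response: a tiny scaffold, returned as Python source.
    # A real model would tailor this control logic to the task.
    return (
        "def scaffold(task, call_model):\n"
        "    # Model-authored loop: split the task, solve parts, combine.\n"
        "    steps = [f'{task} - part {i}' for i in range(2)]\n"
        "    results = [call_model(s) for s in steps]\n"
        "    return ' | '.join(results)\n"
    )

def run_dynamic_scaffold(task: str) -> str:
    # 1. Ask the model to write a scaffold for this specific task.
    source = call_model(f"Write a scaffold for: {task}")
    # 2. Execute the generated source in its own namespace.
    #    (A real system would sandbox this step.)
    namespace: dict = {}
    exec(source, namespace)
    # 3. Run the model-authored scaffold, giving it model access.
    return namespace["scaffold"](task, lambda s: f"<answer to {s}>")

print(run_dynamic_scaffold("summarize"))
```

The point of the sketch is the inversion of control: the static part of the system shrinks to "generate, then execute," and the actual orchestration logic is produced fresh for each task.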