We reproduced the strong results of the HRM paper on ARC-AGI-1.
We then ran a series of ablation experiments to identify what actually drives its performance.
Key findings:
1. The HRM model architecture itself (the centerpiece of the paper) is not an important factor.
2. The outer refinement loop (barely mentioned in the paper) is the main driver of performance.
3. Cross-task transfer learning is not very helpful. What matters is training on the tasks you will test on.
4. You can use far fewer data augmentations, especially at inference time.
Finding 2 & 3 mean that this approach is a case of *zero-pretraining test-time training*, similar to the recently published "ARC-AGI without pretraining" paper by Liao et al.