
Scaling Laws & Emergent Abilities

  • Scaling Laws for Neural Language Models by Kaplan, McCandlish, Henighan, Brown, Chess, Child, Gray, Radford, Wu, Amodei (2020) [paper] — Establishes power-law relationships between loss and compute, data, and model size; the empirical foundation of scaling science.
  • Training Compute-Optimal Large Language Models (Chinchilla) by Hoffmann, Borgeaud, Mensch, Buchatskaya, Cai, Rutherford, et al. (2022) [paper] — Showed that most existing LLMs were significantly undertrained for their compute budgets; derives the compute-optimal ratio of training tokens to parameters (roughly 20:1).
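The Chinchilla result can be sketched numerically. Using the common approximation that training compute is C ≈ 6·N·D FLOPs (N parameters, D tokens) together with the paper's rough 20-tokens-per-parameter rule of thumb, a compute budget determines both sizes. The helper below is an illustrative sketch under those approximations, not the paper's exact fitted scaling law:

```python
# Sketch of Chinchilla-style compute-optimal sizing.
# Assumptions (approximations, not exact fitted constants from the paper):
#   - training compute C ~= 6 * N * D FLOPs
#   - compute-optimal data-to-parameter ratio D/N ~= 20

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Return (params, tokens) that spend `compute_flops` at the given ratio."""
    # From C = 6 * N * D and D = r * N:  N = sqrt(C / (6 * r)), D = r * N
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    # Chinchilla itself used ~5.8e23 FLOPs: ~70B params on ~1.4T tokens.
    n, d = chinchilla_optimal(5.8e23)
    print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

Plugging in Chinchilla's own budget recovers its published configuration (~70B parameters, ~1.4T tokens), which is what made models like the 175B-parameter GPT-3, trained on ~300B tokens, look undertrained by comparison.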