Antoine Déchappe

Latest research review

  • Learning without training: The implicit dynamics of in-context learning

    Oct 16, 2025

Recent writing

  • Fix Intermittent uv 401 Errors with GCP Artifact Registry

    Oct 28, 2025

  • Mocking UUIDs in Python Tests with a Generator

    Jul 28, 2025

  • Beware of poetry package named differently from project structure

    Mar 27, 2025

Research review ❯ LLMs ❯ Training

  • Branch-Train-Merge: embarrassingly parallel training of expert Language Models
  • Smaller, weaker, yet better: training LLM reasoners via compute-optimal sampling

  • GitHub
  • LinkedIn
  • Email