
ChaosBench: A Multi-Channel, Physics-Based Benchmark for Subseasonal-to-Seasonal Climate Prediction

Venue: NeurIPS
Year: 2024
Authors: Juan Nathaniel, Yongquan Qu, Tung Nguyen, Sungduk Yu, Julius Busecke, Aditya Grover, Pierre Gentine
Topic: wildfires

🌟 Highlights

  • The motivation and the problem space are clear: the difficulty of climate prediction beyond 15 days. Their approach of expanding the diversity of physics-based baselines, rather than the usual data-only approach, is less common and more original. The paper itself is okay; I would have given it a weak accept.

📝 Summary

Existing benchmarks for climate prediction have a short forecasting range of up to 15 days, a limited range of operational baselines, and lack physics-based constraints for explainability. To address this, the authors introduce ChaosBench, a large-scale, fully-coupled, physics-based benchmark for S2S climate prediction. Unlike SOTA benchmarks, ChaosBench extends to a 44-day (subseasonal-to-seasonal) lead time.

🧩 Key Contributions

  • Current approaches rely heavily on physics-based models in the form of numerical weather prediction (NWP):

    • Massive computational overhead to perform numerical integration at high resolution
    • Sensitive to initial conditions in the short term and to boundary conditions in the long term
    • The authors motivate moving to data-driven models to make S2S prediction more accessible
  • ChaosBench is a large-scale, fully-coupled, physics-based benchmark for S2S climate prediction. It is framed as a sequential regression task over 45+ years of data, with a 44-day lead time, and uses multi-system observations to validate both physics-based and data-driven models. The physics-based component aids explainability and physical consistency, and the physics-based forecasts serve as baselines for the data-driven ones (see the rollout sketch after this list).

  • ChaosBench emphasizes expanding the diversity of physics-based models from leading organizations.
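
A minimal sketch of what the sequential-regression framing implies for evaluation, assuming a hypothetical one-day-ahead model and plain numpy arrays (none of these names come from the ChaosBench codebase):

```python
import numpy as np

def autoregressive_rollout(model, initial_state, n_steps=44):
    """Roll a one-day-ahead model forward to a 44-day lead time.

    `model` is a hypothetical callable mapping a gridded state (C, H, W)
    to the next day's state; shapes and names are illustrative only.
    """
    state = initial_state
    trajectory = []
    for _ in range(n_steps):
        state = model(state)           # feed the prediction back in
        trajectory.append(state)
    return np.stack(trajectory)        # (n_steps, C, H, W)

def rmse_per_lead_time(forecast, observed):
    """Deterministic skill: RMSE averaged over the spatial grid, per lead day."""
    return np.sqrt(((forecast - observed) ** 2).mean(axis=(-2, -1)))
```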

Strengths

  • The Benchmark Results section is well explained and organized; it guides you easily through what you are seeing.

  • They interestingly introduce probabilistic metrics that account for the butterfly effect, in addition to deterministic metrics. I believe that is essential for climate prediction models (see the CRPS sketch after this list).

  • I enjoy that the authors expand the diversity of physics-based baselines rather than taking the usual approach of simply increasing the number of data-driven models.
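
As an aside, a common probabilistic score in this space is the continuous ranked probability score (CRPS); here is a minimal empirical-ensemble sketch, not necessarily the exact estimator ChaosBench uses:

```python
import numpy as np

def ensemble_crps(ensemble, observed):
    """Empirical CRPS for a single scalar observation.

    ensemble: (M,) array of ensemble-member forecasts
    observed: scalar verifying observation
    Uses CRPS ≈ mean|x_i - y| - 0.5 * mean|x_i - x_j| over all member pairs.
    Lower is better; a sharp, well-calibrated ensemble scores lowest.
    """
    x = np.asarray(ensemble, dtype=float)
    skill_term = np.abs(x - observed).mean()
    spread_term = np.abs(x[:, None] - x[None, :]).mean()
    return skill_term - 0.5 * spread_term
```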

⚠️ Weaknesses / Questions

  • Minor: As someone outside this field, and for a non-climate-oriented venue, I would need a bit more hand-holding on climatology and where it fits in. The authors make it seem like a lower bound for feasibility, but a little more background on this would be nice (a climatology-baseline sketch follows this list).

  • Minor: For Figure 8, the probabilistic evaluation, I feel the reported metrics need error bars. Having them for the other figures would be nice too, but Figure 8 would benefit the most.

  • Nit: Appendix B on getting started feels unnecessary, since the Git repo should carry those instructions in its README; Appendix B will go stale after one update to the repo. I do appreciate the attention to making the metrics reproducible.
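
For my own reference, a climatology baseline usually means forecasting the long-term day-of-year mean; a minimal sketch under that assumption (not the paper's exact definition):

```python
import numpy as np

def climatology_baseline(fields, days_of_year):
    """Day-of-year climatology: the multi-year mean state per calendar day.

    fields:       (T, C, H, W) array of daily historical states
    days_of_year: (T,) array of day-of-year indices (1..366) per sample
    Returns a dict mapping day-of-year -> mean (C, H, W) field, serving as
    the reference forecast any skillful model should beat.
    """
    return {d: fields[days_of_year == d].mean(axis=0)
            for d in np.unique(days_of_year)}
```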

🔍 Related Work

  • Numerical Weather Prediction (NWP)

  • Long-term Climate Prediction

📄 Attachments

PDF: 📄 View PDF
Slides: 🎤 View Slides
Code: 🧑‍💻 GitHub Repository
Paper Link: 🔗 External Page