- SITEMAP
- POSTS
- Better activation functions for NNUE
Results from Swish and SwiGLU in NNUE.
- Chess networks use a Gated Nonlinear Unit
In which I analogise pairwise multiplication to the Gated Linear Unit.
- Training dynamics of target weighting
Does it matter whether you predict the true future, or a stronger version of yourself?
- Deep NNUE
Important architectural improvements in Viridithas 14.
- More experimentation in NNUE
King buckets, output buckets, and Elo scaling with network size.
- Experimenting with NNUE configurations
Results from activation functions, learning rates, and batch sizes.
- NNUE performance improvements
How to make NNs run fast in chess engines.
- Contra “Grandmaster-Level Chess Without Search”
A critique of DeepMind's searchless chess paper, with a comparison to Lc0.
- An overview of computer chess techniques
Alpha-beta, PVS, iterative deepening, move ordering, search reductions, etc.