- HOME
- SITEMAP (you are here)
- PAGES
- POSTS
- A note on activation functions
An architectural quirk that may not have been noticed.
- Solving the problem of induction
Goodman’s grue and Solomonoff’s spinning razor factory
- Target weighting in NNUE
Does it matter whether you predict the true future, or a stronger version of yourself?
- Network architecture: multilayer
Deeper neural networks for chess engines.
- ANNUEP: New main net, scaling, community results
King buckets, output buckets, and Elo scaling with network size.
- ANNUEP: initial results
Results from activation functions, learning rates, and batch sizes.
- The advanced NNUE program
A statement of general intent.
- NNUE performance improvements
How to make NNs run fast in chess engines.
- Contra “Grandmaster-Level Chess Without Search”
A critique of DeepMind’s searchless chess paper, with comparison to Lc0.
- An overview of computer chess techniques
Alpha-beta, PVS, iterative deepening, move ordering, search reductions, etc.