GitHub: source and Windows binary. GitHub (Japanese top page)
2021-09-20 v23: kldgain option for training. Update required. w745, 7,940,000 games.
2021-08-05 Dropped the learning rate to 0.0001 (from 3,711k games, w321).
2021-06-28 v1.1: softmax temperature > 1.0 is now adjusted even if moves <= 30. aobak version is 20. w92, 1,430,000 games.
2021-06-23 Windows version (v1.0) released.
2021-06-07 Fixed the ELO adjustment method.
2021-06-07 Bug fix: it sometimes failed to find a 1-ply mate.
2021-06-06 Web site opened. Google Colab is available. Interestingly, at present, uwate (White)'s winrate is high in the 6-piece handicap. This is because the player with fewer pieces has more chances to capture pieces when moves are played almost randomly. AobaKomaochi uses the 27-point declaration rule. The removed pieces are counted toward uwate (White)'s total.
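As a minimal sketch of how the declaration count works, the standard CSA-style scoring values the rook and bishop (including their promoted forms) at 5 points and every other non-king piece at 1 point; the function name and piece codes below are hypothetical, not AobaKomaochi's actual code:

```python
# Hypothetical helper for 27-point declaration scoring (a sketch,
# not AobaKomaochi's implementation). Standard CSA values: rook and
# bishop (including promoted) = 5 points, other non-king pieces = 1.
BIG_PIECES = {"R", "B", "+R", "+B"}

def declare_points(pieces):
    """pieces: piece codes counted for the declaring player
    (king excluded)."""
    return sum(5 if p in BIG_PIECES else 1 for p in pieces)

# In a 6-piece handicap game the removed pieces (rook, bishop,
# both lances, both knights) are counted toward uwate (White):
removed = ["R", "B", "L", "L", "N", "N"]
print(declare_points(removed))  # -> 14
```

Under this scoring, uwate starts with 14 of the 27 required points already credited from the handicap pieces.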
| Period | Clients | Games |
|---|---|---|
| Past hour | 20 | 3,969 |
| Past 24 hours | 22 | 35,403 |
[Statistics table (values not captured): average of moves, Sente winrate, and draw rate over the past 7,000 games; average of moves, Sente winrate, draw rate, and handicap ELO over the past 500,000 games.]
AobaKomaochi at 100 playouts/move vs Kristallweizen (6.00) at 20k/move. 400 match games.
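As an illustration of how a match winrate maps to an Elo difference, here is the standard logistic Elo formula (a generic sketch; not necessarily the exact adjustment method this project uses):

```python
import math

def elo_diff(winrate):
    """Elo difference implied by a winrate (draws counted as
    half wins), using the standard logistic Elo model."""
    return -400.0 * math.log10(1.0 / winrate - 1.0)

# e.g. scoring 240 points out of 400 match games:
print(round(elo_diff(240 / 400)))  # -> 70
```

A 50% score maps to a 0 Elo difference, and each additional 10% of score is worth roughly 70 Elo near the middle of the curve.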
You can see the transition of opening moves.
For randomness, it often plays blunders during the first 30 moves, and Black's strength is adjusted.
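The early-move randomness described above can be sketched as softmax-temperature sampling over MCTS visit counts, in the AlphaZero style; this is a minimal illustration with hypothetical names, not the project's actual code:

```python
import random

def sample_move(visit_counts, temperature=1.0):
    """Pick a move index by sampling visit counts raised to
    1/temperature. temperature > 1.0 flattens the distribution
    (more variety and blunders); temperature near 0 approaches
    always picking the most-visited move."""
    weights = [v ** (1.0 / temperature) for v in visit_counts]
    return random.choices(range(len(visit_counts)), weights=weights)[0]

# With temperature 2.0 the second and third moves are sampled
# noticeably more often than their raw visit share suggests:
print(sample_move([90, 8, 2], temperature=2.0))
```

After the opening phase (here, the first 30 moves), such engines typically switch to picking the most-visited move deterministically.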
Games no000000000000.csa through no000000500007.csa were generated not by the neural network but by a random function. The first game generated by the neural network is no000000500008.csa. The network is 256x20 blocks; the replay buffer holds the past 500,000 games.
Weights
w001 ... 256x20b, minibatch 128, learning rate 0.01, wd 0.0002, momentum 0.9, 500,000 games. Failed at w009.
w001 ... 256x20b, minibatch 128, learning rate 0.001, wd 0.0002, momentum 0.9, 500,000 games. Restarted with a smaller learning rate.
w321 ... 256x20b, minibatch 128, learning rate 0.0001, wd 0.0002, momentum 0.9, 3,711,485 games.
w524 ... 256x20b, minibatch 128, learning rate 0.00001, wd 0.0002, momentum 0.9, 5,738,768 games.
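The learning-rate drops in the weight list above amount to a simple step schedule keyed on the number of self-play games; the sketch below uses the game counts from the entries, with hypothetical names (not the actual training code):

```python
# Step learning-rate schedule implied by the weight list above:
# (games completed, learning rate) pairs, in ascending order.
SCHEDULE = [
    (0,         0.001),    # w001 restart with the smaller lr
    (3_711_485, 0.0001),   # w321
    (5_738_768, 0.00001),  # w524
]

def learning_rate(games_played):
    """Return the learning rate in effect after games_played games."""
    lr = SCHEDULE[0][1]
    for threshold, rate in SCHEDULE:
        if games_played >= threshold:
            lr = rate
    return lr

print(learning_rate(4_000_000))  # -> 0.0001
```

The other hyperparameters (minibatch 128, weight decay 0.0002, momentum 0.9) stay fixed across the drops; only the learning rate steps down by a factor of ten each time.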