Latest abrok Development Version May 14th * Including Update NNUE architecture to SFNNv5 * (SF The gift that keeps on giving)

LondonFrau (Admin)

Posts : 1293
Reputation : 3534
Join date : 2010-02-27
Location : ???

PostSubject: Latest abrok Development Version May 14th * Including Update NNUE architecture to SFNNv5 * (SF The gift that keeps on giving)
Posted: Sat May 14, 2022 4:23 pm

!! latest version !!


Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: disservin
Date: Sat May 14 13:17:35 2022 +0200
Timestamp: 1652527055

SE depth scaling using the previous depth

This patch makes the SE depth condition more robust and allows it to scale with completed depth
from a previous search.
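
A minimal sketch in C++ of the idea described above, not the actual Stockfish code: the singular-extension depth condition is tied to the depth completed in the previous search iteration rather than a fixed constant. The function names and constants below are illustrative assumptions, not values taken from the patch.

#include <initializer_list>
#include <iostream>

// Hypothetical gate: only consider singular extensions at nodes whose depth
// clears a threshold that grows with the previously completed search depth.
int singularDepthGate(int previousCompletedDepth) {
    return 4 + previousCompletedDepth / 4;   // illustrative constants
}

bool allowSingularExtension(int depth, int previousCompletedDepth) {
    return depth >= singularDepthGate(previousCompletedDepth);
}

int main() {
    for (int prev : {10, 20, 30})
        std::cout << "previous completed depth " << prev
                  << " -> singular-extension gate " << singularDepthGate(prev) << '\n';
}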

At long TC this patch is almost equivalent to https://github.com/official-stockfish/Stockfish/pull/4016 which had

VLTC:
https://tests.stockfishchess.org/tests/view/626abd7e8707aa698c0093a8
Elo: 2.35 +-1.5 (95%) LOS: 99.9%
Total: 40000 W: 10991 L: 10720 D: 18289 Elo +2.35
Ptnml(0-2): 8, 3534, 12648, 3799, 11
nElo: 5.47 +-3.4 (95%) PairsRatio: 1.08

VLTC multicore:
https://tests.stockfishchess.org/tests/view/6272a6afc8f14123163c1997
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 86808 W: 24165 L: 23814 D: 38829 Elo +1.40
Ptnml(0-2): 11, 7253, 28524, 7606, 10

However, it is now also gaining at LTC:

LTC:
https://tests.stockfishchess.org/tests/view/627e7cb523c0c72a05b651a9
LLR: 2.94 (-2.94,2.94) <0.50,3.00>
Total: 27064 W: 7285 L: 7046 D: 12733 Elo +3.07
Ptnml(0-2): 8, 2446, 8390, 2675, 13

and should have nearly no influence at STC as depth 27 is rarely reached.
It was noticed that initializing the threshold with MAX_PLY had an adverse effect,
possibly because the first move is sensitive to this.

closes https://github.com/official-stockfish/Stockfish/pull/4021
closes https://github.com/official-stockfish/Stockfish/pull/4016

Bench: 6481017
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Tomasz Sobczyk
Date: Sat May 14 12:47:22 2022 +0200
Timestamp: 1652525242

!! Update NNUE architecture to SFNNv5. Update network to nn-3c0aa92af1da.nnue. !!

Architecture changes:

Duplicated the activation after the 1024->15 layer with a squared crelu (so 15->15*2), as proposed by vondele.
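
A rough sketch of what the duplicated activation means, assuming Stockfish's usual 0..127 clipping range; the quantized scaling of the squared variant and the names below are assumptions, not taken from the commit.

#include <algorithm>
#include <array>
#include <iostream>

constexpr int kL2 = 15;   // outputs of the 1024->15 layer

int clippedRelu(int x)    { return std::clamp(x, 0, 127); }
int sqrClippedRelu(int x) { int c = std::clamp(x, 0, 127); return c * c / 128; }  // scaling assumed

// Each of the 15 pre-activation values feeds both activations, and the two
// result vectors are concatenated, so the next layer sees 15*2 = 30 inputs.
std::array<int, 2 * kL2> duplicatedActivation(const std::array<int, kL2>& pre) {
    std::array<int, 2 * kL2> out{};
    for (int i = 0; i < kL2; ++i) {
        out[i]       = clippedRelu(pre[i]);
        out[kL2 + i] = sqrClippedRelu(pre[i]);
    }
    return out;
}

int main() {
    std::array<int, kL2> pre{};
    pre[0] = 50; pre[1] = 200; pre[2] = -30;            // sample pre-activations
    auto post = duplicatedActivation(pre);
    std::cout << post[0] << ' ' << post[kL2] << '\n';   // crelu vs squared crelu of pre[0]
}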

Trainer changes:

Added a bias to the L1 factorization, which was previously missing (no measurable improvement, but at least neutral in principle).
For retraining, linearly reduce the lambda parameter from 1.0 at epoch 0 to 0.75 at epoch 800 (see the sketch after this list).
Reduced max_skipping_rate from 15 to 10 (compared to vondele's outstanding PR).
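
As referenced in the retraining item above, a small sketch of the linear lambda schedule; the values 1.0, 0.75, and 800 come from the commit message, while the function name and code are mine.

#include <algorithm>
#include <initializer_list>
#include <iostream>

// Linearly interpolate lambda from startLambda at epoch 0 to endLambda at
// maxEpochs, matching the --start-lambda / --end-lambda flags used below.
double lambdaAtEpoch(int epoch,
                     double startLambda = 1.0,
                     double endLambda   = 0.75,
                     int    maxEpochs   = 800) {
    double t = std::clamp(static_cast<double>(epoch) / maxEpochs, 0.0, 1.0);
    return startLambda + t * (endLambda - startLambda);
}

int main() {
    for (int e : {0, 400, 800})
        std::cout << "epoch " << e << ": lambda = " << lambdaAtEpoch(e) << '\n';
}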

Note: This network was trained with a ~0.8% quantization error in the newly added activation function.
This will be fixed in the released trainer version; expect a trainer PR tomorrow.

Note: The inference implementation cuts a corner to merge the results of the two activation functions.
This could possibly be resolved more cleanly in the future. An AVX2 implementation is likely not necessary, but NEON is missing.

First training session invocation:

python3 train.py \
../nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
../nnue-pytorch-training/data/nodes5000pv2_UHO.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 8 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--max_epochs=400 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

Second training session invocation:

python3 train.py \
../nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
../nnue-pytorch-training/data/T60T70wIsRightFarseerT60T74T75T76.binpack \
--gpus "$3," \
--threads 4 \
--num-workers 8 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--start-lambda=1.0 \
--end-lambda=0.75 \
--gamma=0.995 \
--lr=4.375e-4 \
--max_epochs=800 \
--resume-from-model /data/sopel/nnue/nnue-pytorch-training/data/exp367/nn-exp367-run3-epoch399.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2

Passed STC:
LLR: 2.95 (-2.94,2.94) <0.00,2.50>
Total: 27288 W: 7445 L: 7178 D: 12665 Elo +3.40
Ptnml(0-2): 159, 3002, 7054, 3271, 158
https://tests.stockfishchess.org/tests/view/627e8c001919125939623644

Passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.00>
Total: 21792 W: 5969 L: 5727 D: 10096 Elo +3.86
Ptnml(0-2): 25, 2152, 6294, 2406, 19
https://tests.stockfishchess.org/tests/view/627f2a855734b18b2e2ece47

closes https://github.com/official-stockfish/Stockfish/pull/4020

Bench: 6481017
see source

_________________
Bettina...............The greatest happiness of life is the conviction that we are loved.

Victor Hugo
