chessforyou Bettina&Terry77
WELCOME TO FORUM OF Angels77 * named in memory of Bettina & Terry
Bit late but take your pick ... Latest Stockfish dev 8th January (SF the gift that keeps on giving)

LondonFrau
Admin


Gender : Female
Posts : 1293
Reputation : 3534
Join date : 2010-02-27
Location : ???

PostSubject: Bit late but take your pick ... Latest Stockfish dev 8th January (SF the gift that keeps on giving)   Fri Jan 12, 2024 8:02 pm

!! latest version !!


Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Linmiao Xu
Date: Mon Jan 8 18:34:36 2024 +0100
Timestamp: 1704735276

Update default main net to nn-baff1edbea57.nnue

Created by retraining the previous main net nn-b1e55edbea57.nnue with:
- some of the same options as before: ranger21 optimizer, more WDL skipping
- adding T80 aug filter-v6, sep, and oct 2023 data to the previous best dataset
- increasing training loss for positions where predicted win rates were higher than estimated match results from training data position scores

```yaml
experiment-name: 2560--S8-r21-more-wdl-skip-10p-more-loss-high-q-sk28

training-dataset:
# https://github.com/official-stockfish/Stockfish/pull/4782
- /data/S6-1ee1aba5ed.binpack
- /data/test80-aug2023-2tb7p.v6.min.binpack
- /data/test80-sep2023-2tb7p.binpack
- /data/test80-oct2023-2tb7p.binpack
early-fen-skipping: 28

start-from-engine-test-net: True
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-10p-more-loss-high-q

num-epochs: 1000
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Training loss was increased by 10% for positions where predicted win
rates were higher than suggested by the win rate model based on the
training data, by multiplying with: ((qf > pt) * 0.1 + 1). This was a
variant of experiments from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 302 - increase loss when prediction too high, vondele’s idea
Experiment 309 - increase loss when prediction too high, normalize in a
batch
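
As an illustration only, here is a minimal sketch of how such a one-sided penalty could be applied per position, assuming `qf` is the net's predicted win rate and `pt` the target from the training data (the squared-error base loss and the batching are assumptions, not the trainer's actual loss):

```python
import torch

def one_sided_loss(qf: torch.Tensor, pt: torch.Tensor) -> torch.Tensor:
    """Sketch of the commit's scaling: positions where the predicted
    win rate qf exceeds the target pt get 10% more loss, via the
    multiplier ((qf > pt) * 0.1 + 1)."""
    base_loss = (qf - pt).pow(2)                # assumed base loss (squared error)
    multiplier = (qf > pt).float() * 0.1 + 1.0  # 1.1 if prediction too high, else 1.0
    return (base_loss * multiplier).mean()
```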

Passed STC:
https://tests.stockfishchess.org/tests/view/6597a21c79aa8af82b95fd5c
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 148320 W: 37960 L: 37475 D: 72885 Elo +1.14
Ptnml(0-2): 542, 17565, 37383, 18206, 464

Passed LTC:
https://tests.stockfishchess.org/tests/view/659834a679aa8af82b960845
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 55188 W: 13955 L: 13592 D: 27641 Elo +2.29
Ptnml(0-2): 34, 6162, 14834, 6535, 29

closes https://github.com/official-stockfish/Stockfish/pull/4972

Bench: 1219824
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Disservin
Date: Mon Jan 8 18:33:38 2024 +0100
Timestamp: 1704735218

Cleanup EvalFile handling

This cleans up the EvalFile handling after the merge of #4915,
which had become a bit confusing as to what it was actually doing.

closes https://github.com/official-stockfish/Stockfish/pull/4971

No functional change
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Disservin
Date: Sun Jan 7 21:41:52 2024 +0100
Timestamp: 1704660112

Prefix abs with std::
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Linmiao Xu
Date: Sun Jan 7 21:20:15 2024 +0100
Timestamp: 1704658815

Update smallnet to nn-baff1ede1f90.nnue with wider eval range

Created by training an L1-128 net from scratch with a wider range of
evals in the training data and wld-fen-skipping disabled during
training. The differences in this training data compared to the first
dual nnue PR are:

- removal of all positions with 3 pieces
- when piece count >= 16, keep positions with simple eval above 750
- when piece count < 16, remove positions with simple eval above 3000

The asymmetric data filtering was meant to flatten the training data
piece count distribution, which was previously heavily skewed towards
positions with low piece counts.
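
For illustration, a sketch of the filtering rules above (the real implementation is the transform.cpp linked below; the `keep_position` helper and the use of absolute simple eval are assumptions here):

```python
def keep_position(piece_count: int, simple_eval: int) -> bool:
    """Hypothetical sketch of the asymmetric filter: drop all 3-piece
    positions, keep only lopsided positions at high piece counts, and
    drop extremely lopsided ones at low piece counts."""
    if piece_count == 3:
        return False
    if piece_count >= 16:
        return abs(simple_eval) > 750   # keep positions with simple eval above 750
    return abs(simple_eval) <= 3000     # remove positions with simple eval above 3000
```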

Additionally, the simple eval range where the smallnet is used was
widened to cover more positions previously evaluated by the big net and
simple eval.

```yaml
experiment-name: 128--S1-hse-S7-v4-S3-v1-no-wld-skip

training-dataset:
- /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack

- /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-v4.binpack
- /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack

- /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
- /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-v4.binpack

- /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack

- /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack

- /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-v4.binpack
- /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-v4.binpack

- /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-v4.binpack
- /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-v4.binpack
- /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-v4.binpack
- /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-v4.binpack

wld-fen-skipping: False
start-from-engine-test-net: False

nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128

num-epochs: 500
start-lambda: 1.0
end-lambda: 1.0
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Binpacks interleaved at training time with:
https://github.com/official-stockfish/nnue-pytorch/pull/259

FT weights permuted with 10k positions from fishpack32.binpack with:
https://github.com/official-stockfish/nnue-pytorch/pull/254

Data filtered for high simple eval positions (v4) with:
https://github.com/linrock/Stockfish/blob/b9c8440/src/tools/transform.cpp#L640-L675

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move of
L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch319.nnue : -241.7 +/- 3.2

Passed STC vs. 36db936:
https://tests.stockfishchess.org/tests/view/6576b3484d789acf40aabbfe
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 21920 W: 5680 L: 5381 D: 10859 Elo +4.74
Ptnml(0-2): 82, 2488, 5520, 2789, 81

Passed LTC vs. DualNNUE #4915:
https://tests.stockfishchess.org/tests/view/65775c034d789acf40aac7e3
LLR: 2.95 (-2.94,2.94) <0.50,2.50>
Total: 147606 W: 36619 L: 36063 D: 74924 Elo +1.31
Ptnml(0-2): 98, 16591, 39891, 17103, 120

closes https://github.com/official-stockfish/Stockfish/pull/4919

Bench: 1438336
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Linmiao Xu
Date: Sun Jan 7 21:15:52 2024 +0100
Timestamp: 1704658552

Dual NNUE with L1-128 smallnet

Credit goes to @mstembera for:
- writing the code enabling dual NNUE:
https://github.com/official-stockfish/Stockfish/pull/4898
- the idea of trying L1-128 trained exclusively on high simple eval
positions

The L1-128 smallnet is:
- epoch 399 of a single-stage training from scratch
- trained only on positions from filtered data with high material
difference
- defined by abs(simple_eval) > 1000
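
As a rough sketch of how such a threshold selects between the two nets at evaluation time (`use_smallnet` is a hypothetical name; the actual dispatch is in the dual-NNUE code from PR #4898):

```python
SMALLNET_THRESHOLD = 1000  # abs(simple_eval) cutoff, per the commit message

def use_smallnet(simple_eval: int) -> bool:
    """Hypothetical selector: positions with a large material imbalance
    (abs(simple_eval) > 1000) go to the cheap L1-128 smallnet, everything
    else to the full-size default net."""
    return abs(simple_eval) > SMALLNET_THRESHOLD

# e.g. a position roughly a rook up -> smallnet; a balanced one -> big net
assert use_smallnet(1350) and not use_smallnet(400)
```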

```yaml
experiment-name: 128--S1-only-hse-v2

training-dataset:
- /data/hse/S3/dfrc99-16tb7p-eval-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/leela96-filt-v2.min.high-simple-eval-1k.binpack
- /data/hse/S3/test80-apr2022-16tb7p.min.high-simple-eval-1k.binpack

- /data/hse/S7/test60-2020-2tb7p.v6-3072.high-simple-eval-1k.binpack
- /data/hse/S7/test60-novdec2021-12tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack

- /data/hse/S7/test77-nov2021-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
- /data/hse/S7/test77-dec2021-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test77-jan2022-2tb7p.high-simple-eval-1k.binpack

- /data/hse/S7/test78-jantomay2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test78-juntosep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack

- /data/hse/S7/test79-apr2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test79-may2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack

# T80 2022
- /data/hse/S7/test80-may2022-16tb7p.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jun2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jul2022-16tb7p.v6-dd.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-aug2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-sep2022-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-oct2022-16tb7p.v6-dd.high-simple-eval-1k.binpack
- /data/hse/S7/test80-nov2022-16tb7p-v6-dd.min.high-simple-eval-1k.binpack

# T80 2023
- /data/hse/S7/test80-jan2023-3of3-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-feb2023-16tb7p-filter-v6-dd.min-mar2023.unmin.high-simple-eval-1k.binpack
- /data/hse/S7/test80-mar2023-2tb7p.v6-sk16.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-apr2023-2tb7p-filter-v6-sk16.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-may2023-2tb7p.v6.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jun2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-jul2023-2tb7p.v6-3072.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-aug2023-2tb7p.v6.min.high-simple-eval-1k.binpack
- /data/hse/S7/test80-sep2023-2tb7p.high-simple-eval-1k.binpack
- /data/hse/S7/test80-oct2023-2tb7p.high-simple-eval-1k.binpack

start-from-engine-test-net: False

nnue-pytorch-branch: linrock/nnue-pytorch/L1-128
engine-test-branch: linrock/Stockfish/L1-128-nolazy
engine-base-branch: linrock/Stockfish/L1-128

num-epochs: 500
lambda: 1.0
```

Experiment yaml configs converted to easy_train.sh commands with:
https://github.com/linrock/nnue-tools/blob/4339954/yaml_easy_train.py

Binpacks interleaved at training time with:
https://github.com/official-stockfish/nnue-pytorch/pull/259

Data filtered for high simple eval positions with:
https://github.com/linrock/nnue-data/blob/32d6a68/filter_high_simple_eval_plain.py
https://github.com/linrock/Stockfish/blob/61dbfe/src/tools/transform.cpp#L626-L655

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Local elo at 25k nodes per move of
L1-128 smallnet (nnue-only eval) vs. L1-128 trained on standard S1 data:
nn-epoch399.nnue : -318.1 +/- 2.1

Passed STC:
https://tests.stockfishchess.org/tests/view/6574cb9d95ea6ba1fcd49e3b
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 62432 W: 15875 L: 15521 D: 31036 Elo +1.97
Ptnml(0-2): 177, 7331, 15872, 7633, 203

Passed LTC:
https://tests.stockfishchess.org/tests/view/6575da2d4d789acf40aaac6e
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 64830 W: 16118 L: 15738 D: 32974 Elo +2.04
Ptnml(0-2): 43, 7129, 17697, 7497, 49

closes https://github.com/official-stockfish/Stockfish/pulls

Bench: 1330050

Co-Authored-By: mstembera <5421953+>
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: mstembera
Date: Sun Jan 7 13:41:50 2024 +0100
Timestamp: 1704631310

Introduce BAD_QUIET movepicker stage

Split quiets into good and bad, as we do with captures. Once the
first sorted quiet move falls below a certain threshold, all
subsequent quiets are considered bad. Inspired by @locutus2's idea
of skipping bad captures.
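
A minimal sketch of the split, with hypothetical names (the real logic lives in Stockfish's movepicker stages; the exact threshold and move scoring are not shown here):

```python
def split_quiets(scored_quiets, threshold):
    """Sketch: quiets are tried in descending-score order; once the
    first quiet falls below the threshold, all remaining (lower-scored)
    quiets are deferred as bad, mirroring the good/bad capture split."""
    ordered = sorted(scored_quiets, key=lambda ms: ms[1], reverse=True)
    for i, (move, score) in enumerate(ordered):
        if score < threshold:
            return ordered[:i], ordered[i:]  # good quiets, bad quiets
    return ordered, []

good, bad = split_quiets([("Nf3", 900), ("h3", -50)], threshold=0)
assert good == [("Nf3", 900)] and bad == [("h3", -50)]
```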

Passed STC:
https://tests.stockfishchess.org/tests/view/6597759f79aa8af82b95fa17
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 138688 W: 35566 L: 35096 D: 68026 Elo +1.18
Ptnml(0-2): 476, 16367, 35183, 16847, 471

Passed LTC:
https://tests.stockfishchess.org/tests/view/6598583c79aa8af82b960ad0
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 84108 W: 21468 L: 21048 D: 41592 Elo +1.73
Ptnml(0-2): 38, 9355, 22858, 9755, 48

closes https://github.com/official-stockfish/Stockfish/pull/4970

Bench: 1336907
see source
Windows x64 for Haswell CPUs
Windows x64 for modern computers + AVX2
Windows x64 for modern computers
Windows x64 + SSSE3
Windows x64
Linux x64 for Haswell CPUs
Linux x64 for modern computers + AVX2
Linux x64 for modern computers
Linux x64 + SSSE3
Linux x64
Author: Disservin
Date: Sun Jan 7 13:38:55 2024 +0100
Timestamp: 1704631135

Add .git-blame-ignore-revs

Add a `.git-blame-ignore-revs` file which can be used to skip specified
commits when blaming; this is useful for ignoring formatting commits,
like the clang-format one (#4790).

GitHub blame supports this file automatically, as do various other
third-party tools. Git itself needs to be told the file name; the
following command registers it for the current repo:
`git config blame.ignoreRevsFile .git-blame-ignore-revs`
Alternatively, it can be passed with every blame:
`git blame --ignore-revs-file .git-blame-ignore-revs search.cpp`

Supported since git 2.23.
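
For reference, the file format is simply one full 40-character commit hash per line, with `#` comments allowed; a hypothetical example (placeholder hash, not the actual clang-format commit):

```
# clang-format reformatting (placeholder hash for illustration)
0123456789abcdef0123456789abcdef01234567
```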

closes https://github.com/official-stockfish/Stockfish/pull/4969

No functional change
see source

_________________
Bettina...............The greatest happiness of life is the conviction that we are loved.

Victor Hugo
