Good choices, no information loss so far... (I assume the move generation isn't doing anything other than reordering, i.e. pre-move stuff.) And good idea to use a simple evaluation.
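To make "simple evaluation" concrete, here is a minimal sketch of the classical material-count eval with the 1,3,3,5,9 weights mentioned further down. The piece-count dict is a made-up stand-in for whatever board representation your engine uses.

```python
# Classical material-only evaluation: 1,3,3,5,9 for pawn, knight,
# bishop, rook, queen. The piece-count dicts are a toy stand-in for a
# real board representation.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def simple_eval(white_pieces, black_pieces):
    """Score from White's point of view, in pawn units."""
    white = sum(PIECE_VALUES[p] * n for p, n in white_pieces.items())
    black = sum(PIECE_VALUES[p] * n for p, n in black_pieces.items())
    return white - black

# Equal starting material scores 0; losing the queen scores -9.
start = {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1}
no_queen = {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 0}
print(simple_eval(start, start))     # 0
print(simple_eval(no_queen, start))  # -9
```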
I think with AB pruning (mathematically equivalent to min-max), all the possible loss of information falls on the quality of the evaluation function. Exhaustive search breadth and eval quality seem to be in a trade-off kind of optimization problem: the lower the quality, the more leaf nodes to probe. I think one could argue that the min-max back-bubbling might act as a multiplication of errors, since a max leaf eval has to be bigger than another one, and another one, in a nested recursive fashion. But that is not a mathematical argument yet. I am wondering if there isn't a toy caricature model that could illustrate some structure of a leaf-eval error model, and how the propagation works over the whole partial-tree-search optimization. Even a single-person binary decision game might do. That is as far as I got a while ago. Anyway... yes, the fun will be there, and also any programmed bias.
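That single-person binary decision game can actually be simulated in a few lines. A caricature, under stated assumptions: every true leaf value is 0, the leaf eval adds zero-mean Gaussian noise, and a single player backs up max() at every internal node. Because max(max(a,b), max(c,d)) = max(a,b,c,d), the backed-up root value is the max over 2^depth unbiased noisy estimates, so unbiased leaf error turns into a systematic upward bias that grows with depth (the search over-selects lucky noise), which is one concrete shape the "multiplication of errors" intuition can take.

```python
# Toy caricature of leaf-eval error propagating through a max-only
# (single-player) binary decision tree. All true leaf values are 0; the
# leaf eval adds zero-mean Gaussian noise with std sigma. Backing up
# max() at every node biases the root estimate upward, and the bias
# grows with depth.
import random
import statistics

def backed_up_value(depth, sigma, rng):
    if depth == 0:
        return rng.gauss(0.0, sigma)  # noisy eval of a true-0 leaf
    # Max over two subtrees; equivalent to max over 2**depth iid leaves.
    return max(backed_up_value(depth - 1, sigma, rng) for _ in range(2))

rng = random.Random(42)
for depth in (1, 3, 6):
    vals = [backed_up_value(depth, 1.0, rng) for _ in range(2000)]
    print(depth, round(statistics.mean(vals), 2))  # mean bias grows with depth
```

With alternating max/min players the biases partially fight each other instead of compounding monotonically, which would be the next variant to try in the same toy.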
You might also run into the horizon problem in your search, and the quiescence search extension is one of the traditional solutions. But if you intend to be as serious as SF seems to have geared itself, focusing on improving the leaf functions given that the chess-engine world of tournament pools defines the extent of the chess-position wilderness one should be able to best play from, then using the 1,3,3,5,9 eval might make you naturally find quiescence search the only attractive solution to the horizon problem.
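A sketch of what that quiescence extension looks like, in negamax convention: at the nominal horizon, instead of trusting the static 1,3,3,5,9 score mid-capture-sequence, keep searching captures only until the position is quiet. The "position" model here (a dict holding a static eval and the capture replies) is a toy stand-in for a real board; a real engine would plug in its own move generation.

```python
# Quiescence search sketch (negamax: scores are from the side to move).
# Toy position model: a dict with the static eval and the positions
# reachable by capture moves; a real engine substitutes its board code.

def simple_eval(position):
    return position["eval"]          # static 1,3,3,5,9-style score

def generate_captures(position):
    return position["captures"]      # capture moves only

def apply_move(position, move):
    return move                      # here a "move" IS the child position

def quiescence(position, alpha, beta):
    # Stand pat: the side to move may decline all captures.
    stand_pat = simple_eval(position)
    if stand_pat >= beta:
        return beta                  # fail-high on the static score
    alpha = max(alpha, stand_pat)
    for move in generate_captures(position):
        # Negamax recursion: negate child score, flip the window.
        score = -quiescence(apply_move(position, move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Horizon example: from the mover's view this node scores -9 statically
# (the opponent just grabbed their queen), but a recapture is available
# that leaves them at +1. Quiescence resolves the exchange instead of
# trusting the misleading static -9.
recaptured = {"eval": -1, "captures": []}        # opponent's view after recapture
node = {"eval": -9, "captures": [recaptured]}
print(quiescence(node, -100, 100))               # 1, not -9
```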
And I think one might be forced to at least start that way, but be conscious of what it might mean further down the line for the extent of domain sampling for that leaf-function class (unless you fix all the parameters from your own expert guesses, or use others' findings).
To go with that opinion, here is a possible supporting idea. SF is able to train its NN leaf eval, with SF+simple eval only, over any type of position, not just quiescent ones. (And it is better trained that way, exploring the full breadth of the SF+simple-eval input-output function that trains it.)
As a complete search-plus-leaf-eval function of the root input position, SF+simple eval can score any position given as its root input; only the non-user-visible search underneath it has to test the leaf positions for quiescence. So NNUE is actually learning to mimic SF+simple eval on shallower positions than those where simple eval finds a min-max signal worth back-bubbling in AB pruning (or more aggressive stuff).
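The "mimic the teacher" structure of that training can be caricatured in a few lines. Heavy hedge: this is not SF's actual pipeline, and every name here is made up; it only illustrates the distillation idea that a cheap student model is regressed onto labels produced by an expensive teacher (playing the role of SF+simple eval searched from arbitrary roots), so at play time the student replaces the deep probe.

```python
# Caricature of distilling a search-based teacher into a cheap student
# eval. teacher_score() stands in for "SF+simple eval searched from this
# root"; the student is a linear model fit by SGD on teacher labels.
# Purely illustrative, not SF's real training setup.
import random

rng = random.Random(0)

def teacher_score(features):
    # Stand-in for the expensive search-backed score: a fixed but
    # nontrivial function of 5 position features (weights 1,3,3,5,9).
    w_true = [1.0, 3.0, 3.0, 5.0, 9.0]
    return sum(w * f for w, f in zip(w_true, features))

weights = [0.0] * 5   # student starts knowing nothing
lr = 0.01
for step in range(5000):
    # Any type of position, quiescent or not: the teacher's own internal
    # search handles that, the student just sees (features, label) pairs.
    x = [rng.uniform(-2, 2) for _ in range(5)]
    err = sum(w * f for w, f in zip(weights, x)) - teacher_score(x)
    weights = [w - lr * err * f for w, f in zip(weights, x)]

print([round(w, 2) for w in weights])  # converges toward the teacher's weights
```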
At some point one might consider that NNUE could become so good at detecting shallower, more subtle signals that even in a "Mexican standoff", or mid-brawl in a battle not yet done, such signals could still be found... so that during play one might not even need the simple eval at all. I don't know where they have gone in their exploration; it seems they don't want users to be aware of the inside story. I am priming myself to find out, actually; reading your updates is good timing.
Also, what are your thoughts about my ideas/questions? Does anything there sound like I am missing some information you might have seen?