
Stockfish (Computer)
Number of games in database: 379
Years covered: 2009 to 2021
Overall record: +53 -83 =243 (46.0%)*
   * Overall winning percentage = (wins+draws/2) / total games.
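
The footnote's formula, checked against the record above (a quick sketch for illustration, not site code):

```python
def winning_percentage(wins, losses, draws):
    """Overall winning percentage = (wins + draws/2) / total games."""
    total = wins + losses + draws
    return 100 * (wins + draws / 2) / total

# Stockfish's overall record above: +53 -83 =243
print(round(winning_percentage(53, 83, 243), 1))  # 46.0, matching the header
```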

Repertoire Explorer
Most played openings
E15 Queen's Indian (25 games)
E17 Queen's Indian (12 games)
C67 Ruy Lopez (12 games)
A17 English (11 games)
C11 French (10 games)
C65 Ruy Lopez, Berlin Defense (8 games)
A10 English (7 games)
E16 Queen's Indian (7 games)
C02 French, Advance (7 games)
D16 Queen's Gambit Declined Slav (6 games)

   🏆 King's Indian Defense thematic engine tournament
   AnMon vs Stockfish (Sep-22-21) 0-1, rapid
   Stockfish vs Houdini (Jun-18-21) 1-0, rapid
   LCZero vs Stockfish (Jun-27-20) 1-0
   Stockfish vs LCZero (Oct-29-19) 1-0
   chess22k vs Stockfish (Oct-19-19) 0-1

Search Sacrifice Explorer for Stockfish (Computer)
Search Google for Stockfish (Computer)

(born 2008) Norway


Stockfish originated as a fork of Tord Romstad's Glaurung (Computer), improved by Marco Costalba, Romstad and Joona Kiiski. It is now developed by the Stockfish community, using Gary Linscott's Fishtest testing framework to test new code.

As of 2016, Stockfish is one of the two strongest chess engines in the world, with Komodo (Computer), and the strongest open-source engine. It won TCEC season 6 in 2014, defeating Komodo in the superfinal.

Stockfish runs on Linux, Windows and Mac OS X, as well as mobile platforms such as the iPhone, iPad and iPod touch. Typical installations support configurations such as an 8 GB hash table on an 8-core processor via the UCI protocol.

Official website:

SmallFish app for iPad/iPhone with iOS 8.0 or later:

SmallFish for iOS 6:

Stockfish 2.0.1, operated by User: kutztown46, played in the Masters - Machines Invitational (2011) as Kutztown46 / Stockfish.

Wikipedia article: Stockfish (chess)

Last updated: 2018-12-03 07:10:17

 page 1 of 16; games 1-25 of 379  PGN Download
Game | Result | Moves | Year | Event/Locale | Opening
1. Stockfish vs Rybka | 0-1 | 78 | 2009 | 3rd WCRCC | D14 Queen's Gambit Declined Slav, Exchange Variation
2. Stockfish vs Crafty | 1-0 | 74 | 2013 | nTCEC - Stage 1 | A10 English
3. Stockfish vs Rybka | 1-0 | 66 | 2013 | nTCEC - Stage 1 | B53 Sicilian
4. Stockfish vs Critter | ½-½ | 67 | 2013 | nTCEC - Stage 2a | A10 English
5. Critter vs Stockfish | ½-½ | 56 | 2013 | nTCEC - Stage 2a | A10 English
6. Stockfish vs Rybka | 0-1 | 53 | 2013 | nTCEC - Stage 3 | B33 Sicilian
7. Stockfish vs Chiron | 1-0 | 52 | 2013 | nTCEC - Stage 3 | C12 French, McCutcheon
8. HIARCS vs Stockfish | 0-1 | 54 | 2013 | nTCEC - Stage 3 | A52 Budapest Gambit
9. Stockfish vs Rybka | 1-0 | 42 | 2013 | nTCEC - Stage 4 | B03 Alekhine's Defense
10. Houdini vs Stockfish | 0-1 | 87 | 2013 | nTCEC - Stage 4 | C18 French, Winawer
11. Rybka vs Stockfish | ½-½ | 42 | 2013 | nTCEC - Stage 4 | B03 Alekhine's Defense
12. Stockfish vs Houdini | 0-1 | 76 | 2013 | nTCEC - Stage 4 - Season 1 | C70 Ruy Lopez
13. Houdini vs Stockfish | ½-½ | 41 | 2013 | nTCEC - Superfinal - Season | D45 Queen's Gambit Declined Semi-Slav
14. Stockfish vs Houdini | ½-½ | 61 | 2013 | nTCEC - Superfinal - Season | D45 Queen's Gambit Declined Semi-Slav
15. Houdini vs Stockfish | ½-½ | 69 | 2013 | nTCEC - Superfinal - Season | E15 Queen's Indian
16. Stockfish vs Houdini | ½-½ | 87 | 2013 | nTCEC - Superfinal - Season | E12 Queen's Indian
17. Houdini vs Stockfish | 1-0 | 53 | 2013 | nTCEC - Superfinal - Season | D98 Grunfeld, Russian
18. Stockfish vs Houdini | ½-½ | 56 | 2013 | nTCEC - Superfinal - Season 1 | D99 Grunfeld Defense, Smyslov
19. Houdini vs Stockfish | ½-½ | 60 | 2013 | nTCEC - Superfinal - Season 1 | B04 Alekhine's Defense, Modern
20. Stockfish vs Houdini | ½-½ | 109 | 2013 | nTCEC - Superfinal - Season 1 | B04 Alekhine's Defense, Modern
21. Houdini vs Stockfish | 1-0 | 67 | 2013 | nTCEC - Superfinal - Season | E04 Catalan, Open, 5.Nf3
22. Stockfish vs Houdini | ½-½ | 60 | 2013 | nTCEC - Superfinal - Season | E04 Catalan, Open, 5.Nf3
23. Houdini vs Stockfish | ½-½ | 80 | 2013 | nTCEC - Superfinal - Season | C14 French, Classical
24. Stockfish vs Houdini | ½-½ | 59 | 2013 | nTCEC - Superfinal - Season | C14 French, Classical
25. Houdini vs Stockfish | 1-0 | 79 | 2013 | nTCEC - Superfinal - Season | A10 English

Kibitzer's Corner
Premium Chessgames Member
  AylerKupp: <My engine vs. engine tournament results>

Prior to these CCRL and CEGT numbers being available I ran a Stockfish 12 vs. Stockfish 11 100-game tournament at Blitz time controls (5 mins/game + 6 secs/move increment starting at move 1) over several nights. These were the results:

Stockfish 12: +28, =61, -10; 0.590 fractional score, 59.0% scoring %

Stockfish 11: +10, =61, -28; 0.410 fractional score, 41.0% scoring %

So Stockfish 12's 0.590 fractional score translates into a +65 Elo rating point gain.

However, I did notice a seemingly large number of games lost on time by both engines: 16, i.e. 16% of the total number of games played. Most of these, 14 of them or 87.5%, were won by Stockfish 12. While some losses on time can be expected, the disproportionate number of games lost on time by Stockfish 11 might indicate an inherent defect in Stockfish 11's time management function that was corrected in Stockfish 12. So the tournament results might not be a true indication of the benefit of the NNUE-based evaluation function in Stockfish 12 compared to the classic hand-crafted evaluation function in Stockfish 11.

So, excluding the losses on time by both engines, the time control loss-adjusted results were:

Stockfish 12: +14, =61, -7; 0.542 fractional score, 54.2% scoring %

Stockfish 11: +7, =61, -14; 0.458 fractional score, 45.8% scoring %

So Stockfish 12's time control loss-adjusted 0.542 fractional score translates into "only" a +30 Elo rating point gain.
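
The fractional-score-to-Elo conversion used above can be approximated with the standard logistic Elo model (a sketch; the exact table AylerKupp used may differ slightly, which is why this yields roughly +63 and +29 rather than the quoted +65 and +30):

```python
import math

def elo_gain(fractional_score):
    """Elo difference implied by a fractional score, per the logistic model:
    expected score = 1 / (1 + 10^(-diff/400)), inverted to solve for diff."""
    return 400 * math.log10(fractional_score / (1 - fractional_score))

print(round(elo_gain(0.590)))  # raw tournament result
print(round(elo_gain(0.542)))  # time-loss-adjusted result
```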

I don't know if CCRL and/or CEGT count time forfeits for the engine that lost on time or not (I suspect they do), and after all, a faulty time management function is a problem for the engine and, if it results in a time forfeit, so be it. But in this instance, if we are interested in comparing the results of having an NNUE-based evaluation function with having a classic, hand-crafted evaluation function, then whether one of the engines has a faulty time-management function needs to be taken into account.

But remember that these rating differentials are for Blitz time control games only; Rapid and Classical time control games may give different results.

Oct-02-20  Ron: In the position below, White is up a pawn but because of opposite colored bishops the position is a draw:

[position diagram]

However, Stockfish 11 evaluates: + (0.60) Depth=85/89 0:09:04 1151 MN

Premium Chessgames Member
  AylerKupp: <Ron> Try thinking of it this way:

Evaluation between [0.00] and [+0.49]: Even chances for both sides.

Evaluation between [+0.50] and [+0.99]: White has a slight advantage.

Evaluation between [+1.00] and [+1.99]: White has a significant advantage.

Evaluation of [+2.00] and up: White has a winning advantage.

The same but with negative evaluations means that the evaluations refer to Black.
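
The verbal scale above can be written as a small helper function (a sketch using AylerKupp's band boundaries; there is nothing standardized about them):

```python
def describe_eval(evaluation):
    """Map a pawn-unit engine evaluation to AylerKupp's verbal scale.
    Positive values favor White, negative values favor Black."""
    side = "White" if evaluation >= 0 else "Black"
    e = abs(evaluation)
    if e < 0.50:
        return "Even chances for both sides"
    if e < 1.00:
        return f"{side} has a slight advantage"
    if e < 2.00:
        return f"{side} has a significant advantage"
    return f"{side} has a winning advantage"

print(describe_eval(0.60))  # Stockfish 11's [+0.60] -> slight advantage for White
```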

So in your position it's not unreasonable for White to have a slight advantage given that he's a pawn up which, all other factors being equal, would likely earn him a [+1.00] evaluation. But all other factors are not equal; there are BOCs on the board and Black's king can get to c7 and prevent White's king from supporting his Pa6. And, of course, White's LSB cannot threaten Black's pawns on dark squares.

So Stockfish 11's evaluation of [+0.60] is not unreasonable. Also consider that Stockfish's evaluations are typically slightly higher than other engines'. For example, I had Houdini 6 and Komodo 12.3 analyze the same position with 5-piece Syzygy tablebase support. At d=36 and about 2.5 hours of analysis on my ancient 32-bit computer, Houdini 6 evaluated the position at [+0.19] for its 3 "top" moves, 1.Bd5, 1.Kd3, and 1.Be4, indicating equal chances for both sides, but not necessarily a draw (although in this position that's the most likely outcome). Given the moves' identical evaluations, the order in which they are listed is just the order in which it found them in its search tree traversal.

But Komodo 12.3 evaluated the position at d=83 (!) and slightly less than 3 hours of analysis at [+0.71] for its 3 "top" moves, 1.Ba8, 1.Kd3, and 1.Kb5, again listing them in the order that it found them in its search tree. And it had evaluated those same 3 moves at [+0.92] since d=55, and [+0.96] from d=11 to d=54. Still, all those evaluations indicate that Komodo 12.3 considered that in this position White has a slight advantage, consistent with Stockfish's evaluation.

And this is by far the fastest I've ever seen Komodo reach such search depths. It reached d=54 in only 29 secs, maybe even faster than Stockfish could have. And it reached d=60 in only 3 mins. I have no idea why.

Tonight I'll have Stockfish 11 and Stockfish 12 analyze the position under the same conditions and I'll post the results tomorrow.

In situations like these when I run analyses using multiple engines I typically calculate a ratings-weighted average of the 3 engines' evaluations to try to remove some of the engine's evaluation biases. The ratings are based on the latest CCRL and CEGT engine vs. engine tournament results and in turn are the engines' average ratings at the different time controls used. So in this case Stockfish 11's evaluation would be given a greater weight than either Houdini 6's or Komodo 12.3's and a likely more accurate absolute evaluation would be [+0.48], indicating even chances for both sides.

But the best evaluation of the position is provided by the FinalGen tablebase evaluator, which looks at <every> possible move from a position that satisfies its constraints. And FinalGen indicates that the position is a draw for every reasonable White move, only indicating a Black win after the nonsensical 1.Bc6, which after 1...Kxc6 leads to a winning position for Black (likely after a pawn's promotion to a queen) in 19 moves.

Premium Chessgames Member
  AylerKupp: <Ron> You should consider upgrading to Stockfish 12; it seems to be far superior to Stockfish 11, although perhaps not by as much as its developers indicate. You can see the results of my Stockfish 11 vs. Stockfish 12 100-game tournament above. And in the latest CCRL and CEGT tournaments the following are the two engines' ratings at the various time controls they use; the 2nd column is Stockfish 11's rating, the 3rd column is Stockfish 12's rating, and the 4th column is Stockfish 12's rating advantage over Stockfish 11. All ratings are for the 4-CPU version of the engines unless otherwise noted.

Time control | SF 11 | SF 12 | Difference
40/120 (CEGT) | 3481 | N/A(1) | N/A(1)
40/20 (CEGT) | 3507 | 3545(2) | +38(3)
40/15 (CCRL) | 3481 | 3516 | +35
40/04 (CEGT) | 3587 | N/A(1) | N/A(1)
40/02 (CCRL) | 3599 | 3680(2) | +81(3)

Note 1: Not Available. The CEGT 40/120 and 40/04 engine tournaments were completed before Stockfish 12 was released.

Note 2: For 1-CPU version of the engine. I have no idea why a 4-CPU version of the engine was not used.

Note 3: The 1-CPU version of Stockfish 12 performed better than the 4-CPU version of Stockfish 11! So I'm sure that Stockfish 12's 4-CPU version rating advantage over Stockfish 11's 4-CPU version would be even greater.

Oct-03-20  Ron: <AylerKupp>
Thank you for the informative posts.

I'm wondering if there are any programs that use Monte Carlo Methods. I hypothesize that a program using Monte Carlo methods on the position would give a zero or near zero evaluation.

I heard that there's "Rybka Randomizer".

Oct-03-20  Big Pawn: <AylerKupp: Stockfish 12 was released on Sep 2, 2020. Download it from It has versions for Windows, MacOS, Linux, iOS, and Android. And it provides guidance for which version to use depending on how recent a computer you have, and you must download each version explicitly.

It (optionally) incorporates NNUE as its evaluation function. If you want to use NNUE (and it's supposedly 150 Elo rating points stronger than the non-NNUE version and it does not need a GPU), you will also need to download the NNUE file from Download it into the same folder/directory where you downloaded the Stockfish executables.

If you are interested in the source code you need to download that separately, it's not downloaded automatically as in previous versions. You can download it from a separate button in

And make sure that you read the Readme file. It's downloaded when you download the source or you can read the Stockfish 12 blog (as well as earlier versions' blogs if you are interested) on

If you are interested in older Stockfish versions all the way back to Stockfish 1 you can download them from the Stockfish archives at or from an option in A Stockfish geek's dream. If you want to see the release dates for all Stockfish versions go to

Now we can wait to see if the 150 Elo rating point advantage for Stockfish 12 NNUE holds up after unbiased engine vs. engine tournaments like CCRL ( and CEGT ( incorporate it into their various time control tournaments.>

Thank you for this informative post.

Premium Chessgames Member
  AylerKupp: <<Ron> I'm wondering if there are any programs that use Monte Carlo Methods.>

Yes. LeelaC0 and (I believe) all the neural network-based chess engines such as Fat Fritz, Leelenstein, Allie, Stoofvlees, Scorpio, etc. use Monte Carlo Tree Search (MCTS) instead of Minimax (MMax) + Alpha/Beta pruning + search-tree pruning heuristics to select the best move from a given position. Unfortunately they all require 64-bit computers and I only have (but hopefully not for long) a 32-bit computer, so I can't run any of them.

But Komodo provides an option to use MCTS instead of MMax to select the best move to play and come up with an evaluation of its top move. Currently Komodo 14 with MMax is somewhat stronger than Komodo 14 with MCTS (by 43 Elo rating points), the first being rated 3419 and the second 3376 in the latest CCRL 40/15 engine vs. engine tournament, both ratings for the 4-CPU version; but the rating difference between the two for the same Komodo version is getting smaller, although not monotonically.

I unfortunately can't run any Komodo version higher than 13 on my computer because of my 32-bit limitation, but tonight I will run a test case with Komodo 12.3 MMax (which is what I used for the analysis of your initial position) and Komodo 12.3 MCTS and see what results I get.

Some caveats. Unlike chess engines that use minimax and have evaluation functions expressed in centipawns in the range [-128, +128] (or so; the [±128] evaluations are an artifact of the information provided by the Syzygy evaluations and depend on how the specific chess engine interprets them), chess engines using MCTS evaluate each candidate move by calculating the scoring probability of each move (their documentation usually calls it the winning probability, but I don't believe that's right; it also includes the probability of drawing) in the range [0, 1]. Komodo MCTS then "estimates" what an equivalent "centipawn" evaluation would be. And the mapping between scoring probability and centipawn evaluation, at least for LeelaC0, is not a simple one (see if you are curious and/or masochistic). It looks like a brace ("}") lying on its side and it's not monotonic. So you might need to take a leap of faith when engines that use MCTS report centipawn-style evaluations.
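
For intuition, here is one simple logistic mapping between a scoring probability and a centipawn-style number (a hypothetical sketch only; as the post notes, the actual mappings used by engines like LeelaC0 are more elaborate and not this clean):

```python
import math

def prob_to_cp(p, scale=400):
    """Invert the logistic expected-score model p = 1/(1 + 10^(-cp/scale)).
    Clamps p away from 0 and 1 to avoid infinite evaluations."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return scale * math.log10(p / (1 - p))

print(round(prob_to_cp(0.50)))  # equal chances -> 0
print(round(prob_to_cp(0.60)))  # a modest edge
```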

And the concept of search depth is not the same for Komodo MMax and Komodo MCTS since the latter does not use iterative deepening, so Komodo MCTS also "estimates" what an equivalent "search depth" would be if it was using MMax.

Well, enough extraneous nonsense. I'll run the comparison tonight and report tomorrow.

Premium Chessgames Member
  AylerKupp: <Another "informative" post (maybe)> (part 1 of 2)

<Ron> And now for another post that I think might be more puzzling than informative. I ran more tests and, no, Stockfish is still, at least most of the time, the champion in terms of shortest time to reach a given depth, or at least Stockfish 11 is. For example, Stockfish 12 was slower than Komodo 12.3 (!); that was unexpected. It took Komodo 3.00 mins to reach d=60 but it only took Stockfish 11 0.53 mins and Stockfish 12 0.47 mins to reach the same depth. And it took Komodo 5.47 mins to reach d=70 but it only took Stockfish 11 2.23 mins and Stockfish 12 1.72 mins to reach the same depth. So far, so good as far as determining the "king of the search depth".

But Stockfish 12 "hit the wall" at d=75. It took it 8.17 mins to reach d=74 while it took Komodo 8.80 mins and Stockfish 11 4.05 mins to reach the same depth. But then it took Stockfish 12 147.10 mins (more than 2 hours!) to reach d=75 while it only took Komodo 12.3 11.48 mins and Stockfish 11 4.75 mins to reach the same depth. Stockfish 12 then resumed its steady increase, needing only 0.47 mins to go from d=71 to d=72 and 0.67 mins to go from d=72 to d=73. There is much about chess engines that I don't know. Then again, how often are you going to find positions that allow you to have Stockfish reach d=70+ in a reasonable amount of time? So it might be a moot point.

BTW, Houdini 6 hit a similar wall at d=35. It took it 13.83 mins to reach d=34 but it then took it 149.53 mins (almost 2.5 hours!) to reach d=35. But then it took it only an additional 2.37 mins to reach d=36. Unfortunately I stopped it here so I don't know if it would have hit another wall at a higher search ply.

Maybe Houdini 6 and Stockfish 12 just decided to take a break. But at least Stockfish 12 was more "accurate". Its evaluation was [0.00] since d=19 for its "top" 3 moves (1.Kd3, 1.Be4, and 1.Bd5) and remained at [0.00] up to d=82. So it better reflected the results (draws after all reasonable moves) that FinalGen indicated.

And how did Komodo and the Stockfishes get to such a high search depth so quickly? After thinking about it a little bit I thought that it had to do with the relatively fixed nature of the position.

[position diagram]

The k-side pawns are fixed and immobile. From the perspective of calculating the number of candidate moves at each search depth they might as well not be there; it would be as though the position were this one:

[position diagram]

FinalGen exhibits a similar behavior. It typically takes it a long time and disk space to evaluate a complex position, with the time increasing as the number of pieces on the board increases. But, in two positions with the same number of pieces, except that one of them has several mobile pawns and the other one has mostly fixed pawns, the required time and disk space needed drops down dramatically as the number of possible moves goes down. Which makes sense, since FinalGen looks at <every> possible move in <every> possible line. So, the smaller the number of possible moves in a position, the smaller the number of lines needed to determine the results, and the less time and disk space it will need to do so.

Premium Chessgames Member
  AylerKupp: <Another "informative" post (maybe)> (part 2 of 2)

Well, for engines this didn't seem to be the case, at least for this position. It took Houdini 6 only 0.20 mins to reach d=30 in the original position but 11.02 mins to reach d=30 in the simplified position with 4 fewer pawns. Similarly, it took Komodo 12.3 only 0.25 mins to reach d=60 in the original position but 1.33 mins to reach d=60 in the simplified position; it took Stockfish 11 only 0.10 mins to reach d=60 in the original position but 3.62 mins to reach d=60 in the simplified position; and it took Stockfish 12 just 0.47 mins to reach d=60 in the original position but 15.68 (!) mins to reach d=60 in the simplified position. And note that in this position Komodo 12.3 was faster than either Stockfish: it took it only 1.33 mins to reach d=60 but it took Stockfish 11 and Stockfish 12 much longer, particularly Stockfish 12, to do the same.

And the conditions were the same: same computer with 4 GB RAM, 2 cores/engine (I have only 4 cores, and I ran 2 analyses concurrently), and a 512 MB hash table for each engine. I cleared the hash table prior to starting each analysis and had no other applications open other than Windows Explorer and two instances of Arena 3.5, one for each engine.

Two other items of possible interest: at d=30 Houdini's hash table was 41% full and it had 15 tablebase hits; Stockfish 11 and Stockfish 12 at d=60 had their hash tables 41% and 63% full respectively, and each had 4 tablebase hits. But at d=30 Komodo indicated its hash table was completely empty and it had 17 tablebase hits. Which makes sense: with only 5 pieces and 5-piece Syzygy tablebase support, Komodo didn't have to analyze any positions; it could just probe the tablebases and get its evaluation from them. But why didn't Houdini, Stockfish 11, and Stockfish 12 apparently do the same thing?

So, again, there is much about chess engines that I don't know. And, as they say, a little knowledge is a dangerous thing.

Of course, all these analyses used just two positions so the results are not statistically significant, just a source of puzzlement.

Premium Chessgames Member
  AylerKupp: <<Ron> I heard that there's "Rybka Randomizer".>

I had never heard of the "Rybka Randomizer" so I looked it up. I found some information here:

After reading it, I remembered that this "feature" was added to the Rybka Aquarium interface in about 2008, when the operation of chess engines was not as well known as it is today and when all (?) chess engines, including Rybka, were single-core.

The procedure is (mostly) independent of the engine used. It mostly consists of having two versions of the same engine play multiple games against each other from a given starting position. This capability is provided by most modern chess GUIs, for example Arena. I personally use it to simulate an MCTS-like search for the best move to play from a given starting position, just to get a different perspective than having an engine analyze a position using infinite analysis mode. And, yes, you should use a fast time control (or a very short fixed amount of time per move) in order to get a sufficient number of game results to make the conclusion statistically valid. This approach is similar to what's used in MCTS, where in order to determine the best move a series of simulated games (playouts) is conducted using random moves to determine which move provides the best result from a given position in a statistically significant way.

You can also have two different engines (or more) play a tournament against each other, with an even number of games and alternating colors so that White's first-move advantage is neutralized. This is the same approach as is used in the TCEC superfinal, but there the objective is to determine which is the better engine, so the 100 games are started from 50 different positions.

An advertised "big feature" of the Rybka Randomizer is the injection of randomness into the moves to be played. This might have been necessary, or at least desirable, in order to eliminate biases back in 2008 when single-core engines were deterministic. But today's multi-core engines are inherently non-deterministic, so you get the desired randomization for free, and this "big feature" is no longer needed.

So if you have a chess GUI that allows the conduct of engine vs. engine tournaments and you are using multi-core engines (even 2 cores will do), then you can get the same results as produced by the Rybka Randomizer using any engine or sets of engines.

Premium Chessgames Member
  AylerKupp: <Komodo 12.3 MMax vs. Komodo 12.3 MCTS – Preliminary Report>

<Ron> I have some preliminary results of analyses done by Komodo 12.3 MMax and Komodo 12.3 MCTS. But, since this is the <Stockfish> page and the post is entirely about Komodo, it seemed to make more sense to me to post them in the <Komodo> page at Komodo (Computer) (kibitz #90).

Premium Chessgames Member
  Messiah: Today I compiled the fish (on git master, commit 6ad4f485d37556f5e09baae5a939f44ac5b51c84) with gcc10.3, gcc11, llvm12 and aocc3.0.0, then asked them to calculate the initial board position to depth 43 (so that very heavy memory pressure is unlikely, and lots of tablebase hits are also unlikely). 60G memory, 16 threads on a Ryzen 7 5800X, the cpu governor was performance, and I added a tiny bit more than 1.8T syzygy tablebases (3-4-5-6-men full, 7-men incomplete). Swappiness was set to 1, so it was unlikely to modify the results in a significant way. At depth 43 the gcc11 almost reached 21 Mn/s every time, the gcc10 was around 20.6-20.8 Mn/s, the llvm/clang and the aocc produced 20 Mn/s with one weaker 19.5 Mn/s for the llvm/clang. ARCH=x86-64-bmi2.

(please do not try to interpret this as a scientifically valid result, most importantly because there were just three runs with each compilation)

Premium Chessgames Member
  Messiah: Loosely related:
Premium Chessgames Member
  keypusher: I/II Here's a beautiful SF win against Leela Chess Zero in a preset Evans Gambit. I saw a video about it ( -- I'll try to get more details and upload.

In AlphaZero vs Stockfish, 2018 <SChesshevsky> complained about SF's passive play while I defended it. Now it was my turn to be befuddled. LC0 fails to move its d-pawn until the game is completely lost. As a result it plays without its QB and QR. But looking at the game with the video and my desktop SF makes clear what a spectacular job SF does of hindering Black's development while promoting its own. The game really does wind up looking like Morphy vs. Amateur, though.

1.e4 e5 2.Nf3 Nc6 3.Bc4 Bc5 4.b4 Bxb4 5.c3 Ba5 6.d4 ed 7.0-0 dc (these are all preset moves; neither LC0 nor SF would play 7....dxc3 if it was up to them). 8.Qb3 Qf6 9.Bg5 (the "book" move here is 9.e5, but it isn't as if there's loads of precedent) Qg6 10.Nxc3 Bxc3 (this has been done, Janowski vs Lasker, 1901) 11.Qxc3 Nge7 (Lasker played 11...d6, which looks reasonable but can be strongly met with 12.e5) 12.Bxe7! Kxe7 (12....Nxe7 13.Ne5 and f7 caves in) 13.e5!

[position diagram]

Hindering Black's development. If now 13....d6 14.Rfe1 White is winning. SF proceeds to show the advantages of space and development.

13....Re8 14.Rae1 Kf8 15.Bd3 Qh6 16.Re4! (threatening Rh4)

[position diagram]

According to SF this was LC0's last chance to stay in the game with ....f5. Its main line, presented without comment, continues 17.ef Rxe4 18.Bxe4 Qxf6 19.Qc4 h6 20.h3 d5 21.Qxd5 Qe6 with a slight advantage for White.

16....g6 (another non-developing move) 17.Rfe1 Qg7 (trying to force ...d6) 18.Qc1 (if now 18...d6 19.ed Rxe4 20.Bxe4 is about + 6, so another non-developing move follows) 18....Kg8 19.h4! h6 (19....b6 trying to develop the bishop on the long diagonal and get the rook into the game is worth considering) 20.R1e3!

[position diagram]

If I was playing Black I'd really be unhappy at this point. My desktop SF sees White as about +1.2 on a shallow search.

20....Kh8. Really? But if 20....b6, 21.h5 is much like the game, while if 20....d5 21.ed Rxe4 22.Bxe4 is much like the variation given above. SF's preference is the equally anemic-looking 20...Kf8.

Premium Chessgames Member
  keypusher: II/II

21.h5 g5 22.Nxg5! Apparently only here does LC0 realize how bad its position is.

[position diagram]
If 22....Qxg5, the main line runs 23.Qc3 Re6 24.Rg3 Qe7 25.Reg4 Qf8 26.Bc4 +-; in other lines White can play Bb1 and Qd3. It's as if SF has all the time in the world.

23.h6! Qf8 (after ...Qxh6 24.Rh3, Black is left to contemplate the importance of moving its d-pawn). 24.Rg3! Nxe5 (LC0 does something! If instead 24....f5 25.ef Rxe4 26.Qxg5!! and, two pieces up, Black nevertheless has no defense against Qg7+) 25.Qxg5 f5 26.Rh4 Ng4

[position diagram]

Note that LC0 is threatening ....Re1+ here -- its only threat in the entire game, I think. If instead 26....Re7 White can just go ahead and play Qg7+ anyway.

27.Rhxg4! fg 28.Re3!! (A rook down, SF nevertheless offers to trade rooks, because LC0 will be defenseless on the diagonals leading to the king. Of course, practically speaking SF isn't a rook down; it's a bishop up, because Black's QB and QR aren't participating in the game.)

28....Rxe3 29.Qxe3 d6 (a bit late) 30.Qd4+ Kg8 31.h7+ Kf7 32.h8/Q Qxh8 33.Qxh8

[position diagram]

They played the game out till mate, but I'll draw the curtain here.

Premium Chessgames Member
  Sally Simpson: Stockfish are suing ChessBase.

"Due to ChessBase's repeated license violations, leading developers of Stockfish have terminated their GPL license with ChessBase permanently.

However, ChessBase is ignoring the fact that they no longer have the right to distribute Stockfish, modified or unmodified, as part of their products.

Thus, to enforce the consequences of the license termination, we have filed a lawsuit."

Premium Chessgames Member
  MissScarlett: <The UK panel decided, by a two-to-one majority, that an inventor must be a real human person under UK law.

"Only a person can have rights. A machine cannot," wrote Lady Justice Elisabeth Laing in her judgement. "A patent is a statutory right and it can only be granted to a person."

Lord Justice Arnold, agreeing, wrote: "In my judgement it is clear that, upon a systematic interpretation of the 1977 Act, only a person can be an 'inventor'."

The third judge, Lord Justice Birss, took a different view. While he agreed that "machines are not persons", he concluded that the law did not demand a person be named as the inventor at all.>

Does this mean I can pass off <Stockshit>'s analysis as my own? Oh wait, I do that already!

Nov-04-21  Albertan: Stockfish can crush you at chess even more efficiently in the 14.1 update:

Jan-26-22  Albertan: Stockfish sacrifices 5 pieces and wins!:
NM James Canty annotates this game between Stockfish and Leela Chess:

Sep-25-22  Chessius the Messius: Installed SF for Android but it needs an app (the absolute horror!).

This moron ended up with "Chess for Android" (Aart Bik).

Why am I writing this? Because it was the only app that did not COMPLETELY freak me out at first sight. Tried them all (only google app stuff).

The standard GUI looks horrible (white is really white). But I found a pleasant setting (for the eyes):

Silver pieces.
f8e2be (light sq), a47449 (dark sq) color code.

Nov-22-22  stone free or die: The <Stockfish--Chessbase> settled, news brief from 2022-11-18:

<ChessBase GmbH and the Stockfish team reach an agreement and end their legal dispute>

Don't think there's much over at <> about it though - at least not in the news department.

Premium Chessgames Member
  perfidious: <keypusher>, just went through your posts of June '21, featuring the Compromised Defence to the Evans as a preset line; it is illuminating indeed to see that even silicon cannot successfully defend that line for Black.

Re the content on the Spanish: whoever was responsible for it needs a few lessons. An average player would quickly spot it as bollocks once one has got beyond the generalities.

Premium Chessgames Member
  keypusher: I asked ChatGPT whether the Ruy Lopez was better for Black or for White.

<The Ruy Lopez opening is generally considered to be better for White, as it allows White to control the center of the board and develop their pieces quickly. The Ruy Lopez is named after the Spanish bishop Ruy López de Segura, who wrote one of the first books on chess in the 16th century. In the Ruy Lopez opening, White starts by moving their king's pawn two squares forward, followed by the knight to f3, the bishop to c4, and the queen to d3. This formation allows White to control the center of the board and put pressure on Black's position.

Black has a few different options for how to respond to the Ruy Lopez opening. One common response is the Berlin Defense, in which Black moves their knight to c6 and their bishop to e7. This defense allows Black to protect their pawn on e5 and to create counterplay on the queenside. Another option for Black is the Marshall Defense, in which Black moves their knight to c6 and their bishop to d7. This defense allows Black to control the d5 square and to create counterplay on the kingside.

Overall, the Ruy Lopez opening is considered to be better for White, as it allows White to control the center of the board and to put pressure on Black's position. However, Black has a few different options for how to respond, and the game can still be very complex and challenging for both sides.>

Sounds authoritative, but as soon as it descends to specifics it starts spouting absolute nonsense (the bishop goes to c4??).

I also prompted it to compare "The Song of Achilles" and the Iliad. It knew that <TSOA> was a 2011 novel by Madeline Miller. But apart from that...among other things, it said that "The Song of Achilles" was narrated by Achilles (nope, Patroclus) and that gods and goddesses were largely absent (nope, Achilles' mother Thetis is all over the place).

It has a scary ability to sound like it knows what it is talking about while speaking absolute rot.

(Wasn't sure where to post this, but this page made as much sense as anywhere else.)

Premium Chessgames Member
  keypusher: More fun with ChatGPT

<In chess, if there are no other pieces on the board, how can a king and rook checkmate a king?>

<In chess, it is not possible for a king and a rook to checkmate a lone king. This is because, even when there are no other pieces on the board, a lone king is not in checkmate unless it is in check and unable to escape. In the situation you describe, the lone king would have the option of moving to any of the eight squares adjacent to its current position, which would allow it to escape from check and prevent the checkmate.

In order for a king and a rook to checkmate a lone king, they would need to have additional pieces on the board to block the king's escape. For example, if the king and rook were backed up by a bishop, they would be able to put the lone king in check and prevent it from escaping. The king and rook would then be able to move in a coordinated fashion to put the lone king in check from multiple directions, eventually trapping it in a corner of the board and forcing it into checkmate.

Overall, in order for a king and a rook to checkmate a lone king in chess, they would need to have additional pieces on the board to block the king's escape and to coordinate their movements to put it in check from multiple directions. Without these additional pieces, it would not be possible for the king and rook to checkmate the lone king.>

Premium Chessgames Member
  keypusher: <You are incorrect. A king and rook can checkmate a king. Can you try again?>

<I apologize if my previous response was not clear or accurate. In chess, it is possible for a king and rook to checkmate a lone king, but only under certain conditions. In order for a king and rook to checkmate a lone king, they must be positioned in such a way that the king is unable to move to any of the eight squares adjacent to its current position. This would mean that the king is in check and unable to escape, and the king and rook could then move in a coordinated fashion to put the king in check from multiple directions, eventually trapping it in a corner of the board and forcing it into checkmate.

However, it is important to note that this scenario is unlikely to occur in a real game of chess, as it would require the players to deliberately position their pieces in a way that would allow the checkmate to occur. In a typical game of chess, the players would be focused on developing their pieces and attacking their opponent's position, rather than setting up a situation in which a king and rook could checkmate a lone king. As a result, it is unlikely that this scenario would ever occur in a real game of chess.

Overall, while it is technically possible for a king and rook to checkmate a lone king in chess, this scenario is unlikely to occur in a real game due to the complex and dynamic nature of chess.>
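For the record, a bare king and rook do mate a lone king without any helper pieces. A minimal stdlib sketch below checks one such mating position (White Kg6 and Ra8 vs. Black Kh8 is my own example, not from the thread); it is a simplified position checker under stated assumptions, not a full legality engine:

```python
# Verify that a lone king and rook CAN deliver mate, contra the quoted claim.
# Squares are (file, rank), 0-indexed: a1 = (0, 0), h8 = (7, 7).
WK, WR, BK = (6, 5), (0, 7), (7, 7)  # White Kg6, Ra8; Black Kh8

def king_attacks(sq):
    """All on-board squares adjacent to sq."""
    f, r = sq
    return {(f + df, r + dr) for df in (-1, 0, 1) for dr in (-1, 0, 1)
            if (df, dr) != (0, 0) and 0 <= f + df < 8 and 0 <= r + dr < 8}

def rook_attacks(sq, blockers):
    """Squares a rook on sq attacks, stopping at (and including) blockers."""
    f, r = sq
    attacked = set()
    for df, dr in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nf, nr = f + df, r + dr
        while 0 <= nf < 8 and 0 <= nr < 8:
            attacked.add((nf, nr))
            if (nf, nr) in blockers:
                break
            nf, nr = nf + df, nr + dr
    return attacked

white_control = king_attacks(WK) | rook_attacks(WR, {WK, BK})

in_check = BK in white_control
# An adjacent square is an escape if White does not control it; capturing
# the rook only counts if the rook's square is undefended by the white king.
escapes = {sq for sq in king_attacks(BK)
           if sq not in white_control
           or (sq == WR and WR not in king_attacks(WK))}

print("checkmate:", in_check and not escapes)
```

Here the rook on a8 checks along the back rank while the king on g6 covers g7, g8, and h7, so the script prints `checkmate: True`: no bishop or other extra piece is needed.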


Copyright 2001-2023, Chessgames Services LLC