Jan-06-19 | | ChessHigherCat: <WannaBe: King's Gambit is a bust, but Queen's gambit are busts!> Can't we keep abreast of recent developments without such crude puns? |
|
Jan-09-19
 | | offramp: Has anyone seen any really outlandish openings in the recent computer games that are at chessgames.com? I have seen an Evans Gambit and a Two Knights (the one where White plays Ng5 and Black plays ...b5), but those are quite standard openings... I am thinking more of games like Trompowskys, Muzios, Cochrane Petroffs, Goring Gambits, Frankenstein-Draculas. These openings are theoretically reasonable, so they might have been included in the ballot. |
|
Jan-09-19
 | | keypusher: <Offramp>
Here's a Frankenstein-Dracula (apparently called the Blanel Gambit in more refined circles): AlphaZero vs Stockfish, 2018 (draw, 135 moves).
A Blumenfeld: Stockfish vs AlphaZero, 2018 (draw, 194 moves).
A King's Bishop's Gambit: Stockfish vs AlphaZero, 2018 (drawn in a brisk 56 moves).
Finally, a Fried Liver: Stockfish vs AlphaZero, 2018 (draw, 198 moves).
Only the 10 games released in 2017 seem to be accessible via the Opening Explorer. I'll register a complaint with the proper authorities. |
|
Jan-09-19 | | Diademas: The more I watch Alpha0's games, the more I want to give up chess. |
|
Jan-09-19 | | john barleycorn: < Diademas: The more I watch Alpha0's games, the more I want to give up chess.>
Why? You are not forced to play the devil. |
|
Jan-09-19 | | Diademas: <john barleycorn: < Diademas: The more I watch Alpha0's games, the more I want to give up chess.> Why? You are not forced to play the devil.> I'm afraid I struck a deal with the devil in the early '80s that I can't withdraw from... |
|
Jan-09-19 | | john barleycorn: < Diademas: ...
I'm afraid I struck a deal with the devil in the early '80s that I can't withdraw from...> Well, then good luck. |
|
Jan-09-19 | | Diademas: Thanks <John>, I might need it. |
|
Jan-09-19
 | | offramp: <Keypusher>, many thanks. That is exactly what I was looking for. Those games are very interesting! |
|
Jan-10-19
 | | offramp: Another Fried Liver Attack, a win this time.
AlphaZero vs Stockfish, 2018, 55 moves. |
|
Jan-10-19 | | Tiggler: <Unless you run 2 software programs on computers of similar performance you can't reach a definite conclusion as to which software program is the superior performer.> But if one engine is unable to use state-of-the-art hardware, that is hardly a reason to be skeptical of the one which can. |
|
Jan-10-19
 | | keypusher: <offramp> Thanks, cool game! |
|
Jan-16-19 | | scholes: AlphaZero claimed to beat TCEC champion Stockfish. Now AlphaZero's little open-source sister Leela (aka Lc0) breaks through at TCEC: it qualifies for the TCEC superfinal against Stockfish, ahead of Komodo and Houdini 6, by winning Division 3 and Division 2 and finishing 2nd to Stockfish in the Premier Division. https://tcec.chessdom.com/live.html
It is also currently in 2nd place in the CCCC:
https://www.chess.com/computer-ches... |
|
Jan-17-19 | | zanzibar: I'm going to release a rationalized version of the recent AlphaZero--Stockfish games, first focusing on the fixed-opening games. If all the games were available, I'd divide the match into a series of matches, one for each opening (which I think was 20 games per color(?)). In other words, each game for an opening would have its own Event tag. That would make getting the actual stats easy. (Aside: does anybody know of a table of the numbers, as opposed to the colored bars in the supplement? I'd like to know the numbers instead of eyeballing 'em.)

I'm also thinking of removing the misleading "book" comments and instead using the FEN tag in the PGN, putting the opening moves into a comment. That would have the advantage of clarity (i.e., showing which moves the engines actually made). It would also eliminate having to play over the same opening moves in the 40(?) match games.

Hopefully this time round we'll be able to get all the games (now that DeepMind has gotten its paper published). Anyone else have thoughts on this, particularly on using FEN to specify the starting position?
. |
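(For concreteness, a minimal sketch of the FEN-plus-comment scheme described above, assuming the python-chess library; any PGN tool that honors the SetUp/FEN tags would do. The event name, players, and opening position are illustrative, not taken from the actual match files.)

import chess
import chess.pgn

# Position after hypothetical book moves 1.e4 e6 2.d4 d5 (a French Defence).
board = chess.Board("rnbqkbnr/ppp2ppp/4p3/3p4/3PP3/8/PPP2PPP/RNBQKBNR w KQkq d6 0 3")

game = chess.pgn.Game()
game.setup(board)  # writes the [SetUp "1"] and [FEN "..."] tags for us
game.headers["Event"] = "AZ-SF 2018, opening 07"  # one Event per opening, as proposed
game.headers["White"] = "AlphaZero"
game.headers["Black"] = "Stockfish 8"
game.comment = "Book: 1.e4 e6 2.d4 d5 (prescribed, not chosen by the engines)"

print(game)  # the game score now starts at move 3; the book moves live in the comment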
|
Jan-17-19 | | zanzibar: Does anybody have a computer-readable file of the TCEC-2016 openings? A PGN or even a list of FENs would be nice.
And can someone tell me when the openings were used? Was it in the superfinal stage only? (Yes, I might be able to dig that out, but if someone could provide a quick pointer I'd be obliged.) |
|
Jan-18-19 | | zanzibar: Best way to sort the TCEC opening games?
1) ECO code?
2) By S4 diagram?
. |
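(If the ECO route wins out, here is a minimal sorting sketch, again assuming python-chess; the input file name is made up.)

import chess.pgn

games = []
with open("tcec2016_openings.pgn") as handle:
    while True:
        game = chess.pgn.read_game(handle)
        if game is None:  # end of file
            break
        games.append(game)

# Sort by ECO code first, with the Opening tag as a tiebreaker.
games.sort(key=lambda g: (g.headers.get("ECO", "?"),
                          g.headers.get("Opening", "")))

for g in games:
    print(g.headers.get("ECO", "?"), g.headers.get("Opening", "?"))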
|
Jan-18-19 | | zanzibar: RE: <Does anybody have a computer-readable file of the TCEC-2016 openings?> http://www.mediafire.com/file/i311b... has the <[2006, 2008, 2012, 2012 (Topical), 2014, 2016]> Noomen testsuites. (Following the "canonical" link from chessprogramming etc. yields a stale mediafire link: http://blogchess2016.blogspot.de/20... from here:
https://www.chessprogramming.org/TC...) I like this one opening-example PGN, which provides a fuller description than Noomen's: http://www.chess2u.com/t10903-tcec-... I'm trying to fill out the PGN info similarly.
. |
|
Jan-19-19
 | | keypusher: <zanzibar> I have nothing to offer you but best wishes -- I suspect you can do more for yourself than most of us can do for you. I've ordered <Game Changer>, and I'll pass on anything I learn. But I don't think I'll get it for another month. |
|
Jan-19-19 | | zanzibar: Thanks for the good wishes, <kp>. I miss the days when <Pawn and Two> and <PaintMyDragon> (and others, of course) offered valuable help. But I guess I've brought it upon myself, trying to tame the PGN wilds. |
|
Feb-07-19 | | scholes: Leela is playing in the TCEC superfinal now. After 17 games, three wins for each side: https://tcec.chessdom.com/live.html#/ |
|
Feb-07-19
 | | AylerKupp: <<Tiggler> But if one engine is unable to use state-of-the-art hardware, that is hardly a reason to be skeptical of the one which can.> It's not a question of being skeptical; it's just that whenever one engine is running on much more powerful <hardware> than the other engine, the results are inconclusive as to which engine is superior. Remember that we are trying to evaluate the relative superiority of two <software> engines, not two combinations of <hardware> + <software>. At least that's what I thought.

Now, if the time control for the engine running on more powerful hardware were reduced to compensate for its superior hardware capability, then the contest would be more even. As a first-order approximation, if one engine's hardware is capable of executing 100X the instructions of the other engine's hardware, then reducing that engine's time control to 1/100 would allow both engines to execute about the same number of instructions in the same amount of time.

And that's what Figure 2 in "A general reinforcement learning algorithm that masters chess, shogi and Go through self-play" (https://deepmind.com/documents/260/...) shows. When AlphaZero was given the same amount of time to play as Stockfish, AlphaZero won convincingly. When AlphaZero was given 1/30 of the time that Stockfish had, Stockfish outscored AlphaZero with both the White and Black pieces. And when AlphaZero was given only 1/100 of Stockfish's time, Stockfish outperformed AlphaZero as convincingly as AlphaZero had outperformed Stockfish when both engines were given the same amount of time.

So, given that AlphaZero was running on hardware able to execute roughly 100 times the number of instructions in a given amount of time as the hardware that Stockfish was running on (see AlphaZero (Computer) (kibitz #348) above), what word would you use to describe the relative <software> performance of the two engines as a result of their 2 matches? I think that calling it "inconclusive" is being very generous to AlphaZero.

None of this is meant to demean or minimize AlphaZero's accomplishments or those of the DeepMind team. It is a big step forward compared to previous neural-network-based chess engines, and one that may hold much promise. But I just don't think the match conditions conclusively support the premise that AlphaZero, and the neural network technology used to implement it, is inherently superior to the (dare I say?) "classical" chess engine technology used by Stockfish and the other top engines. |
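(A toy calculation of the compensation argument above; all numbers are illustrative, not measured.)

# Instructions executed ~ (instructions per second) x (seconds per move),
# so a 100x hardware edge is cancelled by a 1/100 time control.
sf_speed = 1.0       # Stockfish's hardware, in arbitrary instruction units/sec
az_speed = 100.0     # hardware roughly 100x faster
base_time = 60.0     # seconds per move given to Stockfish

for divisor in (1, 30, 100):
    ratio = (az_speed * base_time / divisor) / (sf_speed * base_time)
    print(f"AlphaZero at 1/{divisor} time: {ratio:.1f}x the instructions executed")
# -> 100.0x at equal time, 3.3x at 1/30 time, 1.0x at 1/100 time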
|
Feb-11-19 | | Allanur: <WorstPlayerEver: <Allanur>
Well, I think the man-on-the-moon project cost about $100B in the 60s. No one questioned why they did not film the whole trip from point A to point B and back. No one. As if cameras did not exist in the 60s! Instead people would rather watch real Hollywood movies. If you dare to question them, you must be 'making a fool out of yourself.' Simple as it is. Care to discuss 'evolution'???>
@worstplayerever, if you *really* think Google broadcasting live matches in 2017 is as hard as filming and televising the Apollo missions in 1969-72, then I am not up to the challenge of elaborating and explaining the differences between the two events.
But I can give you a small hint: Chess.com, which is a remarkably smaller company than Google, regularly broadcasts two engines playing each other live. Still think your analogy works??? |
|
Feb-11-19 | | Allanur: < keypusher: <Allanur> Well, it seems clear that Deep Mind used Stockfish to come up with Stockfish’s moves. What’s your theory for how they came up with AlphaZero’s?>
I do not have any hypothesis on it; I just do not see any reason to think these matches indeed took place and that AZ proved it is far stronger than contemporary technology. But anyway, there are some possibilities, like a human co-operating with Houdini or another engine (or maybe Stockfish itself) and trying several strategies. I do not know, but there are good reasons to doubt the event: for Google it would be so easy to broadcast the matches, but they did not. Why? What reason could have made them play the matches behind closed doors, without prior announcement and without a post-game announcement? Nothing, nothing. They just popped up and 'revealed' they had conquered the chess world. When they were first working on Go, everything was transparent, whereas with Stockfish it was not. |
|
Feb-11-19
 | | alexmagnus: <I do not have any hypothesis on it; I just do not see any reason to think these matches indeed took place and that AZ proved it is far stronger than contemporary technology.> The authors of LC Zero have far fewer resources and compete on equal terms with a newer Stockfish than the one AZ played against... <For Google it would be so easy to broadcast the matches, but they did not. Why? What reason could have made them play the matches behind closed doors, without prior announcement and without a post-game announcement? Nothing, nothing. They just popped up and 'revealed' they had conquered the chess world.> Because, as someone put it, their goal was not to use AI to prove something about chess but to use chess to prove something about AI. |
|
Feb-11-19
 | | alexmagnus: The entire paper was aimed not at the chess community but at the AI community. And this is the main difference from the Go case. As to why they made such a hype around the Go matches: it was the first Go program ever to beat <any human professional>, let alone the world's top players. And what is the sensation in chess, from a layman's point of view? Nada, a computer beat another computer, big yawn... The victory over Stockfish was not as marketable to the general public as the one over Lee Sedol. |
|