[Pachi] Play multiple games per descent ?

Erik S. Steinmetz erik at steinmetz.org
Tue May 17 23:15:20 CEST 2016


And for MCTS in general, there’s only one place to start: Browne et al.’s 2012 “A Survey of Monte Carlo Tree Search Methods”. A lovely overview with 240 references.


> On May 17, 2016, at 3:10 PM, Erik S. Steinmetz <erik at steinmetz.org> wrote:
> 
> Greetings,
> 
> Working from paper to paper is not necessarily the worst thing in the world; just keep a nice bibtex file of the good ones for your own work.
> 
> Sadly, there is no longer a comprehensive list of Go papers online, though apparently there is a list of some sort on CiteULike. Google Scholar is not such a bad place to hunt things down, either. The best searchable bibliography database for computer science is of course the dblp (http://dblp.uni-trier.de/). 
> 
> I can say that for a short but reasonably comprehensive list, especially on the parallelization of MCTS for Go, you won’t go wrong by looking through the 31 references found in this more recent paper:
> 
> @inproceedings{Niekerk:2012pa,
>   author    = {Francois van Niekerk and
>                Gert{-}Jan van Rooyen and
>                Steve Kroon and
>                Cornelia P. Inggs},
>   editor    = {Jan H. Kroeze and
>                Ruth de Villiers},
>   title     = {Monte-Carlo tree search parallelisation for computer go},
>   booktitle = {2012 South African Institute of Computer Scientists and Information
>                Technologists Conference, {SAICSIT} '12, Pretoria, South Africa, October
>                1-3, 2012},
>   pages     = {129--138},
>   publisher = {{ACM}},
>   year      = {2012}
> }
> 
> It makes for a nice “short-list” of important works in the field.
> 
> My bibtex file now has about 200 entries in it, and I have a folder with most of those papers in it (I’m finishing up my Ph.D., and part of it concerned parallelizing MCTS Go.) I’ll be happy to send you a copy of that file directly, if you’re interested.
> 
> All the best,
> 
> Erik
> 
> 
>> On May 17, 2016, at 2:56 PM, lemonsqueeze <lemonsqueeze at free.fr> wrote:
>> 
>> Awesome, thanks for all the answers.
>> I have a lot to catch up on, so much to read!
>> 
>> Btw, is there a list of influential papers somewhere?
>> I'm kind of working my way through references in other papers, and somehow I
>> have a feeling this may not be the best way to go about it =)
>> 
>> 
>> On 05/17/2016 07:37 PM, Erik S. Steinmetz wrote:
>>> Greetings,
>>> 
>>> Since I’m sorting through these references right now, I can point you at
>>> a 2008 paper by Chaslot, Winands, and van den Herik discussing various
>>> kinds of parallelization; the relevant paragraphs on leaf parallelization
>>> are in Section 3.1:
>>> 
>>> Leaf parallelization introduced by Cazenave and Jouandeau [3] is one of
>>> the easiest ways to parallelize MCTS. To formulate it in
>>> machine-dependent terms, only one thread traverses the tree and adds one
>>> or more nodes to the tree when a leaf node is reached (phases 1 and 2).
>>> Next, starting from the leaf node, independent simulated games are
>>> played for each available thread (phase 3). When all games are finished,
>>> the result of all these simulated games is propagated backwards through
>>> the tree by one single thread (phase 4). Leaf parallelization is
>>> depicted in Fig. 2a.
>>> 
>>> Leaf parallelization seems interesting because its implementation is
>>> easy and does not require any mutexes. However, two problems appear.
>>> First, the time required for a simulated game is highly unpredictable.
>>> Playing n games using n different threads takes more time on average
>>> than playing one single game using one thread, since the program needs
>>> to wait for the longest simulated game. Second, information is not
>>> shared. For instance, if 16 threads are available, and 8 (faster)
>>> finished games are all losses, it will be highly probable that most games
>>> will lead to a loss. Therefore, playing 8 more games is a waste of
>>> computational power. To decrease the waiting time, the program might
>>> stop the simulations that are still running when the results of the
>>> finished simulations become available. This strategy would enable the
>>> program to traverse the tree more often, but some threads would be idle.
>>> Leaf parallelization can be performed inside an SMP environment, or even
>>> on a cluster using MPI (Message Passing Interface) communication.
>>> 
>>> [Fig. 2 in the paper: (a) Leaf parallelization, (b) Root parallelization,
>>> (c) Tree parallelization with global mutex, (d) and with local mutexes]
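>>> 
>>> To make the four phases above concrete, here is a rough leaf-parallel
>>> iteration in C with OpenMP. It is only a sketch under my own naming, not
>>> Pachi’s actual code: Node, select_and_expand(), run_playout() and
>>> backup() are hypothetical placeholders.
>>> 
>>>     #include <omp.h>
>>> 
>>>     typedef struct node Node;                      /* hypothetical stand-in for a tree node */
>>>     Node *select_and_expand(Node *root);           /* phases 1+2, single thread             */
>>>     int   run_playout(Node *leaf);                 /* phase 3: one random game, 1 = win     */
>>>     void  backup(Node *leaf, int wins, int games); /* phase 4, single thread                */
>>> 
>>>     void leaf_parallel_iteration(Node *root, int nthreads)
>>>     {
>>>         Node *leaf = select_and_expand(root);      /* one descent of the tree */
>>> 
>>>         int wins = 0;
>>>         /* one independent playout per available thread, all from the same leaf */
>>>         #pragma omp parallel for reduction(+:wins) num_threads(nthreads)
>>>         for (int i = 0; i < nthreads; i++)
>>>             wins += run_playout(leaf);
>>> 
>>>         backup(leaf, wins, nthreads);              /* one aggregated update */
>>>     }
>>> 
>>> Note that the parallel loop only returns when the slowest playout finishes,
>>> which is exactly the waiting problem described above.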
>>> 
>>> 
>>> 
>>> Here’s the ref in bibtex format:
>>> 
>>> @inproceedings{Chaslot:2008fk,
>>>   author    = {Guillaume Chaslot and
>>>                Mark H. M. Winands and
>>>                H. Jaap van den Herik},
>>>   title     = {Parallel {M}onte-{C}arlo {T}ree {S}earch},
>>>   booktitle = {Computers and Games, 6th International Conference, {CG} 2008,
>>>                Beijing, China, September 29 - October 1, 2008. Proceedings},
>>>   pages     = {60--71},
>>>   year      = {2008}
>>> }
>>> 
>>> You may also want to look at Kato 2008, where a client-server model is
>>> set up and result propagation does not wait for all the results to come
>>> in, thus partially alleviating one of the problems above (a rough sketch
>>> follows the reference):
>>> 
>>> @inproceedings{Kato:2008pm,
>>>   author    = {Hideki Kato and Ikuo Takeuchi and others},
>>>   title     = {Parallel {M}onte-{C}arlo {T}ree {S}earch with Simulation Servers},
>>>   booktitle = {13th Game Programming Workshop ({GPW}-08)},
>>>   year      = {2008}
>>> }
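>>> 
>>> The sketch below illustrates that non-blocking propagation idea. It is
>>> purely illustrative, not Kato’s actual protocol; recv_any_result(),
>>> backup_path() and the Result struct are hypothetical placeholders for
>>> whatever the simulation servers return.
>>> 
>>>     typedef struct node Node;
>>>     typedef struct { Node *leaf; int win; } Result;    /* hypothetical: one finished playout */
>>> 
>>>     Result recv_any_result(void);                      /* block until any simulation
>>>                                                           server has a result        */
>>>     void backup_path(Node *root, Node *leaf, int win); /* propagate a single result  */
>>> 
>>>     void collect_results(Node *root, int pending)
>>>     {
>>>         while (pending-- > 0) {
>>>             Result r = recv_any_result();      /* take whichever result is ready */
>>>             backup_path(root, r.leaf, r.win);  /* back it up immediately: the tree
>>>                                                   is up to date for the next descent
>>>                                                   and slow servers no longer hold up
>>>                                                   the fast ones */
>>>         }
>>>     }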
>>> 
>>> Obviously, Beijing was the place to be in fall of 2008! :-)
>>> 
>>> I hope that helps,
>>> 
>>> Erik
>>> 
>>> 
>>>> On May 17, 2016, at 10:12 AM, Petr Baudis <pasky at ucw.cz> wrote:
>>>> 
>>>> This was one of the ideas when people were exploring MCTS
>>>> parallelization: batch playouts and play them in parallel after
>>>> a single descent - it's essentially very similar to what you
>>>> described.  But it didn't work really well; the details are in
>>>> the respective papers.
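>>>> 
>>>> For concreteness, the batched scheme looks roughly like this (my own
>>>> sketch with hypothetical helpers, not the code from any of the papers
>>>> or from the uct_multigames patch):
>>>> 
>>>>     typedef struct node Node;
>>>>     Node *uct_select_and_expand(Node *root);        /* hypothetical helper: one UCT descent */
>>>>     int   run_playout(Node *leaf);                  /* one random game, 1 = win             */
>>>>     void  backup(Node *leaf, int wins, int games);  /* update the chosen path               */
>>>> 
>>>>     void uct_iteration_batched(Node *root, int k)
>>>>     {
>>>>         Node *leaf = uct_select_and_expand(root);   /* path chosen once...       */
>>>>         int wins = 0;
>>>>         for (int i = 0; i < k; i++)
>>>>             wins += run_playout(leaf);              /* ...but reused for k games */
>>>>         backup(leaf, wins, k);
>>>>     }
>>>> 
>>>> Because the descent happens only once per k games, the statistics that
>>>> guide exploration are refreshed k times less often, which is one plausible
>>>> reason the batched variants lose strength.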
>>>> 
>>>> On Tue, May 17, 2016 at 05:05:24PM +0200, lemonsqueeze wrote:
>>>>> Hi,
>>>>> 
>>>>> Playing with my moggy test suite got me wondering about the difference
>>>>> between raw playout speed and in-game speed (about 2200 games/s vs
>>>>> 1600 games/thread/s here).
>>>>> 
>>>>> I guess it's probably unreasonable to expect something similar with the
>>>>> additional thread synchro and tree logic, but still it got me wondering:
>>>>> since we're going to visit each node many times, would it make sense
>>>>> to play more than one game per descent? I'm playing with this patch in my
>>>>> uct_multigames branch; playing 100 games per descent makes it about 10%
>>>>> faster (~1800 games/s).
>>>>> Does this sound like a good idea at all?
>>>>> 
>>>>> It seems to be losing a lot though; maybe this just breaks UCT badly =)
>>>>> 
>>>>> Cheers
>>>>> _______________________________________________
>>>>> Pachi mailing list
>>>>> Pachi at v.or.cz
>>>>> http://rover.ms.mff.cuni.cz/mailman/listinfo/pachi
>>>>> 
>>>> 
>>>> --
>>>> Petr Baudis
>>>> If you have good ideas, good data and fast computers,
>>>> you can do almost anything. -- Geoffrey Hinton
>>>> _______________________________________________
>>>> Pachi mailing list
>>>> Pachi at v.or.cz
>>>> http://rover.ms.mff.cuni.cz/mailman/listinfo/pachi
> _______________________________________________
> Pachi mailing list
> Pachi at v.or.cz
> http://rover.ms.mff.cuni.cz/mailman/listinfo/pachi
