Machines That Learn to Play Games

by Johannes Fürnkranz, Miroslav Kubat

Hardcover

$105.00 


Overview

Now that bosses have replaced droves of workers who would sit around on company time engaged in trivial pursuits with efficient and undistractable machines, scientists are teaching the machines to play games as well. Contributors in computer science, artificial intelligence, game theory, military theory, and other fields explore such dimensions as whether machines should learn how to play games, a survey of the field, human learning in game playing, reinforcement learning and chess, comparison training of chess evaluation functions, a tutorial on Hoyle, acquiring Go knowledge from game records, and learning to play strong poker. Annotation c. Book News, Inc., Portland, OR (booknews.com)

Product Details

ISBN-13: 9781590330210
Publisher: Nova Science Publishers, Incorporated
Publication date: 09/01/2001
Series: Advances in Computation Series
Pages: 298
Product dimensions: 7.10(w) x 10.20(h) x 1.00(d)

Table of Contents

1 Should Machines Learn How to Play Games?
1.1 Motivation
1.2 Limitations of search
1.3 Merits of learning
1.4 Bon Voyage!
2 Machine Learning in Games: A Survey
2.1 Samuel's Legacy
2.1.1 Machine Learning
2.1.2 Game Playing
2.1.3 Chapter overview
2.2 Book Learning
2.2.1 Learning to choose opening variations
2.2.2 Learning to extend the opening book
2.2.3 Learning from mistakes
2.2.4 Learning from simulation
2.3 Learning Search Control
2.4 Evaluation Function Tuning
2.4.1 Supervised learning
2.4.2 Comparison training
2.4.3 Reinforcement learning
2.4.4 Temporal-difference learning
2.4.5 Issues for evaluation function learning
2.5 Learning Patterns and Plans
2.5.1 Advice-taking
2.5.2 Cognitive models
2.5.3 Pattern-based learning systems
2.5.4 Explanation-based learning
2.5.5 Pattern induction
2.5.6 Learning playing strategies
2.6 Opponent Modeling
2.7 Conclusion
3 Human Learning in Game Playing
3.1 Introduction
3.2 Research on memory and perception
3.2.1 Memory
3.2.2 Perception
3.2.3 Modeling experts' perception and memory: The chunking and template theories
3.3 Research on problem solving
3.3.1 De Groot's results
3.3.2 Theories and computer models of problem solving
3.4 Empirical studies of learning
3.4.1 Short-range learning
3.4.2 Medium-range learning
3.4.3 Long-range learning
3.5 Human and machine learning
3.5.1 How has human learning informed machine learning?
3.5.2 What does machine learning tell us about human learning?
3.6 Conclusions
4 Toward Opening Book Learning
4.1 Introduction
4.2 Basic Requirements
4.3 Choosing Book Moves
4.4 Book Extension
4.5 Implementation Aspects
4.6 Discussion and Enhancements
4.7 Outlook
5 Reinforcement Learning and Chess
5.1 Introduction
5.2 KnightCap
5.2.1 Board representation
5.2.2 Search algorithm
5.2.3 Null moves
5.2.4 Search extensions
5.2.5 Asymmetries
5.2.6 Transposition Tables
5.2.7 Move ordering
5.2.8 Parallel search
5.2.9 Evaluation function
5.2.10 Modification for TDLeaf(λ)
5.3 The TD(λ) algorithm applied to games
5.4 Minimax Search and TD(λ)
5.5 TDLeaf(λ) and Chess
5.5.1 Experiments with KnightCap
5.5.2 Discussion
5.6 Experiment with Backgammon
5.6.1 LGammon
5.6.2 Experiment with LGammon
5.7 Future Work
5.8 Conclusion
6 Comparison Training of Chess Evaluation Functions
6.1 Introduction
6.2 Comparison Training for Arbitrary-Depth Searches
6.3 Tuning the SCP evaluation function
6.3.1 Experimental details
6.3.2 Simple 1-ply training
6.3.3 Training with 1-ply search plus expansions
6.4 Tuning Deep Blue's evaluation function
6.4.1 Modified training algorithm
6.4.2 Effect on the Kasparov-Deep Blue rematch
6.5 Discussion
7 Feature Construction for Game Playing
7.1 Introduction
7.2 Evaluation Functions
7.3 Feature Overlap
7.4 Constructing Overlapping Features
7.4.1 Parameter tuning
7.4.2 Higher order expansion
7.4.3 Quasi-random methods
7.4.4 Knowledge derivation
7.5 Directions for Constructing Overlapping Features
7.5.1 Layered learning
7.5.2 Compression
7.5.3 Teachable systems
7.6 Discussion
8 Learning to Play Expertly: A Tutorial on Hoyle
8.1 Introduction
8.2 A Game-Playing Vocabulary
8.3 Underlying Principles
8.3.1 Useful knowledge
8.3.2 The Advisors
8.3.3 The architecture
8.3.4 Weight learning for voting
8.4 Perceptual Enhancement
8.4.1 Patterns
8.4.2 Zones
8.5 An Empirical Framework
8.6 Results
8.7 Conclusion: Why Hoyle Works
9 Acquisition of Go Knowledge from Game Records
9.1 Introduction
9.1.1 Purpose
9.1.2 Classification of Go Knowledge
9.1.3 Two Approaches
9.2 Rules of Go
9.3 A Deductive Approach
9.3.1 System
9.3.2 Rule Acquisition
9.4 An Evolutionary Approach
9.4.1 Algorithm
9.4.2 Application to Tsume-Go
9.5 Conclusions
10 Honte, a Go-Playing Program Using Neural Nets
10.1 Introduction
10.1.1 Rules
10.1.2 Strength of programs
10.2 General Approach in Honte
10.3 Joseki Library
10.4 Shape Evaluating Neural Net
10.5 Alpha-Beta Search
10.6 Influence
10.7 Neural Nets Evaluating Safety and Territory
10.8 Evaluation of Honte
10.9 Conclusions
11 Learning to Play Strong Poker
11.1 Introduction
11.2 Texas Hold'em
11.3 Requirements for a World-Class Poker Player
11.4 Loki's Architecture
11.5 Implicit Learning
11.6 Explicit Learning
11.7 Experiments
11.8 Ongoing Research
11.9 Conclusions
Bibliography
Contributors
Person Index
Subject Index