<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<font face="Lucida Grande">Yesterday, I talked to some of our
undergraduates who are working on the 'scoring and learning AMDPs'
problem, and mentioned David Abel's writeup on analyzing MDP
abstractions. Not all of them were on board when David sent out
his writeup, so I'm forwarding it again.<br>
<br>
dabel_char_mdp_abstractions.pdf is a document from last November
that provides more context and notation than the later document,
so I think it's the best starting place for those who haven't seen
this work yet.<br>
<br>
robot_learning_writeup_5_17-2.pdf is from May and talks about
progress on some of the "missing parts" of the earlier document
(compression and loss in abstract MDPs, properties of good
abstractions, finding near-optimal abstractions).<br>
</font>
<div class="moz-forward-container"><br>
These documents are pretty technical and dense, so it may be worth
reading them together as a group and working through the
implications collectively.<br>
<br>
I also mentioned that it would be useful for the students in this
subgroup to learn about some of the previous work on structure
learning in Bayes nets (which is a very extensive literature).
David Heckerman's tutorial is the classic introduction to this
topic -- fair warning, it covers the basics but also delves into
minutiae. (Chapter 11 contains the core material about using
heuristic search to find Bayes net structures.)<br>
<br>
<a class="moz-txt-link-freetext" href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-06.pdf">https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-95-06.pdf</a><br>
<br>
Daphne Koller has some good tutorial slides:<br>
<br>
<a class="moz-txt-link-freetext" href="http://robotics.stanford.edu/~koller/NIPStut01/tut6.pdf">http://robotics.stanford.edu/~koller/NIPStut01/tut6.pdf</a><br>
<br>
Don't get too bogged down in the subtle details of either of
these sources -- the main goal is to get a general sense of how
heuristic search and Bayesian scoring can be used to discover a
good Bayes net structure from a data set.<br>
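To make the score-then-search idea concrete, here is a minimal Python sketch. It uses a BIC score as a simple stand-in for the Bayesian scores discussed in the tutorials, and for brevity it enumerates small parent sets for a single variable rather than searching over full DAGs; the variable names and toy data are made up for illustration.<br>

```python
import itertools
import math
import random
from collections import Counter

def bic_score(data, child, parents):
    """BIC score of `child` given parent set `parents` on binary data.

    `data` is a list of dicts mapping variable name -> 0/1.
    Higher is better: log-likelihood minus a complexity penalty.
    """
    n = len(data)
    # Counts of (parent configuration, child value) and of parent configs alone.
    joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / marg[cfg]) for (cfg, _), c in joint.items())
    n_params = (2 - 1) * (2 ** len(parents))  # free parameters, binary variables
    return loglik - 0.5 * n_params * math.log(n)

def best_parents(data, child, candidates, max_parents=2):
    """Exhaustively score all small parent sets and return the best one."""
    subsets = (s for k in range(max_parents + 1)
               for s in itertools.combinations(candidates, k))
    return max(subsets, key=lambda ps: bic_score(data, child, ps))

# Toy data: B copies A with 10% flip noise; C is an independent coin flip.
random.seed(0)
data = []
for _ in range(300):
    a = random.random() < 0.5
    b = a ^ (random.random() < 0.1)
    c = random.random() < 0.5
    data.append({"A": int(a), "B": int(b), "C": int(c)})

parents_of_B = best_parents(data, "B", ["A", "C"])
print(parents_of_B)  # the score should pick A and reject the irrelevant C
```

With real data you would run a heuristic search over full DAGs (e.g. greedy hill-climbing over edge additions, deletions, and reversals) rather than enumerating parent sets per variable, but the structure of the method -- a decomposable score plus a search procedure -- is the same one the Heckerman and Koller tutorials describe.<br>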
<br>
If you want (a LOT) more technical detail about Bayes nets and
Bayes net learning, Chapter 8 of Neapolitan's book is a good start
(and if you don't know much about Bayes nets, the earlier chapters
are a pretty good introduction to probability theory, Bayes nets,
parameter estimation, and inference with Bayes nets).<br>
<br>
<a class="moz-txt-link-freetext" href="http://www.cs.technion.ac.il/~dang/books/Learning%20Bayesian%20Networks(Neapolitan,%20Richard).pdf">http://www.cs.technion.ac.il/~dang/books/Learning%20Bayesian%20Networks(Neapolitan,%20Richard).pdf</a><br>
<br>
Marie<br>
<br>
</div>
</body>
</html>