<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Offtopia &#187; Machine Learning</title>
	<atom:link href="http://www.offtopia.net/wp/?cat=20&#038;feed=rss2" rel="self" type="application/rss+xml" />
	<link>http://www.offtopia.net/wp</link>
	<description>nothing personal</description>
	<lastBuildDate>Mon, 01 Oct 2018 13:40:51 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.8.5</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>anglican.ml</title>
		<link>http://www.offtopia.net/wp/?p=269</link>
		<comments>http://www.offtopia.net/wp/?p=269#comments</comments>
		<pubDate>Fri, 07 Oct 2016 13:14:30 +0000</pubDate>
		<dc:creator>dvd</dc:creator>
				<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Machine Learning]]></category>

		<guid isPermaLink="false">http://www.offtopia.net/?p=269</guid>
		<description><![CDATA[http://anglican.ml/, the proper domain for the Anglican way of machine learning.
]]></description>
			<content:encoded><![CDATA[<p><a href="http://anglican.ml/">http://anglican.ml/</a>, the proper domain for the <a href="http://bitbucket.org/probprog/anglican/"><strong>Anglican</strong></a> way of <strong>m</strong>achine <strong>l</strong>earning.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.offtopia.net/wp/?feed=rss2&amp;p=269</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Immanuel Kant and Probability</title>
		<link>http://www.offtopia.net/wp/?p=255</link>
		<comments>http://www.offtopia.net/wp/?p=255#comments</comments>
		<pubDate>Thu, 08 Oct 2015 20:18:21 +0000</pubDate>
		<dc:creator>dvd</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Philosophy]]></category>
		<category><![CDATA[Probability]]></category>

		<guid isPermaLink="false">http://www.offtopia.net/?p=255</guid>
		<description><![CDATA[ Kant said: there are two a priori intuitions &#x2014; space and time. There are also categories, and &#8220;the number of the categories in each class is always the same, namely, three&#8221;, like unity-plurality-totality, or possibility-existence-necessity. It would be fun to have three a priori intuitions, but only two exist, sigh. Really though?

Kant probably did [...]]]></description>
			<content:encoded><![CDATA[<p> Kant said: there are two <em>a priori</em> intuitions &#x2014; space and time. There are also categories, and &#8220;the number of the categories in each class is always the same, namely, three&#8221;, like unity-plurality-totality, or possibility-existence-necessity. It would be fun to have three <em>a priori</em> intuitions, but only two exist, sigh. Really though?<br />
<span id="more-255"></span><br />
Kant probably did not realize: there is a third one &#x2014; probability, to wit, the certainty of our experience. Just like space, probability precedes any experience. Every object is as uncertain as it is extended.</p>
<p>The three <em>a priori</em> intuitions are related &#x2014; infinite and undirected space, infinite and directed time, finite and undirected probability. Physics knows of the <em>uncertainty principle</em>: we are uncertain about the relation of time and space; neither time nor space can be intuited with certainty. Probability is as basic and fundamental as time and space for our cognition.</p>
<p>Just like geometry deals with the <em>a priori</em> intuition of space, and mathematical analysis with the intuition of time, the theory of probability deals with the intuition of probability. There is a philosophical justification for studying uncertainty, probability, and Bayesian inference.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.offtopia.net/wp/?feed=rss2&amp;p=255</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Maximum a Posteriori Estimation by Search in Probabilistic Programs</title>
		<link>http://www.offtopia.net/wp/?p=235</link>
		<comments>http://www.offtopia.net/wp/?p=235#comments</comments>
		<pubDate>Wed, 10 Jun 2015 20:33:25 +0000</pubDate>
		<dc:creator>dvd</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Machine Learning]]></category>

		<guid isPermaLink="false">http://www.offtopia.net/?p=235</guid>
		<description><![CDATA[Paper, slides, and poster as presented at SOCS 2015.
We introduce an approximate search algorithm for fast maximum a posteriori probability estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC). Probabilistic programs represent probabilistic models with a varying number of mutually dependent finite, countable, and continuous random variables. BaMC is an anytime MAP search [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://arxiv.org/abs/1504.06848">Paper</a>, <a href="http://offtopia.net/bamc-slides.pdf">slides</a>, and <a href="http://offtopia.net/bamc-poster/">poster</a> as presented at <a href="http://www.ise.bgu.ac.il/socs2015/">SOCS 2015</a>.</p>
<p>We introduce an approximate search algorithm for fast maximum a posteriori probability estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC).<span id="more-235"></span> Probabilistic programs represent probabilistic models with varying number of mutually dependent finite, countable, and continuous random variables. BaMC is an anytime MAP search algorithm applicable to any combination of random variables and dependencies. We compare BaMC to other MAP estimation algorithms and show that BaMC is faster and more robust on a range of probabilistic models.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.offtopia.net/wp/?feed=rss2&amp;p=235</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Path Finding under Uncertainty through Probabilistic Inference</title>
		<link>http://www.offtopia.net/wp/?p=225</link>
		<comments>http://www.offtopia.net/wp/?p=225#comments</comments>
		<pubDate>Mon, 08 Jun 2015 07:43:06 +0000</pubDate>
		<dc:creator>dvd</dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.offtopia.net/?p=225</guid>
		<description><![CDATA[An early workshop paper (superseded by current research but still relevant), slides, and a poster.
Abstract
We introduce a new approach to solving path-finding problems under uncertainty by representing them as probabilistic models and applying domain-independent inference algorithms to the models. This approach separates problem representation from the inference algorithm and provides a [...]]]></description>
			<content:encoded><![CDATA[<p>An early workshop <a href="http://arxiv.org/abs/1502.07314">paper</a> (superseded by current research but still relevant), <a href="http://offtopia.net/ctp-pp-slides.pdf">slides</a>, and a <a href="http://offtopia.net/ctp-pp-poster/">poster</a>.</p>
<h3>Abstract</h3>
<p>We introduce a new approach to solving path-finding problems under uncertainty by representing them as probabilistic models and applying domain-independent inference algorithms to the models. <span id="more-225"></span> This approach separates problem representation from the inference algorithm and provides a framework for efficient learning of path-finding policies. We evaluate the new approach on the Canadian Traveller Problem, which we formulate as a probabilistic model, and show how probabilistic inference allows efficient stochastic policies to be obtained for this problem.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.offtopia.net/wp/?feed=rss2&amp;p=225</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Anglican the Probabilistic Programming Concept</title>
		<link>http://www.offtopia.net/wp/?p=210</link>
		<comments>http://www.offtopia.net/wp/?p=210#comments</comments>
		<pubDate>Tue, 05 May 2015 22:44:57 +0000</pubDate>
		<dc:creator>dvd</dc:creator>
				<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Machine Learning]]></category>

		<guid isPermaLink="false">http://www.offtopia.net/?p=210</guid>
		<description><![CDATA[Anglican is a probabilistic programming language, or better yet, a concept, living in symbiosis with Clojure. Anglican stands for Church of England (because we are here in Oxford). To create your Turing-complete probabilistic models, clone anglican-user and hack away. Or, look at cool examples.
Read more&#8230;
]]></description>
			<content:encoded><![CDATA[<p><a href="https://bitbucket.org/dtolpin/anglican">Anglican</a> is a probabilistic programming language, or better yet, a concept, living in symbiosis with <a href="http://clojure.org" target="_blank">Clojure</a>. Anglican stands for <a href="http://projects.csail.mit.edu/church/wiki/Church" target="_blank">Church</a> of England (because we are here in <a href="http://www.robots.ox.ac.uk/">Oxford</a>). To create your Turing-complete probabilistic models, clone <a href="https://bitbucket.org/dtolpin/anglican-user">anglican-user</a> and <a href="https://bitbucket.org/dtolpin/anglican-user/src/HEAD/doc/intro.md">hack away</a>. Or, look at cool <a href="http://www.robots.ox.ac.uk/~fwood/anglican/examples/index.html" target="_blank">examples</a>.</p>
<p><a href="https://bitbucket.org/dtolpin/anglican-demo-paper/src/HEAD/paper/paper.pdf">Read more&#8230;</a></p>
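Anglican programs themselves are Clojure code built around the sample and observe primitives; as a rough sketch of what those two primitives mean, here is a Python analogue based on plain likelihood weighting (the model, names, and numbers are illustrative, not Anglican's API):

```python
import math
import random

# Hedged Python analogue of an Anglican-style program: `sample` draws a
# latent variable from its prior, `observe` weights the execution by the
# likelihood of data. Here both are inlined into one weighted execution.

def gauss_logpdf(x, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def run_model(rng):
    """One weighted execution: infer the mean of a Gaussian from data."""
    log_weight = 0.0
    mu = rng.gauss(0.0, 10.0)               # "sample": draw from the prior
    for y in [9.0, 10.5, 10.0]:             # "observe": weight by likelihood
        log_weight += gauss_logpdf(y, mu, 1.0)
    return mu, log_weight

def posterior_mean(n=20000, seed=1):
    rng = random.Random(seed)
    runs = [run_model(rng) for _ in range(n)]
    m = max(w for _, w in runs)             # stabilize the exponentials
    ws = [math.exp(w - m) for _, w in runs]
    return sum(mu * w for (mu, _), w in zip(runs, ws)) / sum(ws)
```

Calling posterior_mean() gives an estimate close to the analytic posterior mean of about 9.8; Anglican itself provides much better inference algorithms than this naive likelihood weighting.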
]]></content:encoded>
			<wfw:commentRss>http://www.offtopia.net/wp/?feed=rss2&amp;p=210</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Output-Sensitive Adaptive MH for Probabilistic Programs</title>
		<link>http://www.offtopia.net/wp/?p=200</link>
		<comments>http://www.offtopia.net/wp/?p=200#comments</comments>
		<pubDate>Wed, 10 Dec 2014 10:20:18 +0000</pubDate>
		<dc:creator>dvd</dc:creator>
				<category><![CDATA[Machine Learning]]></category>

		<guid isPermaLink="false">http://www.offtopia.net/?p=200</guid>
		<description><![CDATA[We introduce an adaptive output-sensitive inference algorithm for MCMC and probabilistic programming, Adaptive Random
Database. The algorithm is based on a single-site updating Metropolis-Hastings sampler, the Random Database (RDB)
algorithm. Adaptive RDB (ARDB) differs from the original RDB in that the schedule of selecting variables proposed for modification
is adapted based on the output of the probabilistic program, rather than being fixed and uniform. We show that ARDB still
converges to the correct distribution. We compare ARDB to RDB on several test problems highlighting different aspects of the adaptation
scheme.]]></description>
			<content:encoded><![CDATA[<p>A <a href="http://www.offtopia.net/nips-pp-ws-2014-ardb-poster/">poster</a> for the <a href="http://probabilistic-programming.org/wiki/NIPS*2014_Workshop">3rd NIPS Workshop on Probabilistic Programming</a>; also available as <a href="http://www.offtopia.net/nips-pp-ws-2014-ardb-poster/poster.pdf">A0 PDF</a>. <a href="http://www.offtopia.net/almh-slides.pdf">Slides</a> for a 15-minute talk.</p>
<h4>Abstract</h4>
<p><span id="more-200"></span></p>
<p>We introduce an adaptive output-sensitive Metropolis-Hastings algorithm for probabilistic models expressed as programs, Adaptive Lightweight Metropolis-Hastings (AdLMH). The algorithm extends Lightweight Metropolis-Hastings (LMH) by adjusting the probabilities of proposing random variables for modification to improve convergence of the program output. We show that AdLMH converges to the correct equilibrium distribution and compare convergence of AdLMH to that of LMH on several test problems to highlight different aspects of the adaptation scheme. We observe consistent improvement in convergence on the test problems.</p>
<p><a href="http://arxiv.org/abs/1501.05677">Full paper.</a></p>
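As a toy illustration of the general idea only (single-site proposals with an adaptively tuned selection schedule), not the exact AdLMH rule from the paper, consider this Python sketch; the reward heuristic and all names are illustrative:

```python
import math
import random

# Toy sketch of output-sensitive adaptation: single-site Metropolis updates
# over a 2-variable model, with per-variable selection probabilities adapted
# online. This is NOT the paper's AdLMH (which also corrects the acceptance
# ratio for the adapted schedule).

def log_target(x):
    # Toy unnormalized log-density: two independent standard normals.
    return -0.5 * (x[0] ** 2 + x[1] ** 2)

def adaptive_single_site_mh(steps=5000, floor=0.1, rate=0.01, seed=0):
    rng = random.Random(seed)
    x = [0.0, 0.0]
    probs = [0.5, 0.5]          # per-variable selection probabilities
    samples = []
    for _ in range(steps):
        # Select a single site according to the adapted schedule.
        i = 0 if rng.random() < probs[0] else 1
        proposal = list(x)
        proposal[i] += rng.gauss(0.0, 1.0)
        # Standard Metropolis accept/reject for the symmetric proposal.
        if math.log(rng.random() + 1e-300) < log_target(proposal) - log_target(x):
            reward = 1.0        # the output changed: reward this site
            x = proposal
        else:
            reward = 0.0
        # Move site i's selection probability toward its reward, then floor
        # and renormalize so every site stays reachable (a condition needed
        # for convergence to the correct target distribution).
        probs[i] += rate * (reward - probs[i])
        probs = [max(floor, p) for p in probs]
        total = sum(probs)
        probs = [p / total for p in probs]
        samples.append(tuple(x))
    return samples, probs
```

Flooring and renormalizing the schedule keeps every variable selectable, which is the kind of condition required for an adaptive sampler to still converge to the correct equilibrium distribution.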
]]></content:encoded>
			<wfw:commentRss>http://www.offtopia.net/wp/?feed=rss2&amp;p=200</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Slides for my Tea Talk</title>
		<link>http://www.offtopia.net/wp/?p=187</link>
		<comments>http://www.offtopia.net/wp/?p=187#comments</comments>
		<pubDate>Wed, 01 Oct 2014 22:55:22 +0000</pubDate>
		<dc:creator>dvd</dc:creator>
				<category><![CDATA[Computer Science]]></category>
		<category><![CDATA[Machine Learning]]></category>

		<guid isPermaLink="false">http://www.offtopia.net/?p=187</guid>
		<description><![CDATA[My Tea Talk slides, given on October 1st, 2014.
]]></description>
			<content:encoded><![CDATA[<p>My <a href="http://offtopia.net/60days-of-research.pdf">Tea Talk slides</a>, given on October 1st, 2014.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.offtopia.net/wp/?feed=rss2&amp;p=187</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
