Kant said there are two a priori intuitions — space and time. There are also categories, and “the number of the categories in each class is always the same, namely, three”: like unity-plurality-totality, or possibility-existence-necessity. It would be fun to have three a priori intuitions, but only two exist, sigh. Really, though?
Kant probably did not realize that there is a third one — probability, that is, the certainty of our experience. Just like space, probability precedes any experience. Every object is uncertain just as much as it is extended.
The three a priori intuitions are related: space is infinite and undirected, time is infinite and directed, probability is finite and undirected. Physics knows the uncertainty principle — we are uncertain about the relation between time and space, and neither can be intuited with certainty. Probability is as basic and fundamental to our cognition as time and space.
Just as geometry deals with the a priori intuition of space, and mathematical analysis with the intuition of time, the theory of probability deals with the intuition of probability. This is a philosophical justification for studying uncertainty, probability, and Bayesian inference.
Imagine that you have a great idea. You write it down on a napkin and show it to your colleagues; they photograph the napkin with their smartphones and get back to you with investment proposals.
Now, what if, instead of a napkin, one of your colleagues has a laptop or a tablet handy? (more…)
- a client on an old tablet or laptop in your kitchen (sitting on the fridge and also holding a recipe book),
- and a server serving an automatically updated web page with the shopping list to a mobile app.
Every time you run out of something (eggs, sugar, tea, …), you add it to the list of ‘missing’ goods (lookup and predictive input make adding easier). When you go shopping, whatever you added is in the shopping list; when you buy an item, you cross out its entry.
A background knowledge module knows how to measure different things (sugar in kilograms or packets, eggs by count, etc.) and suggests default amounts to buy. If you have to buy an item too often, the suggested amount is automatically increased.
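The background knowledge module could be sketched roughly as follows — a minimal Python sketch, where the class and method names (`BackgroundKnowledge`, `record_purchase`), the seven-day threshold, and the 1.5× increase are all illustrative assumptions, not part of any actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    unit: str              # e.g. "kg", "packet", "count"
    default_amount: float
    purchases: list = field(default_factory=list)  # purchase times, in days

class BackgroundKnowledge:
    """Suggests how much of an item to buy; increases the default
    when purchases happen too often (a hypothetical heuristic)."""

    def __init__(self, too_often_days=7, increase_factor=1.5):
        self.too_often_days = too_often_days
        self.increase_factor = increase_factor
        self.items = {}

    def register(self, name, unit, default_amount):
        self.items[name] = Item(name, unit, default_amount)

    def record_purchase(self, name, day):
        item = self.items[name]
        # If the previous purchase was recent, the default amount was too small.
        if item.purchases and day - item.purchases[-1] < self.too_often_days:
            item.default_amount *= self.increase_factor
        item.purchases.append(day)

    def suggest(self, name):
        item = self.items[name]
        return item.default_amount, item.unit
```

For example, registering sugar at 1 kg and then recording two purchases three days apart would bump the suggested amount to 1.5 kg, while items bought at longer intervals keep their defaults.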
Imagine a web app that sits on a collection of ebooks, shows the user a paragraph from a book, and asks whether they want to:
- get (buy) the whole book to read;
- read another paragraph from this book;
- read a paragraph from a similar book;
- read a paragraph from a different book.
The app can remember the user’s past history to adjust its suggestions. How paragraphs, similar books, and different books are chosen is an interesting question.
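One simple-minded way to choose ‘similar’ and ‘different’ books is bag-of-words cosine similarity between book texts. A hypothetical sketch (the function names and the naive whitespace tokenizer are my assumptions, not how the app actually works):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector of a text (lowercased, whitespace tokens)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_book(current, library, similar=True):
    """Return the title of the most (or least) similar other book."""
    ranked = sorted(
        (t for t in library if t != current),
        key=lambda t: cosine(bow(library[current]), bow(library[t])),
        reverse=similar,
    )
    return ranked[0]
```

A real system would want smarter features (TF-IDF, embeddings) and would weight the user’s past choices, but the ranking skeleton stays the same: score candidate books against the current one and take the top (or the bottom) of the list.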
For testing and development, free text repositories are available, for example Project Gutenberg, among many others.
Paper, slides, and poster as presented at SOCS 2015.
We introduce an approximate search algorithm for fast maximum a posteriori probability estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC). (more…)
An early workshop paper (superseded by current research but still relevant), slides, and a poster.
We introduce a new approach to solving path-finding problems under uncertainty by representing them as probabilistic models and applying domain-independent inference algorithms to the models. (more…)
Anglican is a probabilistic programming language — or better yet, a concept — living in symbiosis with Clojure. The name stands for the Church of England (because we are here in Oxford). To create your own Turing-complete probabilistic models, clone anglican-user and hack away. Or look at the cool examples.
I found my own slides from a talk I gave a year ago about rational meta-reasoning. Do they seem interesting to me because I have degraded during this year?
We introduce a new algorithm for multi-agent path finding, derived from the idea of meta-agent conflict-based search (MA-CBS). (more…)