- doing it quick and dirty;
- starting over, cutting down, and still doing it right.

*Quick and dirty* is like giving up on antiseptics. The patient will die anyway, flesh rotting from infection. But you “did everything you could”.

*Starting over* is like giving up on anesthesia. It’s hard and unpleasant. But the software project has a chance to get back on track.

It’s not a choice of how to save the project. It’s a choice of whether to save the project or to cover your neck (neck is a euphemism).

- Test-driven development results in programs which work well.
- Bug-driven development results in programmers who work hard.

Kant probably did not realize that there is a third one: probability, that is, the certainty of our experience. Just like space, probability precedes any experience. Every object is uncertain as much as it is extended.

The three *a priori* intuitions are related: infinite and undirected space, infinite and directed time, finite and undirected probability. Physics knows the *uncertainty principle*: we are uncertain about the relation of time and space, since neither time nor space can be intuited with certainty. Probability is as basic and fundamental for our cognition as time and space.

Just like geometry deals with the *a priori* intuition of space, and mathematical analysis with the intuition of time, the theory of probability deals with the intuition of probability. This is a philosophical justification for studying uncertainty, probability, and Bayesian inference.

Now, what if instead of a napkin one of your colleagues has a *laptop* or a *tablet* handy? Imagine that you just **grab** their laptop or tablet, **enter the URL** enapk.in, **type in or draw** your idea, and let your colleagues **scan the barcode** or **copy the URL** of this napkin. Napkins are stored forever, but are only accessible through their short URLs (just like “tiny URLs”).

This way, any computer is just like a napkin: it does not require a log-in to take notes or express ideas. Everyone with physical access to the napkin at the time of writing can later retrieve and use it.
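A minimal sketch of how such short napkin URLs could be issued; the names `save_napkin` and `fetch_napkin` and the in-memory store are hypothetical, and a real service would need durable storage:

```python
import secrets
import string

# Hypothetical in-memory store: short id -> napkin contents.
_store = {}

_ALPHABET = string.ascii_lowercase + string.digits

def save_napkin(text, length=6):
    """Store a napkin and return a short id, as in enapk.in/<id>."""
    while True:
        short_id = "".join(secrets.choice(_ALPHABET) for _ in range(length))
        if short_id not in _store:  # retry on the (rare) collision
            _store[short_id] = text
            return short_id

def fetch_napkin(short_id):
    """Anyone who knows the short URL can retrieve the napkin later."""
    return _store.get(short_id)
```

Since napkins are only reachable through their short URLs, the id doubles as the access token: no log-in, just the URL or its barcode.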

- a client on an *old tablet or laptop in your kitchen* (sitting on the fridge and also holding a recipe book), and
- a server serving a web page with the shopping checklist, automatically updated, to a *mobile app*.

Every time you **run out** of something (eggs, sugar, tea, …), you **add** it to the list of ‘missing’ goods (lookup and predictive input make adding easier). When you **go shopping**, whatever you added is in the shopping list; when you buy an item, you **cross out** the entry.

A background **knowledge module** knows how to *measure* different things (sugar in kilograms or packets, eggs by count, etc.) and suggests default amounts to buy. If you have to buy something too often, the suggested amount is automatically increased.
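A sketch of such a knowledge module, assuming a hypothetical defaults table and a simple rule that doubles the amount when purchases come too frequently (all names, units, and thresholds here are made up for illustration):

```python
# Hypothetical defaults: item -> (unit, default amount to buy).
DEFAULTS = {
    "sugar": ("kg", 1),
    "eggs": ("pcs", 10),
    "tea": ("packets", 1),
}

def suggest_amount(item, days_since_last_purchase,
                   current_amount=None, too_often_days=7):
    """Suggest how much of `item` to buy, and in what unit.

    If the previous purchase was too recent, we ran out quickly,
    so the suggested amount is doubled for next time.
    """
    unit, default = DEFAULTS.get(item, ("pcs", 1))
    amount = current_amount if current_amount is not None else default
    if days_since_last_purchase < too_often_days:
        amount *= 2
    return amount, unit
```

The same structure extends naturally: the defaults table can be learned from past purchases instead of being hard-coded.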

- get (**buy**) the whole book to read;
- read **another** paragraph from this book;
- read a paragraph from a **similar** book;
- read a paragraph from a **different** book.

The app can remember the user’s past history to adjust its suggestions. How *paragraphs* are picked, and how *similar* and *different* books are chosen, is an interesting question.
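One simple, purely illustrative answer to the “similar book” question is vocabulary overlap between texts; a real app would use a better similarity model, but this shows the shape of the choice:

```python
def _words(text):
    """Lower-cased vocabulary of a text."""
    return set(text.lower().split())

def similarity(book_a, book_b):
    """Jaccard similarity of the books' vocabularies, in [0, 1]."""
    a, b = _words(book_a), _words(book_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def most_similar(book, candidates):
    """Pick the candidate that overlaps most with `book`;
    the *different* book would be the argmin instead."""
    return max(candidates, key=lambda c: similarity(book, c))
```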

For testing and development, free text repositories are available, for example Project Gutenberg, among many others.

We introduce an approximate search algorithm for fast maximum a posteriori probability estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC). Probabilistic programs represent probabilistic models with a varying number of mutually dependent finite, countable, and continuous random variables. BaMC is an anytime MAP search algorithm applicable to any combination of random variables and dependencies. We compare BaMC to other MAP estimation algorithms and show that BaMC is faster and more robust on a range of probabilistic models.
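The abstract does not include the algorithm itself; as a rough illustration of what MAP search in a probabilistic program means, here is a naive sample-and-keep-best baseline (not BaMC) on a toy one-variable model, with the model and proposal chosen only for this sketch:

```python
import math
import random

def log_joint(theta, data):
    """Toy model: theta ~ Normal(0, 1); each x ~ Normal(theta, 1).
    Log joint density, up to an additive constant."""
    lp = -0.5 * theta ** 2  # log prior
    for x in data:
        lp += -0.5 * (x - theta) ** 2  # log likelihood
    return lp

def map_by_sampling(data, num_samples=20000, seed=0):
    """Naive MAP search: sample candidate values, keep the best.
    Like BaMC it is anytime (stop whenever, return the best so far),
    unlike BaMC it does not exploit the model's structure."""
    rng = random.Random(seed)
    best, best_lp = None, -math.inf
    for _ in range(num_samples):
        theta = rng.gauss(0.0, 2.0)  # broad proposal distribution
        lp = log_joint(theta, data)
        if lp > best_lp:
            best, best_lp = theta, lp
    return best
```

For this conjugate toy model the exact MAP is `sum(data) / (len(data) + 1)`, which the search approaches as the sample budget grows.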

We introduce a new approach to solving path-finding problems under uncertainty by representing them as probabilistic models and applying domain-independent inference algorithms to the models. This approach separates problem representation from the inference algorithm and provides a framework for efficient learning of path-finding policies. We evaluate the new approach on the Canadian Traveller Problem, which we formulate as a probabilistic model, and show how probabilistic inference allows efficient stochastic policies to be obtained for this problem.
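As a toy illustration of evaluating a policy under Canadian-Traveller-style uncertainty (not the paper’s method), one can estimate a route’s expected cost on a two-route graph by sampling which edges turn out blocked; the costs and blockage probability below are invented for the sketch:

```python
import random

def expected_cost(p_blocked, direct_cost=1.0, detour_cost=3.0,
                  trials=100000, seed=0):
    """Monte Carlo estimate of the expected cost of the policy
    'take the short edge if it is open, otherwise take the detour'.
    Each trial samples one realization of the blockage."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        blocked = rng.random() < p_blocked  # is the short edge blocked?
        total += detour_cost if blocked else direct_cost
    return total / trials
```

With blockage probability 0.5 the exact expected cost is 0.5 · 1 + 0.5 · 3 = 2; probabilistic inference over richer models of the graph plays the same role at scale.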
