May 25, 2010

Keeping it simple is a robust optimization

Filed under: Artificial Intelligence, Computer Science — dvd @ 2:59 pm

A good design is approximately optimal. When a reasonable probabilistic model is available, the design can be optimized in expectation: flight delays should be rare, e-mails should arrive within seconds, and buildings should protect from the elements and provide comfort on most days of the year. But a single disaster can cause big trouble.

Most objective functions assume that the probability distributions have short tails, and few probabilistic models provide accurate representations of long tails: Newton’s laws of mechanics assume that an object does not move too fast, and most objects comply with the assumption.
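
To make the contrast concrete, here is a minimal sketch (my own illustration; the distributions and parameters are choices made for the example, not taken from the post): a running average of a short-tailed distribution settles quickly, while a long-tailed one stays at the mercy of a handful of rare, enormous draws.

    # Sketch comparing running averages under short and long tails.
    # Illustrative choices: exponential (short tail) vs. Pareto with
    # shape 1.1 (long tail: finite mean, infinite variance).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    light = rng.exponential(scale=1.0, size=n)   # true mean 1
    heavy = rng.pareto(a=1.1, size=n) + 1.0      # classical Pareto, true mean 11

    def running_mean(xs):
        """Mean of the first k samples, for k = 1..len(xs)."""
        return np.cumsum(xs) / np.arange(1, len(xs) + 1)

    for name, xs, true_mean in [("exponential", light, 1.0),
                                ("pareto(1.1)", heavy, 11.0)]:
        means = running_mean(xs)
        checkpoints = [round(float(means[k - 1]), 2)
                       for k in (100, 1_000, 10_000, 100_000)]
        # The short-tailed average converges; the long-tailed one keeps
        # jumping whenever a single huge value arrives.
        print(f"{name}: {checkpoints} (true mean {true_mean})")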

Occasionally, long tails matter: the negative consequences of a highly improbable event outweigh the benefits of good behaviour in the average case. A model that neglects very rare events leads to designs optimized towards the wrong goals; but discovering and analyzing a model that properly accounts for rare but harmful disasters isn't easy.

Since long tails are difficult to handle, they should be avoided. The chances of rare bad luck grow with the complexity of a design: an iron hammer with a wooden handle is slow to use, but almost nothing can impair its ability to drive nails. On the other hand, an electronically controlled high-power nail gun is very efficient, but a failure in the electronic circuit renders the tool useless.
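
A back-of-the-envelope sketch of the same point (the part counts and the per-part failure rate below are assumptions for illustration, not measurements): if a tool works only when every part works, its reliability decays geometrically with the number of parts.

    # Reliability of a serial design: the tool works only if every part
    # works. Part counts and the 1% per-part failure rate are assumed
    # for illustration.

    def reliability(parts: int, p_fail: float) -> float:
        """Probability that all `parts` independent parts work."""
        return (1.0 - p_fail) ** parts

    p_fail = 0.01
    for tool, parts in [("hammer (head + handle)", 2),
                        ("nail gun (many components)", 50)]:
        print(f"{tool}: {reliability(parts, p_fail):.1%} chance of working")

    # hammer: 98.0%; nail gun: about 60.5% -- each added part multiplies
    # the exposure to rare failures.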

Speed, resource consumption, and ergonomics are important most of the time; but simplicity is the only reliable means to achieve robustness: the promise that a tool or a system remains usable when things go wrong.

1 Comment

  1. Haskell is elegant, but not robust performance-wise: hence the troubles of darcs. OCaml is robust with respect to performance.

    Comment by dvd — June 20, 2010 @ 9:49 pm
