Lessons From Bayes’ Rule

I was thinking of some of the not-so-mathematical lessons we learn from Bayes’ Rule.  I’d like to include actual examples for each, and show how each maps onto the various pieces of Bayes’ Rule, but I figured I’d put up my list here and add to it as more come to mind.
  • Confidence in a claim should scale with the evidence for that claim
  • Ockham’s razor – simpler theories are preferred (i.e. you pay a marginalization penalty for each adjustable parameter, integrated across its prior; see the first sketch after this list)
  • Simpler means fewer adjustable parameters
  • Simpler also means that the predictions are specific and not overly plastic. A hypothesis that is consistent with the observed data, and would be equally consistent if the data were the opposite, is overly plastic.  The design argument is a case in point: claiming that a universe fine-tuned for life is evidence for design is overly plastic, because if our universe were not fine-tuned for life, and life were exceptional, that too would be offered as evidence for design.  Thus the data, and its opposite, are both covered by the hypothesis.
  • Your inference is only as good as the hypotheses that you consider.  If you consider only random chance and psychic ability, then nearly every octopus will come out psychic (see the second sketch below).
  • Extraordinary claims require extraordinary evidence.
  • It is better to explicitly display your assumptions rather than implicitly hold them.
  • It is a good thing to update your beliefs when you receive new information, and not a sign of waffling.
  • Not all uncertainties are the same.
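To make the marginalization penalty concrete, here is a minimal sketch, using my own toy example and made-up numbers: compare a zero-parameter "fair coin" model against a "biased coin" model whose bias is averaged over a uniform prior. For data that a fair coin explains well, the extra adjustable parameter costs the flexible model evidence.

```python
import numpy as np
from scipy.stats import binom
from scipy.integrate import quad

# Toy illustration of the Ockham (marginalization) penalty.
# Data: k heads out of n flips (numbers chosen to sit near "fair").
n, k = 20, 11

# Model 1: fair coin, no adjustable parameters.
evidence_fair = binom.pmf(k, n, 0.5)

# Model 2: biased coin, with the bias theta marginalized
# over a uniform prior on [0, 1].
evidence_biased, _ = quad(lambda theta: binom.pmf(k, n, theta), 0, 1)

print(f"P(D | fair)   = {evidence_fair:.4f}")
print(f"P(D | biased) = {evidence_biased:.4f}")
print(f"Bayes factor (fair/biased) = {evidence_fair / evidence_biased:.2f}")
```

The ratio of roughly 3 to 1 in favor of the fair coin is the Ockham penalty in action: the biased model spreads its predictions over every possible bias, so it gets less credit when the data land near 0.5.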
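And a second sketch for the hypothesis-space point above, again a toy calculation with assumed numbers (an octopus picking 8 match outcomes correctly, in the spirit of Paul the octopus): when the only hypotheses granted any prior probability are chance and psychic ability, Bayes’ Rule dutifully concludes the octopus is probably psychic.

```python
# Likelihood of 8 correct predictions under each hypothesis.
p_data_given_chance = 0.5 ** 8   # pure guessing
p_data_given_psychic = 1.0       # a psychic octopus always gets it right

# Even with a heavily skeptical prior on psychic ability...
prior_psychic = 1e-2
prior_chance = 1 - prior_psychic

posterior_psychic = (p_data_given_psychic * prior_psychic) / (
    p_data_given_psychic * prior_psychic + p_data_given_chance * prior_chance
)
print(f"P(psychic | 8 correct) = {posterior_psychic:.2f}")
# ~0.72 -- an absurd conclusion, because more sensible alternatives
# (flag preference, home-team bias, selective reporting) were never
# given any prior probability at all.
```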

Any other lessons we learn?
