Successful betting depends heavily on getting the trade-off right between recent and not-so-recent information. Timeform's Head of Research and Development, Simon Rowlands, turns to Alan Turing and The Reverend Thomas Bayes for answers...
It is just over one hundred years since the birth of Alan Turing, one of Britain's true "greats" though criminally undervalued in many quarters until recently.
Turing was a prodigiously clever man who played a major part in decoding "Enigma" - the machine by which German naval messages were encrypted during the Second World War - and almost certainly shortened the conflict as a result.
He also laid some of the foundations for what became the computer age by designing a machine which performed mechanical calculations according to instructions: an embryonic computer, no less.
The secret nature of Turing's war work, along with his persecution as a homosexual (which led to his probable suicide in 1954), meant that he went largely unheralded until recent years.
A recently opened exhibition of Turing's life and work at London's Science Museum is the latest in a series of attempts to recognise and honour a man who may merit the accolade "Britain's Greatest Genius".
What has this to do with racing and with betting, you may ask?
The answer, in a roundabout way, is plenty.
Turing's genius in decoding Enigma was to understand that a form of mathematical thinking - centuries old and forgotten in many circles - held many of the answers in an environment in which the identity and value of considerable amounts of information were uncertain.
Named after the Reverend Thomas Bayes, an 18th century English mathematician and Presbyterian minister, Bayes' Theorem gave rise to what is known as Bayesian inference.
Bayesian inference has been defined as "the process by which the probability estimate for a hypothesis is updated as additional evidence is learned".
Turing used it to fast-track solutions of otherwise impenetrable German messages. Punters use it all the time, but most of them probably do not know it.
In terms more familiar to the reader, the above definition of Bayesian inference might be stated as "the process by which our evaluation of a horse's (or team's, or individual player's) ability (or odds in a given context) gets revised as additional evidence is learned".
Or, more simply still, "betting", or "handicapping".
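The update rule behind all of this is Bayes' theorem itself: P(H|E) = P(E|H) x P(H) / P(E). A minimal sketch, using entirely hypothetical numbers for a horse's chance of being top-class:

```python
def bayes_update(prior, likelihood, evidence):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Hypothetical figures: before the race we rate the chance that a
# horse is genuinely top-class at 0.30; a top-class horse wins this
# sort of race 60% of the time; the overall probability of seeing
# such a win (top-class or not) is 0.35.
prior = 0.30                  # P(H): horse is top-class
p_win_given_topclass = 0.60   # P(E|H): wins, given top-class
p_win = 0.35                  # P(E): wins, overall

posterior = bayes_update(prior, p_win_given_topclass, p_win)
print(round(posterior, 3))  # the win raises our estimate to ~0.514
```

The point is not the made-up numbers but the mechanism: each new piece of evidence moves the prior estimate by an amount that depends on how likely that evidence was under the hypothesis.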
"Additional evidence" can be anything from a change in the going to an unexpected defeat, from the acquisition by a team of a valuable player to an unexpected victory, and much more besides.
Sporting events contested by just two teams or two individuals illustrate this concept most clearly.
Amir Khan was 1.20 (1/5) to win a fight against Danny Garcia the other night but was unexpectedly beaten. Is Garcia now a short price for the rematch? No, Khan is still favourite, but at a bigger price of 1.40 (2/5).
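Those decimal prices translate directly into the market's implied probabilities, which makes the size of the revision easy to see. A quick sketch (the odds are from the example above; the conversion is the standard one for decimal odds):

```python
def implied_probability(decimal_odds):
    """Implied win probability from a decimal (Betfair-style) price."""
    return 1.0 / decimal_odds

before = implied_probability(1.20)  # Khan before the Garcia defeat
after = implied_probability(1.40)   # Khan for a hypothetical rematch
print(round(before, 3), round(after, 3))
```

The market still makes Khan roughly a 71% chance rather than an 83% one: a meaningful downgrade in the light of the defeat, but nothing like a wholesale reversal of opinion.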
Bayesian inference gets a lot more complicated when applied to a dozen or more horses which have not run against each other previously and may not run against each other again, but the principle still holds.
In horseracing, a central question is: how much should we revise our opinion of the ability and aptitude of a horse (and thereby of its probability of winning or placing in a given context) as a result of its latest run or runs?
There is a well-known cognitive phenomenon called "availability bias", whereby the availability of a piece of information biases the significance accorded to that piece of information.
In racing, people often refer to this as "recency bias", for little can be more publicly "available" about a horse's background than its most recent run.
It is impossible to be sure quite how big a part recency bias plays in distorting the betting market, but the suspicion is that it is plenty.
The amount by which you should revise your prior hypothesis in the light of new evidence depends on the confidence you feel entitled to have in that hypothesis and the strength of the new evidence.
For instance, it would seem folly to revise drastically one's opinion of the merit of Black Caviar in the light of one scrambled victory when there is a large body of prior evidence to suggest she is better than that. That is, unless you have serious reason to doubt the previous evidence (in which case, in this instance, I suggest you familiarise yourself more with it).
Then again, it would seem folly to hold onto a conviction that a lightly raced horse is a world beater if all the evidence from its latest run is that it is not.
Bayesian inference suggests the truth (or at least the best estimate) is somewhere between the two extremes, though likely to be much nearer the prior expectation in the first instance than in the second.
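One simple way to formalise that weighting is a Beta-Binomial model, in which the prior counts as so many "pseudo-runs" and the posterior estimate sits between prior and evidence, nearer whichever is backed by more observations. A sketch with invented figures, treating the scrambled victory as worth half a win:

```python
def posterior_mean(prior_wins, prior_runs, new_wins, new_runs):
    """Posterior mean win rate when the prior is expressed as
    pseudo-counts (prior_wins from prior_runs) and combined with
    newly observed form (new_wins from new_runs)."""
    return (prior_wins + new_wins) / (prior_runs + new_runs)

# A Black Caviar type: a long unbeaten record, then one scrambled
# win counted here (arbitrarily) as half a win from one run.
strong_prior = posterior_mean(21, 21, 0.5, 1)  # barely moves
# A lightly raced horse: one win from one run, then the same evidence.
weak_prior = posterior_mean(1, 1, 0.5, 1)      # moves substantially
print(round(strong_prior, 3), round(weak_prior, 3))
```

The large body of prior evidence anchors the first estimate near its original level, while the thin prior in the second case lets a single moderate run drag the estimate a long way, which is exactly the behaviour the two examples above call for.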
The importance of context means that it is impossible to come up with simple rules for applying Bayesian inference when assessing horses and their chances. But it should warn against the tendency to switch erratically from one extreme viewpoint to another, or to stick stubbornly to a given viewpoint come what may.
If Turing were around in this day and age, he might just point out that he or she who best understands and applies the principles of Bayesian inference- whether knowingly or otherwise- is well placed to rule the betting world.