The world’s oceans provide the setting for some of the most difficult and prestigious races in sailing, some of which have been in existence for a century or more. From the world-renowned America’s Cup, which has been sought after by yachting enthusiasts for more than 150 years, to more recent big prizes such as the Volvo Ocean Race and Vendée Globe, this infographic by East Fremantle Yacht Club (http://www.efyc.com.au/) traces the origins of these time-honoured sailing competitions. It also gives the reader a few interesting facts about each race, in addition to acknowledging the reigning champions.

I think this pretty much covers it, at least as far as white sails go. You end up either close-hauled and constantly steering to the point where the sails flap, or reaching and constantly trimming to the point where the sails flap. I know that in practice it can be more complicated than this, but not much: this flowchart certainly captures the two essential modes of sailing. I also like the fact that there is no end point, no state in which you can relax; you're doomed to constantly trim, or steer, to the edge of flapping. I think that's pretty accurate!

For the purposes of our comparison we can think of an F1 team as being like a department at a university, and individual functions within an F1 team, such as aerodynamics or engines, as being like individual research groups within a university. This is a fairly good comparison: about the same number of people work in each, and cutting-edge research goes on in both. To get a measure of the speed at which information is shared between groups, let's use a kind of echo-time – the time taken between having an idea and seeing that someone else has developed that idea. First, let's see what this time is in the F1 case:

- An aerodynamicist has an idea (this does happen occasionally) for, say, a new shape of front-wing endplate. First she does some CFD to see if it works in theory; this will take maybe a couple of days.
- If that works then the model makers will make a scale model of it which is put into the wind-tunnel and tested against the old endplates; depending on how good it looked in CFD, maybe a week or two.
- If the wind tunnel agrees with CFD (which it never does) about how good the new shape is then some real endplates will get built; this normally takes a couple of weeks but in rare circumstances it has been known to be rushed through in days, sometimes even making solid metal pieces when hopes are really inflated about a part.
- The endplates will be taken to the next race, where they will be run in practice as test items, and then in the race if a driver likes them – at this point everyone in other teams can see them and the idea has, in effect, been published – F1 engineers are always examining each other's cars for good ideas. From here it's a similar development time for other teams who covet the shiny new endplates to make their own, possibly a bit shorter because they have greater conviction that the endplates are good than they do in ideas they've come up with themselves.

The total echo-time, from the first aerodynamicist having the idea to seeing that another team has developed (copied) it, is between 6 and 10 weeks, assuming it's an idea worth developing! Now for the academic equivalent:

- Post-doc researcher has an idea/discovers something, decides to write a paper about it; timescales vary enormously for paper writing, from three days to three years, let's be generous and say two weeks.
- Paper is reviewed by supervisor and revisions are made; let's say another week, although it might take that long before the supervisor even reads it.
- Paper is submitted to a journal where it is sent for peer review; this is where the time really starts stacking up, it could take absolutely ages, the mean is probably about three months.
- Let's assume it's a really great paper with really lazy reviewers and no revisions are necessary (which would necessitate a repeat of steps 2 and 3), next step publication in the journal; depends on the frequency of publication, let's say another 3 months.
- Now another researcher might see it, immediately have a fully fledged idea, and sit down to write her own paper in reply; repeat steps 1-4 for the minimum time until the original researcher sees that his idea has been developed.

This academic echo time is 27 weeks, and that would probably set a new record! It would be more realistic to make this estimate about 2.5 years. Even at 27 weeks it's roughly 3 to 4½ times slower than the F1 equivalent – at 2.5 years it's between 13 and 21 times slower! The really baffling thing about this discrepancy is this: F1 teams do not *want* to share ideas with other teams, in fact efforts are actively made to *prevent* sharing information, whereas academia has constructed a whole publishing industry *specifically to facilitate the sharing of ideas*! So what's going on here, why is one so much faster than the other? Here are two candidate factors:

**Validation:** The F1 team can validate its ideas itself, in CFD, in the wind tunnel, and ultimately on track, whereas the academic has to send his ideas off to be scrutinised by some other academics who aren't paid enough to do it quickly. This is an obvious difference in terms of time that raises an equally obvious question: *why* are F1 teams able to validate their own ideas while academics are not? There are two issues here, practicality and motivation. Practically, it's easier to test a mechanical or aerodynamic idea (when you've got a wind tunnel and a car to play with) than it is to test a more abstract idea, especially in biomedicine, where the true tests for many ideas would take years themselves or are simply not possible. So the best substitute for real testing is used instead: asking some other experienced people if they think it sounds sensible, given the evidence available. The second issue is motivation: we know that an F1 team wants to go faster, so it has no interest in putting parts on its car that don't work – if a part is raced on a car, we know that team really thinks it's good. Not only that, but if that team wins we have *proof* that the parts on their car work. Conversely, the academic has an incentive to publish papers; good papers are better than bad papers, but a publication is, more often than not, better than no publication. Academics can't be *trusted* to validate their own work because it's not necessarily in their interest to find fault with it.

**Competition:** It sounds too obvious to be worth pointing out, but F1 is a competition – all the teams want exactly the same thing, and so have exactly the same problem. A good idea for one team is a good idea for all the other teams too, and getting it on the car one race earlier will have real, tangible benefits. Academia is not (supposed to be) a competition but rather a collaboration – research groups actively avoid overlapping domains with research groups at other institutions. Publishing a week earlier won't bring any real benefit: it's very unlikely that someone else is about to publish the same thing, and there is no fortnightly "best published idea" prize worth hundreds of thousands of pounds to the university.

I could go on about this all day, but this is already quite a long post so I'll wrap it up here and open up to discussion. Given the relative benefits to humanity of medical/scientific research vs. motor racing it's alarming to find that the useless and supposedly secretive one provides a much, *much* faster environment for sharing ideas than the hugely valuable and purportedly open one. If we want to eliminate diseases and make people healthier for longer then we have to bring academic information sharing up to speed.

If you haven't seen the RaceTrace™ before, it's a chart showing each car as a line starting at the left at lap zero (the start) and progressing to the right as laps are completed. The vertical axis is time and the horizontal axis is distance (number of laps), so the slope of a line is determined by the laptime of the car. To make things a little bit harder to describe, but a lot easier to see, the time axis is not simply time since the start of the race but rather time relative to an imaginary reference car which doesn't make pitstops. To see the difference have a look at the plots below:


As you can see, the raw times don't tell you much, whereas the time relative to a constant-paced reference car really shows you what happened in the race. Now if we just focus on Ferrari and Mercedes we can get some insight into how Vettel managed to win the race; fig. 3 shows just the two main protagonists:

You can see that when the safety car came out the two Mercedes dropped down the order and got stuck in some traffic while Seb drove off into the distance, and you can see that all three front-runners stopped twice after the safety car. You will also notice that the line gets steeper after each pit-stop and the lines at the end are much steeper than at the beginning – this is due to *fuel-effect*. Every lap the cars use about 1.8kg of fuel, and each kg of mass on the car slows it down by about 0.03s/lap, so every lap the cars get lighter and faster, by about 0.054s/lap at this track. Meanwhile the tyres are getting worn out lap by lap and making the cars slower; this is *tyre degradation* or *tyre deg*, or simply *deg* (engineers are not verbose people on the whole). So if tyre deg is bigger than fuel-effect then the lines for each stint will tend to curve downwards as the car gets slower each lap. If deg and fuel-effect are equal then the lines will be relatively straight and change slope at the pit-stops (as in Vettel's last stint). The thing is, fuel-effect isn't really interesting: it affects all cars to roughly the same extent and it doesn't really contribute to the strategic picture, so we'd like to get rid of it and see the effects of tyre deg on their own. To do this we simply change the reference car to have its own fuel-effect; instead of showing time relative to a car doing constant lap times, we show time relative to a car going 0.054s/lap faster every lap. Fig. 4 shows the fuel-corrected version of the RaceTrace™ from fig. 3:

This fuel-corrected RaceTrace™ shows us the tyre deg much more clearly, and we can start to see how much gentler on the tyres Seb is than the Mercedes from how little his trace curves downwards compared to the traces of the Mercs. Bear in mind that the Mercs had fresh tyres coming out of the safety car period and so made three stops in total to Vettel's two. Another thing we can discern from this is that Mercedes' pit-stop strategy was not as good as Ferrari's: their second stint was too long. We know this from looking at the slope of the fuel-corrected traces just before each pit-stop: Vettel's line has pretty much the same slope before each pit-stop, meaning the same fuel-corrected lap time before each stop and hence the same amount of deg on each set of tyres. The Mercedes on the other hand have a much steeper downward slope before the second stop than before the third – this means that they used the second set of tyres a lot more than the third set, and that if they had done fewer laps in the second stint and more laps in the third they would have reached the chequered flag sooner. Fig. 5 zooms in on the post-safety car stints and highlights the slopes of the three lines at the ends of each stint:

This was a very interesting race, strategically speaking; we have seen from the RaceTrace™ that Ferrari nailed it and Mercedes got their tyre deg estimates wrong. If the Mercedes hadn't got stuck in traffic under the safety car it would have been a very close race indeed!
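
To make the reference-car trick concrete, here is a minimal sketch with made-up lap times; the 0.054s/lap figure is the fuel-effect quoted above (1.8kg/lap burned × 0.03s/lap per kg), everything else is illustrative:

```python
FUEL_EFFECT = 1.8 * 0.03  # s/lap gained each lap as fuel burns off (~0.054)

def race_trace(laptimes, ref_laptime, fuel_corrected=False):
    """Reference car's cumulative time minus this car's; higher = further ahead.

    If fuel_corrected, the reference car also gains FUEL_EFFECT every lap.
    """
    trace = []
    car_total = ref_total = 0.0
    for lap, t in enumerate(laptimes):
        car_total += t
        ref_total += ref_laptime - (FUEL_EFFECT * lap if fuel_corrected else 0.0)
        trace.append(ref_total - car_total)
    return trace

# A made-up car lapping at a constant 90.0s: flat in the plain trace, but the
# fuel-corrected trace curves downwards, exposing deg-sized losses each lap.
laps = [90.0] * 10
plain = race_trace(laps, ref_laptime=90.0)
corrected = race_trace(laps, ref_laptime=90.0, fuel_corrected=True)
```

A real trace would use the timing data for every car, but the correction itself is exactly this one-line change to the reference car.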

Tyre saving has become a hot topic in F1 over the past few seasons. I've heard engineers remind drivers to save tyres over the radio, and drivers blame it for poor race performance. But what should a team and driver do through a race to best exploit tyre saving? Is there anything that can be done?

To begin thinking about this, we need a couple of things in place: a model for tyre performance, and an idea of the size of the effects.

Models can be complicated, but luckily something simple seems to fit the evidence pretty well. If we assume that every lap at racing speed increases, by a constant amount, the minimum potential lap time a car can achieve on subsequent laps, then all looks good: we have an explanation for why lap times fall massively after a pit-stop (when tyres with many laps' worth of this "tyre degradation" are replaced), and it fits with lap times that don't get much faster through a stint.

Why are constant lap times through a stint evidence of cumulative tyre deg? Well, we know F1 cars go faster when they are lighter – estimates in the public domain seem to hover around 0.03s/lap per kg. This means an F1 car should get quicker as it burns off fuel. So when it doesn't, we know a cumulative slowing effect must exist. If we assume all of this is tyre deg, then we instantly have an estimate for the size of the per-lap tyre degradation effect: it has to be about the same as the gain expected from burning one lap of fuel. If fuel consumption is 1.75kg a lap, then that makes tyre deg about 1.75kg × 0.03s/lap/kg ≈ 0.05s/lap lost every lap. With this model and 20-lap-old tyres, we'd be going 20 × 0.05 = 1s/lap slower than if we were on fresh new tyres.
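
That back-of-envelope estimate in code form (the 1.75kg/lap and 0.03s/lap/kg figures are the public-domain estimates quoted above):

```python
FUEL_PER_LAP_KG = 1.75  # fuel burned per lap
SECS_PER_KG = 0.03      # lap-time cost of each kg carried

deg_per_lap = FUEL_PER_LAP_KG * SECS_PER_KG  # ~0.05 s/lap lost per lap of age
loss_at_20_laps = 20 * deg_per_lap           # ~1 s/lap on 20-lap-old tyres
```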

So what's tyre saving all about then? The idea seems to be that, if the driver goes slower at particular points on the lap, he can reduce the tyre degradation he accumulates on that lap. With our model for tyre deg, we know tyre deg slows him on all subsequent laps, hence a reduction in tyre deg benefits him over the remainder of the laps he completes on those tyres. How a driver might save tyre deg during a lap is likely pretty complicated, and probably the focus of a fair amount of research in F1. Fortunately, we can make progress without it here – we can just model it as the function between the deliberate slowness, *y* seconds, a driver adds to his lap time and the per-lap tyre deg, *d* s/lap, he accumulates on that lap.

What shape should this relationship take? The upper and lower end points seem pretty obvious: it seems unlikely that tyres would get faster no matter how slow a car goes, and you'd expect tyre deg to be maximal at a driver's flat-out speed, when he is doing no tyre saving. This suggests the most likely shape is something like an exponential decay:

d = D₀·e^(−a·y), for y ≥ 0

Where:

a = a positive constant.

y = the deliberate slowness a driver adds to his laptime to save tyres on a lap (s).

d = tyre deg accumulated on this lap that will affect all subsequent laps (s/lap).

D₀ = tyre deg with y = 0; no deliberate slowness (s/lap).

So if we go flat out (y = 0) we accrue our maximum deg (D₀); if we go really slow (big y), we accrue close to zero deg. If there are n laps left on these tyres, the effect on total race time of going y s/lap slow on this lap is the cost on this lap plus the tyre deg effect on every remaining lap. If we let the change to race time due to saving on this lap be T(y):

T(y) = y + n·D₀·(e^(−a·y) − 1)

We can look at the minimum of this by differentiating it and setting the result to zero. This yields:

1 − a·n·D₀·e^(−a·y) = 0

Which results in:

y = (1/a)·ln(a·n·D₀)

This solution has a problem: y can be negative, which would mean asking the driver to go faster than his fastest, accumulating even more tyre degradation than at his flat-out pace. We've specifically disallowed this as unhelpful in our model, and believe the driver is flat out when he says he is, at y = 0. This makes our real solution for the minimum:

y* = max(0, (1/a)·ln(a·n·D₀))

This doesn't depend on our behaviour on any other lap, so the fastest way to the end of the race is to go this optimal amount slower on every lap (recomputing it each lap as n, the number of laps remaining, falls).

Was all this worth bothering about? Let's put some numbers in and see.

From our argument above, let's assume D₀ is about the same as our reducing-fuel-load weight effect: 0.05 s/lap.

If we set a = 2, then going 1s/lap slower than flat out cuts the deg accrued on that lap from 0.05 to about 0.007 s/lap (0.05·e^(−2) ≈ 0.0068), which doesn't seem ridiculous.

With these settings, the optimal slowness, y*, in s/lap, looks like this:

The most striking feature of this is that it is zero towards the end of the stint. This suggests the driver shouldn't be doing any saving from lap 11 onwards - just rinsing his tyres for all they are worth. It's just not worth going slow at all from here on in as there aren't enough future laps to recoup how slow you had to go on this lap to get the performance. All the important tyre saving is done at the start of a stint.
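
The profile is easy to reproduce. The sketch below assumes D₀ = 0.05 s/lap, a = 2 (the values discussed above) and a hypothetical 21-lap stint:

```python
import math

# Optimal per-lap slowness from the rule derived above:
#   y*(n) = max(0, ln(a * n * D0) / a)
# where n is the number of laps still to run on this set of tyres.
D0 = 0.05   # flat-out tyre deg accrued per lap (s/lap), assumed
A = 2.0     # decay constant of the tyre-saving exponential, assumed
STINT = 21  # laps in the stint, assumed

def optimal_slowness(laps_remaining):
    """Deliberate slowness (s/lap) that minimises total time, per the model."""
    if laps_remaining <= 0:
        return 0.0
    return max(0.0, math.log(A * D0 * laps_remaining) / A)

# laps_remaining counts the laps after the current one.
profile = [optimal_slowness(STINT - lap) for lap in range(1, STINT + 1)]
# Saving tapers off logarithmically and hits zero once a*D0*n <= 1,
# i.e. once 10 or fewer laps remain: lap 11 onwards in a 21-lap stint.
```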

Slightly less expected is the behaviour at the start of a long stint with a lot of laps left - you don't go that much slower than on the previous lap. The function is convex in this area. Despite the massive gain you get by having your saving last for a lot of subsequent laps, you're already going quite slow and are well into the greatly diminishing part of the exponential and so barely get any return for going a lot slower.

So how would a car driven like this stack up in a race with a car driven flat out from the start? I've compared those two, and a car driven with optimal constant slowness and optimal linear reducing slowness in the race trace below.

Pleasingly, our optimal deliberate slowness model wins. It performs only a little bit better than the optimal linear reducing slowness, but a load better than going flat out every lap. The race trace shows the optimally driven car drops back by over a second over the first few laps, but then catches up, and more, as he puts his saved tyre performance to use.
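
As a toy check of that comparison (using the same assumed parameters: D₀ = 0.05 s/lap, a = 2, a 21-lap stint, with lap times measured relative to an ideal fuel-corrected car):

```python
import math

D0 = 0.05   # flat-out deg accrued per lap (s/lap), assumed
A = 2.0     # tyre-saving decay constant, assumed
N = 21      # laps in the stint, assumed

def stint_time(slowness_per_lap):
    """Total time lost vs an ideal car for a given list of per-lap slowness."""
    total = 0.0
    accumulated_deg = 0.0   # s/lap of deg carried over from earlier laps
    for y in slowness_per_lap:
        total += y + accumulated_deg              # this lap: slowness + deg so far
        accumulated_deg += D0 * math.exp(-A * y)  # deg from this lap hits later laps
    return total

flat_out = stint_time([0.0] * N)
optimal = stint_time(
    [max(0.0, math.log(A * D0 * (N - i)) / A) if N - i > 0 else 0.0
     for i in range(1, N + 1)]
)
# The per-lap optimal rule should beat flat-out running over the stint.
```

Because the total time separates into one term per lap, the per-lap rule really is globally optimal in this model, so it should never lose to any other schedule.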

This profile (and the more extreme ones for higher deg) make for some interesting possibilities. In an effective two horse race, there seems little disadvantage to the second car in going a little bit slower than the lead car at the start of the stint – you will save tyre performance and be quicker at the end of the stint. If the first car is driven optimally, he won't catch it before the end of the race, but if the first car has any issues at all (safety cars, missing a chicane, a slow lap...) he will not only close the gap, but will have a faster car than his opponent for the remainder of the race. Moreover, if the other car has underestimated the actual tyre deg rate, he will be driving closer to optimal and be able to catch, and have the chance to pass him, before the end of the race.

The model also gives us a clue as to why tyre saving seems to be a relatively recent hot topic. With just a small reduction in tyre deg (to 75% of our estimate), the optimal slowness is always zero and a driver should be going flat out from the start. No tyre saving helps.

As ever, the real situation is likely to be more complicated than we have modelled. We've ignored the probability of being slowed by other cars, and we've assumed tyre deg behaviour is constant and known throughout a race, rather than variable and hard to predict. Both of these are likely to be important factors, and must make for some interesting race-day strategy debates within teams.

Each car/driver is represented by a line, the x-axis is number of laps completed, the y-axis is time ahead/behind an average laptime. So at the left-hand edge is the start, at the right is the chequered flag. The higher a line is, the further ahead that car is; so the winner is the line on top at the right hand side, in this case Hamilton. The sharp downward steps are where a car makes a pit stop and so loses about 23s relative to the cars around it. The time gap between two drivers on track is shown by the vertical distance between those two lines on the chart. Faster cars have steeper lines, slower cars have shallower, or even downward-sloping, lines.

So what can this Race-Trace tell us? Well, we can obviously see that Hamilton was out front for the whole race thanks to his superior pace from the outset, pulling away from the field in the first 8 laps and then further extending his lead as other cars pitted. We can see that Alonso undercut Vettel by pitting one lap before him in the first round of pit-stops: when Alonso pitted on lap 11 he was behind Vettel, but because his first lap on new tyres was quicker than Vettel's lap on older tyres, he was able to come out ahead once Vettel had made his stop one lap later. Vettel was able to hang on to Alonso for only two laps before suffering a serious loss of pace. Compare Vettel's line in the second stint to those of Alonso, Rosberg, and Ricciardo: it's much flatter. You can see that Rosberg catches Vettel on lap 21 and is stuck behind him for a lap before getting past and pulling away rapidly.
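
The undercut arithmetic can be illustrated with a toy calculation; the numbers below are made up, apart from the roughly 23s pit loss mentioned above:

```python
# Toy undercut: both cars lap in 92.0s on worn tyres, a pit stop costs 23.0s,
# and the first lap on fresh tyres is 1.5s quicker than a lap on old rubber.
# All values are illustrative, not real timing data.
OLD, FRESH, PIT_LOSS = 92.0, 90.5, 23.0

# Car A pits at the end of lap 11; car B stays out one lap longer.
car_a = [OLD + PIT_LOSS, FRESH, FRESH]  # in-lap + stop, then two fresh-tyre laps
car_b = [OLD, OLD + PIT_LOSS, FRESH]    # one more old-tyre lap, then the stop

gap_gained = sum(car_b) - sum(car_a)  # positive: car A jumps ahead
```

The gain is exactly the fresh-vs-old tyre delta for the one lap car B spent on old tyres while car A was already on new ones, which is why pitting a lap earlier can flip the order.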

We can also see where Ricciardo catches Vettel on lap 23 and gets stuck behind him for 3 laps. This is the point where Vettel was told to let Ricciardo past and replied with his "Tough luck." comment. A zoomed-in section of the Race-Trace shows this more clearly in Fig. 2.

Red Bull must have issued that order because they could see (maybe from watching the Race-Trace live) that Vettel was having a shocking second stint and that Ricciardo was going much faster and was in a close fight with Alonso. Ricciardo did finally get past Vettel and went on to finish only 1.3s behind Alonso. What would have happened if Vettel had done what he was told? Would Ricciardo have been on the podium instead of Alonso? Christian Horner thought not, saying "Arguably he [Ricciardo] would have been a second further up the road", not enough to catch Alonso.

If we suppose that instead of holding up Ricciardo, Vettel had been ordered to let his teammate past before Ricciardo caught him (and he'd obeyed) then we can fill in those three laps with normal, unimpeded lap-times and see from the Race-Trace what would have happened. Fig. 3 shows a hypothetical trace for the case where Ricciardo wasn't impeded by Vettel (green line with purple dots):

As you can see, it looks like he would have caught Alonso. Whether or not he'd have got past him in the last few laps is questionable, but it would have been exciting to watch! It certainly wouldn't have done Vettel any harm to do what he was told, and it could have got his teammate a podium finish.

]]>"...our knowledge increases exponentially."

I know the Master was speaking figuratively, but being an engineer I started to wonder if knowledge *actually does* increase exponentially, and what conditions would be necessary to make this happen?

Where to start? Well, let's start with Einstein, or rather one of Einstein's collaborators, John Archibald Wheeler. In 1992 he said,

"We live on an island surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance"

How fast does this island of knowledge grow? Let's start by making some assumptions about Wheeler's universe, namely:

- We expand the island by reclaiming 'knowledge land' from 'ignorance sea'.
- We always have enough resources on the island to reclaim land, but...
- We don't have any boats, so we have to stand on the island to do the island expansion work.

All of these assumptions add up to the realization that the rate of expansion of the island is *proportional to the length of the shoreline*. So if our island is round, or roughly round, and the area of the island corresponds to the amount of knowledge, *K*, then it will have a shoreline length, *s*, equal to the circumference of the circle. So we can write:

dK/dt = A·s = 2A·√(πK)

With an ignorance-reclaiming speed of *A* m/s. This doesn't give us the exponential rate that the Master spoke of; it's merely quadratic:

K(t) = πA²t², taking K(0) = 0.

Perhaps if the island were a more convenient shape we could achieve exponential growth? In fact a circle is the worst possible shape for the island because it has the lowest ratio of shoreline to area of any 2D shape. The best possible shape would be long and thin, infinitely thin in fact, then area and shoreline would be proportional, and so our island would grow exponentially, wouldn't it? Well, no, it wouldn't, at least not for very long. In order to maintain exponential growth the island must stay infinitely thin and so it can only grow at its ends, but this isn't really the land reclaim model we started off with. You can hardly say that shoreline length is the limiting factor in the growth of an island if you insist on reclaiming land only at the infinitely thin ends of the island! In fact we find an interesting result that if we grow uniformly from each part of the shoreline then no matter what clever shape we start off with we'll always end up with a circular island! Even if we start with exponential growth of the island (perhaps from some clever fractal geometry), we will soon settle to the growth rate given above, which increases with time, but is nonetheless far from exponential.
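
The quadratic, not exponential, growth of the round island is easy to check numerically; a quick Euler integration of dK/dt = 2A√(πK), with A = 1 as an arbitrary choice:

```python
import math

A = 1.0          # ignorance-reclaiming speed, arbitrary units
dt = 1e-4        # integration step
steps = 100_000  # integrate out to t = 10

K = 1e-6         # a tiny seed island, to step off the K = 0 fixed point
K_half = None
for i in range(1, steps + 1):
    K += A * 2.0 * math.sqrt(math.pi * K) * dt  # dK/dt = A * shoreline
    if i == steps // 2:
        K_half = K                              # K at t = 5

# The closed form K(t) = pi * (A*t)^2 predicts K(10)/K(5) = 4, rather than
# the constant doubling time that exponential growth would give.
```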

In fact there may be a paradox even in the Master's statement of the problem: if knowledge grows exponentially, then the growth rate must be proportional to the amount of knowledge. Growth in knowledge requires research, and researchers, but the Master's statement was itself concerned with the fact that each year we must select a smaller fraction of our ever growing knowledge to teach to the next generation of researchers. If this fraction is decreasing, then it looks like we're on a circular island of knowledge - it will grow ever faster, but at the same time, ever slower than exponential growth.

Aside from all of the turbo, MGU-H, battery stuff that makes it more complicated for us to understand, there are two parts of the 2014 rules which are very straightforward. From the technical regulations:

5.1.4 Fuel mass flow must not exceed 100kg/h.

And from the sporting regulations:

29.5 No car is permitted to consume more than 100kg of fuel, from the time at which the signal to start the race is given to the time each car crosses the Line after the end-of-race...

First off, let's observe that if fuel flow is limited to 100kg/h, then we know the fuel flow of every car at full throttle (100kg/h!). Let's take an extreme example and look at Monza, *the* high speed circuit of F1. The race in 2012 was won by Lewis Hamilton in a total time of 1:19:41.22. Monza is 53 laps, so that's an average laptime of 90.21s. Depending on which website you get your F1 stats from, a lap of Monza is about 75% full throttle. The remaining 25% is divided into braking (at zero throttle) and accelerating from zero to full throttle. If we say that the split between braking and part throttle acceleration is even, and that the average throttle in the part throttle regions is 50%, then we can approximately represent all the time that's not full throttle by saying that a quarter of that time is full throttle, and three quarters of it is zero throttle. Adding this throttle usage to the 75% full throttle portion we get to 81.3% of the lap at full throttle as an approximation including the part throttle regions. Now, we should translate the FIA's fuel rate into a proper unit:

100kg/h / 3600s/h = 0.0278kg/s

If we multiply our three numbers so far together, we should get fuel use per lap:

90.21s/lap x 81.3% x 0.0278kg/s = 2.039kg/lap

Which is 2.039kg/lap x 53 laps/race = 108.06kg/race. That was fine in 2012, but it's over budget for 2014 by a little over 8kg! So, clearly you can't just smash round the track at full throttle like you did in the good old bad old use-as-much-fuel-as-you-like days. The question is, what's the best way of saving fuel without losing time? As any well-informed motorsport fan knows, the most laptime-efficient way to save fuel is to completely lift off the throttle at the end of each straight and coast for a bit before hitting the brakes for the next corner – called 'lift-off', or 'lift and coast'. The reason is that saving fuel at the start of a straight means that you accelerate less and therefore go slower for the whole straight, whereas lifting at the end of the straight doesn't make you slower further down the track because you were about to brake anyway. How much lift-off are we talking about here? Well, if we lift off at the ends of the straights then we're swapping time spent at full fuel-flow for time spent at zero fuel-flow, so we save 0.0278kg/s during lift-off. To save our 8.06kg per race we need:

8.06kg / 0.0278kg/s = 289.9s/race

289.9s/race / 53laps/race = 5.47s/lap of lift-off!
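
The whole chain of arithmetic can be reproduced in one place; the only inputs are the 2012 race time, the 100kg/h flow limit, and the throttle-usage assumptions above (small differences from the figures in the text are just rounding):

```python
race_time_s = 1 * 3600 + 19 * 60 + 41.22  # Hamilton's 2012 Monza winning time
laps = 53
avg_lap = race_time_s / laps              # ~90.21 s/lap

# 75% flat out, plus the part-throttle regions approximated as one quarter
# full throttle and three quarters zero throttle.
full_throttle_fraction = 0.75 + 0.25 * 0.25  # 0.8125, i.e. ~81.3%

flow_kg_s = 100.0 / 3600.0                   # 100 kg/h expressed in kg/s

fuel_per_lap = avg_lap * full_throttle_fraction * flow_kg_s  # ~2.04 kg
fuel_per_race = fuel_per_lap * laps                          # ~108 kg
excess = fuel_per_race - 100.0                               # ~8 kg over budget

lift_total_s = excess / flow_kg_s   # seconds of zero-flow coasting needed
lift_per_lap = lift_total_s / laps  # ~5.4 s of lift-and-coast per lap
```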

No, your eyes do not deceive you (I encourage you to check my maths): this season, at Monza, drivers could have to lift-off the throttle at the ends of straights for an average of nearly 5 1/2 seconds *per lap*! If we distribute this time mainly over the four longer 'straights' (the Curva Grande is effectively a straight) and a little on the two shorter sections leading into each Lesmo then we might get something like this for an average lift-off schedule:

1.24s on the main straight into the first chicane

1.02s after Curva Grande into the second chicane

0.54s before the first Lesmo

0.54s before the second Lesmo

1.02s into Ascari

1.02s into Parabolica

As you can see, it's not exactly going to be the traditional 'last of the late brakers' scenario into the first chicane, or into any of the corners for that matter. Even if we're overestimating the amount of lift-off by a factor of two, we're still talking about half a second at the end of each long straight.

Of course drivers and teams might decide that lifting-off at the end of the main straight is too costly in strategic terms because it's a prime spot to overtake, or be overtaken. In which case they may not lift there, but they will then have to lift even more elsewhere to save the extra fuel.

There will be races where fuel saving is not an issue, Monaco for example is so slow that it's likely that no fuel saving will be necessary. Monza is an extreme example of a high-speed track where the fuel limit will have a big effect. The other factor that would eliminate the need for fuel saving is the safety car. F1 cars use (relatively) so little fuel when behind the safety car that as few as 4 laps behind a safety car could save enough fuel to complete the race without lifting-off any more.

I can't wait to see how this season plays out, and what strategic effects these radical fuel saving regulations will have.

A quick search on Google Scholar revealed a very interesting paper written by researchers at Stanford who have looked into this question and found some interesting results. They calculated the optimal place to aim on the dartboard for players of varying levels of skill. Skillful players consistently land their darts close to the point that they're aiming at, so when aiming at a fixed point their darts land in tight groups; in other words, the standard deviation of the distance between the landing point and the aiming point is small. Rubbish players' darts land all over the place; they have a high standard deviation. Tibshirani, Price, and Taylor found the best place for any player to aim at by maximising the integral of the score over the area in which their darts are likely to land, weighted by the likelihood that their darts land there, for any given aiming point. As you might expect, they found that a very good player, with a standard deviation of only a few millimetres, should aim at the treble 20 for maximum points. A rubbish player, with a very high standard deviation, should aim very near the centre of the board, to minimise the chance of missing the board altogether! The interesting bit is what happens in between very good and rubbish. As standard deviation increases from zero, the optimum aiming point moves slightly up and to the left, to favour hitting the 5 instead of the 1 on the occasional wild dart. Then, when standard deviation increases beyond 16.4mm, the optimum aiming point jumps to treble 19! As the standard deviation further increases, perhaps after the third pint, the optimal aiming point curves upwards and then around to the right until it settles just to the left of the bullseye.
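
The gist of that calculation is easy to reproduce with a Monte Carlo sketch. The ring radii and sector order below are standard dartboard dimensions, the Gaussian scatter model follows the paper's setup, but the aiming points and sigma values here are my own illustrative choices, not the paper's:

```python
import math
import random

# Standard dartboard geometry: sector order clockwise from the top, and ring
# radii in mm (bull, outer bull, treble band, double band / board edge).
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]
BULL, OUTER_BULL = 6.35, 15.9
TREBLE_IN, TREBLE_OUT = 99.0, 107.0
DOUBLE_IN, DOUBLE_OUT = 162.0, 170.0

def score(x, y):
    """Score of a dart landing at (x, y), in mm from the board centre."""
    r = math.hypot(x, y)
    if r <= BULL:
        return 50
    if r <= OUTER_BULL:
        return 25
    if r > DOUBLE_OUT:
        return 0
    angle = math.degrees(math.atan2(x, y)) % 360.0  # clockwise from vertical
    sector = SECTORS[int((angle + 9.0) // 18.0) % 20]
    if TREBLE_IN <= r <= TREBLE_OUT:
        return 3 * sector
    if DOUBLE_IN <= r <= DOUBLE_OUT:
        return 2 * sector
    return sector

def expected_score(aim_x, aim_y, sigma, n=200_000, seed=1):
    """Monte Carlo estimate of E[score] for Gaussian aim error of std sigma."""
    rng = random.Random(seed)
    return sum(score(aim_x + rng.gauss(0.0, sigma),
                     aim_y + rng.gauss(0.0, sigma)) for _ in range(n)) / n

T20 = (0.0, (TREBLE_IN + TREBLE_OUT) / 2)  # centre of the treble-20 bed

sharp = expected_score(*T20, sigma=2.0)           # very good player, aiming T20
wild_t20 = expected_score(*T20, sigma=60.0)       # very poor player, same aim
wild_bull = expected_score(0.0, 0.0, sigma=60.0)  # same player, aiming centre
```

With a 2mm standard deviation the treble 20 pays nearly its full 60 points, while at 60mm aiming near the centre of the board beats aiming at the treble 20, qualitatively reproducing the paper's result.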

So it seems that our pundit was right in one case: it is worth "going downstairs" for the treble 19 if your aim on treble 20s degrades past a certain point. But in terms of maximising expected score, it's not worth switching to 18 if your aim on 19 isn't good, nor to 17 if your aim on 18 isn't good. Of course there may be psychological factors, and a certain bias due to a player's dart distribution being skewed at an angle, therefore making her better at hitting trebles at one angle than another (this is also investigated in the paper).

In terms of whether darts players do the right thing during matches (statistically speaking), there is still a question mark over switching to treble 19. Players obviously only switch to 19 if they start missing treble 20s; in other words, their internal estimate of their standard deviation rises and they react by switching to 19. The question is how this internal estimate compares to the true value, and how close to the ideal threshold of 16.4mm they are when they switch from 20 to 19. This is much more difficult to answer than our first question. To do so we would need dart-by-dart position data for a player over many, many games, including many occasions where the player switched to 19, which most players don't do that often.

Perhaps I should write a video analysis algorithm to watch and datafy TV footage of darts, and then crunch the numbers to see which players' mental estimates are closest to the truth. My guess would be that some players tend to switch too soon because of one random dart going awry, when in fact a single dart is not sufficient evidence that the underlying standard deviation has actually changed. On the other hand, maybe the player has a good idea not only of what his aim is like at the moment, but of whether he's feeling more tense and therefore about to get worse, enabling him to preemptively switch to 19 before hitting the 1!
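As a toy version of that check, here's a sketch of how one might estimate a player's scatter from a handful of observed landing offsets and apply the paper's 16.4mm threshold; the helper name and the zero-mean circular-scatter assumption are mine, not the paper's.

```python
import math

def recommend_target(offsets, threshold_mm=16.4):
    """Hypothetical helper: given (x, y) landing offsets in mm relative to
    the aiming point, estimate the per-axis standard deviation (assuming
    zero-mean circular scatter) and pick a treble accordingly."""
    n = len(offsets)
    # pool both axes: each dart contributes two independent samples
    sigma = math.sqrt(sum(x * x + y * y for x, y in offsets) / (2 * n))
    return "treble 19" if sigma > threshold_mm else "treble 20"
```

In practice a handful of darts gives a very noisy estimate of sigma, which is exactly why a player's one wild dart is weak evidence for switching.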

]]>Figure one shows the windward mark, *W*, the start line as it was in Shoreham, *C*-*B*, and the more interesting case of a squarer start line, *A*-*B*. When the windward end of the line is further from the top mark than the leeward end, there is a trade-off between the extra distance you have to sail and the extra speed you get on a lower point of sail. So how do you resolve this trade-off and find the optimal starting point?

The answer lies (as do the answers to so many sailing questions) in the polar diagram for the boat. The polar tells us how fast the boat will sail at every wind angle, so we must be able to use it to find the point on the start line which will get us to the windward mark in the least time; we just need the right bit of geometry.

The right bit of geometry is shown in figure two. Step one is to draw/trace/superimpose the polar diagram centred on the windward mark, but drawn *upside-down* relative to the wind direction; scale doesn't matter. Next, we translate the start line (maybe with a parallel rule) towards the windward mark until it just touches the polar. Finally, we draw a line through the windward mark and the point where the polar and the translated start line touch, extending it until it intersects the actual start line. The point of intersection between this line and the start line is the optimal place to start, easy!

In the case of a one-design fleet sailing on the course shown in figure two, a boat starting from *X* will take about 4.5% less time to reach *W* than a boat starting on the layline (from *L*). So if these boats do 6kn (knots) when hard on the wind (i.e., along *L*-*W*) and the distance *L*-*W* is one mile, then the boat starting from *L* will reach the windward mark after 10 minutes, by which time the boat that started from *X* will be 27 seconds ahead, a lead of 83m!

Looking at it another way, the J92 I was sailing on in Shoreham is 9.2m long, so it does one length every 3s when travelling at 6kn, which means that even if the windward mark was only 250m from *L*, the boat starting from *X* would still be able to round the windward mark clear ahead of the boat that started from *L* without having to give mark room!
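The arithmetic above is easy to verify; a quick unit check using only the figures quoted in the text:

```python
# 1 knot = 1852 m per hour, so 6 kn is about 3.09 m/s.
speed_ms = 6.0 * 1852 / 3600
lead_s = 0.045 * 10 * 60           # 4.5% of the 10-minute leg, in seconds
lead_m = lead_s * speed_ms         # lead in metres, about 83 m
length_time_s = 9.2 / speed_ms     # a J92 covers its own length every ~3 s
print(lead_s, round(lead_m, 1), round(length_time_s, 2))
```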

The table below lists the distance sailed, speed, and time to reach *W* for each of four boats that start at *A*, *X*, *L*, and *B*:

| starting point | distance (NM) | speed (kn) | time (min:s) |
|----------------|---------------|------------|--------------|
| A              | 1.15          | 7.16       | 9:40         |
| X              | 1.10          | 6.91       | 9:33         |
| L              | 1.00          | 6.00       | 10:00        |
| B              | 0.95          | 5.33       | 10:41        |
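As a sanity check, the listed distances and speeds reproduce these times to within a few seconds' rounding (the published times were presumably computed from unrounded figures):

```python
# Recompute each boat's time from the distance (NM) and speed (kn) in the table.
boats = {"A": (1.15, 7.16), "X": (1.10, 6.91), "L": (1.00, 6.00), "B": (0.95, 5.33)}
for name, (dist_nm, speed_kn) in boats.items():
    secs = dist_nm / speed_kn * 3600
    print(f"{name}  {int(secs // 60)}:{round(secs % 60):02d}")
```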

So clearly, working out where on the line to start is well worth it in situations such as this: had we assumed that the line could only be biased towards one end or the other, we would have been over two boat lengths behind where we could have been had we started at *X*.

Figure three shows us practising our reach to the windward mark; that's me in red and white, easing the jib sheet:

]]>