The big debates of the past decade: 3) High intensity v high volume training

March 31, 2014

The debate between high intensity and high volume training has been a perennial topic since the early days of scientifically-grounded training. Interval training was developed in the 1930s by the German coach and academic Woldemar Gerschler. He based his recommendations on the theory that the heart muscle would be strengthened by the increase in cardiac stroke volume that occurs as heart rate drops immediately following an intense effort. A decade later, Gerschler's compatriot, the sports physician Ernst van Aaken, proposed that the crucial requirement was delivering copious amounts of oxygen to the heart, and that this could best be achieved by running long distances at relatively slow paces. It is noteworthy that a large volume of slow running also increases delivery of oxygen to the leg muscles. Van Aaken's approach was later developed by the New Zealander Arthur Lydiard, based largely on trial-and-error adjustments of his own training. Lydiard's method led to medals for his athletes Peter Snell, Murray Halberg and Barry Magee at distances from 800m to the marathon at the Rome Olympics in 1960. While Lydiard promoted a high volume approach to building basic aerobic fitness, his program also included periodization – a progression from base building to a period of race-specific training and final sharpening immediately prior to competition.

Meanwhile, interval training retained its devotees and underpinned the golden age of British middle distance running that reached its pinnacle with Seb Coe's Olympic gold medals in the 1500m in 1980 and 1984. By the end of the century, the Japanese academic Izumi Tabata had demonstrated that repeated brief maximal efforts, lasting only 20 seconds and separated by even briefer recovery periods, produced impressive increases in aerobic capacity (reflected in increases in VO2max) while also enhancing anaerobic capability.

Meanwhile, devotees of high volume, less intense training, led by charismatic individuals such as John Hadd and Phil Maffetone, emphasized the risk that focussing on high intensity training might undermine sound long term development.   So what has the past decade contributed to this long-standing debate?

I think that three main strands of evidence have advanced the debate. These strands are: evidence from physiological investigations; the training of African distance runners; and evidence from a small number of fairly well-conducted controlled comparisons of different training protocols.

Physiological investigations

The fundamental principle of training is that training produces stress on the various physiological systems within the body, such as the cardiovascular system, skeletal muscles and the nervous system, and subsequent adaptive change as the body responds to that stress leads to increased fitness. The past decade has seen an explosion of knowledge about the multitude of biochemical signalling processes that trigger these adaptive changes. In addition to the hormones produced by the major endocrine glands, there are a vast number of other relevant signalling molecules, including the numerous cytokines that regulate inflammation (the cardinal process that mobilises repair in tissues throughout the body) and growth factors that promote changes in many tissues. In particular, growth factors and hormones promote the activation of satellite cells in muscle. These satellite cells are a type of stem cell that fuse with muscle cells to repair and strengthen them.

While this explosion of knowledge does provide useful clues regarding the way the body might react to various forms of training, at present the complexity of the information precludes any simple answer to the high volume v high intensity debate. It does however provide support to both sides, indicating that the best answer will prove to be a combination of the two.

In light of concerns that high intensity training might destroy the aerobic enzymes that catalyse the chemical transformations involved in aerobic metabolism in the mitochondria of muscle cells, it is of particular relevance to note that, in a series of studies, Gibala and colleagues at McMaster University in Canada have demonstrated that high intensity interval training is as effective as high volume training for developing these aerobic enzymes. Furthermore, Bangsbo and colleagues in Copenhagen reported that speed endurance training, consisting of six to twelve 30-second sprints performed 3-4 times per week for 6-9 weeks, improved the ability to pump potassium ions back into muscle cells. Potassium ions are expelled from muscle during exercise, and the depletion of potassium within the muscle probably plays an important role in fatigue. Bangsbo demonstrated that the improved ability to pump potassium back into muscle cells was accompanied by an average improvement of 18 seconds in 3 Km race time, and an average improvement of 60 seconds in 10 Km time, in a group of 17 moderately trained male endurance runners.
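
For concreteness, here is a rough sketch of what a speed endurance session of the general type described above might look like. The number of repetitions matches the range quoted, but the recovery duration and the session arithmetic are illustrative assumptions of my own rather than details taken from Bangsbo's protocol.

```python
# Illustrative sketch only: session structure of the general type described above.
# The 3-minute recovery is an assumption for illustration, not a value from the study.

def speed_endurance_session(reps, sprint_s=30, recovery_s=180):
    """Describe a sprint session and estimate its duration in minutes."""
    work = reps * sprint_s
    rest = (reps - 1) * recovery_s
    total_min = (work + rest) / 60.0
    return f"{reps} x {sprint_s}s near-maximal efforts, {recovery_s}s recoveries: ~{total_min:.0f} min"

if __name__ == "__main__":
    for reps in (6, 8, 12):
        print(speed_endurance_session(reps))
```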

 

Elite Africans

The most striking feature of elite distance running in the past decade has been the dominance of African runners, mainly from the highlands of Kenya and Ethiopia. There have been many anecdotal accounts that make it clear that high volume training, with several training sessions per day, is an important aspect of the training program of virtually all elite Africans. Usually the day's program includes one session of quite low intensity running, but many accounts also describe other sessions of quite intense running – especially sustained tempo efforts. I will not attempt to review all this information here, in part because of its diversity but, even more importantly, because it remains unclear just how much cultural factors (such as running to school in childhood), multiple genetic factors, and upbringing at high altitude have contributed to the African dominance. It remains to be demonstrated convincingly that the training methods employed in Africa can be adapted to produce similarly impressive performances by non-Africans.

I will nonetheless draw attention specifically to the training methods adopted by Renato Canova, coach to many of the leading African half-marathoners and marathoners. I have described Canova's training previously. In his lectures and writing, Canova places little emphasis on low intensity running, perhaps because the athletes he trains have already achieved extensive development of capillaries and other aspects of type 1 fibre development. Nonetheless, the training diaries of the athletes he coaches reveal that, in addition to the relatively intense sessions, there is a large amount of low intensity running. For example, about 80% of the training of Moses Mosop is at an easy pace, with occasional sessions as slow as 5 min/Km (which should be compared with his marathon pace of around 3 min/Km). Canova advocates a periodized approach. The crucial feature of the race specific phase is long runs at near race pace.

Controlled comparisons of training programs

As mentioned above, some of the studies comparing high intensity interval training with standard endurance training, such as the study by Bangsbo and colleagues, demonstrate greater improvement in performance over distances from 3Km to 10Km with the high intensity training, while others, such as those by Gibala and colleagues, report similar gains in performance with high intensity training and conventional endurance training, although the high intensity programs achieved this benefit from a much smaller volume of training. However, those studies were performed over a time scale of approximately 8 weeks. This is scarcely long enough to exclude the possibility that high intensity training might result in a harmful accumulation of stress.

The question of longer term effects was tested in a study by Esteve-Lanao and colleagues from Spain. They randomly allocated 12 sub-elite distance runners to one of two five-month training programs: a polarised program involving a large amount of low intensity training and a small volume of moderate and high intensity training; and a threshold program involving a predominance of training near lactate threshold and a small amount of higher intensity training. Training was classified in three zones: low intensity, below the first ventilatory threshold (VT1), corresponding to the point where lactate rises to around 2 mmol/litre; moderate intensity, between VT1 and the second ventilatory threshold (VT2), corresponding to the point where lactate exceeds 4 mmol/litre; and high intensity, above VT2, where lactate accumulates rapidly. In the polarised program the proportions of low-, moderate- and high-intensity training were 82%, 10% and 8%, while the proportions in the threshold program were 67%, 25% and 8%. At the end of the program, the group allocated to polarised training achieved significantly better performances in a 10.4Km cross country race.
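
To make the contrast between the two distributions more tangible, the sketch below converts the percentages quoted above into hours per zone for a hypothetical 8-hour training week; the weekly volume is an assumption chosen purely for illustration, not a figure from the study.

```python
# Minimal sketch: express the quoted intensity distributions as hours per zone.
# The 8-hour training week is a hypothetical figure, not taken from the study.

WEEKLY_HOURS = 8.0

programs = {
    "polarised": {"low (<VT1)": 0.82, "moderate (VT1-VT2)": 0.10, "high (>VT2)": 0.08},
    "threshold": {"low (<VT1)": 0.67, "moderate (VT1-VT2)": 0.25, "high (>VT2)": 0.08},
}

for name, zones in programs.items():
    split = ", ".join(f"{zone}: {WEEKLY_HOURS * fraction:.1f} h" for zone, fraction in zones.items())
    print(f"{name}: {split}")
```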

More recently, Stoggl and Sperlich from Austria performed a study comparing a 9 week polarised training program with three other programs – high intensity, high volume (low intensity) and predominantly tempo training – in a sample of national class endurance runners, triathletes, cyclists, and Nordic skiers. The polarised training group exhibited the greatest improvements in VO2max (+11.7%) and time to exhaustion (+17.4%). The high intensity group achieved a 4.8% increase in VO2max and an 8.8% increase in time to exhaustion, but lost 3.8% of body weight, which Stoggl and Sperlich attributed to a harmful catabolic state. Improvements were small and non-significant for the other two training programs. It should be noted that these athletes were of national standard and had probably already achieved the improvement that might be expected from either a high volume of low intensity training or from a predominance of tempo training.

Neal and colleagues used a cross-over study design in which a group of well-trained cyclists underwent polarised training and threshold training, each for 6 weeks, in randomised order. Similar baseline fitness was established by a 4 week de-training period before each training period. The proportions of training time in the low-, moderate- and high-intensity zones were 80%, 0% and 20% in the polarised program, and 57%, 43% and 0% in the threshold program. The polarised training produced greater increases in peak power output, lactate threshold and high-intensity exercise capacity (time to exhaustion at 95% of maximum work rate).

 

Summary and Conclusions

Stephen Seiler, a Texan sports scientist based in Norway for the past decade, presented a summary of the evidence from the controlled comparisons of different training programs, and from studies that have examined the proportions of training time that elite athletes spend in different intensity zones, in a lecture delivered in Paris in October 2013. He provided a compelling argument for polarised training. However, despite the evidence that many elites follow a polarised program, the role of key sessions at a pace near to race pace in the training recommended by Renato Canova indicates that at least a modest proportion of threshold training is beneficial for marathoners. Furthermore, Canova recommends a moderate degree of periodization, with a clearly defined period of specific preparation for key races.

Overall, it is likely that any sensible training program will produce benefit for an unfit athlete, provided it is consistent. However, for an athlete who has reached a plateau of fitness, it is probable that a polarised program with proportions of low-, moderate- and high-intensity training of approximately 80%, 10% and 10% is most effective. Nonetheless, during a period of preparation for a specific race, the key sessions should incorporate running at a pace near to race pace.

The big debates of the past decade: 2) shoe design

February 24, 2014

For almost a decade many runners have been captivated by the issue of running shoe design – a preoccupation fuelled by two opposing factors. On the one hand, padding is expected to provide protection, and in particular shock absorption that attenuates the impact of foot-strike. On the other hand, there is the allure of the idealistic notion of barefoot running – based at least partially on the rational argument that if our distant ancestors survived by persistence hunting, the human frame must be well adapted to barefoot running. These opposing influences have led to fluctuating enthusiasm for fashions ranging from barefoot (or minimalist shoes such as the Vibram Five Fingers) to the heavily padded Hoka One One.

In addition to these two opposing influences there is the issue of the effects, either helpful or harmful, that shoes might have on the twisting movements that occur at the joints of the foot and leg. Most notable are the compound motion of pronation, occurring at the forefoot and ankle, that allows the foot to roll inwards, transferring weight onto the medial longitudinal arch as the leg is loaded during stance; and the inward bend of the leg below the knee (varus deformation) that places pressure on the vulnerable medial aspect of the loaded knee joint while also dragging the ilio-tibial band towards the lateral femoral condyle. Although pronation is a natural movement, shoe companies have placed strong emphasis on the potential dangers of over-pronation. To prevent this, they have marketed motion control shoes with a medial post, a structure embedded in the medial side of the shoe that arrests the inward roll. This affects not only the impact-absorbing capacity of the foot, but also modifies the varus torque acting at the knee.

Ethics

The question of high technology shoe design also brings with it the issue of the ethics of unfair technical enhancement of natural ability.   While this ethical issue can only be dismissed entirely by adopting barefoot running, it might be argued that in the modern man-made environment, denying at least a modest degree of protection would be unreasonable.  In principle there is a difference between basic protection and the overt assistance provided by embedded springs such as in the Spira.  However, once any layer of fabric is interposed between foot and ground, there is a continuum of assistance provided depending on the elastic properties of the material.   Nonetheless, most runners accept that the assistance provided by the bulk properties of a compressible material primarily designed for protection against either shock or penetrating injury is reasonable.

Cadence and foot-strike

The protective effect of shoes is clearly demonstrated by two automatic responses seen in most habitually shod runners when they change to barefoot running. Self-selected cadence increases, leading to a decrease in the duration of the airborne phase of each gait cycle, and thereby a decrease in the magnitude of the vertical force required to get airborne. Furthermore, as discussed in my recent post on style, foot-strike tends to change. The investigation led by Daniel Lieberman of Harvard University indicates that barefoot runners are more likely to adopt a mid-foot or forefoot strike rather than the rear-foot strike typically seen in about 75% of shod runners. This change in foot-strike abolishes the potentially harmful sharp rise in vertical ground reaction force that is generated by heel striking. Nonetheless, it is noteworthy that avoidance of rear-foot strike is not necessarily the case in habitual barefoot runners. For example, the study of north Kenyan habitual barefoot runners by Hatala found that 72% were heel strikers at their self-selected endurance pace, though the majority landed on the mid-foot or forefoot when sprinting, when vertical forces are greater.
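
The reasoning behind that first response can be made explicit with a simple point-mass model of the gait cycle; the algebra below is an idealisation of my own, not a calculation taken from the studies cited.

```latex
% Idealised point-mass model of one gait cycle (an illustration, not from the cited studies):
% a flight time t_a requires take-off vertical velocity v = g t_a / 2, and the net upward
% impulse during stance (duration t_s) must reverse the landing velocity.
\[
  (\bar{F} - m g)\, t_s = 2 m v = m g\, t_a
  \quad\Longrightarrow\quad
  \bar{F} = m g \left( 1 + \frac{t_a}{t_s} \right)
\]
% where \bar{F} is the average vertical ground reaction force during stance. Raising cadence
% shortens the airborne time t_a and therefore directly reduces the force that must be
% exerted in excess of body weight.
```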

As discussed in the post on style, there is little evidence that forefoot strike is more metabolically efficient; indeed, several studies show that rear-foot strike is the most efficient at low speeds. The situation with regard to risk of injury is mixed, with greater risk to the knee with heel strike and greater risk to the structures around the ankle with forefoot strike. The balance of risks favours a mid-foot strike, so a shoe style that allows this is preferable.

Joint torques

Kerrigan and colleagues reported that the torques acting at the ankle, knee and hip in runners wearing Brooks Adrenaline shoes were increased in comparison with barefoot running. The Adrenaline is described as a neutral shoe, meaning that it is not designed to strongly inhibit pronation, and has a midsole thickness ranging from 24 mm at the heel to 12 mm at the front. The increases in torque when shod were especially marked for knee varus (increased by 38%), knee flexion (36%) and hip internal rotation (54%) around mid-stance. Only a minor portion of these increases in torque could be accounted for by the lower cadence of the shod runners.

Knee varus torque places stress on the medial aspect of the knee joint at a site especially prone to osteoarthritis. It also drags the ilio-tibial band towards the lateral femoral condyle, increasing the risk of iliotibial band syndrome. Knee flexion torque places stress on the patello-femoral joint and increases the load on the patellar tendon and quads. It should be noted that tension in the patellar tendon at mid-stance is not necessarily bad, as it would be expected to increase the eccentric loading of the quads and facilitate the upward drive of the body that occurs after mid-stance. Similarly, a moderate degree of internal rotation of the hip is required as the pelvis rotates around the hip joint during stance, so a torque promoting internal rotation is not necessarily bad, though it is noteworthy that some runners do develop osteoarthritis of the hip.

The heel-toe drop

Elevation of the heel relative to the toe is the most likely explanation for the additional knee flexion torque revealed in Kerrigan's study of joint torques. Furthermore, despite providing padding, the presence of a bulky heel makes it difficult to avoid localized impact at the heel, and thereby makes a substantial contribution to the rapidly rising spike of vertical ground reaction force observed in heel strikers. As shown by Zadpoor, a rapid rise of vertical force increases the risk of injuries such as tibial stress fracture. Thus, it would appear that shoes with minimal or no drop from heel to toe, which allow initial contact further forwards, might be safer, and will also tend to be lighter.

An interesting alternative is the Healus, a shoe without a heel. A slanted sole ensures that the runner avoids heel contact and instead makes contact via a well-padded mid-foot. Force plate data demonstrate that it abolishes the initial spike in vertical ground reaction force. The padding under the mid-foot provides maximum protection when vertical ground reaction force is at its peak. However, despite an endorsement by former European 5,000m record holder Dave Moorcroft, it does not appear to have achieved much popularity, possibly because it is produced by a small company.

Ankle and forefoot motion control

The inward rolling of the foot that occurs with excessive pronation has several potentially adverse consequences. The ankle tends to be displaced towards the midline, thereby increasing varus deformation at the knee and enhancing the risk of iliotibial band syndrome and perhaps also of osteoarthritis of the medial aspect of the knee joint. The medial longitudinal arch of the foot is flattened, increasing tension in the plantar fascia and hence the risk of plantar fasciitis. Thus, in runners with excessive pronation, a shoe with a medial post that limits pronation might be beneficial. However, it should be noted that Kerrigan observed increased knee varus torque in shod runners relative to barefoot. The Brooks Adrenaline is a neutral shoe but nonetheless has a modest medial post, and hence it might appear surprising that there was increased knee varus torque. However, the shoe had not been matched to the specific needs of individuals. It is plausible that one consequence of being shod was that individuals lacked the sensation and freedom of movement within the shoe required to produce optimal adjustment of the motion at the ankle according to their individual needs. It might be argued that, at least for non-injured runners, lightweight shoes or bare feet, which provide the freedom to adapt ankle and foot motion according to individual needs and changing surface conditions, would be preferable.

Nonetheless, there is evidence that customised orthotics designed specifically to control ankle motion for each individual can reduce pain in runners with an established problem. For example, Maclean and colleagues studied the effects of 6 weeks of use of customised orthotics in a group of female recreational runners (15 to 40 km per week) who had a history of overuse running knee injury in the 6 months leading up to the study. The intervention decreased pain significantly and led to significant decreases in the maxima for ankle inversion moment and angular impulse during the loading phase, impact peak, and vertical loading rate, though the effects at the knee were complex.

Efficiency

Because the shoe is at the far end of the swinging leg, its mass makes a relatively large contribution to the energy cost of repositioning the leg during the swing phase. However, there is growing evidence that at least a small amount of padding brings a benefit that compensates for the additional weight. Franz and colleagues from Roger Kram's lab in Colorado compared oxygen consumption during barefoot running with that when wearing lightweight cushioned shoes (approximately 150 g per shoe) in 12 runners with substantial barefoot experience, running with a mid-foot strike on a treadmill. In additional trials to determine the effect of added weight, they attached small lead strips to each foot/shoe (150, 300, and 450 g). They found that in the absence of added weight there was no significant difference between shod and unshod running. Adding weight led to an increased metabolic cost of approximately 1% for each 100 g of added weight. When mass was equalised between the shod and unshod conditions, shod running had a ∼3-4% lower metabolic cost.
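
A back-of-envelope combination of these two findings, using the ~1% per 100 g penalty and an assumed ~3.5% cushioning benefit as rough coefficients (my own approximate reading of the numbers above, not values reported in this exact form), illustrates why a light cushioned shoe comes out roughly neutral relative to barefoot while a heavy one does not.

```python
# Back-of-envelope sketch: combine a ~1% metabolic penalty per 100 g per foot with an
# assumed ~3.5% saving from cushioning. Coefficients are rough illustrations of the
# findings quoted above, not values reported by the studies in this exact form.

def relative_cost(shoe_mass_g, cushioning_saving=0.035):
    """Metabolic cost relative to barefoot running on a hard surface (1.0 = barefoot)."""
    mass_penalty = 0.01 * (shoe_mass_g / 100.0)
    return 1.0 + mass_penalty - cushioning_saving

for mass_g in (150, 300, 450):
    print(f"{mass_g} g cushioned shoe: {relative_cost(mass_g):.3f} x barefoot cost")
```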

In a further experiment from the same lab, Tung and colleagues measured the metabolic cost of barefoot running on an unpadded treadmill and after adding strips of padding of either 10 mm or 20 mm thickness to the surface of the treadmill. They also measured the cost of running shod in lightweight shoes on the unpadded treadmill. They found that when running barefoot, 10 mm of foam cushioning (approximately the thickness of a forefoot shoe midsole) afforded a benefit of 1.91%. There was no significant difference between the metabolic costs of shod and unshod running on the unpadded treadmill, indicating that the positive effect of shoe cushioning counteracts the negative effect of added mass.

Thus, running barefoot offers no metabolic advantage over running in lightweight, cushioned shoes. The explanation for this remains speculative. One possibility is that when running barefoot, a runner maintains a lesser degree of stiffness in the legs, resulting in less efficient capture of impact energy as elastic energy, in the same manner as a floppy spring produces a less efficient recoil than a stiff spring.

While lightweight shoes might offer adequate protection in short and medium distance events, it is necessary to consider the possibility that in a marathon or ultra-marathon the cumulative damage from repeated eccentric contractions will result in a significant loss of power. A little more padding might protect against this loss of power. Similar issues apply during periods of high volume training. Last summer, while training for a half-marathon, I built up my total training load to a substantially higher volume than during any recent year and found that I suffered a gradual accumulation of aches in my legs. Hence, at least for an elderly person, lightweight shoes should be employed sparingly, but nonetheless frequently enough to produce the adaptive changes required if they are to be used for racing.

Conclusion

My overall conclusion is that for racing distances up to the half marathon, lightweight shoes with near zero drop from heel to toe are preferable, as these give the optimum combination of efficiency and protection. Unless the legs have been very well conditioned to the rigours of long races, it might be preferable to use a little more padding for the marathon and ultra-marathons. Similarly, during periods of very high volume training, a modest amount of additional padding might provide helpful additional protection. Motion control is only sensible if there is a clear need.

The big debates of the past decade: 1) Running style

February 17, 2014

In those distant days when I was a fairly serious athlete, we did not think much about style.  Emil Zatopek’s three gold medals in Helsinki a few years previously had suggested that training mattered far more than style.   The ungainly tension in his neck and shoulders was an irrelevancy. We were much more interested in the word-of-mouth rumours of his prodigious training sessions.  At the time, we debated the merits of Percy Cerutty’s advocacy of running up sand-hills in contrast to Arthur Lydiard’s advocacy of 100+ miles per week.  Neither style nor injuries were a major preoccupation.

In contrast, during the past decade, running style has become a focus of attention among elite and recreational athletes.  The focus of the elites is illustrated by Alberto Salazar’s efforts to improve Mo Farah’s efficiency prior to his attempt at the London marathon this year. But perhaps even more importantly, style has been a focus of attention of recreational runners concerned about repeated injury.

A decade ago, distance running had blossomed into a mass participation sport and injuries were rife. Marketing of running shoes had become a major commercial enterprise. The running world was primed to embrace the idea that running no longer came naturally to modern man. Was it a consequence of wearing shoes all day or sitting for hours at a desk? Maybe it was even the training shoes that large commercial companies encouraged us to buy.

The time was ripe for the emergence of gurus with messages about how to run naturally. Techniques such as Pose and Chi became popular. These techniques were embraced with almost religious fervour and many of the disciples found relief from their recurrent injuries. Unfortunately, other novices came away from their flirtation with these techniques with new injuries, especially problems with the Achilles tendon. Now, a decade or so later, the reasons for these contrasting experiences are fairly easy to identify. Although a few important issues about running style remain unresolved, the decade of experience and research has provided fairly clear answers to the major questions.

Natural running: forefoot or heel strike?

One of the most hotly debated issues has been the question of heel striking versus forefoot striking. In part, this debate arose from an idealistic quest to identify mankind's natural running style, unsullied by the influence of modern lifestyles. I will focus predominantly on Pose because its strengths and weaknesses are fairly well documented in books, research papers and on the Pose Tech website. The second chapter of 'Pose Method of Running' (Pose Tech Corp, 2002) opens with an examination of images of runners on classical Greek pottery. One of the images is from an amphora depicting runners at the Panathenaic games in 530 BC. The inventor of Pose, Nicholas Romanov, writes: 'Look at these drawings and you will see quite clearly that all the athletes run on the front part of the foot without landing on the heel.  As barefoot runners this was the obvious technique for efficiency and to avoid injury.  To my mind this barefoot running style of landing on the forefoot is the purest example of the proper nature of running….As the Golden Age of Greece passed mankind appeared to leave these values far behind'.

Fig 1: Runners at the Panathenaic games, 530 BC. These athletes were competitors in the stadion, a sprint over a single length of the track (about 200 metres). Terracotta Panathenaic prize amphora, attributed to the Euphiletos Painter. Copyright, The Metropolitan Museum of Art, http://www.metmuseum.org

The appeal to a golden age of classical Greece subsequently received some support from rigorous science. Based on evidence that our distant ancestors living on the African savannah around 2 million years ago were probably persistence hunters who relied on their capacity to chase their prey to exhaustion, Daniel Lieberman and colleagues at Harvard University examined the foot-strike pattern of barefoot runners in comparison with runners wearing modern running shoes. He found that barefoot runners tended to land on the forefoot or mid-foot whereas runners wearing shoes tended to be heel strikers. The heel strikers experienced a more rapid rise in the loading of the legs in early stance, although Lieberman was careful to avoid claiming at that stage that forefoot striking would result in a lower injury rate.

Further investigation casts some doubt on the conclusion that habitual barefoot runners are not heel strikers.  A study of habitual barefoot runners from north Kenya by Hatala and colleagues did provide further evidence that forefoot strike reduces the magnitude of impact loading.  However, these habitually barefoot Kenyan runners tended to land on midfoot or forefoot only when running at sprinting speed, where impact loading is high.  The majority of them landed on the heel at endurance running speeds (5 m/sec or less).  At their preferred endurance speed (average of 3.3 m/sec) 72% were heel strikers.  Could it be that heel striking is actually more efficient at endurance paces?

Is heel striking more efficient at endurance paces?

Ogueta and colleagues from Spain compared efficiency in two well-matched groups of sub-elite distance runners and found that heel strikers are more efficient than mid-foot strikers across a range of speeds. Heel strikers were 5.4%, 9.3% and 5.0% more economical than mid-foot strikers at speeds of 11, 13 and 15 km/h respectively. The difference was statistically significant at 11 and 13 km/h, but only showed a trend towards significance at 15 km/h. DiMichele and Merni from Italy, who tested runners only at a single speed of 14 km/h, found no significant difference in efficiency between sub-elite heel strikers and mid-foot strikers. Overall, the evidence suggests that at paces typical of recreational endurance running, heel striking is more efficient but the advantage diminishes as pace increases. This is consistent with the observation that in most runners the point of contact at footfall moves forward along the sole of the foot as speed increases.

These studies were cross-sectional studies comparing different runners. Indirect evidence of the effect of a change to forefoot landing within an individual is provided by the longitudinal study by Dallam and colleagues of 8 athletes who changed to Pose. They found that 12 weeks after changing to Pose, the athletes were on average 7.6 percent less efficient than before the change. Perhaps 12 weeks is not long enough to achieve facility with a new style, but the consistency between the magnitude of the penalty associated with forefoot/mid-foot striking in the study by Ogueta and the penalty attributable to Pose in the study by Dallam adds weight to the conclusion that heel-striking is more efficient at endurance paces.

With regard to risk of injury, the evidence is more complex. In a retrospective study of US collegiate distance runners, Daoud and colleagues found that habitual rear-foot strikers had approximately twice the rate of repetitive stress injuries of individuals who habitually landed on the forefoot. Traumatic injury rates were not significantly different between the two groups. The sharp initial rise of ground reaction force observed with heel strikers is a likely factor in the risk of injuries such as tibial stress fracture. It is noteworthy that during a session that included a total of approximately an hour of running at lactate threshold pace, Clansey and colleagues found that several measures, including the rate of rise of ground reaction force in early stance, increased significantly, suggesting an increased risk of stress fracture with increasing fatigue.

However, mid-foot and forefoot strike have their own risks, especially for the muscles and connective tissues acting at the ankle, as indicated by the Cape Town study of Pose. Consistent with this, Almonroeder and colleagues found a 15% greater load (averaged over stance) and an 11% greater rate of rise of tension in the Achilles tendon in mid-foot and forefoot strikers compared with heel strikers.

Should the push be conscious?

One of the features that appears to account for some of the success of Pose in reducing injury rates among its dedicated disciples is the avoidance of a conscious push against the ground.   In reality, force plate data clearly demonstrates that runners do push against the ground, with peak vertical forces often exceeding three times body weight.  A study by Weyand and colleagues demonstrates that faster runners push harder against the ground.   Many elite sprinters, including Usain Bolt, report that they do consciously push.   However, my own speculation is that for recreational distance runners, a conscious push can be harmful if it encourages a delay on stance, and an associated increase in braking.  Paradoxically, since the delay decreases airborne time, a lesser vertical push is required to maintain the airborne phase, but a greater horizontal push is required to overcome braking.   Excessive horizontal push is potentially harmful, as we will discuss in the section on risks of braking, below.

Perhaps serendipitously, Pose discourages this potentially harmful conscious push by investing faith in the illusion of gravitational free energy.  According to Nicholas Romanov, one of the most important principles of Pose is the ‘Do Nothing Concept’ which he describes on pages 88 and  89 of Pose Method of Running:   ‘We must learn to get out of the way and let gravity propel us forward while we preserve as much of our energy as possible by the simple act of picking our feet off the ground.’

In the words of Romanov and Fletcher: ‘Runners do not push off the ground but fall forwards via a gravitational torque’.  Pose theory draws on the observation that pivoting forwards is an effective way to initiate the action of running to explain how gravitational free energy can allegedly be harnessed even during running at a steady speed. The theory proposes that this can be achieved by employing the sequence of Pose, Fall, Pull, in the period from mid-stance to lift-off.  Romanov’s description of the Pose does in fact match the balance posture of many good runners at mid-stance: knees and hips are slightly flexed while the hips and shoulders are aligned over the point of support through which force is transmitted from foot to ground.  However, the Fall, which Romanov claims provides gravitational free energy, simply does not occur.

The body's mass rises rather than falls in the second half of stance. This is clearly predicted by computation based on the time course of ground reaction forces, and is also clearly apparent from video clips. The Pose Tech website claims that Usain Bolt employs Pose style, yet examination of the stills from the video of Bolt winning the 100m World Championship in Berlin in 2009 depicted on the PoseTech site indicates that his hips and torso rise about 7 cm between mid-stance and lift-off. The origin of Romanov's erroneous concept of the fall is revealed in fig 7 of his paper published with Graham Fletcher in Sports Biomechanics in 2007. In that figure, the authors mistakenly assume that the vertical component of ground reaction force is equal to body weight, whereas force plate data show that it is several times body weight at mid-stance. I discussed this issue in greater detail in my post of 14 Feb 2010. And finally, it is no more possible to get airborne by pulling the foot towards the hips than it is to self-elevate by pulling on one's bootstraps.

However, despite being based on fallacious theory, Pose does offer some benefits to at least some recreational runners. The discouragement of harmful excessive conscious pushing is balanced by a focus on drills, such as Change of Stance, that help develop the neuromuscular coordination required to get off stance quickly. However, a greater vertical push would be required to maintain the longer airborne time if stance time were decreased at constant cadence. Pose technique averts this problem by encouraging increased cadence. For recreational runners who tend to spend too long on stance and to run with a cadence that is too low, Pose can be helpful. However, short stance and high cadence each create their own problems. A rational approach to the challenge of identifying the optimum foot-strike, duration of stance and cadence for an individual runner under particular circumstances requires an understanding of the benefits and risks associated with the three major energy costs of running: overcoming braking while on stance; getting airborne; and repositioning the swinging leg during the airborne phase.

Balancing the three main costs

It is clear that efficient running requires a trade-off between the three major energy costs of running: getting airborne, overcoming braking and repositioning the limbs. We can minimise the energy cost of braking by getting off stance quickly, but that creates a demand for greater energy expenditure to maintain a longer airborne phase, unless cadence is increased. However, as described in my post of April 2012, increased cadence demands more energy expenditure to reposition the swinging leg, so we need to find a compromise that minimises total cost. The optimum balance between the three costs depends on pace and other circumstances, such as level of fatigue. We also need to take account of the need to minimise injury.
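
As a purely illustrative exercise, the toy model below assigns an invented cost to each of the three components and shows how they pull in different directions as cadence changes at a fixed pace. The functional forms and coefficients are made up for illustration and are not fitted to any data, so the numbers themselves mean nothing; only the directions of the trade-offs are the point.

```python
# Toy model only: invented cost terms showing the direction of each trade-off as
# cadence changes at a fixed pace. Coefficients are arbitrary; the duty factor is
# held constant for simplicity, although in reality it also varies with cadence.

def cost_components(cadence_spm, speed_mps=4.0, duty_factor=0.35):
    step_time = 60.0 / cadence_spm               # seconds per step
    stance_time = duty_factor * step_time
    airborne_time = step_time - stance_time
    steps_per_s = cadence_spm / 60.0
    airborne = (9.81 * airborne_time) ** 2 * steps_per_s    # falls as cadence rises
    braking = (speed_mps * stance_time) ** 2 * steps_per_s  # falls as cadence rises
    reposition = 0.075 * cadence_spm                        # rises with cadence
    return airborne, braking, reposition

for cadence in (160, 180, 200):
    a, b, r = cost_components(cadence)
    print(f"{cadence} steps/min: airborne={a:.1f} braking={b:.1f} reposition={r:.1f} total={a+b+r:.1f}")
```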

Risks of getting airborne

Getting airborne demands a strong push against the ground.  It appears at first sight plausible that the stronger the push the greater the risk of injury. Surprisingly, studies that compare injury rates between individuals who differ in the magnitude of the vertical push they exert against the ground do not consistently find a significant association with injury rate.  This might well be because the strength that allows faster runners to push more strongly also helps protect them against injury.  Thus comparison between different runners might obscure a relationship between intensity of push and injury risk for an individual.

Some studies, reviewed by Zadpoor, do demonstrate an association between the rate of rise of the vertical forces and the risk of injuries such as tibial stress fracture. The rate of rise of force is related both to the duration over which the force rises to a peak (determined largely by the type of foot-strike, with heel striking creating a steeper rise), and to the magnitude of the average force (which is inversely proportional to the fraction of the gait cycle spent on stance). Thus it is likely that at high speeds a strong push combined with a mid-foot or forefoot landing produces optimum efficiency and safety, though forefoot landing is only safe if the Achilles is well enough conditioned to take the strain. For most runners it is probably safer to ensure that at least some of the load is taken on the heel in longer races. At slower speeds, efficiency is greater with heel striking, and the risk of injury depends on whether the individual is more prone to adverse effects of stress at the knee or the ankle. Stress on the knee is greater with heel strike, but greater at the ankle with forefoot strike, as demonstrated in the Cape Town study of Pose.

It should also be noted that precise timing of the vertical push is crucial. For many runners, attempting to control the push consciously is counter-productive. In contrast, most of us are capable of much more precise timing of hand movements. The arm and the leg on the opposite side are linked in their representation in the brain, and also, more tangibly, by the latissimus dorsi muscle and lumbo-sacral fascia that link the upper arm to the pelvis on the opposite side. Therefore, conscious focus on a sharp down-and-backward movement of the arm can help ensure precise timing of the push by the opposite leg. This sharp downswing of the arm should be accompanied by conscious relaxation of the shoulders. I personally find this strategy more helpful than cultivating an illusion of falling after mid-stance.

Risks of braking

Braking generates both compression forces and shear forces at joints, and also increases the stress on the hip extensors, which must overcome the excessive hip flexion associated with the forward angle of the leg at foot-strike. One possible consequence is pain at the point of attachment of the hamstrings to the pelvis. Therefore, from the injury perspective, excessive braking must be avoided, but it is necessary to bear in mind that there is a trade-off between the low braking cost of a short time on stance and the cost of being airborne for a greater proportion of the gait cycle. If excessive braking is to be avoided, it is crucial to avoid reaching forwards with the swinging leg, and to ensure that the foot lands only a short distance in front of the centre of mass. The jarring associated with braking can be reduced by ensuring that the knee is flexed slightly more than the hip at foot-strike, but the penalty is a loss of rigidity of the leg, which might reduce the efficiency of the capture of elastic energy. As discussed above, there is a trade-off between braking and getting airborne. Excessive braking demands an excessive horizontal push after mid-stance, and an inevitable increase in total stance time. For a runner prone to spending too long on stance, focus on a precise push-off, governed by a conscious down-swing of the opposite arm, can promote a good balance between the cost of braking and the cost of getting airborne.

Repositioning cost and cadence

The third element, leg repositioning cost, increases with increasing cadence, but conversely, the energy cost of getting airborne decreases with increasing cadence. The stresses on the tissues of the body associated with getting airborne, and therefore the likely risk of injury, also decrease with increasing cadence. Therefore, many runners, both recreational runners and even some elites, including Mo Farah, might benefit from increasing their cadence, but not so far that the increased energy cost of repositioning becomes excessive. The optimum cadence depends on various circumstances. Observation of elite runners and the calculations presented in my blog posts in Feb and March 2012 suggest that the optimum cadence is at least 180 steps/min at 4 m/sec, and 200 steps/min at 5.5 m/sec. However, the precise optimum for each individual will depend on leg strength and elasticity. For runners with lesser power and elasticity it is probably best to employ a higher cadence, thereby reducing the need for vertical push. As my leg muscle power and elasticity have deteriorated with age, I have been forced to increase cadence. Typically my cadence is around 200 even at a pace of 4 m/sec. This involuntary increase in cadence has helped minimise the risk of damage to my elderly legs, at the price of inefficient expenditure of energy on repositioning my swinging leg. I am therefore working on increasing power and elasticity so that I can push off more powerfully and thereby decrease cadence safely.

Conclusion

Perhaps the most serious error promulgated by gurus is the claim that there is a single best style that is both most efficient and safest. The evidence of greater efficiency of heel-striking at endurance paces, yet greater risk of at least some repetitive strain injuries with heel strike, illustrates the fallacy of this claim. The most efficient foot-strike pattern, time on stance and cadence vary with pace, and in addition the risk of injury depends on factors that vary between individuals, such as the strength of muscles, tendons, ligaments and bones. Perhaps the most important strategy of all for minimising injury is building up training load slowly over time, and being aware of the effects of fatigue on form during demanding sessions.

Running style does play a crucial role but a much more nuanced approach based on an understanding of the costs and benefits of each aspect of form must be taken to identify what is best for each individual in their current circumstances.   The debate and the scientific studies of the past decade have indeed provided us with much information to make these nuanced judgments.

The five big debates of the past 10 years

February 6, 2014

The past decade has seen a continued growth of distance running as a mass participation sport. The major city marathons continue to attract many thousands of entrants with aspirations ranging from sub 2:30 to simply completing the distance in whatever time it takes. Perhaps more dramatically, parkrun has grown from a local weekly gathering of a few club runners in south-west London to an event that attracts many tens of thousands of individuals at hundreds of local parks, not only in the UK but world-wide, on Saturday mornings to run 5Km in times ranging from 15 min to 45 min before getting on with their usual weekend activities. Over this same period, the ubiquity of internet communication has allowed the exchange of ideas about running in a manner unimaginable in the days when distance running was a minority sport pursued by small numbers of wiry, tough-minded individuals whose main access to training lore was word-of-mouth communication.

Not surprisingly, within this hugely expanded and diverse but inter-connected community there have been lively debates about many aspects of running, with diverse gurus proposing answers to the challenges of avoiding injury and getting fit enough to achieve one's goals. Pendulums have swung wildly between extremes. My impression is that the fire in most of the debates has lost much of its heat as the claims of gurus have been scrutinised in the light of evidence. However, definitive answers have remained elusive. What have we learned that is useful from these turbulent ten years?

There have been 5 major topics of debate:

1) Does running style matter and if so, is there a style that minimises risk of injury while maximising efficiency?

2) Are minimalist running shoes preferable to the heavily engineered shoes promoted by the major companies?

3) What is the optimal balance between high volume and high intensity training in producing fitness for distance running?

4) Is a paleo-diet preferable to a high carbohydrate diet?

5) Does a large amount of distance running actually damage health, and in particular, does it increase the risk of heart disease?

In all five topics, debate still simmers. I have scrutinised the scientific evidence related to all five of these questions in my blog over the past seven years, and I hope I will still be examining interesting fresh evidence for many years to come. However, whatever answers might emerge from future science, in our quest to determine the answers that will help us reach our running goals we are each an experiment of one, and now is the point in time when we must act. I think that the evidence that has emerged in the past decade has allowed me to make better-informed choices in all five of these areas of debate than would have been possible ten years ago. In my next few posts, I will summarise what I consider to be the clear conclusions from the past decade of debate, what issues remain uncertain, and what decisions I have made with regard to my own training and racing.

For me personally, the greatest challenge as I approach my eighth decade is minimising the rate of the inexorable deterioration of muscle power, cardiac output and neuro-muscular coordination that age brings. Therefore my approach to these debates is coloured by the added complications of aging. Nonetheless, my goal is not only to continue to run for as many years as possible, but also to perform at the highest level my aging body will allow during those years. I hope that the conclusions I have reached will be of interest to any runner aiming to achieve their best possible performance, whatever their age.

Why do marathon runners slow down – the role of muscle damage

January 5, 2014

While planning the next few months of base-building for a marathon in the autumn, I have been pondering the question of what are the most important foundations for marathon running.   The marathon is run in the upper reaches of the aerobic zone, so at first sight, the most important goal of training is extending the duration for which one can maintain a pace in the vicinity of lactate threshold.   This requires a good capacity for metabolizing lactate, so developing that capacity will be part of my base-building.

Perhaps the most infamous feature of the marathon, at least in the minds of many recreational runners, is the 'wall' that awaits somewhere near the 20 mile mark. It is often assumed that this wall reflects the point at which glycogen stores are exhausted and all available glucose is shunted away from muscle to the brain. For the ill-prepared runner, that might well be a major issue, but dealing with the risk of serious glucose depletion should be relatively straightforward. A large volume of low to mid-aerobic running and sensible nutrition in the preceding months should ensure that a good proportion of the fuel at marathon pace is derived from fat, thereby conserving glycogen; this, together with adequate ingestion of carbohydrates during the race itself, should minimise the risk of a shortage of glucose.
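
Some rough arithmetic makes the fuelling picture concrete. The body mass, energy cost per kilometre, glycogen store and in-race carbohydrate intake below are commonly quoted ballpark figures chosen purely for illustration, not measurements.

```python
# Illustrative arithmetic only: all figures are rough, commonly quoted ballpark values.

body_mass_kg = 70
energy_cost_kcal = 1.0 * body_mass_kg * 42.2   # ~1 kcal per kg per km over the marathon
glycogen_store_kcal = 2000                     # liver + muscle glycogen, very approximate
in_race_carbs_kcal = 60 * 4 * 3                # ~60 g/h of carbohydrate at 4 kcal/g over ~3 h

fat_contribution = energy_cost_kcal - glycogen_store_kcal - in_race_carbs_kcal
print(f"Total cost ~{energy_cost_kcal:.0f} kcal; even with full glycogen stores and "
      f"in-race carbs, ~{fat_contribution:.0f} kcal must come from fat.")
```

In practice fat typically covers a much larger share than this minimum, which is precisely what spares glycogen for the final miles.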

While it is true that for many marathoners the gruelling memories are centred on the final few miles, in my own memories of the times when I have run a marathon with inadequate preparation, the  point at which I became aware that I was not running well occurred shortly after half-way.   At that stage, the problem wasn’t breathlessness, or agony.  It was a loss of fluency in my stride.   I was therefore intrigued by Reid Coolsaet’s account of his tribulations in Fukuoka in December.   Reid’s blog provides the best personal account of elite marathoning available on the web.

Reid Coolsaet in Fukuoka, 2013

Reid had arrived in Fukuoka better prepared than ever and was aiming for sub 2:10, a PB and a Canadian national record. He started in the leading pack behind the pacemakers, Collis Birmingham and Ben St. Lawrence from Australia, running 3 min Kms (2:07 pace). When the lead pack split on an upward slope just before 16Km, Reid sensibly opted to stay back with the second group, which included one of the current leading Japanese marathoners, Arata Fujiwara, who has a PB of 2:07. However, Fujiwara was having a bad day and the second group slowed too much, so Reid left them at 18Km and ran on alone. He was still comfortable maintaining his target pace when he reached half-way in 1:04:11. He then lost a few seconds as a consequence of grabbing the wrong bottle at the 25.8Km water station. He covered the 5Km from 30 to 35Km in 15:51 but was not too worried at that stage. He reports that after 35Km the going got really tough and he began to 'lose it mentally'. He eventually finished in 6th place in a very creditable 2:11:24, just over 5 minutes behind the winner, Martin Mathathi of Kenya.
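
Converting the splits quoted above into pace per kilometre (simple arithmetic on the numbers in the preceding paragraph) makes the shape of the race clearer.

```python
# Simple arithmetic on the splits quoted above, expressed as pace per km.

def pace_per_km(total_seconds, km):
    sec = total_seconds / km
    return f"{int(sec // 60)}:{sec % 60:04.1f}/km"

print("sub-2:10 target     :", pace_per_km(2 * 3600 + 10 * 60, 42.195))
print("first half (1:04:11):", pace_per_km(1 * 3600 + 4 * 60 + 11, 21.0975))
print("30-35 Km (15:51)    :", pace_per_km(15 * 60 + 51, 5.0))
print("finish (2:11:24)    :", pace_per_km(2 * 3600 + 11 * 60 + 24, 42.195))
```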

Reid had again demonstrated that he is not very far behind the best of the current North American marathoners, despite lacking the resources of Nike's Oregon Project. In his own analysis, the problem was running solo from 18Km to the end. That was almost certainly part of the problem. However, despite the seconds lost as a result of the confusion with the wrong bottle at 25.8Km, I think that the crucial evidence that the wheels were coming off was the 15:51 split from 30 to 35Km. I suspect that the damage had been done in the first 15Km, which he had covered about 1 minute too quickly. But what was the damage he had done? I doubt that either burning a little more glucose in the first 15Km or the confusion with his re-fuelling had left him in a glycogen-depleted state by 30Km.

Running pace decrease and markers of muscle damage during a marathon

I think perhaps a clue is to be found in the recently published study of marathon runners by Juan Del Coso and colleagues from Madrid. Del Coso performed a variety of physiological measurements on a group of 40 amateur runners immediately before and after the 2012 Madrid Marathon. The investigators retrospectively divided the runners into two groups according to how well they maintained pace during the race. The group of 22 runners who exhibited a decrease in pace of less than 15% from the first 5Km to the end were classified as having maintained their speed, while the group of 18 runners who slowed by more than 15% between the first 5Km and the end were classified as having a pronounced decrease in speed. The decreased-speed group slowed by an average of 29%, while the group classified as having maintained speed exhibited an average decrease of 5%.
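
The classification rule is simple enough to express directly; the sketch below applies the 15% criterion described above to a pair of hypothetical runners (the example split times are invented for illustration).

```python
# Sketch of the classification rule described above; the example splits are hypothetical.

def pronounced_slowing(first_5k_s, final_5k_s, threshold=0.15):
    """True if running speed over the final segment is >15% slower than over the first 5 km."""
    first_speed = 5.0 / first_5k_s
    final_speed = 5.0 / final_5k_s
    return (first_speed - final_speed) / first_speed > threshold

print(pronounced_slowing(25 * 60, 27 * 60))   # ~7% slower  -> False (maintained speed)
print(pronounced_slowing(25 * 60, 33 * 60))   # ~24% slower -> True  (pronounced decrease)
```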

The most interesting feature of the 5Km split times over the course of the race was the fact that the group with a pronounced pace decrease began to slow down markedly shortly after half way. The difference in pace between the two groups became statistically significant for the split from 20 to 25Km. But even more interestingly, the most significant difference in the physiological measurements was a much greater increase in the blood levels of the muscle proteins myoglobin and lactate dehydrogenase, between the start and finish, in the group who slowed. These proteins are markers of muscle damage. Both groups exhibited a decrease in counter-movement jump (CMJ) height from before to after the event, but this decrease was greater in the group who slowed substantially. The group who maintained their speed exhibited a 23% decrease in CMJ height, while the group with pronounced slowing suffered a 30% decrease.

Both groups of runners exhibited a decrease in weight of approximately 3%, assumed to be due to dehydration. There was no evidence of a decrease in blood glucose in either group. The runners had been allowed to take fluids and carbohydrates according to their own inclination during the race. There was no appreciable group difference in body temperature. Thus, there was no evidence that dehydration, decreased blood glucose, or hyperthermia accounted for the different degrees of slowing in the two groups. It is also noteworthy that there had been no significant difference in prior training volume between the groups. In fact the group who showed the most pronounced slowing had actually performed a slightly larger volume of training.

Thus the findings from this study suggest that for reasonably well-trained amateur runners who are allowed to re-hydrate and re-fuel according to their own inclination during the race, the major feature associated with deteriorating pace is muscle damage. Furthermore, the deterioration becomes manifest shortly after the half-way point.

The observation of appreciable loss of strength and power, together with increased levels of muscle proteins in the blood indicating skeletal muscle damage during endurance events, has been reported previously. For example, the year before, Del Coso and colleagues had studied 25 triathletes participating in a half-ironman event. They found that after the event, the capacity of the leg muscles to produce force was markedly diminished, while arm muscle force output remained unaffected. Leg muscle fatigue was correlated with increases in blood levels of the muscle proteins myoglobin and creatine kinase, suggesting that muscle breakdown is one of the most relevant sources of muscle fatigue during a half-ironman.

My own experience

Looking back to my own experience in the half marathon in September, I was aware of aching legs through much of the race.  Indeed I had been experiencing pronounced aching of the legs following most of my long runs during the preceding months.  In my recent post I discussed the possible role of elevated cortisol in my mediocre half marathon performance.  While a link between cortisol and muscle damage is speculative, it is perhaps plausible that sustained elevation of cortisol had left me in a catabolic state, with reduced capacity to repair muscle damage following long runs, for a period of several months.

What are the implications for base-building this year?  The first implication is that I should build up the length of long runs cautiously to minimise the risk of developing a catabolic state.  I am even considering adopting Jeff Galloway's run/walk approach to see if I can build up to a weekly training volume of 50 miles or more without persistent aching of my legs.  As far as I can see there has been little good independent scientific investigation of the run/walk strategy, though I think there are reasons to believe it might be a sound approach – and not just for elderly runners such as myself.  I will discuss this in a future post.

An alternative approach is to include more sprint training.  In a study of the muscle damage produced by drop-jumping (which is often regarded as a good model of the eccentric stress produced by running), Skurvydas and colleagues compared sprinters with long-distance runners and a group of untrained controls.  Following 100 maximal-effort drop-jumps, the sprinters experienced a smaller reduction in counter-movement jump height than the other two groups, while there was no appreciable difference in the evidence of damage suffered by the distance runners and the untrained controls.  It appears that sprint training might protect against muscle damage much more effectively than long-distance training.

Re-appraisal: the benefits and damage produced by cortisol

December 30, 2013

This year has been frustrating in an undramatic but challenging way.  Undramatic because I have remained free of overt injury apart from some persistent though relatively mild problems with my joints and ligaments, but challenging because it has not been easy to identify why my fitness improved so slowly and then degenerated so rapidly.  I achieved a greater volume of training – approximately 2000 miles (including the mileage equivalent of my elliptical cross-training sessions, estimated on the basis of 100 Kcal = 1 mile) – than in any year in the past four decades.  Taking account of my slower training paces, it is probable that I have actually spent more time training this year than during any year in my entire life.  Yet through the summer I was frustrated by the tardy rate of improvement in fitness.
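
For readers puzzled by the book-keeping, this is a minimal sketch of the convention I use for 'equivalent' mileage; the weekly figures in the example are illustrative rather than taken from my log:

```python
KCAL_PER_EQUIVALENT_MILE = 100  # my own book-keeping convention, not a physiological constant

def equivalent_miles(running_miles, elliptical_kcal):
    """Weekly running mileage plus the mileage equivalent of elliptical work."""
    return running_miles + elliptical_kcal / KCAL_PER_EQUIVALENT_MILE

# Illustrative week: 35 miles of running plus 1500 Kcal on the elliptical cross trainer
print(equivalent_miles(35, 1500))  # 50.0 equivalent miles
```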

There were few occasions when I experienced the exhilaration of running fluently and powerfully.  I felt tired much of the time and experienced persistent aching of the connective tissues in my legs.  My short term goal was a half marathon time faster than 101:50.  Despite the fact that I was unable to maintain a pace of 5 min/Km (corresponding to 105:30 for the HM) for even a few Km as the date of the event approached, I nurtured the hope that a three week taper, with some drills and faster running to sharpen my pace, would allow me to defy any rational prediction based on the evidence of my limited fitness.  However, in the event, rational prediction was indeed confirmed and despite a spirited finish, I recorded a time of 107:45.
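
The correspondence between pace and half marathon time quoted above is simple arithmetic; a minimal sketch, assuming a steady pace over the 21.0975 km distance:

```python
HALF_MARATHON_KM = 21.0975

def finish_time(pace_min_per_km, distance_km=HALF_MARATHON_KM):
    """Finishing time implied by a steady pace, as (minutes, seconds)."""
    total_minutes = pace_min_per_km * distance_km
    return int(total_minutes), round((total_minutes % 1) * 60)

print(finish_time(5.0))  # (105, 29) -> roughly 105:30 for the half marathon
```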

In the aftermath I took some consolation from the fact that I had coped with a large volume of training without injury, though I was aware that I would need a few easy months to allow my body to recover.  I cut my training volume to an average of slightly less than 30 (equivalent) miles per week, including an increased proportion of elliptical cross-training.  After a month or so, I added a modest plyometric program as described in my previous post.  The encouraging outcome was a modest improvement in hopping and jumping, indicating some gain in musculo-tendinous resilience and eccentric strength, but the dominating feature of the final few months of the year has been a devastating loss of fitness.  Although I had cut my training volume substantially, the cut was less ruthless than the cut forced upon me by an episode of arthritis a year ago, yet the loss of fitness has been far greater.  I am now at my lowest ebb since summer 2011, when a hectic and exhausting six months at work had left me with neither the time nor the energy for solid training.  In 2011, my lack of fitness illustrated the fact that for the elderly, fitness is hard won and easily lost.  But in 2013, I am facing a disconcerting question: has fitness become even more difficult to gain and easier to lose as I have moved from my mid to late 60s, or did I do something wrong this year?

I had used submaximal tests throughout the summer to ensure that I was just one step short of over-training, as indicated by autonomic measures of stress, such as heart rate at submaximal effort and resting heart rate variability.  However, perhaps I should have taken more notice of the chronic tiredness and aching legs.  I suspect that my mediocre half marathon performance demonstrated that I was not merely over-reaching, but that at least a mild degree of over-training had interfered with my ability to benefit from training.  This lurking suspicion has been strongly reinforced by the devastating loss of fitness since September.  It is clear that I had not built a sound base.

In the past few months, I have continually questioned whether or not I simply needed a bit more rest, but on the occasions when I have cut back the training volume even more drastically for a few weeks, the deterioration in fitness has accelerated.   So the evidence suggests that I was not merely in a state of functional over-reaching from which the body bounces back with renewed vigour after a brief respite.  I had almost certainly over-trained. I was in need of a more profound rest.  Yesterday, I ran 11 Km, the longest distance I have run in recent times.  My pace was very slow – around 6:30 min/Km – but for the first time since September, my legs did not ache.  I hope this indicates that I have now passed the nadir, and can again begin re-building, but very cautiously.

Do miles make champions?

Arthur Lydiard's simple dictum, 'miles make champions', is undoubtedly true.  Indeed the rapid crumbling of my fragile fitness base as I have cut back the training volume since September confirms the crucial role of training volume.  But just as every wise proverb can be countered by one that draws the opposite conclusion, miles can also undo would-be champions.  In the summer, the most striking contrast with my training of recent years was the much greater proportion of long runs, mainly at a quite slow pace.  I think it is likely that I did too many long runs this year.  As I discussed in a post in October, Dudley's studies of rats that ran at various intensities for various durations showed that the increase in the mitochondrial enzymes that are essential for aerobic metabolism reaches a plateau after a sufficiently long duration of running.  The plateau was achieved late, and was highest, in the rats that ran at the modest pace of 30 metres/min, which they could maintain comfortably for over 90 minutes.  A somewhat faster pace of 40 m/min, which was near to the peak pace that they could maintain for 90 minutes, produced a slightly lesser gain in aerobic enzymes.  Paces below 30 m/min achieved an even lower plateau.  On the other hand, at paces faster than 40 m/min the gains in fitness were more rapid, but the duration for which the animals could sustain the pace was shorter and consequently the total gain in fitness was less than that achieved at 30 m/min.

Rats differ from humans in many respects, and the actual paces of animals with legs only a few centimetres long are of little relevance to humans, but the muscle physiology of rats is essentially similar to ours.  It is likely that similar principles govern the effects of training.  There appear to be two major conclusions.  First, the greatest gain in aerobic capacity is achieved by a 'good aerobic pace' that can be maintained comfortably for 90 minutes.  Second, there is a limit to the benefit in aerobic capacity obtainable by increasing the length of training sessions.  There may be other benefits of long runs, such as strengthening of connective tissues.  But the occurrence of the plateau in the development of aerobic enzymes suggests that beyond around 90 minutes something inhibits the further development of aerobic enzymes.  Perhaps the most plausible limiting factor is the accumulation of cortisol.

Skoluda and colleagues have demonstrated that endurance athletes exhibit a sustained elevation of cortisol, and furthermore, the magnitude of the increase correlates with greater training volume, measured in either hours per week or distance per week.  Cortisol is a catabolic hormone that promotes the breakdown of body tissues, including muscle, while inhibiting the synthesis of new protein.  To evaluate the plausibility of the proposal that cortisol limited the synthesis of aerobic enzymes in Dudley's rats, and also played a part in my mediocre half marathon and the subsequent crumbling of my aerobic base, it is necessary to re-examine some of the details of the role of hormones in the regulation of energy metabolism discussed in the first of my posts comparing the Paleo diet with a high carbohydrate diet.

Cortisol

At the commencement of exercise, there is an acute elevation of cortisol, together with adrenaline, that mobilizes the body's resources to meet the demand for increased energy.  The generation of glucose from glycogen is stimulated, thereby releasing the fuel that can be utilized most rapidly for muscle contraction.  Cortisol also stimulates the metabolism of fats and of amino acids.  Conversely, protein synthesis from amino acids slows down and body systems that serve longer term survival needs are put on hold.  The immune system and the gut suffer first, though as long as the muscle continues to generate the amino acid glutamine, which helps sustain both immune cells and the lining of the gut, these body systems continue to function reasonably well.  However, as the duration of exercise extends into the period when glycogen stores show signs of depletion, cortisol level rises further.  Now the body's priority is ensuring an adequate supply of glucose for the brain.  The increased level of cortisol inhibits the glut4 carrier proteins that transport glucose into muscles.  The muscles become increasingly reliant on the relatively slow production of energy via fat metabolism.  Meanwhile, synthesis of glutamine drains the pool of intermediate metabolites that participate in the Krebs cycle, the closed loop of metabolic transformations that plays a central role in energy metabolism and also in the synthesis of amino acids.  Although fat metabolism can keep the Krebs cycle going, it cannot top up the pool of intermediate metabolites.  This topping-up requires input from pyruvate, which is generated by the metabolism of glucose.  In the absence of concurrent glucose metabolism, glutamine levels begin to fall, impairing the function of both the immune system and the gut.

Thus, the immediate effect of the elevation of cortisol is the provision of fuel for exercise, but as glycogen supplies diminish the priority is provisioning the brain, and the rest of the body suffers.  How long does it take for this to occur?  Cook and colleagues recorded salivary cortisol levels in recreational runners both during and after a marathon.  The highest level, recorded 30 minutes after completion of the event, was almost 6 times higher than typical morning levels, but the level had risen steadily throughout and was already very high by 25 miles.  Similar levels were recorded following a non-competitive marathon.  Thus, an appreciable elevation of cortisol is likely even during long training runs, consistent with Skoluda's finding that endurance athletes have chronic elevation of cortisol that rises in proportion to training volume.

What are the potential adverse medium and long term consequences?  The acute anti-inflammatory effect of cortisol is likely to hinder the repair and strengthening of muscles and other body tissues.  In particular, the synthesis of aerobic enzymes is inhibited.  Suppression of immune function creates the risk of infections.  Sustained elevation of cortisol will maintain a balance that favours breakdown rather than building up of tissues, and thereby promote further loss of fitness.  Furthermore, prolonged exposure to high levels of cortisol decreases the sensitivity of the receptor molecules that mediate the effects of cortisol on body tissues, and might ultimately promote chronic inflammation, harming joints and connective tissues while promoting the deposition of atheroma in blood vessels.

In my own situation, I suspect that a continuing bias towards catabolism rather than anabolism has hastened my loss of fitness, while the continued aches in my legs probably reflected chronic inflammation in the ligaments.  On the other hand, I have been pleased to note that I have not suffered any exacerbation of asthma this year.  I hope that any tendency towards increased formation of atheroma in my blood vessels has been minor.

Next year, I plan to train for a marathon.  But if I am to achieve a more robust fitness base and even more importantly,  to enhance rather than harm my long term health, I need to adopt a different training strategy.   I should start with a more careful scrutiny of the past year.

Closer scrutiny of the training log

While the most immediately apparent feature of my training during summer of 2013 was the relatively high proportion of long runs, a more careful inspection of my training log reveals a potentially more significant issue.   After the resolution of arthritis in the early months of the year I had gradually increased my training volume up to 30 (equivalent) miles per week, and was coping well.  Then, in March I increased the volume quite rapidly, by almost 15%  each week for 4 weeks, up to 50 (equivalent) miles per week by early April.  The submaximal test revealed that my fitness continued to improve fairly steadily until mid-April, but then suffered a slight decline in May, so I reduced the weekly volume back to 45 (equivalent) miles per week.  Once again fitness began to improve, albeit slowly and I was feeling tired much of the time.  I continued at that level of training until mid-August when I once again increased to 50 (equivalent) miles per week, but that produced only a marginal further increase in fitness by late September.   At the time, it appeared that I had pushed myself to the limit but had not quite over-stepped the mark.  However, in retrospect, I think I had overdone it.  The damage was probably done in March and early April when I had increased training volume by 15% per week. At that time, all had appeared well, as my fitness continued to improve. Indeed from mid-March to mid-April I saw the greatest gain in fitness in any month of the year.  It appeared I had got away with a relatively minor infringement of the 10% rule.  But the increases in weekly volume reflected another feature, an increase in the number of long runs.  By late April, I was occasionally slipping in two moderately long runs within a single week.  I suspect I was accumulating a surfeit of cortisol that led to the transient decline of fitness in May, the mediocre half marathon in September and  the subsequent devastating loss of fitness.
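
To see why a 15% weekly increase is a bigger infringement than it sounds, here is a minimal sketch comparing it with the conventional 10% rule over the same four-week ramp from 30 equivalent miles; the function name is my own:

```python
def projected_weekly_volume(start_miles, weekly_increase_pct, weeks):
    """Weekly volume after increasing by a fixed percentage each week."""
    return start_miles * (1 + weekly_increase_pct / 100) ** weeks

print(round(projected_weekly_volume(30, 10, 4), 1))  # ~43.9 equivalent miles under the 10% rule
print(round(projected_weekly_volume(30, 15, 4), 1))  # ~52.5 equivalent miles at 15% per week
```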

Plans for the future

I have about nine months until my target marathon next autumn.   This gives me five months to build a solid base, leaving four months for specific marathon preparation. The cardinal goal of the final four months will be developing the capacity to sustain marathon pace for 26.2 miles.  The goals of the preceding five months of base-building  are more varied.

First, I need to ensure that my connective tissues are well conditioned and free of any trace of lingering inflammation.  I will need to adjust my training over the next few months according to how well the recent recovery is maintained.  I will continue with moderately demanding weight sessions and some mild plyometrics.  It is likely that I will do a greater proportion of my aerobic training on the elliptical cross trainer next year, and I will build up the long runs very gradually, aiming to increase from the current 11 Km to 25 Km by late spring.

Secondly, I aim to develop the ability to utilise fat in preference to carbohydrate at low and mid-aerobic paces, thereby minimising the risk of excessive elevation of cortisol during long training runs.  The main element of the strategy to achieve this goal will be a gradual increase in training volume, especially in the low and mid-aerobic zones.  I will also maintain my current nutrition, consuming a diet that matches the Mediterranean diet as described in my post two weeks ago.

My third goal will be the development of aerobic capacity, including the ability to utilise lactate.  The gradual increase of training volume in the low and mid-aerobic zones required to promote utilization of fat will also contribute to this goal.  In addition, I will do regular sessions similar to Hadd's 25×200/200 sessions, in which brief epochs, effortful enough to generate a modest amount of lactate, alternate with recovery periods long enough to allow the metabolism of that lactate.  Because there is minimal accumulation of acidity, the session is only moderately stressful.

Fourthly, I will attempt to build up the strength to maintain a reasonable marathon pace without the need to increase cadence to an inefficient level.  At present, my cadence exceeds 200 steps per minute even at 5 min/Km.  My strategy for developing the strength required to lengthen my stride includes a mixture of short hills, long hills and sprinting in addition to weights and plyometrics.  A key feature of all of these sessions will be generous recovery after each effortful epoch, to maximise the stimulation of anabolic hormones and minimise cortisol production.

Above all these specific goals, I will aim to start the marathon specific training in a robust and relaxed state.

Plyometrics and running efficiency

December 21, 2013

For several years I have been concerned about the loss of length of my stride, which had become increasingly marked since my early sixties.  At peak sprinting speed, my step length is less than 1 metre.  To achieve even a modest pace of 5 min/Km, I am forced to increase cadence to over 200 steps/min.  At paces in the vicinity of 5 min/Km, efficiency tends to increase as cadence increases from 180 to 200 steps per minute, because the energy consumed in getting airborne and overcoming braking decreases as cadence increases up to 200 steps/min (as demonstrated in my post of 6th Feb 2012).  However, the energy cost of repositioning the legs during the swing phase increases with increasing cadence, as discussed in my post of 27th Feb 2012, and in my calculations performed on 5th April 2012.  Therefore, at paces in the range 4 to 5 min/Km, efficiency falls as cadence increases substantially above 200 steps per min.
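
The arithmetic linking pace, cadence and step length is straightforward; a minimal sketch (the function is my own) showing why a step length below 1 metre forces the cadence above 200 steps/min at 5 min/Km:

```python
def required_cadence(pace_min_per_km, step_length_m):
    """Cadence (steps/min) needed to hold a given pace with a given step length."""
    speed_m_per_min = 1000 / pace_min_per_km
    return speed_m_per_min / step_length_m

print(round(required_cadence(5.0, 1.00)))  # 200 steps/min with a 1.00 m step
print(round(required_cadence(5.0, 0.95)))  # ~211 steps/min if the step shortens to 0.95 m
```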

Initially I considered that loss of leg muscle strength was the cause of my short stride. So a year ago I embarked on a program of weight training, mainly employing squats and deadlifts.  I was delighted that I was able to recover my lost strength, but unfortunately, it made little difference to my stride length.  I had intended to follow the initial weight sessions with some plyometrics, in the expectation that plyometrics would help me harness the increased strength and allow me to capture more elastic energy to drive powerfully off stance, but a minor relapse of arthritis confounded my plan.  By the time the arthritis had settled it was time to direct my energy towards re-building aerobic fitness for the Robin Hood half marathon in September.  I increased weekly training volume fairly rapidly but only managed a rather mediocre half marathon.

After recovering from the half-marathon, it was time to re-consider my former plan to introduce plyometrics.  However, I was a little alarmed by continuous aching in my legs, especially at the attachment of peroneus longus to the upper part of the fibula in both legs.  In addition there was a generalised aching of the connective tissues around and below both knees.  This had built up gradually during the summer and did not resolve even after I cut back the amount of training quite drastically.  By late October I was reluctant to put off the plyometrics any longer, though it was clear that I would need to be fairly cautious.

What intensity of plyometrics is required?

What evidence is there that a modest program of plyometrics would lead to a worthwhile gain in running efficiency?  A study by Turner and colleagues had assessed the change in running efficiency produced by 6 weeks of fairly gentle plyometrics in a group of moderately trained, young adult runners.  The program involved adding three plyometric sessions per week to the runners' usual training.  Each plyometric session involved six exercises, starting with sub-maximal double-leg vertical jumps at 50% effort as a warm-up, and then proceeding to various forms of double-leg and single-leg jumps.  For example, one of the exercises was submaximal double-leg repetitive vertical jumps of 6–8 in., using minimal knee and hip action while emphasizing the calf action.  In the first week, each session included 60 foot-contacts, increasing to 140 foot-contacts per session by six weeks.  The outcome was a significant increase in running efficiency of 2-3% at paces in the range 5 – 6 min/Km.  A control group who continued with training as usual showed no increase in running efficiency.  Neither group exhibited an increase in VO2max, or a significant increase in counter-movement jump (CMJ) height.  The lack of a significant increase in CMJ height was perhaps surprising, though in fact the group undergoing the plyometrics did exhibit a mean increase from 36 to 38 cm.  This was not statistically significant, but the study probably lacked the statistical power to detect the magnitude of change that might reasonably be expected.  Nonetheless, it was encouraging to see a small but significant and worthwhile improvement in running efficiency from a relatively modest plyometric program.

A more demanding program

Spurrs and colleagues employed a slightly more demanding 6 week plyometric program in more experienced athletes.  In the first three weeks, there were two plyometric sessions per week, and then three sessions per week for the remaining three weeks.  The majority of the exercises were hops (single or double-leg), all performed at maximal effort.  Depth jumps were introduced in the fourth week.  The number of foot contacts was 60 per session in the first week and increased gradually up to 180 per session by the final week.  The gains were substantial.  Running efficiency increased by 6.5% at 5 min/Km and by 4% at 3.75 min/Km.  CMJ height increased significantly from 38 cm to 43 cm and musculo-tendinous stiffness increased significantly by about 10% in each leg.  3Km time trial performance improved significantly by 2.7% from 10:17 min to 10:10.  There were no changes in VO2max or lactate threshold.  A control group who continued with training as usual showed no significant changes in any measures.

My cautious program

Overall, the prospect for gains in efficiency and in race pace from a 6 week plyometric program looked promising.  However, in light of my age and aching legs, it was clear that I should be cautious.  I decided that, in contrast to the approach employed by Turner, who placed emphasis on the muscles acting around the ankle (especially gastrocnemius and soleus), I would allow more flexion of the hips and knees, since the large muscles (quads, hams and glutes) acting at these joints play a key role in running.  I therefore anticipated that I would need to employ somewhat greater jump heights.  A cautious introductory session with some hopping over 30cm high hurdles and drop jumps from 16 cm did not exacerbate the aches.  In fact, at that time, running was somewhat more painful than the plyometrics, so I decided that I would proceed with the plyometrics while cutting back the amount of running to around 10 Km per week.  I allowed three days of recovery after each plyometric session, giving a total of three sessions every two weeks.  I interleaved a mildly demanding weight lifting session between plyometric sessions.  To prevent complete loss of aerobic fitness, I replaced some of the running with sessions on the elliptical cross trainer.

In the first plyometric session, after a gentle warm-up that included body-weight squats, hip swings, calf-raises and line-jumps, I did 5 sets of 2 double-leg hops over 30cm hurdles and 5 sets of 5 drop jumps from 16 cm, rebounding to 16cm (a total of 35 foot contacts in the session).  This modest session left me with barely perceptible DOMS the next day.  In subsequent sessions I increased the number of hurdle hops and the depth of the drop jumps, and added single-leg hurdle hops.  By the end of the six weeks, each session included 5 sets of 7 double-leg hurdle hops (over 30cm hurdles); 5 sets of 5 single-leg hurdle hops on each leg (over 15 cm hurdles); and 5 sets of 5 drop jumps (from 30 cm, rebounding to 30 cm), a total of 110 foot contacts in the session.  Although exact comparison with Turner's program is not possible, I estimate that the early sessions in my program were less demanding than Turner's, but the later sessions were roughly equivalent.  However, whereas Turner's athletes performed 18 sessions, I performed only 10 sessions over the six week period.  My CMJ height increased from 30 to 33 cm.  My other outcome measurement was the horizontal distance covered in 5 consecutive double-leg hops.  This increased from 8.63 m to 9.08 m.
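
The foot-contact totals quoted above are simply the sum of sets × jumps, counting each landing once and counting the single-leg hops separately for each leg; a minimal sketch of the tally:

```python
def foot_contacts(exercises):
    """Total landings in a session; each (sets, jumps) pair counts one contact per jump."""
    return sum(sets * jumps for sets, jumps in exercises)

# First session: 5x2 double-leg hurdle hops + 5x5 drop jumps
print(foot_contacts([(5, 2), (5, 5)]))                  # 35
# Final session: 5x7 double-leg hops, 5x5 single-leg hops on each leg, 5x5 drop jumps
print(foot_contacts([(5, 7), (5, 5), (5, 5), (5, 5)]))  # 110
```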

Unfortunately, there seemed little point in assessing the impact on my running performance.   Having done relatively little running in the 12 weeks since the half marathon, my aerobic fitness had deteriorated quite markedly, despite the elliptical sessions.   It was clear that my fitness at the end of September had been built on a very narrow base, and by mid-December, it had melted away.  However, one pleasing observation was that the persistent aches in my legs had almost entirely disappeared.

Conclusion

Overall, the six weeks of quite modest plyometrics produced a definite increase in my jumping ability – comparable, as far as can reasonably be estimated, with the gains exhibited in the study by Turner, though somewhat less than those achieved in the more demanding program employed by Spurrs.  Although I have no direct evidence of improved running efficiency or pace, the findings of both Turner and Spurrs suggest that the improvement in jumping ability would probably have been enough to produce a worthwhile improvement in running efficiency, had I not lost aerobic fitness due to the drastic reduction in my volume of running.

At present I find myself in an ambiguous position.   I am somewhat dismayed by the severe and persistent aching in my legs that had developed in the summer during my preparation for the half marathon.   If I am to succeed in my plan to run a full marathon next year, I will have to build up the volume of running more gradually than had been feasible this year.  I will probably also include a higher proportion of elliptical cross-training.  However, it is pleasing to have demonstrated that I can achieve gains in jumping performance from a relatively modest program of plyometrics.  The gains appear comparable to those achieved by the young adults in the study by Turner, and perhaps even comparable to those achieved in the study by Spurrs, after allowing for the differences in volume and intensity of the plyometrics.  Furthermore, it will be interesting to see whether or not this moderate amount of plyometrics makes me more resistant to aching legs, in the long term.

Paleo v High Carbohydrate diet: the evidence for differences in endurance performance, health and life-expectancy

December 15, 2013

Popular enthusiasm for the Paleo diet, including the relatively high proportion of fat and protein presumed characteristic of the diet of our hunter/gatherer ancestors, has re-ignited the long-standing debate about the nutritional merits of fat and carbohydrates, especially for athletes.  In recent posts I have compared the effects of a high-fat diet with those of a high-carbohydrate diet on metabolic processes that have the potential to affect endurance performance, health and life expectancy.  We have examined the evidence regarding the effects of these different diets on the development of preferential use of fat rather than carbohydrates for fuel during exercise; on the risk of sustained elevation of the stress hormone, cortisol; on insulin resistance and inflammation; and on weight control.  The evidence shows that a high fat diet does promote the use of fats as fuel during exercise, potentially beneficial in warding off disabling glycogen depletion during prolonged exercise.  However, both types of diet are associated with risks of sustained elevation of cortisol, insulin resistance and chronic inflammation.  Particular components of each type of diet, specifically high glycaemic index (GI) carbohydrates, which produce a rapid rise in insulin after ingestion, and omega-6 fatty acids, which are pro-inflammatory, are associated with high risk.  With regard to weight control, the evidence indicates that low fat and low carbohydrate diets are equally effective.  This post examines the evidence for effects on the ultimate outcomes: race performance, health and longevity.

Performance

When it comes to evidence regarding the effect of nutrition on performance, there are conflicting findings.  In a meta-analysis of 20 studies comparing the effects of a high fat diet with a high carbohydrate diet on endurance exercise performance, Erlenbusch and colleagues found that, averaged across all studies, subjects consuming a high-carbohydrate diet exercised significantly longer until exhaustion, but there were substantial differences between the findings of different studies, probably reflecting differences in the subjects studied and in study design.  The benefit of the high carbohydrate diet was relatively large in studies of untrained subjects, but there was very little difference between the two types of diet in studies of trained athletes.  In light of the fact that endurance training itself increases the capacity for utilization of fats as fuel, it is plausible that in hitherto untrained subjects a relatively brief period of high fat consumption is inadequate to produce a substantial capacity for fat utilization, so maximizing the efficiency of glucose utilisation might be of greater value in such subjects.

There are some noteworthy studies that have reported greater benefit from a high fat diet in trained athletes.  An early study from Tim Noakes' lab in Cape Town compared the effects of 2 weeks of a high fat (70%), low carbohydrate (7%) diet with a high carbohydrate (74%), low fat (12%) diet in trained cyclists.  The high fat diet led to higher fat utilization and improved performance at moderate exercise intensity, without deterioration of performance at high intensity.  The importance of starting an endurance event with well stocked glycogen stores suggests that greater benefit might be obtained from a periodized nutritional strategy in which a high fat diet is followed by a brief period of carbohydrate loading.  A subsequent study from the Cape Town lab using this nutritional periodization strategy found that high-fat consumption for 10 days prior to carbohydrate loading was associated with increased utilization of fat, decreased reliance on muscle glycogen, and improved performance in a 20 Km time-trial following 150 minutes of medium intensity cycling.

Other studies of trained athletes reported equivocal results.  Carey and colleagues tested the effect of fat adaptation, using a nutritional periodization strategy, on performance during a one hour time trial following 4 hours of aerobic cycling.  As expected, the fat adaptation resulted in increased fat utilization.  Power output was 11% higher during the time trial and the distance covered was 4% greater, but this effect was not statistically significant.  Nonetheless, in 5 of the 7 cyclists the improvement in performance after fat adaptation was substantial, raising the possibility that the number of subjects was too small to provide adequate statistical power to test for a performance benefit.

Yet other studies indicated no benefit, and perhaps even harm, from the fat adaptation strategy.  A further study from the Cape Town lab by Havemann and colleagues compared 100 km cycling time trial performance, and also 1 Km sprint performance, following 6 days of high fat consumption and 1 day of carbohydrate loading with performance following 6 days of high carbohydrate consumption and 1 day of carbohydrate loading.  The anticipated enhancement of fat utilization was observed, but there was no significant difference between diets in 100-km time-trial performance, while 1-km sprint power output was significantly worse after the high fat diet.  The investigators concluded that despite increasing fat utilization, the strategy of a high fat diet followed by carbohydrate loading compromised high intensity sprint performance.  This raises the possibility that the increased fat utilization might reflect an impaired ability to use carbohydrates rather than an enhanced ability to utilize fats.

Thus, the tide of evidence has turned against the hope that fat adaptation produced by a period of one or two weeks of high fat consumption might be a worthwhile strategy for improving endurance performance.  In contrast, this strategy might actually impair high intensity performance – an issue that is potentially of some importance even in events lasting several hours in which surges or hills might play a part in race outcome.   The evidence does not rule out the possibility that some individuals might enjoy an improvement in endurance performance, but at this stage, the evidence does not justify a general recommendation of this strategy.

Perhaps improvement in performance from rather drastic dietary adjustments over a period of a few weeks is not the issue of greatest importance to the endurance athlete, for whom training is an undertaking extending over many months or years.  Rather, the question of greater importance is the effect of long term nutrition on long term health.  Although no studies have examined the long term effects of long term nutrition in endurance athletes, recent evidence has provided increasing clarity regarding the optimum diet for long term health in the general population.

Long term health and life expectancy

We will focus on evidence related to heart health because heart disease is the greatest cause of mortality in the general population and in addition there is some evidence that extensive endurance training and racing might in fact increase the risk of cardiovascular disease in athletes.  Furthermore, most evidence suggests that a healthy diet for the heart minimises risk of cancer, though there are instances where foods that appear healthy for the heart have been linked to increased risk of cancer.  Although depression is associated with only a modest risk of premature death, it is the illness causing the greatest degree of disability world-wide (according to the World Health Organization).  Furthermore, mental state is of substantial importance in athletic performance.  Therefore, I will also briefly address the evidence regarding the association between diet and depression.

Cardiovascular disease

In a recent comprehensive review of nutritional recommendations for cardiovascular disease prevention, Eilat-Adar and colleagues found that both low fat and low carbohydrate diets are a healthy alternative to the typical Western diet.  They note that low carbohydrate diets are associated with lower levels of potentially harmful triglycerides and with higher levels of beneficial cholesterol in high density lipoprotein (HDL).  Low-carbohydrate diets, which include 30%–40% of calories from carbohydrates and are low in saturated fat but high in mono-unsaturated fat, were found to be safe in healthy and overweight individuals at follow-up of up to 4 years.  We will return to the controversial issue of saturated fat later.  Eilat-Adar also found good evidence that Mediterranean diets, which include high consumption of fruit, vegetables and legumes, together with moderately large amounts of fish but less red meat, may improve quality of life and life expectancy in healthy people, as well as in patients with diabetes and heart disease.  Mediterranean diets are preferable to a low-fat diet in reducing triglyceride levels, increasing HDL cholesterol, and improving insulin sensitivity.

A meta-analysis of trials by the Cochrane Collaboration – an organization which does extremely rigorous and conservative reviews of medical treatments – also concluded that evidence suggests favourable effects of the Mediterranean diet on cardiovascular risk factors, though with their usual caution, they stated that more trials are needed.

One trial that warrants special mention is the Spanish Prevención con Dieta Mediterránea (PREDIMED) trial, in which 7,216 men and women aged 55 to 80 years were randomized to one of three interventions: a Mediterranean diet supplemented with nuts, a Mediterranean diet supplemented with olive oil, or a control diet.  During a follow-up period of nearly five years, nut consumption was associated with a significantly reduced risk of all-cause mortality.  Subjects consuming more than 3 servings/week of nuts had a 39% lower mortality risk.  A similar protective effect against cardiovascular and cancer mortality was observed.

With regard to the issue of saturated versus unsaturated fats, a recent re-analysis of the large and well conducted Sydney Diet Heart Study found that replacing dietary saturated fat with omega-6 linoleic acid, in subjects with known cardiovascular disease, actually led to a higher all-cause death rate, and higher death rates from coronary heart disease and cardiovascular disease.  The authors also performed a new meta-analysis of previous studies and found that the pooled data also showed a strong trend towards a higher death rate when saturated fat was replaced by omega-6 linoleic acid.  This finding is contrary to the prominent advice in worldwide dietary guidelines to substitute polyunsaturated fats for saturated fats in order to reduce the risk of coronary heart disease.  The most plausible explanation is that the increased death rate is due to the pro-inflammatory effects of omega-6 fatty acids.

Cancer

The frequent reports in both the popular press and the medical literature linking various foodstuffs to cancers of various types make this topic a minefield.  In part this situation reflects the heterogeneity of cancer and the multiplicity of different factors that might contribute to its cause in different cases.  Nonetheless, in general, the evidence indicates that diets that are healthy with regard to weight control and cardiovascular outcome tend to be associated with a lower risk of cancer.  For example, a recent large review found that adherence to the Mediterranean diet was associated with lower risk of certain cancers, especially cancers of the digestive tract, consistent with the finding from the PREDIMED study mentioned above.  However, in light of the fact that a key difference between typical Western diets and the Mediterranean diet is the larger relative amount of omega-3 fats in the Mediterranean diet, it is noteworthy that some studies have reported that omega-3 fats are associated with an increased rate of prostate cancer, while others have reported a decreased rate.  This should encourage caution against simplistic conclusions that a food item is invariably healthy in all amounts and all circumstances.

Depression

Many studies using relatively low quality methodology to assess diet and/or mental state have reported an association between adherence to a 'healthy' diet and a decreased risk of depression.  More recently, several studies have addressed this issue using more rigorous methodology.  A meta-analysis by Psaltopoulou and colleagues of studies examining the association between the Mediterranean diet and the risk of various neurological and mental disorders found that the Mediterranean diet was associated with a decrease in the risk of depression of approximately 30%.  This reduction was very similar in magnitude to the reductions in risk of stroke and of cognitive impairment.  However, association cannot establish cause, and it is possible that other life-style factors associated with adherence to a healthy diet account for the better physical and mental health.  The most conclusive evidence comes from randomized controlled trials in which individuals are randomly allocated to different diets.  In the PREDIMED trial, the group who were allocated to the Mediterranean diet augmented with extra nuts experienced a 20% lower rate of depression over a period of 3 years, compared with those on a low fat diet, though this reduction was not statistically significant.  However, in those who had type 2 diabetes, the Mediterranean diet with extra nuts produced a 40% reduction in the occurrence of depression, which was significant.  Thus the balance of evidence does suggest that a Mediterranean diet augmented with nuts produces a reduction in depression that is significant, at least in those who already show other evidence of adverse metabolic effects.

Synthesis

There is overwhelming evidence that diet plays a large role in health and longevity, and after many years of confusing debate, there is emerging clarity about the type of diet that is healthiest.  This is neither a high fat/low carbohydrate Paleo diet nor a low fat/high carbohydrate diet.  Rather, a substantial body of evidence suggests a Mediterranean diet is preferable, especially when augmented with extra nuts.

There is some variability between studies in what is taken to be the Mediterranean diet, but the consistent features include high consumption of fruits, vegetables and legumes (beans, nuts, peas, lentils); low consumption of red meat and meat products but substantial consumption of fish; near equal proportions of omega-3 and omega-6 fats;  moderate consumption of milk and dairy products; and low  to moderate red wine consumption.   The status of grains and cereals is ambiguous. The Mediterranean diet adopted in PREDIMED included a high consumption of grain and cereals. In general, whole grains and cereals appear healthy though gluten sensitivity is an issue for at least some individuals.

While the evidence for the Mediterranean diet is largely based on studies of the general population with emphasis on heart health, rather than being focused on athletes, the disconcerting evidence that male athletes who have run numerous marathons over a period of many years are at risk of atherosclerosis (as discussed in detail in my post of 30th May, 2012) suggests that a ‘heart-healthy’ diet should be a high priority for endurance athletes.

When it comes to endurance performance, there is no clear evidence in favour of any particular diet.  However, the consistent evidence that a high fat/low carbohydrate diet promotes preferential utilization of fats during exercise appeared promising at first.  It is disappointing that this apparently beneficial adaptation is not reflected in enhanced performance, even in ultra-endurance events.  In contrast, there is actually evidence that it can harm high intensity performance, such as 1 Km cycling time trial performance.  However, the fact that at least some individuals do appear to show an endurance performance benefit from a high fat diet (followed by brief carbohydrate loading), as observed in the study by Carey and colleagues, makes me reluctant to dismiss the potential value of at least moderately high fat consumption.  One crucial issue is to identify why the clear evidence of improved fat utilization does not generally lead to enhanced performance.  It appears that the fat adaptation strategy, at least in the form of a rapid increase in the proportion of fat to a quite high level over a period of a few weeks, in some way harms the utilization of carbohydrates as much as it improves the utilization of fats.

In my opinion, one candidate mechanism by which high fat consumption might harm carbohydrate metabolism in muscle is the elevation of cortisol associated with the fat adaptation strategy used in these studies.  One immediate effect of high cortisol is a decrease in the accessibility of the glut4 transporter molecules that transport glucose into muscle.  Furthermore, sustained elevation of cortisol can produce a decrease in the sensitivity of the glucocorticoid receptors that mediate the various effects of cortisol, including its anti-inflammatory effects, thereby possibly leading, somewhat paradoxically, to chronic inflammation.  This is speculation on the basis of what is known about mechanisms, rather than direct evidence of beneficial or harmful effects in practice.  Nonetheless, it appears to me plausible that a gradual introduction of a higher proportion of fats, at least up to the modest levels in the Mediterranean diet, over a more sustained period might promote preferential utilization of fat during exercise in a manner that translates into improved endurance performance.

In light of the evidence that glycogen depletion during training can enhance training effects, I consider that during normal training, consumption of carbohydrates is potentially counter-productive, in most instances.  Exceptions might include high intensity sessions; very prolonged sessions; or for the purpose of testing the planned strategy for race day in the final few long runs of marathon/ultra-marathon preparation.  However, the need to start an endurance event with glycogen stores well stocked suggests that at least a brief period of carbohydrate loading, and ingestion of carbohydrates during long events, is highly desirable.

In summary, I consider that the emerging evidence provides strong support for the proposal that the optimum nutrition for most endurance athletes is a Mediterranean diet, but with carbohydrate loading immediately prior to long races.

Paleo v High Carbohydrate diet: the evidence for metabolic differences affecting health and endurance performance.

December 13, 2013

The recent surge of interest in the Paleo diet, based on the speculation that evolution has equipped humankind to thrive on a diet relatively rich in fat and low in carbohydrate, has added new spice to the long-standing debate over the optimum proportions of fat and carbohydrate in our diets.  This debate is of substantial importance for anyone seeking to live a long and healthy life, and is of particular importance for endurance athletes, who subject their bodies to the rigours of extensive training and require those long-suffering bodies to function with peak efficiency on race day.  There are five related mechanisms by which diet is likely to affect health, longevity, response to training and race performance.  These five mechanisms are: the capacity to utilise fat in preference to carbohydrate; minimization of sustained elevation of cortisol; avoidance of chronic inflammation; prevention of insulin resistance; and control of body weight.  In this post, I plan to examine the evidence regarding the influence of the proportions of fat and carbohydrate in the diet on these five related mechanisms.  In my final post in this series, I will examine the evidence for effects on the ultimate outcomes: race performance, health and longevity.

The metabolic challenges facing the endurance athlete

I will start with a brief review of an issue covered in my most recent post: the metabolic challenges that the runner faces if glycogen becomes depleted in the final stages of a marathon.   The body’s paramount goal in these circumstances is ensuring adequate glucose to fuel the activity of the brain.  Secretion of the stress hormone, cortisol, increases dramatically, with three immediate consequences: cortisol promotes gluconeogenesis in the liver thereby replenishing glucose; it inhibits the function of the glut4 transporters that transport glucose across the cell membrane into muscle cells and other peripheral tissues; and it promotes beta-oxidation of fatty acids which become the main energy source for muscle.  This response averts disaster for the brain, but it is not ideal.

Not only does excessive elevation of cortisol have potential adverse long term effects, but there are immediate undesirable consequences.  Reliance on fat as the main fuel for muscle has a dampening effect on power output, because fat metabolism requires oxygen, making it difficult to exceed the limit achievable aerobically; but in addition, as we saw from an examination of the central role that the Krebs cycle plays in both catabolic and anabolic processes, there are multiple other metabolic consequences.  Reduced production of pyruvate from glucose in muscle makes it necessary to utilize glutamine to keep the level of the intermediate metabolites of the Krebs cycle topped up.  Muscle is the main source of glutamine for other organs and body systems.  Depletion of glutamine in muscle leads to a fall in blood levels of glutamine, which has adverse effects in the gut, liver, kidney and immune system.  In the gut, glutamine serves a diverse range of essential metabolic functions.  In the liver, glutamine is a major source of the oxaloacetate required for gluconeogenesis when low levels of glycogen limit the generation of oxaloacetate via pyruvate derived from glucose.  In the kidney, glutamine is the source of the ammonia needed for the excretion of acids.  Glutamine also plays crucial catabolic and anabolic roles in the cells of the immune system, and a fall in glutamine will exacerbate the direct adverse effect of cortisol on immune function.  Although the body appears prepared to tolerate some loss of immune function during vigorous exercise, overall it is undesirable to allow the glutamine level to fall too far.  Therefore, not only is it crucial to start an endurance event with well-stocked glycogen stores, but one of the key goals of endurance training is developing the capacity to utilise fats in preference to glucose at aerobic paces, thereby avoiding a state of serious glycogen depletion in long races.

Effect of nutrition on capacity to metabolise fats during exercise.

Many studies, reviewed by Burke and Hawley, demonstrate that a high fat diet promotes the utilization of fats during exercise.  However, on race day the endurance athlete requires not only a well developed ability to utilise fats, thereby minimising the depletion of glycogen stores, but also needs to start the race with glycogen stores fully topped up.  Fortunately, the evidence reviewed by Burke and Hawley indicates that switching from a high fat diet to a high carbohydrate diet in the period immediately before the race does not undermine the ability to utilise fats.  As an illustration, Staudacher and colleagues demonstrated that a short term high fat diet (6 days; 69% fat) followed by a high carbohydrate diet on the day preceding exercise produced a 34% enhancement of the ability to utilise fats during submaximal cycling in a group of highly trained endurance athletes, whereas 6 days of high carbohydrate intake (70% carbohydrate) followed by a further high carbohydrate day resulted in a 30% reduction in fat utilization.  This has encouraged the hope that a "fat adaptation" strategy, in which a high-fat, low-carbohydrate diet is consumed for up to 2 weeks during normal training, followed by a high-carbohydrate diet during a brief taper in the few days before a key race, might improve performance.

However, despite the consistent evidence that such nutritional periodization can achieve the desired enhancement of fat utilization, there is less clear evidence of enhancement of race performance.  This might simply be because many other factors influence performance, so large, well-controlled studies are required to allow any benefit of this nutritional strategy to emerge clearly from the inconsistencies due to other sources of variance.  Alternatively, it might be that this nutritional strategy has hidden adverse effects.  For example, it is plausible that the nutritional strategy might upset hormonal balance in an adverse manner.  I will return to the issue of effects on performance in my next post, but first we need to consider the possible effects of nutrition on hormones such as cortisol and insulin.

Effect of nutrition on sustained cortisol levels

The balance of evidence indicates that a very low intake of carbohydrate and high fat consumption, for either a few days or for longer periods, leads to sustained elevation of cortisol.  For example, Langfort and colleagues compared the effects of 3 days of a high fat and protein diet (50% fat, 45% protein and 5% carbohydrate) with three days of a mixed diet.  They observed no difference in maximal aerobic capacity, but did observe a significant increase in both adrenaline and cortisol before and after exercise.  Furthermore, several studies have shown sustained elevation of cortisol after longer periods of a high fat diet.  For example, in a comparison of three different diets, each administered for 4 weeks to overweight young adults, Ebbeling and colleagues found that twenty-four hour urinary cortisol excretion was highest with the low-carbohydrate, high fat diet (10% of calories from carbohydrate, 60% from fat, and 30% from protein).  Similar effects are seen with more moderate amounts of fat.  For example, in a study of runners, Venkatraman and colleagues observed greater pre-test cortisol after 4 weeks at 40% fat compared with 15% fat, but in other respects the outcome tended to be more favourable, including greater time to exhaustion in the 40% fat group.

The type of fat might matter: in a study of Spanish women, García-Prieto and colleagues found that high saturated fat intake was associated with an unfavourable loss of the normal daily variability in cortisol levels, while women whose dietary pattern was closer to the Mediterranean diet, with high consumption of monounsaturated fatty acids, showed healthy regulation of cortisol levels.  However, it is perhaps important to emphasize at this point that when it comes to other long term health outcomes (which we will examine in the next post) there is relatively little evidence that saturated fats are especially harmful.  There is little basis for the long-standing demonization of saturated fats in comparison with unsaturated fats.  The recent pressure by the UK government on the food industry to reduce the saturated fat content of foods is scarcely justified.

Nonetheless, the association between a high proportion of dietary fat and sustained elevation of cortisol is a potential concern.  Epidemiological studies demonstrate that sustained high cortisol levels may promote adiposity, insulin resistance, and cardiovascular disease. For example, in a 6-year prospective, population-based study of older adults, individuals in the highest third of 24-hour cortisol excretion had a 5-fold increased risk of cardiovascular mortality, compared with the lowest third.

Insulin resistance

One of the important mechanisms in adverse long term cardiovascular outcomes is insulin resistance, the cardinal feature of type 2 diabetes.  The claims regarding the relative harmfulness of fats and carbohydrates with regard to insulin resistance remain a source of controversy.  Consistent with the evidence that high blood and tissue levels of fatty acids are associated with insulin resistance, a substantial body of historical evidence indicates that a high fat diet impairs glucose tolerance.  On the other hand, ingestion of carbohydrate leads to increased levels of blood glucose, which triggers insulin release, which in turn can result in insulin resistance.  It is likely that the answer is not to be found simply in the proportion of energy derived from carbohydrate or fat, but rather in the type of carbohydrate or fat.  In the case of carbohydrates, it is likely that high glycaemic index (GI) foods promote insulin resistance.  Brand-Miller and colleagues demonstrated that in lean young adults, a meal with a high glycaemic load (the mathematical product of the amount of carbohydrate and the glycaemic index of the carbohydrate-containing foods) results in a higher insulin concentration than a meal with similar total calories but a low glycaemic load.  At least in individuals at genetic risk, high insulin secretion promotes insulin resistance.  Consistent with this evidence suggesting that a diet based on a low glycaemic load might reduce insulin resistance in those at risk, Barnard and colleagues demonstrated that a low fat vegan diet with a high proportion of low GI carbohydrates improved the control of blood glucose in individuals with type 2 diabetes more effectively than a low carbohydrate diet.
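
To make the definition of glycaemic load concrete: it is the grams of available carbohydrate multiplied by the GI, conventionally divided by 100.  A minimal sketch with purely illustrative GI values:

```python
def glycaemic_load(carb_grams, glycaemic_index):
    """Glycaemic load of a serving: grams of carbohydrate x GI / 100."""
    return carb_grams * glycaemic_index / 100

# Two servings with identical carbohydrate content but different GI (illustrative values)
print(glycaemic_load(50, 75))  # 37.5 -> high load, e.g. a high-GI refined cereal
print(glycaemic_load(50, 30))  # 15.0 -> much lower load from a low-GI food such as lentils
```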

A large body of evidence, reviewed by Grimble and colleagues, reveals an association between insulin resistance and chronic inflammation.  Grimble concludes that the evidence regarding the direction of the causal relationship favours chronic inflammation as a trigger for chronic insulin insensitivity.

Inflammation

Endurance athletes are at particular risk of chronic inflammation, in part on account of the repeated trauma to muscle and other connective tissues associated with training, making the effect of nutrition on inflammation a crucial issue.   The issue is complex.  On the one hand, as discussed in my recent post on inflammation, a high carbohydrate load can promote inflammation due to the release of the pro-inflammatory fatty acid arachidonic acid in association with insulin secretion from the pancreas.  Furthermore, some carbohydrates, especially cereals containing gluten, can impair the lining of the gut, leading to chronic inflammation.  However, a high fat diet also carries risk.  Omega-6 fatty acids are pro-inflammatory, making it important to have a good balance of omega-3 and omega-6 fats in the diet, yet the typical Western diet is much richer in omega-6 fats.

Thus, both a high carbohydrate diet and a high fat diet carry a risk of inflammation, with evidence suggesting that identifiable components of these diets account for much of the risk.  High GI carbohydrates and a high ratio of omega-6 to omega-3 fats appear to generate the greatest risk.  The traditional Mediterranean diet, containing a moderately high level of fat with near equal proportions of omega-3 and omega-6 fats, and vegetables with a relatively low glycaemic index, appears to offer a near optimum combination.  In a comprehensive review of the association of dietary patterns with inflammation and the metabolic syndrome (whose key feature is insulin resistance), Ahluwalia and colleagues concluded that healthy diets such as the Mediterranean diet can reduce both inflammation and the metabolic syndrome.

Body weight

While control of body weight is one of the major preoccupations of dieting non-athletes, it is not usually the main preoccupation among endurance athletes simply because endurance training itself promotes weight loss.  Nonetheless, even a modest excess of weight has serious implications for endurance race performance, because the energy required to accelerate the body to compensate for the inevitable braking during every stride, and to elevate the centre of mass in order to become airborne, is proportional to body mass.  Therefore, the weight of any body tissue that is not performing a useful purpose is a handicap.  However, the issue of what tissues perform a useful purpose for the endurance athlete is not entirely straightforward.  Muscle that does not contribute to propulsion might be a handicap, while at least some fat is required to sustain balanced hormonal function.  The ideal weight for endurance athletes is likely to vary between individuals, but observation of elite athletes suggests it is likely to correspond to a body mass index range between 20 and 23.  Alternatively, since excess fat is likely to be a greater handicap than excess muscle, a body fat percentage in the range 5-11 percent for males and a somewhat higher proportion for females, might be a more relevant guide.
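
As a rough illustration of what that BMI range implies (the height and weights below are purely illustrative, not drawn from any study of athletes), body mass index is simply mass divided by the square of height:

\[
\text{BMI} = \frac{\text{mass (kg)}}{\text{height (m)}^2}
\]

So for a runner 1.75 m tall, a BMI of 20 to 23 corresponds to a body mass of roughly 20 × 1.75² ≈ 61 kg up to 23 × 1.75² ≈ 70 kg.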

While an excess of the ratio of total calories consumed to total calories expended is an important factor in determining the likelihood of weight gain, there has been much debate about the relative merits of low fat and low carbohydrate diets.  A recent meta-analysis of 23 trials including almost three thousand participants concluded that both types of diet improved weight and metabolic risk factors, with no significant differences between the two in the reductions in body weight or waist circumference.  Nonetheless, there were slight but significant differences in some of the metabolic risk factors, with low carbohydrate diets producing a healthier increase in high density lipoprotein cholesterol and reduction in triglycerides, but a lesser reduction in potentially harmful low density lipoprotein cholesterol.  The authors concluded that low-carbohydrate diets are at least as effective as low-fat diets at reducing weight and improving metabolic risk factors.

Summary

Nutrition does indeed have an appreciable impact on the five metabolic mechanisms that are likely to influence both endurance performance and long term health.  There is unequivocal evidence that a high fat diet produces an increase in the utilization of fats in preference to carbohydrate, which is potentially beneficial for endurance performance.  However, there is also evidence that consumption of a high fat diet over a period ranging from a few days to 4 weeks results in a sustained increase in cortisol, which is potentially harmful in the medium and long term.

Nutrition also plays an important role in insulin resistance and inflammation.  For these two issues, the type of fat or carbohydrate appears to be especially important.  High GI carbohydrates and a high total glycaemic load promote inflammation and insulin resistance.  However, high levels of fat in blood and in body tissues are associated with insulin resistance, while omega-6 fats are pro-inflammatory.  With regard to weight control, either low fat or low carbohydrate diets can be effective.

Overall, these observations do not provide any simple answer to the question of the optimum proportion of fat to carbohydrate, but do suggest that both fats and carbohydrates can carry risks.  It is noteworthy that much of the evidence demonstrating adverse effects is based on studies in which there was an abrupt change to a high proportion of either fat or carbohydrate. In the next post, we will examine the evidence regarding the influence of proportion of fats and carbohydrates on endurance performance and on long term health, before finally drawing practical conclusions based on a synthesis of the evidence.

Paleo v High Carbohydrate diet: the background to the evidence

December 5, 2013

In this post and the next, my aim is to explore the contentious issue of the optimum balance of fat and carbohydrate in an endurance runner’s diet, focussing on evidence for the effects on training adaptation, on running performance, and on overall health.  In my previous two posts I have addressed specific aspects of nutrition: the advantages and disadvantages of training in a fasted state; and nutritional strategies to minimise the risk of chronic inflammation.  Both of those topics are relevant to the current discussion, so I will start by summarising the conclusions from those posts.

The evidence regarding training in the fasted, glycogen depleted state leads to the conclusion that it is likely to enhance the capacity to utilise fats, which is advantageous for an ultra-marathoner and perhaps also for marathoners.  Under some circumstances, it might also produce an enhancement of aerobic enzymes.  However, a high fat diet abolishes these advantages of training in the fasted state.  Furthermore, training in a glycogen depleted state increases the risk of excessive elevation of cortisol during either intense or prolonged training sessions.  Overall, I do not think the benefit justifies the risks, especially as much of the benefit might be obtained from increasing the amount of fat in the diet.

With regard to inflammation, while acute inflammation promotes tissue repair after training, chronic inflammation is not only associated with the overtraining syndrome but also carries a serious risk of long term cardiovascular disease.  The evidence indicates that two worthwhile nutritional strategies are minimization of high glycaemic index (GI) carbohydrates (which promote a spike of insulin that can be associated with release of pro-inflammatory arachidonic acid) and the consumption of approximately equal proportions of non-inflammatory omega-3 and pro-inflammatory omega-6 fats.  Thus, the need to avoid chronic inflammation indicates which carbohydrates and which fats are healthy, but does not address the question of the optimum proportion of fat to carbohydrate.  In recent decades, many endurance athletes have favoured a high carbohydrate diet, but more recently the high fat/high protein Paleo diet has attracted attention, based on the speculation that our primitive ancestors adapted via evolution to such a diet.

Why is the debate so controversial?

The advocates of a high intake of carbohydrates, and the advocates of a low carbohydrate/high fat diet such as the Paleo diet, can each assemble evidence, from both anecdote and systematic scientific study, to support their cases. Resolution of the debate is elusive because the evidence appears contradictory.  The reason why the evidence is confusing becomes clear when you examine the complexity of the network of metabolic processes, including the catabolic processes by which fuel stores and body tissue are broken down to produce energy, and the anabolic processes by which tissues are repaired, strengthened and develop increased metabolic capacity.   There are multiple pathways by which a particular metabolic goal can be achieved.  This allows flexibility, but the choice of a particular fuel source, or a particular source of building material for anabolic processes, has diverse knock-on effects.  In many instances, the stimulation or inhibition of a particular metabolic pathway depends on the release of a particular hormone, and the relevant hormones can have diverse effects extending beyond the immediate metabolic goal.  Genes, past training experiences and diet all influence the outcome.  Therefore it is not surprising that studies of the effects of diet on small groups of individuals give differing results depending on the features of those individuals.  Conversely, attempting to apply conclusions from epidemiological studies of large populations to an individual might be misleading.  However, the picture is not hopeless. I think that sound, though nonetheless tentative, conclusions can be drawn from the existing evidence. Some understanding of the inter-locking networks of catabolic and anabolic pathways helps in achieving a sensible application of these conclusions to one’s own situation.

Catabolic and anabolic processes

Successful training demands a balance between catabolism: the break-down of carbohydrates, fats or proteins to yield the energy required to fuel muscle contraction, and also the process of autophagy, required to remove debris from cells; and anabolism: the building of body tissues to repair damage suffered during training, build new tissue, and develop increased metabolic capacity.  Although the details of the biochemical pathways are complex, the broad outline is fairly easy to grasp, provided one avoids being bamboozled by the names of the molecules.

Figure 1. The Krebs cycle. All three main types of fuel, carbohydrates, fats and proteins feed into the cycle. The major output is hydrogen (attached to the coenzyme NAD), which provides electrons to the electron transport chain, thereby generating ATP. In addition, several important anabolic pathways begin as branches from the cycle.

Figure 1 illustrates the cardinal role in both catabolism and anabolism played by the cyclic pathway known as the Krebs cycle, named after Hans Krebs, the biochemist who delineated it.  For our present purpose, there are two important things to observe in this map of the metabolic pathways.  First, the catabolic pathways by which the three major types of fuel (carbohydrates, fats and proteins) are burned to generate energy converge onto the Krebs cycle. Training that produces an increase in the enzymes carrying out the biochemical transformations that make up the Krebs cycle will therefore increase the capacity to utilise any one of the three types of fuel, but which fuel is selected in particular circumstances depends on the availability of the raw material and on the hormonal milieu.  Second, some of the key anabolic pathways, which produce amino acids (the building blocks of proteins) and many other substances essential for various bodily functions, begin as off-shoots of this cyclic pathway.

The enzymes that carry out the reactions of the Krebs cycle are located in mitochondria, the sub-cellular powerhouses in which the energy rich molecule ATP is produced as a result of oxidation of fuel. The cycle starts with the combination of a molecule containing 4 carbon atoms, oxaloacetate, with a fuel fragment containing 2 carbon atoms, known as an acetyl group, which has been generated by the first steps in the catabolism of carbohydrate, fat or protein.  The combination of the 4-carbon oxaloacetate with the 2-carbon acetyl group produces citrate, which contains 6 carbon atoms.    The citrate then enters a series of eight chemical transformations catalysed by enzymes.  In two of these transformations a carbon atom is removed and combined with oxygen to produce carbon dioxide.  By the time the original 6-carbon molecule completes the cycle it has been converted back to the 4-carbon oxaloacetate, and is ready to repeat the cycle.
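
The carbon bookkeeping described above can be summarised in a single line (this is simply a restatement of the paragraph, not additional biochemistry):

\[
\underset{4\text{C}}{\text{oxaloacetate}} + \underset{2\text{C}}{\text{acetyl}} \rightarrow \underset{6\text{C}}{\text{citrate}} \rightarrow \cdots \rightarrow \underset{4\text{C}}{\text{oxaloacetate}} + 2\,\text{CO}_2
\]

Each turn of the cycle therefore burns the two carbons delivered by the acetyl group and regenerates the oxaloacetate needed to start the next turn.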

There are several important outputs from the cycle.   Most important for the role of the Krebs cycle in the generation of energy is the transfer of hydrogen atoms (carried by the coenzyme, NAD) to a complex of enzymes known as cytochromes, which are the key components of a system known as the electron transport chain.  The hydrogen atoms feed electrons into this chain thereby providing the energy to create the high energy molecule ATP from its precursor ADP.  ATP fuels virtually all of the energy-demanding activities of the cell, including muscle contraction.

Furthermore, various other metabolic pathways branch off from intermediate stages in the Krebs cycle.  Several of these pathways result in the synthesis of amino acids.  These are required not only for the building of proteins but also serve many other roles.  One of the most important is glutamine, highlighted in blue in figure 1.  Glutamine is the most abundant amino acid in the body.  It is mainly produced in muscle, but serves as a key fuel for the cells lining the gut.  It also plays a key role in long-range signalling within the brain.  However, glutamine can also be synthesized in the brain, so the brain is not critically dependent on muscle for glutamine, though it is of interest that glutamine is one of the amino acids that can cross the blood-brain barrier.  Glutamine is also required to fuel cells of the immune system. During intense exercise, the spin-off pathway that produces glutamine cannot cope with demand and glutamine levels fall.  It is possible that the decreased availability of glutamine is one factor leading to the increased susceptibility of marathon runners to minor respiratory infections. However, there is no convincing evidence that glutamine supplements reduce the prevalence of colds in marathoners.

Nonetheless, one important consequence of the spin-off of glutamine is that the Krebs cycle becomes depleted of some of its intermediates, and oxaloacetate has to be topped up if the cycle is to be sustained.  This can be achieved by the direct conversion of pyruvate (highlighted in red in figure 1) to oxaloacetate.  Thus, even when fat is the main source of fuel entering the Krebs cycle, a contribution from pyruvate is required to top up the cycle.  Pyruvate also serves as the starting point for the synthesis of several amino acids. Pyruvate is produced from glucose via glycolysis.  The multiple key roles of pyruvate illustrate the essential role of glucose, even when the muscle cell is deriving most of its energy from fat.
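
For readers who like to see the chemistry, this topping-up (anaplerotic) step can be written as the standard textbook reaction catalysed by pyruvate carboxylase; the enzyme name and stoichiometry come from general biochemistry rather than from figure 1 itself:

\[
\text{pyruvate} + \text{CO}_2 + \text{ATP} \rightarrow \text{oxaloacetate} + \text{ADP} + \text{P}_i
\]

Note that the step consumes ATP, a reminder that keeping the cycle stocked with intermediates is not free.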

When oxygen supply is inadequate, the Krebs cycle slows down and pyruvate is converted to lactate.  During the generation of pyruvate from glucose via glycolysis, each molecule of glucose yields only 2 molecules of ATP (plus two molecules of NADH, which can transfer electrons into the electron transport chain, each generating an additional 2 molecules of ATP), in contrast to the total of about 36 molecules of ATP produced by the full sequence of glycolysis, the Krebs cycle and electron transport along the electron transport chain.
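
Using the figures quoted above, the arithmetic is roughly as follows (these are the conventional round numbers; exact yields depend on the shuttle systems and assumptions used):

\[
\text{glycolysis: } 2\ \text{ATP} + 2\ \text{NADH} \times 2\ \tfrac{\text{ATP}}{\text{NADH}} \approx 6\ \text{ATP per glucose}
\]

compared with about 36 ATP per glucose for complete oxidation, so glycolysis alone captures only around a sixth of the available energy. When pyruvate is instead diverted to lactate, the NADH is used up regenerating NAD+ rather than feeding the electron transport chain, so the net anaerobic yield falls to just the 2 ATP.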

Although not all details are shown in figure 1, the intermediate metabolites of the Krebs cycle can also act as the starting point in the synthesis of many other substances that play a key role in the biochemical processes that occur in cells, and can also act as the precursors for the synthesis of glucose and fats.

In summary, the Krebs cycle lies at the centre of a complex network of catabolic and anabolic processes.  As mentioned above, this network offers flexibility by providing alternative ways of meeting the body’s metabolic needs.  Energy can be derived from various different sources, and there are alternative ways of synthesising the molecules required to replenish fuel stores after training; to repair the body; to increase strength by augmenting muscle and other connective tissues; and to increase metabolic capacity by synthesis of enzymes.

The response to glycogen depletion

As an illustration of the ways in which the body typically deploys these pathways to deal with particular circumstances, let us consider the situation facing an endurance runner when glycogen supplies begin to run out – the infamous ‘bonk’ that typically occurs in the final 10 Km of a marathon.

Glycogen is the storage form of carbohydrate from which glucose is released.  Even if we are mainly burning fat, muscle requires some glucose to feed into the glycolytic pathway to ensure a reasonable supply of pyruvate, necessary for keeping the Krebs cycle topped up to replace the ketoglutarate that is diverted to produce glutamine.  By this stage of the race, glutamine is becoming depleted, yet it is needed to keep the cells of the gut wall functioning well, and also to help the kidney to maintain acid-base balance.   But even more importantly, the brain needs glucose for fuel because the brain has very few other options for providing energy.  So the body’s highest priority is maintaining adequate glucose levels to supply the brain.

When glycogen stores become seriously depleted, the tendency for blood glucose to fall stimulates cortisol release.  This was illustrated in a study by Tabata and colleagues  in which healthy young men exercised to exhaustion following a 14 hour fast.   Both ACTH (which promotes cortisol release from adrenals) and cortisol itself, were increased.  Cortisol stimulates the synthesis of glucose (from pyruvate and oxaloacetate) via the process known as gluconeogenesis (see figure 1) in the liver.  At this stage of a marathon, the main source of the pyruvate is likely to be lactate generated in muscle and transported via the blood to the liver.   Alternatively, glutamine might be converted to ketoglutarate and thence to oxaloacetate.
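
As a rough indication of what gluconeogenesis costs, the standard textbook stoichiometry for building one glucose molecule from two molecules of pyruvate is shown below (water molecules omitted); these figures come from general biochemistry, not from the study cited above:

\[
2\,\text{pyruvate} + 4\,\text{ATP} + 2\,\text{GTP} + 2\,\text{NADH} \rightarrow \text{glucose} + 4\,\text{ADP} + 2\,\text{GDP} + 6\,\text{P}_i + 2\,\text{NAD}^+
\]

In other words, the liver spends six high-energy phosphate bonds for each molecule of glucose it exports, so maintaining blood glucose from lactate and amino acids late in a race comes at a real energetic cost.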

Because the priority is supplying the brain, not the muscles, cortisol inhibits the transport of glucose into peripheral tissues, including muscle, by keeping the glucose transporter molecules away from the cell surface.  The increased level of cortisol is likely to result in a further reduction of liver glycogen, because cortisol facilitates the action of adrenaline in promoting the breakdown of glycogen.  It is noteworthy that under other circumstances cortisol can facilitate the action of insulin in the synthesis of glycogen, but that is unlikely to apply in states of serious glycogen depletion, since the body’s priority will be maintaining blood glucose.

Because cortisol has acted to decrease the transport of glucose into muscle cells, the major input of fuel to the Krebs cycle in muscle must come from fats.  There are two pathways by which fats can generate the acetyl groups that keep the Krebs cycle revolving and producing energy: beta oxidation that splits the two-carbon acetyl group from long fatty acid chains, and the production of ketones.  Beta-oxidation is stimulated by cortisol.  Furthermore, when liver glycogen levels are low, fats are converted to ketones in the liver, whence they are released into the blood stream.   In both the brain and muscle, ketones can generate the acetyl groups required to maintain the energy supply.

Thus the body has a substantial capacity to ensure that the brain is supplied with glucose and, in extremis, with ketones. However, this is achieved at the price of the elevation of cortisol.   As discussed previously, Skoluda and colleagues have demonstrated that endurance athletes tend to have sustained high levels of cortisol.  In the long term this can lead to many adverse effects, including immune suppression and also, somewhat paradoxically, chronic inflammation, probably mediated by a decrease in the sensitivity of the glucocorticoid receptors that mediate the effects of cortisol.

Thus one of the major needs of the endurance runner is enhancement of the capacity to utilise fats in preference to glucose before marked depletion of glycogen occurs.  Both training itself and diet can help achieve this.  In the next post, we will examine the evidence regarding the effects of diet not only on modulating the effects of training, but also on long term health.

