This is part 6 of my BSMPG series where I will be sharing my notes and thoughts from this year’s Boston Sports Medicine seminar hosted at Northeastern University. I hope that this series stimulates some thought and debate, and that you get some value out of this information.
Presentation 6: Sam Coad
Sam Coad is a disgusting individual: at 23 years of age he has already worked with Brisbane Lions AFL, Gold Coast Titans NRL, started a PhD, worked as performance manager at University of Michigan football, and is on his way to even bigger things.
After I swallowed my self-hatred at how old I am and how little I have achieved in my career relative to Sam, I attended his solid presentation on monitoring recovery in elite athletes. Here is what he had to say:
- The three primary areas of recovery monitoring are load, fatigue and performance. Load: how much work are they doing? Fatigue: what are the physiological effects of that load? Performance: what are the performance implications of that fatigue?
- We have to look at both internal and external monitoring tools to get a full picture of athlete readiness/fatigue. One without the other only gives half the story.
- Primary areas of concern for load-based monitoring are volume, intensity and density. Volume: the quantity of work. Intensity: the rate of work. Density: total work divided by time. (Intensity and density sound similar, but I interpret intensity to mean total volume of work at high intensity, e.g. high-intensity metres, whereas density probably refers to something like metres per minute.)
- There is a lot of value in looking at the relationship between planned load and actual load. In the Michigan data, whenever the performance team started to go off script in terms of weekly training load, they paid the price with poor performance on the field.
- Beware positional differences. There can be big differences between positions in certain variables despite being near identical in others. For example one position may match another for total metres covered, but a greater percentage of these may be high intensity metres. This has physical implications. Likewise a “normal” load for one position may be very high or very low for another.
- The easiest place to start with GPS is to assess positional competition demands. Understanding the demands of the game allows us to better structure training to replicate match activity (vital for realisation of physiological adaptation in game specific context, development of CNS pacing strategies, desensitisation to fatigue, development and refinement of game tactics).
- Key GPS variables Sam looks at with his athletes: total volume, low-intensity volume, high-intensity volume, and the high:low intensity volume ratio.
- Snapshots of data don’t really have any value. It’s like trying to guess the plot of a movie from a single frame of film. The true value of monitoring data lies in assessing an athlete relative to their own data from the previous day, week, month, etc.
- Remember to include standard deviations when providing mean data. A big jump in match activity is a lot more worrying if you’re playing a position or sport where there is usually very little weekly change. If your positional demands vary wildly on a weekly basis it is less of a worry.
- The goal of a load monitoring system is to provide real time information which can be used to make constant small changes to the training programme to ensure the athlete receives the right stimulus, at the right time, in the right amount.
- Physiological fatigue monitoring can take a few different forms: cardiac (various HR measures), central nervous (HRV, reaction time, psychological profiling), neuromuscular (tests of strength, power or reactivity), pulmonary (various).
- The primary value of internal physiological monitoring lies in understanding an individual’s unique response to a training stressor and plotting the course of their recovery. This allows us to better individualise training loads and recovery strategies to create more predictable, consistent performance.
- Physiological monitoring also allows us to examine the relationship between external training loads and their physical effects.
- To be truly effective, there needs to be less than a 6 hour turnaround between collecting data and using the information it gives us.
- Performance monitoring is a big topic, but a good place to start is to try to understand how the variables you can measure differ between winning and losing performances.
- Also look at the relationship, if any, between training or physiological data from the week preceding a match and the data generated during that match (certain key performance variables, or the outcome of the match itself).
- Once we have this information we can then begin the process of trying to develop predictive models to better understand and control performance. See my write up of Roman Fomin’s presentation for more information on this.
- Ultimately, analytical software is the ideal monitoring solution: the sheer volume of data generated by a comprehensive monitoring programme, especially in team sports, makes it extremely difficult to see patterns in the numbers by eye.
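To make the load concepts above concrete, here is a minimal sketch of how the GPS variables, the planned-versus-actual comparison and the mean-plus-standard-deviation idea might be wired together. This is my own illustration, not anything Sam presented; the metric names, the 2 SD flag threshold and the example numbers are all assumptions.

```python
from statistics import mean, stdev

def session_metrics(total_m, hi_m, duration_min):
    """Derive the GPS variables discussed: volumes, density, high:low ratio."""
    low_m = total_m - hi_m
    return {
        "total_volume_m": total_m,
        "hi_volume_m": hi_m,
        "density_m_per_min": total_m / duration_min,   # e.g. metres per minute
        "hi_low_ratio": hi_m / low_m if low_m else float("inf"),
    }

def plan_deviation(planned, actual):
    """Percent by which actual weekly load departed from the planned load."""
    return 100.0 * (actual - planned) / planned

def load_z_score(history, current):
    """Compare today's load with this athlete's own recent baseline.

    history: list of past total-volume values for this athlete/position.
    A large positive z-score means an unusually big jump *for this athlete*,
    which matters more when their normal weekly variation (SD) is small.
    """
    mu, sd = mean(history), stdev(history)
    return 0.0 if sd == 0 else (current - mu) / sd

# Hypothetical athlete: four previous session totals, then today's session.
past_totals = [8200, 8400, 8100, 8300]
today = session_metrics(total_m=9800, hi_m=1400, duration_min=70)
z = load_z_score(past_totals, today["total_volume_m"])
flag = "review load" if z > 2.0 else "within normal range"
```

The z-score is exactly the standard-deviation point above: a 1,500 m jump is alarming for an athlete whose weekly totals barely move, and unremarkable for one whose positional demands vary wildly week to week.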
A series of questions to ask when trying to implement monitoring strategies:
- What is the problem we want to solve?
- Is the technology we are looking at valid? Is it reliable?
- What other alternatives are on the market?
- Will this technology be viable for a long time or will it be redundant in a year or two?
- Will it be easy to educate players and staff about how to use this technology? It is useless if only one guy on the team is smart enough to know how to use it.
- Can you and will you use the information it generates to inform training decisions? If the answer is no, save your money.
- This is not a question but always think “evolution” rather than “revolution”. Incremental improvement is the name of the game.
- There should be three layers to the reporting of data. The simplest is delivered to the head coach, who needs the bare minimum of information to make the right decisions. The next level of detail is distributed to the performance staff, who need more depth to inform their training decisions. The highest level of detail goes to the sport scientist/analyst responsible for managing the data, who needs the whole picture.
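As a toy illustration of those three reporting layers (my own sketch; the field names and which metrics reach which audience are guesses, not Sam’s actual reports), the same athlete record can simply be filtered per audience:

```python
# Full per-athlete record held by the sport scientist (layer 3).
full_record = {
    "athlete": "Player 14",
    "status": "amber",                     # traffic-light summary
    "recommendation": "reduce high-intensity running today",
    "total_volume_m": 9800,
    "hi_volume_m": 1400,
    "density_m_per_min": 140.0,
    "load_z_score": 2.4,
    "hrv_rmssd_ms": 61,
    "raw_gps_file": "gps/p14_session.csv",  # hypothetical path
}

# Fields each audience sees: the head coach gets the bare minimum,
# performance staff get actionable detail, the analyst gets everything.
VIEWS = {
    "head_coach": ["athlete", "status", "recommendation"],
    "performance_staff": ["athlete", "status", "recommendation",
                          "total_volume_m", "hi_volume_m",
                          "density_m_per_min", "load_z_score"],
    "analyst": list(full_record),
}

def report_for(audience, record):
    """Filter one athlete record down to the chosen audience's view."""
    return {k: record[k] for k in VIEWS[audience]}
```

The design choice is that all three reports are views over one dataset, so nobody is ever looking at numbers the analyst can’t trace back to the raw data.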
Lessons I learned from the presentation
I need to do a better job of linking my internal and external monitoring data and examining the relationship between them. I also want to progress to a point where we can develop a predictive model for rugby performance.
It also made me think harder about sharing performance data with other members of the performance staff, and about the format in which that information is delivered to them and to head coaches.