Training load responses modelling and model generalisation in elite sports

This study aims to provide a transferable methodology in the context of sport performance modelling, with a special focus on the generalisation of models. Data were collected from seven elite short-track speed skaters over a three-month training period. In order to account for training load accumulation over sessions, cumulative responses to training were modelled by impulse, serial and bi-exponential response functions. The variable dose-response (DR) model was compared to elastic net (ENET), principal component regression (PCR) and random forest (RF) models, using cross-validation within a time-series framework. The ENET, PCR and RF models were fitted either individually (M_I) or on the whole group of athletes (M_G). The root mean square error criterion was used to assess model performance.
ENET and PCR models provided a significantly greater generalisation ability than the DR model (p = 0.018, p < 0.001, p = 0.004 and p < 0.001 for ENET_I, ENET_G, PCR_I and PCR_G, respectively). Only ENET_G and RF_G were significantly more accurate in prediction than DR (p < 0.001 and p < 0.012). In conclusion, ENET achieved the greatest generalisation and predictive accuracy. Thus, building and evaluating models within a generalisation-enhancing procedure is a prerequisite for any predictive modelling.

The aims of this study were: (1) to present and discuss the contribution of regularisation and dimension reduction methods with regard to the concept of generalisation; and (2) to model athletic performances using models robust to high dimensionality and multicollinearity, and to investigate the key factors of short-track speed skating performance.

Materials and methods
Participants. Seven national elite short-track speed skaters (mean age 22.7 ± 3.4 years; 3 males, body mass 71.4 ± 9.4 kg, and 4 females, body mass 55.9 ± 3.9 kg) voluntarily participated in the study. The athletes had competed at the 2018 Olympic Winter Games in PyeongChang, South Korea (n = 2) or were preparing for the Olympic Games in Beijing, China (n = 7). The whole team was trained by the same coach, who was responsible for training programming and data collection. Mean weekly training volume was 16.6 ± 2.5 hours. Data were collected over a three-month training period without any competition, interrupted by a two-week break and beginning one month after resuming training for a new season. Participants were fully informed about data collection and written consent was obtained from them. The study was performed in agreement with the standards set by the Declaration of Helsinki (2013) involving human subjects. The protocol was reviewed and approved by the local research ethics committee (EuroMov, University of Montpellier, France). The present retrospective study relied on the collected data without causing any changes to the athletes' training programming.
Data set. Dependent variable: performance. Each week, participants performed standing-start time trials (distance of 166.68 m, i.e. 1.5 laps) after a standardised warm-up. At the finish line, a timing gate system (Brower Timing Systems, USA) recorded individual time trial performance in order to ensure a high standard of validity and reliability between measures 44,45. A total of n = 248 performances were recorded, for an average of 35.4 ± 2.23 performances per athlete. The performance test being a gold standard for the assessment of acceleration ability 46, athletes were all familiar with it prior to the study. In the sequel, let Y ⊂ R be the domain of definition of such a performance and Y ∈ Y the continuous random variable. In this context, each observation y_j ∈ Y can be referenced by both its athlete i and its day of realisation t as y_{i,t}. Independent variables. Independent variables stem from various sources, which are summarised in Table 1. In the sequel, let X ⊂ R^d with d ∈ N be the domain of definition of the random variable X = [X_1, ..., X_d] ∈ X. The variable X is thus defined as a vector composed of the independent variables detailed hereafter. First, {X_1} refers to the raw training loads (TL, Fig. 1c), calculated from on-ice and off-ice training sessions (see details in Supplementary material, Appendix 1). Then, {X_2, X_3} represent two aggregations of daily TL. These aggregations result from convolving the daily training loads w(t) (i.e. X_1) with two transfer functions adapted from Philippe et al. 28, denoted g_imp(t) and g_ser(t).
The associated impulse response g_imp(t) reflects the acute response to exercise (e.g. fatigue). It is defined as

g_imp(t) = e^{-t/τ_I},

where τ_I is a short time constant, equal to 3 days in this study (Fig. 1a). Respectively, the response g_ser(t) is a serial and bi-exponential function reflecting training adaptations over time. It is defined as

g_ser(t) = 1 - e^{-t/τ_G} for 0 ≤ t ≤ TD, and g_ser(t) = (1 - e^{-TD/τ_G}) e^{-(t-TD)/τ_D} for t > TD,

where the time delay for the decay phase to begin only after the growth phase is given by the constant TD. Here, TD = 4τ_G. τ_G and τ_D are the time constants of the growth phase and the decline phase respectively, with τ_G = 1 day and τ_D = 7 days (Fig. 1b). Note that the time constants τ_I, τ_G and τ_D were averaged values based on empirical knowledge and previous findings 13. Hence, for a given athlete,

X_2(t) = (w * g_imp)(t) and X_3(t) = (w * g_ser)(t),

where the symbol * denotes the convolution product. Similarly, some characteristic components of each session were aggregated. This encompasses the rate of perceived exertion (RPE) {X_4, X_5}, averaged power {X_6, X_7}, maximal power output {X_8, X_9}, relative intensity {X_10, X_11}, session duration {X_12, X_13} and session density {X_14, X_15}. Also, for each session, ice quality {X_16} and rest between two consecutive sessions {X_17} were considered. Since some models may benefit from time through autocorrelated performances y_{i,t}, the preceding performance y_{i,t-k} with k = 1 was included as a predictor, denoted {X_18}. Finally, athlete identity {X_19} was considered, except for individually built models.
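As an illustration, the two TL aggregations {X_2, X_3} can be computed by discrete convolution of the daily loads with the two response functions. The sketch below is a minimal numpy example using the time constants stated above; the function names, the truncation horizon and the toy load series are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def g_imp(t, tau_i=3.0):
    # impulse response: exponential decay with a short time constant (acute fatigue)
    return np.exp(-t / tau_i)

def g_ser(t, tau_g=1.0, tau_d=7.0):
    # serial bi-exponential response: growth phase followed, after a delay
    # TD = 4 * tau_g, by an exponential decay (training adaptations)
    td = 4.0 * tau_g
    growth = 1.0 - np.exp(-t / tau_g)
    decay = np.where(t > td, np.exp(-(t - td) / tau_d), 1.0)
    return growth * decay

def aggregate(w, g, horizon=60):
    # discrete convolution (w * g)(t), truncated to the past `horizon` days;
    # element t sums past loads weighted by the response at their time lag
    kernel = g(np.arange(1.0, horizon + 1.0))
    return np.convolve(w, kernel, mode="full")[: len(w)]

w = np.array([100.0, 0.0, 80.0, 120.0, 0.0, 60.0, 90.0])  # toy daily TL
x2 = aggregate(w, g_imp)  # acute-response aggregation {X2}
x3 = aggregate(w, g_ser)  # adaptation aggregation {X3}
```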
Applied to the observed data of this study, a data set of n = 248 observations of performances associated with 19 independent variables was obtained (see Table 1). To formalise, assuming that (X, Y) ∼ f with f a density function, the built data set is a sample S = {(x_j, y_j)}_{j≤n} ∼ f^n.
Modelling methodology. Formally, the goal is to find a function h : X → Y which minimises the generalisation error

R(h) = E_{(X,Y)∼f}[(h(X) - Y)^2].

In practice, the minimisation of R is unreachable. Instead, we get a sample set S = (x_i, y_i)_{i≤n} ∈ X × Y and note the empirical error

R_e(h) = (1/n) Σ_{i=1}^{n} (h(x_i) - y_i)^2.

The objective becomes to find the best estimate ĥ = argmin_{h∈H} R_e(h), with H the class of functions that we accept to consider.
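In code, the empirical error is simply the mean squared residual over the sample. A toy numpy sketch follows; the data-generating process and the two candidate functions are invented for illustration:

```python
import numpy as np

def empirical_error(h, X, y):
    # empirical error R_e(h): mean of squared residuals over the sample S
    return float(np.mean((h(X) - y) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=200)
y = 2.0 * X + rng.normal(0.0, 0.1, size=200)  # unknown process f

h_good = lambda x: 2.0 * x   # candidate close to the process
h_bad = lambda x: 0.5 * x    # poorer candidate from the same class H
```

Model selection then amounts to picking, within H, the candidate with the lowest empirical error, with the caveat that a low R_e does not by itself guarantee a low generalisation error R.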
Here, four families of models are evaluated in this context. With the exception of the DR model, all models were computed both individually and collectively (h_I and h_G, respectively).
Reference: variable dose-response. The time-varying linear mathematical model developed by Busso 13 was considered as the model of reference. Formally, and according to the previously introduced notation, this model is a function h^(busso) : X_1 → Y. It describes the training effects on performance over time, y(t), from the raw training loads X_1. TL are convoluted with a set of transfer functions g_apt(t) and g_fat(t), relating respectively to the aptitude and to the fatigue impulse responses:

g_apt(t) = e^{-t/τ_1} and g_fat(t) = e^{-t/τ_2},

with τ_1 and τ_2 two time constants. Combined with the basic level of performance y* of the athlete, the modelled performance is

y(t) = y* + k_1 (w * g_apt)(t) - ((k_2 w) * g_fat)(t),

with k_1 and k_2(t) being gain terms. The latter is related to the training doses by a second convolution with the transfer function g_fat'(t) = e^{-t/τ_3}, with τ_3 a time constant. Since k_2(t) is defined as k_2(t) = k_3 (w * g_fat')(t), where k_3 is a gain term, one may note that k_2(t) increases proportionally to the training load and then decreases exponentially from this new value. From discrete convolutions, the modelled performance can be rewritten as

y(t) = y* + k_1 Σ_{i=1}^{t-1} w(i) e^{-(t-i)/τ_1} - Σ_{i=1}^{t-1} k_2(i) w(i) e^{-(t-i)/τ_2}, with k_2(i) = k_3 Σ_{j=1}^{i} w(j) e^{-(i-j)/τ_3},

where t ∈ T is the day on which the performance is measured. The five parameters of the model (i.e. k_1, k_3, τ_1, τ_2 and τ_3) are fitted by minimising the residual sum of squares (RSS) between modelled and observed performances. A non-linear minimisation was employed according to a Newton-type algorithm 47. Unlike this model of reference, the models presented next benefit from the augmented data space X* = X \ X_1.
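The discrete form above can be transcribed directly. The sketch below follows the stated notation; the parameter values and the toy load series are arbitrary illustrations, not fitted values from the study:

```python
import numpy as np

def dose_response(w, p_star, k1, k3, tau1, tau2, tau3):
    # variable dose-response model: modelled performance on each day t from
    # past daily training loads, with a time-varying fatigue gain k2(i)
    n = len(w)
    k2 = np.array([k3 * sum(w[j] * np.exp(-(i - j) / tau3) for j in range(i + 1))
                   for i in range(n)])
    p_hat = np.full(n, float(p_star))
    for t in range(1, n):
        apt = sum(w[i] * np.exp(-(t - i) / tau1) for i in range(t))
        fat = sum(k2[i] * w[i] * np.exp(-(t - i) / tau2) for i in range(t))
        p_hat[t] = p_star + k1 * apt - fat
    return p_hat

w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # a single unit training dose
p = dose_response(w, p_star=10.0, k1=0.5, k3=0.0, tau1=40.0, tau2=10.0, tau3=5.0)
```

With k3 = 0 (no fatigue), a single dose raises performance above the basic level and then decays exponentially with τ_1, which is the expected behaviour of the aptitude component alone.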
Regularisation procedures. Elastic net. In highly dimensional contexts, multivariate linear regressions may lead to unstable models by being excessively sensitive to the expanded space of solutions. To tackle this issue, cost functions exist that penalise some parameters on account of correlated variables. On one side, Ridge penalisation reduces the space of possible functions by assigning a constraint to the parameters, shrinking their amplitude towards almost null values. On the other side, Least Absolute Shrinkage and Selection Operator (LASSO) penalisation has the capacity to set parameter coefficients exactly to zero. The ENET regularisation combines both Ridge and LASSO penalisations 39. Hence, the multivariate linear model h^(enet) : X* → Y is

y_t = x^T β + ε_t,

with x ∈ X* the predictors, β ∈ R^d the parameters of the model and ε_t the error term. The regularisation stems from the optimisation of the objective

argmin_β { Σ_t (y_t - x_t^T β)^2 + λ [ (1 - α)/2 ||β||_2^2 + α ||β||_1 ] },

where α ∈ [0, 1] denotes the mixing parameter which defines the balance between the Ridge and the LASSO regularisations, and λ ≥ 0 denotes the impact of the penalty, all coefficients being shrunk towards zero as λ → ∞. For α = 0 and α = 1, the model uses a Ridge and a LASSO penalisation, respectively. Thus, for α → 1 and a fixed value of λ, the number of removed variables (null coefficients) increases monotonically from 0 to the LASSO most reduced model. The model was adjusted through the hyper-parameters α and λ during model selection, as part of the CV process (described below).
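A minimal illustration with scikit-learn (our substitute for the R implementation used in the study): note that scikit-learn's `l1_ratio` corresponds to the mixing parameter α above, while its `alpha` corresponds to the penalty weight λ. The simulated data, with one near-duplicate predictor, are invented for the example:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, LinearRegression

rng = np.random.default_rng(1)
n, d = 80, 10
X = rng.normal(size=(n, d))
X[:, 1] = X[:, 0] + rng.normal(scale=0.01, size=n)  # near-duplicate predictor
y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

# l1_ratio plays the role of the mixing parameter alpha in the text,
# alpha plays the role of the penalty weight lambda
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
ols = LinearRegression().fit(X, y)
```

With the collinear pair, unpenalised least squares typically spreads large opposite-sign weights over the two duplicated columns, whereas the penalised fit keeps the coefficients small and may zero some of them out.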
Principal component regression. In this multivariate context with potential multicollinearity issues, principal component analysis projects the original data set from X* into a new space X̃* of orthogonal dimensions called principal components. These dimensions are built from linear combinations of the initial variables. One may use the principal components to regress the dependent variable: this is known as principal component regression (PCR). The regularisation is performed by using as regressors only the first principal components, which by construction retain the maximum of the variance of the original data. In our study and according to Kaiser's rule 48, the p principal components with an eigenvalue higher than 1 were retained and further used in linear regression.
Such a model, h^(pcr) : X̃* → Y, can be defined as a linear multivariate regression over principal components:

y_t = x̃^T β + ε_t,

with x̃ ∈ X̃* \ {X̃*_{p+1}, ..., X̃*_d} the predictors, β ∈ R^p the parameters of the model and ε_t the error term. In addition to being a regularisation technique through the use of a subset of principal components only, PCR also exerts a discrete shrinkage effect on the low-variance components (those with the lowest eigenvalues), nullifying their contribution in the original regression model.
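A numpy-only sketch of PCR with Kaiser's rule (illustrative, not the study's implementation): eigendecomposition of the covariance of standardised predictors, retention of components with eigenvalue above 1, then ordinary least squares on the retained component scores.

```python
import numpy as np

def pcr_fit(X, y):
    # standardise, extract principal components, keep those with eigenvalue > 1
    # (Kaiser's rule), then regress y on the retained component scores
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Z, rowvar=False))
    order = np.argsort(eigval)[::-1]          # components sorted by variance
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = eigval > 1.0                       # Kaiser's rule
    scores = Z @ eigvec[:, keep]
    design = np.column_stack([np.ones(len(y)), scores])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta, eigvec[:, keep]

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=100)  # collinear pair
y = X[:, 0] + rng.normal(scale=0.3, size=100)
beta, components = pcr_fit(X, y)
```

The collinear pair concentrates its shared variance into one large-eigenvalue component, while its residual dimension falls below the Kaiser threshold and is discarded, which is exactly the discrete shrinkage effect described above.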
Random forest. The random forest model consists of a large number of regression trees that operate as an ensemble. RF is random in two ways: (i) each tree is based on a random subset of observations and (ii) each split within each tree is created from a random subset of candidate variables. The overall prediction of the forest is defined as the average of the predictions from the individual trees 49. In this study, the size of the random subset of variables and the number of trees were the two hyper-parameters used for adjusting the model within the model selection. The model is a function h^(rf) : X* → Y.
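The same two knobs appear in scikit-learn's implementation (again our illustrative substitute for the R tooling used in the study): `max_features` for the random variable subset at each split and `n_estimators` for the number of trees. The simulated non-linear response is invented for the example:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=150)

rf = RandomForestRegressor(
    n_estimators=200,   # number of trees in the ensemble
    max_features=3,     # random subset of candidate variables per split
    random_state=0,
).fit(X, y)
pred = rf.predict(X)    # ensemble prediction = average over trees
```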

Time series cross-validation and prediction.
Since we aim at predicting daily skating performances, which are not independent and identically distributed random variables, time dependencies have to be accounted for in the cross-validation procedure. This ensures that information from the future is not used to predict performances of the past. Hence, data were separated, with respect to the time index, into one training data set for time series CV (80% of the total data) and the remaining data for an unbiased model evaluation (evaluation data set). In this procedure, model selection occurs first, with the search for hyper-parameter values that minimise the predictive model error over validation subsets. The model selection is detailed in Algorithm 1. Algorithm 1 iteratively evaluates a class of functions H, in which each function h^(i) differs by its hyper-parameter values. A time-ordered data set S is partitioned into training and validation subsets (S_train and S_valid, respectively). For each partition k with k ∈ {1, ..., K}, the h^(i) functions are fitted on the incremental S_train and evaluated on the fixed S_valid subset that occurs after the last element of S_train. Once the h^(i) functions have been evaluated on the K partitions of S, the function h^(i*) that provides the lowest averaged root mean square error (RMSE) among validation subsets defines the optimal model, denoted h*.
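The expanding-window partitioning of Algorithm 1 can be sketched as follows; the split sizes and the number of partitions are illustrative choices, not those of the study:

```python
import numpy as np

def expanding_window_splits(n, k, valid_size):
    # K time-ordered partitions: S_train grows at each step and S_valid is the
    # fixed-length block immediately after the last training observation
    splits = []
    for i in range(k):
        cut = n - (k - i) * valid_size
        splits.append((np.arange(0, cut), np.arange(cut, cut + valid_size)))
    return splits

# e.g. the 80% selection portion of the n = 248 observations, in time order
splits = expanding_window_splits(n=198, k=5, valid_size=20)
```

Each candidate h^(i) is refitted on every training block and scored on the matching validation block; the hyper-parameter setting with the lowest average RMSE over the K validation blocks defines h*.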
Model evaluation. Afterwards, and for each partition of S, h* is adjusted on new time-ordered training subsets S′_train which combine both S_train and S_valid. Then, the generalisation capability of h* is evaluated on fixed-length subsets of the evaluation data S_eval, saved for that purpose. This procedure refers to the so-called "evaluation on a rolling forecasting origin", since the "origin" at which the forecast is based rolls forward in time 50. Note that the DR model is only concerned by the model evaluation step, since it has no hyper-parameters to be optimised in the model selection phase.
Statistical analysis. For any model, the goodness of fit according to linear relationships and the performance were described by the coefficient of determination (R^2) and the RMSE criterion, respectively. The generalisation ability of each model is described by the difference between the RMSE computed on the training and on the evaluation data. The prediction error was reported through the mean absolute error (MAE) between observations and predictions. After checking normality and variance homogeneity of the dependent variable by a Shapiro-Wilk and a Levene test respectively, linear mixed models were performed to assess the contribution of each class of model to the modelling error rate. Inter- and intra-subject variability in athletic performance modelling was accounted for through random effects. Repeated-measures ANOVAs were performed in order to assess the effect of the model class and of the population on the response, the effect size being reported through the η^2 statistic. Multiple pairwise comparisons of errors between the model of reference and the other models were performed using Dunnett's post-hoc analysis. The significance threshold was fixed at p < 0.05. For linear mixed models, unstandardised regression coefficients β are reported along with 95% confidence intervals (CI) as a measure of effect size. Model computation and statistical analysis were conducted with the R statistical software (version 4.0.2). The DR model was computed with a personal custom-built R package (version 1.0) 51.

Results
Through the time series CV, models provided heterogeneous generalisation and performance prediction abilities. Distributions of RMSE per model are illustrated in Fig. 2. Models generalisation. Mixed model outcomes are reported in Table 2. Accordingly, a significant model class effect on prediction errors was reported (p < 0.001, η^2 = 0.18 [0.15, 0.21] 95% CI). Computing models over a larger population (i.e. group-based models) showed only a trend in favour of group-based models on the error response rate (p = 0.146).
Distributions of RMSE on the data used for model evaluation showed heterogeneous variance between models. The greatest standard deviations were found for DR_I and PCR_G, with σ = 0.053 and σ = 0.062 respectively. The ENET, PCR_I and RF models provided more consistent performances, with lower standard deviations comprised within the [0.023; 0.027] and [0.012; 0.017] intervals for individually and group computed models, respectively. Note that the greatest errors on evaluation data were systematically attributed to one particular athlete. On average, predictions made for this athlete led to greater RMSE than those made for the other athletes (p < 0.001, β_diff = 0.22 [0.163, 0.286] 95% CI). Mean values of R^2 indicated that only weak linear relationships between performance and predictors were identified by the models (R^2 ∈ [0.150; 0.206]). The highest averaged R^2 value, but also the greatest standard deviations, were reported for DR_I models (R^2 = 0.206 ± 0.093). However, significant differences in averaged R^2 were only found for the ENET_I, RF_G and PCR_G models (Table 3).
Predictions made from the two most generalisable models, ENET_G and PCR_G, and from the reference DR_I illustrate the sensitivity of the models for a representative athlete (Fig. 3). Performances modelled by the DR_I model were relatively steady and less sensitive to real performance variations. Standard deviations calculated on the data used for model evaluation supported such a smooth prediction, with σ = 0.015, σ = 0.071 and σ = 0.062 for DR_I, PCR_G and ENET_G, respectively. Regarding ENET_G, the greatest standardised coefficients were attributed to the auto-regressive component (i.e. the past performance), with β = 0.469, followed by the athlete factor and then the impulse and serial bi-exponential aggregations. For regression, PCR_G used the first three principal components, explaining 52.3%, 16.5% and 7.6% of the total variance, respectively. Details about the models' parameters as well as the principal component compositions are available in Supplementary material, Appendix 2. Discussion. In the present study, we provided a modelling methodology that encompasses data aggregation relying on physiological assumptions and model validation for future predictions. Data were obtained from elite athletes, capable of improving their performance through training and very sensitive to physical, psychological and emotional states. The variable dose-response model 13 was fitted on individual data. It was compared to statistical and machine-learning models fitted on individual and on overall data: the ENET, PCR and RF models. Cross-validation outcomes revealed significant heterogeneity in model performance, even though the differences remain small with regard to the total time of the skating trials (see Table 3). The main criterion of interest, generalisation, was significantly greater for both the ENET and PCR models than for the DR_I model.
One can explain this result by the capability of the statistical models to better capture the underlying skating performance process using up to 19 independent variables when associated with regularisation methods. Conversely, the DR_I model relies on two antagonistic components strictly based on the training load dynamics. It does not deal with any other factors that may greatly impact performance (e.g. psychological, nutritional, environmental or training-specific factors) 12,18,52. Thus, such a conceptual limit can be overcome by employing multivariate modelling, which may result in a greater comprehension of the training load-performance relationship for the purpose of future predictions 9,12. To date, only a recent study from Piatrikova et al. 53 extended the former fitness-fatigue model framework 3 to account for some psychometric variables as model inputs. Although the authors reported an improved goodness of fit for this multivariate alternative, attributing impulse responses to these variables might question the conceptual framework behind the model.
Distributions of RMSE from the training and evaluation data sets allow us to establish a generalisation ranking of the models (Table 2). Linear models computed on the overall data offer a better generalisation. This finding is essential because, by handling the bias-variance trade-off, the models are more suited to capturing a proper underlying function that maps inputs to the target, even on unknown data. Hence, it allows further physiological and practical interpretations from the models, such as the remodelling process of skeletal muscle induced by exercise, dynamically represented by exponential growth and decay functions 28. Besides, this result might be partly explained by the sample size. It is well known that statistical inference on small samples leads to bad estimates and consequently to bad performances in prediction 54,55. A greater sample size, obtained by combining individual data, led to more accurate parameter estimates, being more suitable for sport performance modelling 12. That is particularly important to consider when we aim to predict very few discipline-specific performances throughout a season. However, predicting non-invasive physical quality assessments which can be performed daily (e.g. squat jumps and their variations for an indirect assessment of neuromuscular readiness 56, short sprints) may be an alternative for small sample size issues. In our case, standing-start time trials over 1.5 laps allowed the coach to evaluate the underlying physical abilities of the skating performance several times a week. Also, regularisation tends to stabilise parameter estimators and to favour the generalisation of the models.
Table 2. Summary of models pairwise comparisons for generalisation and prediction abilities. β_diff represents the marginal mean difference of the RMSE distribution between the DR model and its comparison.
For instance, multicollinearity may occur in high-dimensional problems, and stochastic models generally suffer from such conditioning. One would note that the ENET and PCR models attempt to overcome these issues in their own way, (i) by penalising or removing features (or both) that are mostly linearly correlated and (ii) by projecting the initial data space onto a reduced space, which is optimised to keep the maximum of the variance of the data through linear combinations of the initial features. Both approaches limit the number of unnecessary, or noisy, dimensions. In contrast, in this study the non-linear machine learning models (RF_I and RF_G) expressed a lower generalisation capability than the linear models, even when models combined data from several athletes. We believe that such models may be powerful in multidimensional modelling but require an adequate data set, in particular one with a sufficient sample size. Otherwise, model overfitting may occur at the expense of inaccurate predictions on unknown data. As reported previously, and with the exception of PCR_G, models were more accurate in prediction than DR_I (Table 3). The large averaged RMSE as well as the large standard deviations provided by the DR_I among performance criteria tend to agree with the literature, since the model is prone to suffer from weak stability and from ill-conditioning raised by noisy data that impact its predictive accuracy 9,10. This suggests that the linear relationships between the two components "Aptitude" and "Fatigue" and the performance are not clear. However, because of a lack of cross-validation procedures on impulse response models, and particularly on the DR employed in our study, our results cannot be validly compared with the literature. Despite the lower standard deviations of R^2 reported by the ENET and PCR models, the weak averaged R^2 values suggest that linear models can only explain a small part of the total variance.
Note that all linear models are concerned (including DR_I), since the differences in averaged R^2 between models are relatively small and only significant for the ENET_I, RF_G and PCR_G models. Therefore, and if the data allow it (i.e. a sufficient sample size and robustly collected data), non-linear models may still be effective and should be considered during the modelling process.
The sensitivity of models to gains and losses of performance differed between the two most generalisable models, ENET_G and PCR_G, and the reference DR_I (Fig. 3). Such differences can be explained by the influence of variables other than the training load dynamics that may affect performance (e.g. ice quality on the day of performance, cumulative training loads following a serial and bi-exponential function, the last known performance), or by a failure of the DR_I model in parameter estimation with regard to the variability of the data. Indeed, the parameter estimates of ENET_G support this, since changes in skating performance were mostly explained through the past performance, weighted by individual properties and, to a lesser degree, by training-related parameters. PCR_G used a different approach for the same purpose and relied greatly on training-related aggregations as well as environmental and training programming variables (see Appendix 2). However, this applied example informs us about neither the generalisation ability of the models nor the accuracy of predictions, because it concerns only a particular set of data, where the selected models (i.e. with optimal hyper-parameters) are trained on the first 80% of the data and evaluated on the remaining 20%. In addition, since model estimates greatly depend on the sample size, we might expect significantly different estimates with more data (particularly for ENET_G). This study presents some limits. The first one concerns the data we used, and particularly the criterion of performance: standing-start time trials a few times a week during an approximately 3-month period. Even though it is a very discipline-specific test with which the athletes are familiar, conducted in standardised conditions, each test requires high levels of arousal, buy-in, motivation and technique.
Therefore, monitoring of psychological states and cognitive functions, such as motivation and attentional focus 57,58, should have been done prior to performing each trial. A concrete example is provided in Fig. 3, where ENET_G greatly penalised the training-correlated features and kept the influence of the auto-regressive component predominant over the other features. This may be the consequence of either an inference issue due to the relatively small sample size, or a lack of informative value of the training-related features, which cannot explain changes in skating performance. Both reasons may also explain the models' failure in predicting the skating performances of one particular athlete, who showed significantly greater prediction errors. It emphasises the importance of measuring the "right" variables for performance modelling purposes, in particular if the sport-specific performance involves various determining factors.
Secondly, the time series cross-validation presented here has a certain cost, most notably when only few data are available (e.g. when models are individually computed). The rolling-origin recalibration evaluation performed as described by Bergmeir et al. 59 implies model training on only an incremental sub-sequence of the training data. Hence, the downsized sample size of the first training sub-sequences may cause model failure in parameter estimation and, consequently, an increase in prediction errors. Then, the training and evaluation data sets present some dependencies. In order to evaluate models on fully independent data, some modifications of the current CV framework exist, at the expense of withdrawing even more data from the learning procedure. According to Racine 60, the so-called hv-block cross-validation is one of the least costly alternatives to the CV used in our study, requiring a certain gap between each training and validation subset. However, due to the limited sample size, we deliberately chose not to adapt the original CV framework described in Algorithm 1. Nonetheless, we recommend researchers and practitioners to consider such alternatives in case of significant dependencies and when the sample size is sufficient.
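The gap idea behind hv-block cross-validation can be sketched as follows (a simplified illustration of the principle, not Racine's exact procedure): the h observations adjacent to the validation block are dropped from the training set, so that training and validation data are separated in time.

```python
import numpy as np

def gapped_split(n, cut, h, valid_size):
    # validation block [cut, cut + valid_size); drop the h observations on each
    # side of it so no training point is within h days of a validation point
    valid_idx = np.arange(cut, cut + valid_size)
    train_idx = np.concatenate([np.arange(0, cut - h),
                                np.arange(cut + valid_size + h, n)])
    return train_idx, valid_idx

# toy split over n = 248 time-ordered observations, with a 7-day gap
train_idx, valid_idx = gapped_split(n=248, cut=150, h=7, valid_size=20)
```

The cost mentioned above is visible directly: the 2h gapped observations belong to neither set, which is why this alternative was judged too expensive for the present sample size.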
Finally, backtesting was performed in order to evaluate model performance on historical data. From a practical point of view, the models are able to predict the coming performance from data known up to day t. However, the contribution of training load response modelling also concerns the simulation of training after-effects over a longer time frame. Having identified a suitable model, practitioners may pinpoint key performance indicators, specific to the discipline of interest, and confront model estimates with field observations. Then, simulations of these independent variables within their own distributions would allow practitioners and coaches to simulate changes in performance following objective and subjective measures of training loads, and any performance factors that are monitored. Conditional simulations that consider known relationships between independent variables (e.g. relationships between training load parameters) 61,62 may improve the credibility of the simulations.
The modelling process presented so far constitutes part of a decision support system (DSS), from issue and data understanding to the evaluation of the modelling results 63. Supported by a deployment framework that makes models usable by all, a DSS helps technical and medical staff in training programming and scheduling tasks 64 through a systemic and holistic approach to a complex problem such as athletic performance 65. Besides, the technological improvement of sports wearable sensors, underpinning the data available for quantifying and characterising exercise, fosters the development of DSS in individual and team sports.

Conclusion
In this study, we provided a transferable modelling methodology which relies on the evaluation of the generalisation ability of models in a context of sport performance modelling. The mathematical variable dose-response model, along with elastic net, principal component regression and random forest models, was cross-validated within a time series framework. The generalisation of the DR model was outperformed by the ENET and PCR models, though our results may not be directly compared with the literature. The ENET model provided the greatest performances, both in terms of generalisation and accuracy in prediction, when compared to the DR, PCR and RF models. Globally, increasing the sample size by computing models on the whole group of athletes led to better-performing models than the individually computed ones. Yet, our results should be interpreted in the light of the models used. In our study, we foster the use of regularisation and dimension reduction methods for addressing high dimensionality and multicollinearity issues. However, other models could prove valuable for athletic performance modelling (e.g. mixed-effect models for repeated measures, generalised estimating equations given possible unknown correlations between outcomes, autocorrelation and cross-correlation functions for time series analysis).
The methodology highlighted in our study can be re-employed whatever the data, with the aim of optimising elite sport performance through training protocol simulations. Beyond that, we believe that model validation is a prerequisite for any physiological and practical interpretation for the purpose of making future predictions. Further research involving training session simulations and model evaluation in forecasting would highlight the relevance of some model families for training programming optimisation.