The game of rugby has been played for over a century, yet its intricacies are still not fully understood. The key ingredient coaches seek is what can be added to a team's make-up to increase that team's level of playing success. The objective of this study was to explore the biomechanical aspects of movement in a rugby context, specifically the stages before, during and after contact. The hypothesis was that the optimal use of running lines in rugby union leads to more successful breaches of the opposition's defensive line, and thus to an increase in linebreaks. To make a comparison based on scientific research principles, nine matches played during the 2001 season were compared with nine matches played during the 2002 season. For each match in both seasons, the total number of linebreaks achieved was counted. In addition, the linebreaks achieved in the 2002 season were subdivided into specific intervention categories in order to determine which intervention had the greatest impact on the total number of linebreaks. Notational analysis was performed on video footage of the matches, and the resulting data were used for further evaluation. The execution of each linebreak was evaluated manually with respect to the intervention introduced during the coaching of the team in the 2002 season. Without exception, comparisons against similar opposition indicated that the total number of linebreaks achieved during the 2002 season was considerably higher than in the 2001 season. The aggregate figures showed a statistically significant increase in linebreaks from the 2001 to the 2002 season, established by means of a t-test.
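The tallying step described above can be sketched as follows. The records and intervention category names below are hypothetical placeholders, shaped like the study's notation data but not drawn from it; only "running line" echoes a term actually used in the abstract.

```python
from collections import Counter

# Hypothetical notation records -- illustrative only, NOT the study's data.
# Each tuple is (match_id, intervention_category) for one observed linebreak.
linebreaks_2002 = [
    (1, "running line"), (1, "running line"), (1, "support play"),
    (2, "running line"), (2, "timing of pass"), (3, "support play"),
]

# Total linebreaks per match (the quantity compared across seasons).
per_match = Counter(match for match, _ in linebreaks_2002)

# Linebreaks per intervention category (to see which had the biggest impact).
per_category = Counter(cat for _, cat in linebreaks_2002)

print(per_match)
print(per_category)
```

The same two aggregations would be run over the real notation sheets for all nine matches of each season.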
First, an F-test was performed to determine whether the two samples had equal variances. Under the null hypothesis the variances of the two samples are equal, while the alternative states that they differ. A test statistic greater than the critical value leads to rejection of the null hypothesis. The test statistic was calculated and evaluated against the F(8,8) = 2.59 critical value at the 5% level of significance. The calculated value of 15.921 exceeds this critical value, so the null hypothesis is rejected and we conclude that the two samples do not have equal variances. We then tested whether the average number of linebreaks in 2002 is significantly higher than the average achieved in the 2001 season. Under the null hypothesis the two sample means are equal; under the alternative, the 2002 mean is higher than the 2001 mean. Unlike a two-sided t-test, this is a one-sided (upper-tail) test, because we are testing whether one mean is greater than, not merely different from, the other. The null hypothesis of equal means is therefore rejected only if the test statistic exceeds the appropriate upper critical value. The calculated test statistic of 4.4827 was evaluated against the critical value t(0.05, 9) = 1.833; once again the null hypothesis is rejected. We can therefore conclude that the mean number of linebreaks made during the 2002 season is statistically greater than the mean number of linebreaks made during the 2001 season. The results of this study thus indicate that the new techniques incorporated into the coaching of the team in 2002 positively influenced the number of linebreaks relative to the 2001 season.
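The two-step procedure above (variance-ratio F-test, then an upper-tail t-test) can be sketched in Python. The per-match counts below are hypothetical, since the abstract does not report the raw data; the critical values 2.59 and 1.833 are those quoted in the text. Because the F-test rejected equal variances, the sketch uses an unequal-variance (Welch-type) t statistic, though the thesis does not name the exact variant it applied.

```python
import math
from statistics import mean, variance

def f_statistic(a, b):
    """Ratio of sample variances (larger over smaller), the F-test
    statistic for equality of variances with df = (len(a)-1, len(b)-1)."""
    va, vb = variance(a), variance(b)
    return max(va, vb) / min(va, vb)

def welch_t(a, b):
    """Upper-tail Welch t statistic for H1: mean(a) > mean(b),
    which does not assume equal variances in the two samples."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical linebreaks per match -- NOT the study's data, merely
# shaped like it (nine matches in each season).
s2002 = [12, 9, 14, 11, 8, 13, 10, 15, 12]
s2001 = [5, 6, 4, 7, 5, 6, 4, 5, 6]

F = f_statistic(s2002, s2001)
t = welch_t(s2002, s2001)

# Reject H0 at the 5% level when the statistic exceeds the critical value.
print(F > 2.59, t > 1.833)
```

With the study's actual data, F = 15.921 and t = 4.4827; both exceed their critical values, reproducing the rejections reported above.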
Dissertation (MA (Human Movement Science))--University of Pretoria, 2005.