This is the second in a series of posts that examines the key aspects of designing a cognitive training study. Post one considered the type of training programme that a researcher might design. But what does success look like for a training study? It is important to establish this during the design phase, to ensure appropriate tests are in place.
The obvious answer is that success is seen when performance improves relative to a control group. Both accuracy and response times can be informative here. Improved accuracy shows whether participants can carry out the task, while response times can hint at underlying mechanisms: increased speed might indicate improved automaticity or efficiency, whereas reduced speed might indicate more deliberation before responding, or the use of a new strategy.
Gains in performance are most likely to be seen in the task that is being practised throughout the programme. If the training is computerised, performance can be tracked during each session, measuring both accuracy and speed. Plotting a learning curve of performance throughout the training might help to identify the number of sessions that were necessary to elicit meaningful change. We might also look for improvements in a task very similar to the trained task, indicating near transfer.
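As a rough illustration of the point about learning curves, the sketch below estimates how many sessions were needed before gains levelled off. It assumes hypothetical per-session accuracy scores and uses a naive threshold heuristic (the session at which the gain over the previous session first drops below a cutoff); in practice a researcher would more likely fit a formal learning-curve model, but the idea is the same.

```python
def sessions_to_plateau(accuracies, threshold=0.02):
    """Return the 1-indexed session at which the learning curve has
    roughly flattened, i.e. the first session whose gain over the
    previous session falls below `threshold`.

    `accuracies` is a list of mean accuracy scores, one per training
    session (hypothetical data, not from any real study). Returns the
    total number of sessions if no plateau is reached.
    """
    for session in range(1, len(accuracies)):
        gain = accuracies[session] - accuracies[session - 1]
        if gain < threshold:
            return session  # gains have become negligible here
    return len(accuracies)


# Example with made-up data: steep early gains, then a plateau.
per_session_accuracy = [0.50, 0.62, 0.71, 0.76, 0.78, 0.79, 0.79]
print(sessions_to_plateau(per_session_accuracy))  # prints 5
```

A single noisy dip would trip this simple rule, so real analyses would smooth the data or fit an exponential curve first; the sketch only shows why tracking performance every session makes "how much training was enough?" an answerable question.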
More importantly though, the hope is that improvements occur beyond the task being trained, in academic performance, demonstrating far transfer. One step further, we might also find long-term effects, whereby those who underwent training see sustained gains in academic performance. This is the holy grail of cognitive training research. Ultimately, this is of course one of the aims of our field – to improve education. But is academic improvement enough to call a training study successful? And what if a study doesn’t produce gains in academic performance?
A training study that shows no improvement in academic performance is not necessarily unsuccessful, as it may inform our cognitive theories. It might tell us that individual differences in that cognitive ability do not affect academic performance in the way we thought they did. This is still a useful outcome, and will lead to new questions and hypotheses. Conversely, a training programme that has led to academic improvement might not tell us anything new if the causal processes have not been considered. This may occur when many approaches have been incorporated into one study and their individual effects cannot be teased apart. Training studies should be considered a tool for establishing the underlying mechanisms of learning.
Finally, it is important to think carefully about the precise aim of the study. Perhaps a strategy-based working memory training programme has been designed to improve maths performance. It may be that an overall aim of the project is to improve maths as a means of encouraging more pupils to take up maths-related subjects later in their educational careers. In this case, success might also be measured in terms of maths anxiety. The training programme may not have shown transfer to maths performance, but it may have reduced maths anxiety through providing new strategies, and this in turn might lead to the desired impact of higher enrolment to maths-related courses.
Considering what counts as success, and what that success means, while designing the programme will help to crystallise the aims and hypotheses of the study. This, in turn, will guide the selection of the tests used to measure success, so that the results can inform our mechanistic understanding.
Part one, on types of cognitive training, can be found here.