My latest post for the Blog on Learning and Development has gone live! This post is the fourth in my mini-series on evidence in the classroom, and considers whether brain training for children can have any impact in the classroom. You can read the post here.
My first post, on bringing scientific evidence into the classroom, introduces educational neuroscience and can be found here.
My second post, on neuromyths in education, can be found here.
My third post, on identifying what works in education, can be found here.
Stay tuned - my next post will be about mindsets in education.
My latest blog post has appeared on the BOLD blog, on the topic of randomised controlled trials (RCTs) in education. In this post I introduce the concept of an RCT in the classroom, and consider the challenges of this approach.
The fifth biennial EARLI SIG22 Neuroscience and Education conference will take place from Monday 4th to Wednesday 6th June 2018, in central London, hosted by the Wellcome Trust.
I am very excited to be on the organising committee for this conference, and we have a really exciting programme planned. Keynotes include Paul Howard-Jones, Heidi Johansen-Berg, and Robert Plomin. We have also planned plenty of time for discussion of new ideas and issues in the field.
We have just opened abstract submission for poster presentations, which will remain open until 8th January 2018.
Find out more about the call for abstracts and the conference here.
"Neuroscience for Teachers provides a comprehensive, up-to-date introduction to the key issues, debates, challenges, methods and research findings in the field of educational neuroscience. It is written accessibly and contains everything that a teacher needs to know about neuroscience, describing where this knowledge comes from. Most fascinating are the tips given to teachers, which are very clearly drawn from the evidence base as it currently stands. This has the added bonus of making Neuroscience for Teachers a useful resource for researchers who carry out related work but may be stumped when considering how their work impacts upon education.
Whether the reader is a teacher or a scientist, they will come away with a deep understanding of the educational neuroscience knowledge base, how we got there, and how we might use this information in the classroom."
I'm really excited to have the first of a mini-series of blog posts published on the BOLD website. BOLD is the Blog On Learning and Development. In the mini-series I will be posting about evidence in the classroom, and the first post is a short introduction to educational neuroscience. I really enjoyed writing the post and found it quite an eye-opening experience - I realised shortly after I started writing that I was defining educational neuroscience almost entirely by defending it from common criticisms. This must be a habit I've got into since it is so widely criticised. I managed to start again and re-write it in a much more positive way, so I hope my enthusiasm for the field comes across!
You can read the first post on educational neuroscience here.
My second post, on neuromyths, can be read here.
My third post is about randomised controlled trials in education, and can be read here.
Previous posts in this series considered the differences between
process- and strategy-based training, and what success might look like in a
training study. Another key design feature to think about is the inclusion of an
adequate control group. Including a control group seems obvious, since we need
to make sure that gains are linked to the training rather than to normal
development. But designing a study with a good control is challenging.
A control group might be matched to the training group on key
characteristics, such as general cognitive ability and age, only differing in
that they do not receive the training – a ‘business as usual’ control. While
this seems like a sensible option, it is important to consider what the control
is doing while the other group receives training. Is the control group doing
normal reading practice while the training group get their reading
intervention? Or perhaps the training group is receiving extra reading help
while the control group have already left school for the day. This is clearly
an important distinction that will affect the conclusions that can be drawn
from the results.
One way to counter these challenges is to include an active control
group. In this scenario, the control group is again matched on key
characteristics to the training group. However, the control group also receive
some training, just not in the skill that the study aims to develop. In this
case, the control group could be given something very different to the reading training
group, like a maths intervention, both taking place after school so as not to
interfere with normal schooling. This would mean that any gains seen in the reading
training group cannot be down to the effect of simply taking part in a piece of
research, which might involve working on fun computer programmes or with
researchers, causing a spike in engagement at school.
But is this a fair comparison? Is it very surprising if pupils improve
their reading skills after doing some more reading? If we really want to find
out what causes the change, we need an even closer match for the control group –
for example a similar reading intervention that does not train the key
ingredient that is thought to lead to improvement (e.g. phonics). Now if we see
an improvement in the training group, we can be fairly sure that there is
something special about the phonics training that led to gains.
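The logic of these comparisons can be sketched in a toy simulation. Everything below is invented for illustration (the group sizes, baselines, and effect sizes are assumptions, not real data), but it shows why the choice of control changes what a gain can be attributed to:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_group(n, baseline_mean, true_gain):
    """Return (pre, post) reading scores for n pupils; true_gain is the
    average improvement built into the simulation for this group."""
    pupils = []
    for _ in range(n):
        pre = random.gauss(baseline_mean, 5)
        post = pre + random.gauss(true_gain, 2)
        pupils.append((pre, post))
    return pupils

def mean_gain(pupils):
    return statistics.mean(post - pre for pre, post in pupils)

# Invented effect sizes: everyone matures a little, taking part in any
# intervention adds an engagement boost, and phonics adds something on top.
business_as_usual = simulate_group(30, baseline_mean=60, true_gain=1)
active_control = simulate_group(30, baseline_mean=60, true_gain=3)
phonics_training = simulate_group(30, baseline_mean=60, true_gain=6)

engagement_effect = mean_gain(active_control) - mean_gain(business_as_usual)
phonics_effect = mean_gain(phonics_training) - mean_gain(active_control)
print(f"engagement effect ~ {engagement_effect:.1f} points")
print(f"phonics-specific effect ~ {phonics_effect:.1f} points")
```

Compared against business as usual, the training group's gain mixes the phonics effect with the general boost of taking part in research; subtracting the active control's gain instead isolates the phonics-specific part.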
The use of different types of control group is associated with
recruitment challenges that should also be taken into account. Unsurprisingly,
many teachers and parents are opposed to their children being put into the control
group, which can mean that fewer pupils sign up to take part. One clever way
around this is to use a cross-over ‘wait list’ control group, where half of the
pupils are in the training group and half are in the control group for one
phase of the study, then they switch for the second phase. Everyone receives
the training at some point, and it is still possible to compare training to control within each phase.
A final option is to include no control group. This might be appropriate
when the aim is to see which individuals respond best to the training. For instance,
do those with better working memory improve more with phonics training than
those with poorer working memory? To answer this question, a group with a large
variation in working memory skills could take part, with no control group. In
this example, the outcome will be able to tell us something useful about the
mechanisms of learning in the absence of a control group.
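A no-control design like this boils down to a correlation between a baseline measure and training gains. As a minimal sketch with made-up numbers (the scores below are illustrative, not real data):

```python
import statistics

# Hypothetical data: baseline working-memory span and reading gain after
# phonics training, for a design probing who responds best to the training.
wm_span = [3, 4, 4, 5, 5, 6, 6, 7, 8, 8]
gain = [1, 2, 1, 3, 4, 4, 5, 6, 6, 7]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(wm_span, gain)
print(f"r = {r:.2f}")
```

A strong positive r would suggest that pupils with better working memory improve more; a real study would of course add a significance test and a much larger sample.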
There is no single right answer when it comes to choosing what the control
group does. This will vary between studies and should be thought about very
carefully before commencing the study, depending on what the research question is.
Part one on types of cognitive training can be found here, and part two
on success in cognitive training can be found here.
Last week I was at a large conference of over 2,000 delegates. While I was there, an article was published in the Guardian, lamenting the huge expense and exclusivity of such conferences, which can be too costly for early career researchers to attend. I was lucky enough to have my trip paid for, but I wondered how many people were unable to attend due to finances, or how many people had forked out their own money to be there.
The Guardian article, written by two academics, highlighted the increasingly extravagant social programmes. If social events are included in the cost of the conference, researchers may wonder why their registration fee was not better spent elsewhere. On the other hand, if the social event is an added extra that is paid for, those with less money (due to any number of factors, including being an early career researcher, or from a poorer country) may opt out and miss important opportunities for networking.
The article also questioned whether or not conferences really deliver what they intend to. A survey of delegates at conferences in the water sector reported that only 2% found conferences useful and cost-effective. So even when researchers can afford to get to a big conference, is it worth the effort and expense? Last week I found myself wondering what the added value was of attending conference talks compared to reading the latest papers.
I think there are a number of things that can be done to improve these international conferences. Keynote speakers often get their expenses paid, yet they are typically not the ones who are most in need of financial help. Keynote speakers who have access to conference funds could be encouraged to pay towards their own expenses, so that money can be directed more towards those in need.
Conferences could offer more in the way of online engagement to reach those who are not present. Conference tweeting is now very common, but this can be hard to follow from afar, so a move towards more formal online discussions, and video streaming, would be welcomed. Conferences could take place less regularly, particularly when there are many conferences that overlap in their themes. Within my field of research, educational neuroscience, there have been discussions about whether or not societies that typically hold separate conferences could run a joint event. This way, delegates would not have to choose which conference to attend in a given year.
Finally, conferences should aim to be better value for money. Researchers often attend a conference for a few days, and present just one talk or one poster. Multiple submissions could be encouraged, particularly to enable early career researchers to discuss their ideas. Many conferences only allow submissions from those who have results at the time of submission. This excludes work that is finalised during the intervening months, and prevents discussion of new project ideas that are not yet underway. Opening up discussions to proposed work, which will most likely require different formats of conference session, would enable peers to help shape future research.
I will continue thinking about these issues over the coming months, as I am co-organising an upcoming conference. My aim is to encourage early career researchers to become more involved, and to provide settings for discussions of issues and ideas outside of the usual talk and questions format. While it is implied that these discussions will happen during coffee breaks and social events, I believe that these discussions should take a prominent role in conferences. The expertise present should be capitalised on so that researchers can work together to consider how best to address issues and move the research field forward.
I am working with researchers at the UCL Institute of Education, The University of Sheffield, and The University of Nottingham on a project looking at the skills involved in learning science in primary school. We are interested in finding out the views of primary school teachers on this topic. If you are a primary school teacher in the UK please fill in our short survey about this by going to this link. The study has received ethical approval from the UCL Institute of Education (ethics number REC 972). Please get in touch with me directly if you have any questions about this survey, by email at email@example.com.
I am really pleased to be featured on the Broad Inquiry website. The Broad Inquiry project hosts profiles of women in science, technology, engineering, and maths. It aims to showcase the interesting work that women are doing, while providing some information about what life is like as a scientist. Any woman in STEM is eligible to sign up to be featured, so I encourage others to do the same. Find out more here.
In March, an open letter in the Guardian, led by Professor Bruce Hood, aimed to raise awareness of the myth of learning styles. Learning styles refers to the idea that individuals have preferences for learning in certain domains (auditory or visual for example), and learn better when information is presented in their preferred domain. A summary of the (lack of) evidence for this approach can be found on the Centre for Educational Neuroscience website, in the centre's series on neuromyths.
The open letter sparked debate in The Psychologist magazine, when Professor Rita Jordan responded. Jordan questioned the evidence presented, championed an individualised approach to education, and suggested that giving lectures to teachers about the myth was not helpful. Hood, on behalf of all co-signatories, responded in turn, emphasising that the original letter referred to a general educational approach, and arguing that giving talks to teachers might help them to recognise pseudoscience.
I decided to write to The Psychologist too. Firstly, I wanted to make clear that those of us who argue against learning styles are not calling for a depersonalised approach to education. There may be some important negative effects of teaching according to learning styles: that students do not get to practise other ways of learning, and that they may miss out on material that is better learnt another way. Surely educators should be challenging pupils to improve in all domains. Arguing against learning styles is therefore not arguing against the notion of individualisation; rather, it is arguing against the use of this specific approach, which may be detrimental.
I also wanted to advocate for increased discussion between researchers and teachers. Scientists giving lectures to educators is one way in which knowledge can be exchanged, but of course there are other approaches too. Collaborations between teachers and researchers are increasingly common, and anything that encourages communication between both groups is to be commended and encouraged.
Finally, it's important to remember that the adoption of learning styles is not cost-free. Schools pay out large sums of money to have someone tell them how to utilise this approach in their classrooms. Given the lack of money in schools, this could certainly be spent better elsewhere. As the original open letter argued: "any activity that draws upon resources of time and money that could be better directed to evidence-based practices is costly and should be exposed and rejected".
This is the second in a series of posts that examines the key aspects
of designing a cognitive training study. Post one considered the type of
training programme that a researcher might design. But what does success look like for a
training study? It is important to establish this during the design phase, to
ensure appropriate tests are in place.
The obvious answer is that success is seen when there are improvements
in performance compared to a control group. Both accuracy and response times
might be important here. Improved accuracy is important in determining ability
to carry out the task, but response times might be informative about underlying
mechanisms. Increased speed might indicate improved automaticity or efficiency,
while reduced speed might indicate greater thought prior to a response, or the
use of a new strategy.
Gains in performance are most likely to be seen in the task that is
being practised throughout the programme. If the training is computerised,
performance can be tracked during each session, measuring both accuracy and
speed. Plotting a learning curve of performance throughout the training might
help to identify the number of sessions that were necessary to elicit meaningful
change. We might also look for improvements in a task very similar to the
trained task, indicating near transfer.
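As a toy illustration of reading a learning curve off per-session data (the accuracy values and the deliberately crude plateau rule below are invented for illustration):

```python
# Hypothetical per-session accuracy (%) for one pupil across a 10-session
# computerised training programme.
accuracy_by_session = [52, 58, 65, 71, 76, 79, 80, 81, 80, 81]

def sessions_to_plateau(curve, min_step=2):
    """Return the number of sessions completed before session-to-session
    gains drop below min_step points and stay there -- a crude marker
    of where learning levels off."""
    steps = [b - a for a, b in zip(curve, curve[1:])]
    for i in range(len(steps)):
        if all(s < min_step for s in steps[i:]):
            return i + 1
    return len(curve)

print(sessions_to_plateau(accuracy_by_session))  # -> 6
```

For this made-up pupil, meaningful gains stop after around six sessions; plotting such curves across a whole sample could inform how long a training programme needs to be.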
More importantly though, the hope is that improvements occur beyond the
task being trained, in academic performance, demonstrating far transfer. One
step further, we might also find long-term effects, whereby those who underwent
training see sustained gains in academic performance. This is the holy grail of
cognitive training research. Ultimately, this is of course one of the aims of
our field – to improve education. But is academic improvement enough to call a
training study successful? And what if a study doesn’t produce gains in academic performance?
A training study that shows no improvement in academic performance is
not necessarily unsuccessful, as it may inform our cognitive theories. It might
tell us that individual differences in that cognitive ability do not affect
academic performance in the way we thought they did. This is clearly still a
useful outcome, and will lead to new questions and hypotheses. Conversely, a training programme that has led to academic improvement might not be able to tell us anything new if the causal processes have not been considered. This may occur if many approaches have been incorporated into one study, and individual effects can't be teased apart. Training studies should be considered a tool, to help establish underlying mechanisms of learning.
Finally, it is important to think carefully about the precise aim of
the study. Perhaps a strategy-based working memory training programme has been
designed to improve maths performance. It may be that an overall aim of the
project is to improve maths as a means of encouraging more pupils to take up
maths-related subjects later in their educational careers. In this case,
success might also be measured in terms of maths anxiety. The training
programme may not have shown transfer to maths performance, but it may have
reduced maths anxiety through providing new strategies, and this in turn might
lead to the desired impact of higher enrolment to maths-related courses.
Considering what counts as success and what success means while
designing the programme will help to crystallise the aims and hypotheses of the
study. This will aid the selection of the tests used to measure success,
so that the results can inform our mechanistic understanding.
Part one, on types of cognitive training, can be found here.
Part three, on control groups, can be found here.
This post was informed by this highly recommended article.
Every so often I get email requests for further information about
educational neuroscience. I thought it would be handy to compile a list of resources
and links that might be of interest. Many of my suggestions are London and UK
based, so please bear this in mind and do share anything else you have come
across further afield.
The Education and
Neuroscience online group was set up by Lia Commissar of the Wellcome Trust,
and aims to facilitate links and partnerships between teachers and researchers.
There are opportunities here for sharing files, events and blog posts, as well
as for getting involved with forum discussions.
In 2015, I’m a
Scientist, supported by the Wellcome Trust, ran an online event where
teachers could ask questions of, and engage in discussion with, scientists who
research learning. Although the event is no longer open to questions, it’s a
great resource for scrolling through or searching for questions that might
be of interest.
The npj Science of
Learning community supports the journal’s aim of fostering discussions across disciplines related to the
science of learning. There are sections that relate to opinions, events, news, and
the latest findings, and there are articles suitable for students, teachers, and researchers.
Learnus is a community that
aims to bring research into the classroom. Learnus hold free lectures throughout
the year, and held their first conference in early 2017. Learnus also offer a
free workshop for
teachers, to increase awareness of the relevance of neuroscience to the classroom.
In 2014, the Wellcome Trust published the
results of a teacher and parent survey that aimed to establish their views
of how neuroscience can influence education.
In the same year, the Education Endowment Foundation (EEF) published
a review of educational interventions that are informed by neuroscience. The
EEF also have a handy toolkit
that indicates the cost, strength of evidence, and impact of a range of
interventions, although these are not necessarily based on neuroscience.
The book Educational Neuroscience discusses methods (e.g. neuroimaging, computational modelling)
and findings (e.g. relating to language, mathematics, executive functions),
considering the relevance to education.
G is for Genes presents findings from genetics that are relevant to education,
and discusses what individual differences in genetics mean for education.
Blogs
BOLD (blog on learning and development) is run by the Jacobs Foundation. It hosts authors who are scientists, journalists, policymakers, and practitioners.
ThInk is an educational neuroscience blog from the Wellcome Trust.
Societies
The International Mind, Brain, and Education Society (IMBES) aims to further our knowledge, as well as create and identify useful resources. IMBES also holds a conference roughly every two years that attracts researchers from around the world as well as teachers.
Special Interest Group 22 (Neuroscience and Education) is part of a wider organisation, the European Association for Research on Learning and Instruction (EARLI). A group conference is held every two years, and in the intervening years there are EARLI conferences that bring together all special interest groups.
Flux is a developmental cognitive neuroscience society that encourages translational research in education and other fields.
Cognitive training is a hot topic in educational neuroscience. Can
training a certain cognitive function lead to gains in academic performance? This
is an exciting question for researchers who (a) want to see real-world impact
of their research, and (b) aim to use training as a tool to further inform
their theories. But what makes a good training study, and what are the key aspects
to be considered throughout the design process? This is the first in a series
of posts that examines the key aspects of designing a cognitive training study.
An important consideration is the type of training programme. Will the programme
provide practice of difficult tasks (process-based training), or will it train
a new strategy to bring to the task (strategy-based training)? Repetition of a
task through process-based training may lead to increased automaticity and
efficiency, while a new method learnt through strategy-based training may
enable a toolkit approach where students can choose the best tool for each problem.
Taking one example, a training programme might aim to improve working
memory, since this is known to be important for many academic outcomes. Process-based
training would see the student practise working memory tasks, perhaps in an
adaptive programme that gets harder or easier depending on performance. On the
other hand, strategy-based training would provide explicit explanations of how to perform the task.
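An adaptive programme of the kind described above is often implemented as a simple staircase, where difficulty tracks performance. A minimal sketch (the 1-up/1-down rule and the span limits are illustrative assumptions; real programmes use more elaborate rules):

```python
def adaptive_span(responses, start=3, lowest=2, highest=9):
    """Track the span length of a working-memory task across trials:
    the span rises after a correct trial and falls after an error,
    keeping difficulty near the pupil's current limit."""
    span = start
    trace = [span]
    for correct in responses:
        span = min(highest, span + 1) if correct else max(lowest, span - 1)
        trace.append(span)
    return trace

# correct, correct, wrong, correct, wrong, wrong
print(adaptive_span([True, True, False, True, False, False]))  # -> [3, 4, 5, 4, 5, 4, 3]
```

The span the staircase hovers around is itself a useful outcome measure, since it estimates the pupil's capacity at each point in training.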
A third approach, that can be considered a type of strategy-based
training, is to train metacognitive knowledge. This time, the student might be
given a mechanistic explanation of why
working memory is so important in their academic studies. They might be
explicitly told when to use working
memory. Here the aim is not necessarily to train working memory, but rather to train
the use of working memory within a certain context. Perhaps a student has
adequate working memory but has not previously considered its use in this subject domain.
Metacognitive training might allow the student to identify when working memory
is needed, and to implement an appropriate strategy.
In the process- and strategy-based approaches we would expect to see an
improvement in working memory. This in turn might lead to improvements in
academic performance. Conversely, we might not expect any working memory
improvement through a metacognitive approach, but we might nonetheless see an
improvement in academic performance.
Perhaps then, the most effective approach would be to combine all three
of the above: Provide repetition of the task, train specific strategies, and
increase metacognitive awareness of the cognitive functions involved in a task
or subject domain. In terms of educational outcomes, this might be the most
likely to show an impact. The challenge for the researcher is that in providing
all three, we are no closer to discovering what the “active ingredient” causing the improvement is.
The final consideration in choosing the type of training study is that
different methods may be effective for different learners. Perhaps some
students require the process- and strategy-based training to improve their
baseline working memory ability, while other students already have very good
working memory but might benefit from metacognitive training to help them
identify when to use this ability. Therefore the type of training programme
might depend on the population that the programme targets.
These considerations highlight the importance of designing the training
programme from a cognitive theory. While the ultimate aim of educational neuroscience
is of course to improve education, as scientists, researchers must use their
theory to choose the best type of training programme to answer a particular
question. Simply providing training and hoping for a positive outcome is not
enough. The outcome must drive theory forward, and the training programme must
be carefully designed to enable this. For the scientist, an important result is
one that can tell us about the mechanism behind change, rather than one that
shows improvement without a good theory about why this change occurred.
This post was informed by this highly recommended article.
Head over to the Centre for Educational Neuroscience blog to see a summary of the day from Alex Hodgkiss, who was the main organiser of the event. The day was a huge success, with almost 100 people in attendance, and almost a further 100 on the waiting list! There is huge appetite for new educational neuroscience research findings, from a wide range of individuals. We had researchers, teachers, and representatives from charities and organisations in attendance.
The whole event was filmed, so we will be making the videos available soon. I hope that the enthusiasm for the day will translate to more events like this, and lead to more conversations between teachers and researchers. Thank you to everyone who attended for your active engagement.
In 2016, Jeffrey Bowers of the University of Bristol published
a paper entitled “The practical and principled problems with educational
neuroscience”. In this paper, Bowers described what he saw as key issues in the
field, ultimately arguing that neuroscience cannot help education. In March
2017, Bowers spoke at UCL’s Language and
Cognition Seminar Series, where he argued the main points from his paper.
As a proponent of educational neuroscience, I took this
opportunity to hear Bowers speak in person about what he feels the main
problems with educational neuroscience are. Bowers started his argument by
stating that educational neuroscience has two core claims, which are that
teachers who know about the brain will be more effective, and that neuroscience
will suggest new forms of teaching. Bowers stated that neuroscience-based
claims about education are self-evident, that work that is helpful for
education is mislabelled as educational neuroscience, and that educational
neuroscience work is misguided. Bowers took the example of brain training games
to show that sometimes educational neuroscience is wrong. He used a version of Dorothy
Bishop’s table of the possible effects of reading interventions to argue
that understanding brain change associated with interventions is pointless.
Like Bishop, Bowers also argued that psychology is key if we want to inform
education. Bowers finished his talk by explaining that he is not opposed to
neuroscience, nor to the science of learning, but neuroscience should not
pretend to help education when it can’t.
The account of educational neuroscience that Bowers
gave did not match my own experiences within the field. Educational
neuroscience is not about neuroscientists conducting research and then imparting
their knowledge to teachers. Rather it is about bringing together
neuroscientists, educators, psychologists, geneticists, and those from any
science that is related to education, and collaborating. Most of the people I
know who conduct educational neuroscience research are indeed psychologists, so
the notion of an educational neuroscientist who does not engage with psychology
does not match reality. Psychology and neuroscience go hand in hand, and for
me, the core claim of educational neuroscience is that an interdisciplinary,
scientific, approach can better explain, and thus enhance, teaching and
learning. The researchers in this field that I have come across do not pretend that their work is relevant to
education: they actually work closely with teachers (many researchers
themselves are also ex-teachers) from the outset of a project, to ensure it is relevant for education.
In terms of teachers’ understanding of neuroscience, the aim
is not to simply tell teachers how the brain works. Part of the mission of
educational neuroscience is to enable teachers to digest new neuroscientific
research themselves. As Bowers mentioned in his talk, brain training games can
often present themselves as being based on scientific findings. While Bowers
saw this as an example of bad educational neuroscience, educational neuroscientists
see it as their duty to inform educators of the perils of these expensive,
sometimes predatory programmes. Tackling myths is one of the items on the
agenda for an educational neuroscientist. Rather than simply passing on
neuroscience findings to teachers, the aim is to enable teachers to access and evaluate the research themselves.
Opponents of educational neuroscience, such as Bowers,
sometimes point to examples of studies that claim to be educational
neuroscience, and show that they have not yet impacted on teaching. Educational
neuroscience is a young field, and the expectation for droves of findings to help
teachers will not be met for some time. The six Wellcome
Trust and Education Endowment Foundation funded projects show how the field
currently works in reality. These large-scale projects bring together
scientists and educators, collaborating, discussing, designing and carrying out
research that is both scientifically rigorous and interesting and useful for
teachers. There is no pretence that the work is relevant for education, because
the involvement of educators ensures that this is a key priority from the
outset. Educational neuroscience is often characterised as neuroscientists
adding an impact statement to their funding application that states “… and this
might help education”. This is a mischaracterisation of the people I know who
work in this field, who are genuinely concerned with using an evidence-based
approach to improve teaching and learning.
Bowers, J. S. (2016). The practical and principled problems
with educational neuroscience. Psychological
Review, 123, 600-612.
Howard-Jones, P. A., Varma, S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., Laurillard, D., & Thomas, M. S. C. (2016). The principles and practices of educational neuroscience: Comment on Bowers. Psychological Review, 123, 620-627.
I recently spoke to some teachers who were
new to the concept of educational neuroscience (or mind, brain, and education),
and its aim to bring a scientific approach to education. I was surprised that
this is still new to some educators, so for me it was a reminder of the
importance of keeping up our efforts to communicate with teachers.
The teachers I spoke to, having learnt a
bit about the field, were keen to find out about the latest research findings
and how they might impact on the classroom. One teacher said she would never
implement any new project in school without evidence to back up its effectiveness.
However, I feel that there are some
expectations of teachers that researchers are not ready to meet. It is
therefore essential that communications between researchers and teachers are honest
and that researchers highlight the extent to which translations from science to
the classroom are realistic. In particular, teachers were looking for a manual to
explain the neuroscience behind various behaviours of their students, and how
best to respond to this behaviour. The first problem here is that we are still far from
a full understanding of the neuroscience behind all of the different types of
behaviours that a teacher may witness in the classroom. The second issue is a
belief that it is neuroscience that can explain these behaviours and what to do
about them. While neuroscience may have some explanatory power, educational
neuroscience aims to bring together all fields of science that are relevant to
teaching and learning. It is likely that the best solutions don’t come directly
from neuroscience, but may come from other types of research such as cognitive
psychology or educational research at the system level. Indeed, some prefer
the label mind, brain, and education (or MBE) to educational neuroscience,
because it better captures this coming together of the sciences relevant to
teaching and learning. Further, what works in the lab, or in a handful of schools, may not generalise to
another school: research findings are likely to be highly contextual.
A further resource that teachers thought would
be useful was a place where they could pose their own research questions for
scientists to investigate. While educational neuroscience researchers value and
seek collaboration with teachers in their studies, researchers usually already
have their topics of investigation (often with funding attached) and are in
fact looking for teachers who have aligned interests. There is also the fact that it
often takes a couple of years to run a decent study and generate useful
results, plus many related studies may be required to answer the specific
question that a teacher has.
The enthusiasm from this group of teachers
new to educational neuroscience was encouraging, but as a field we must be
careful about managing expectations. At the moment, the endeavour of
educational neuroscience should be about collaboration: working together,
sharing resources and findings, developing a common language. While teachers
may not be able to submit their questions for scientists, they certainly can
work with researchers to help shape the design of the research. Hopefully
working in this manner will lead to benefits for teachers in the long run, even
if there is no immediate payoff. For the teacher who vowed to only implement
evidence-based school changes, this is a noble aim, but the evidence base is
not there yet. Perhaps the best solution for the time being is to try things
out, but be wary of being too prescriptive, and monitor changes. The hope is that
one day, we will have the evidence that teachers seek. But this will require many
years of close collaboration between educators and researchers, working
together to try to improve teaching and learning for everyone.
Over on my npj Science of Learning blog, I have started a series of interviews with researchers in Educational Neuroscience. The aim of the interviews is to showcase the work of those who are working hard to bridge the gap between scientific research and the classroom. I also hope they will demonstrate the realistic goals of researchers: nobody in this field is claiming to be able to transform education with some brain scans. I hope you'll agree that there is some great work happening at the moment, in the incremental manner in which science works.
Su, Alex and Mike all used to be teachers, which I think makes them ideal researchers in this field - they have a deep understanding of the education system, the reality of school life, and pressures faced by teachers.
Registration has now opened for a day conference organised by PhD students at the Centre for Educational Neuroscience. It will take place on Friday 17 March 2017, at the Wellcome Trust. The conference is supported by the Bloomsbury Doctoral Training Centre and the Wellcome Trust, and is dedicated to discussing the progress our field has made and challenges for the future.
We will have a keynote from Prof Gaia Scerif, and talks on educational outcomes from Dr Dénes Szücs, Dr Sinead Rhodes, and Dr Michelle Ellefson. We are also delighted to have updates on five of the neuroscience and education projects funded by the Wellcome Trust and Education Endowment Foundation. The day will include plenty of discussion, including a poster session over lunch and a wine reception at the end.
We are inviting poster presentations on work related to the field of educational neuroscience, and will award poster prizes for outstanding work that bridges the gap between neuroscience and the classroom. Since there are a limited number of spaces available, preference will be given to those who submit a poster abstract.
Find a detailed programme below, and please sign up for a place here: goo.gl/xYhtPX
Places will be confirmed by 10 February.
Programme for the day
09:30-10:55 Keynote: Prof Gaia Scerif
10:55-11:15 Coffee break
11:15-13:00 Educational Outcomes: Dr Dénes Szücs, Dr Sinead Rhodes, Dr Michelle Ellefson
13:00-14:00 Lunch (included) and poster presentations
14:00-15:30 Wellcome Trust and Education Endowment Foundation projects:
The importance of executive functions in subjects across the curriculum has led many
researchers to consider how training executive functions might improve
performance in these subjects. We already know that it is possible to train
executive functions, such as working memory or inhibitory control, but the key
question is whether this improvement transfers to academic subjects that
require these skills. So far, there is little research indicating successful transfer,
but the field is moving towards training the executive function within the
academic subject of interest, rather than training the executive function in
isolation. This is more likely to be fruitful because it requires more explicit
use of the skill in a new setting. Part of the training in this case might be
simply raising awareness that a particular skill is useful within that subject.
A less discussed issue in training research is the possible unintended consequences of
training. A recent
paper by Matzen and colleagues found that performance on a recognition
memory task decreased following working memory training in adults. Here, the training
was not within a subject domain, but was a typical adaptive working memory programme,
aiming to improve both verbal and spatial working memory. The authors hypothesised
that there would be near-transfer to other working memory tasks, and far-transfer
to a recognition memory task. In fact, while performance on the baseline
working memory tasks increased, the training led to no near-transfer and lower
performance on the far-transfer recognition memory task.
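To make the training setup concrete, here is a minimal sketch of the kind of adaptive staircase rule such programmes typically use, where task difficulty rises after accurate blocks and falls after poor ones. The thresholds and update rule below are illustrative assumptions, not the actual algorithm used by Matzen and colleagues.

```python
def update_level(level, accuracy, step_up=0.9, step_down=0.6):
    """Simple staircase rule: raise difficulty after high accuracy,
    lower it after low accuracy, otherwise stay put."""
    if accuracy >= step_up:
        return level + 1
    if accuracy < step_down:
        return max(1, level - 1)  # difficulty never drops below level 1
    return level

# Simulated block-by-block accuracies across one training session
level = 2
for acc in [0.95, 0.92, 0.55, 0.70, 0.93]:
    level = update_level(level, acc)
print(level)  # final difficulty level reached: 4
```

Keeping the learner near their performance limit in this way is what makes the training "adaptive"; the same logic applies whether the task is verbal or spatial.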
For me, the
most interesting aspect of this paper is that participants were asked about
their memory strategies. Analysis of these data suggests that participants who
had received the working memory training were using less effective strategies: they
were using the strategies learned during training but these were not effective
in the recognition memory task. Ironically then, participants did show far-transfer to a new task, but it did not have the anticipated positive effect. The authors suggest that future studies include a larger battery of
tasks to examine whether decreased performance is simply a quirk of specific
measures or a genuinely concerning effect.
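One simple way to run such a battery check is to compute, for each task, the pre-to-post gain in the training group relative to a control group. The scores below are hypothetical, purely to illustrate the pattern described above (improvement on the trained task, no near-transfer, and a decrement on far-transfer recognition):

```python
# Hypothetical (pre, post) scores -- not real data -- for a training group
# and a control group across a small battery of tasks.
battery = {
    "trained_wm":       {"train": (10, 14), "control": (10, 11)},
    "near_transfer_wm": {"train": (12, 12), "control": (12, 12)},
    "far_recognition":  {"train": (20, 17), "control": (20, 20)},
}

def transfer_effect(task):
    """Gain in the training group minus gain in the control group."""
    pre_t, post_t = battery[task]["train"]
    pre_c, post_c = battery[task]["control"]
    return (post_t - pre_t) - (post_c - pre_c)

for task in battery:
    print(task, transfer_effect(task))
# A negative value on an untrained task flags a possible harmful transfer effect.
```

With a larger battery, the question becomes whether negative values cluster on one idiosyncratic measure or appear consistently across untrained tasks.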
I think this
finding further highlights the need for training to be explicit and occur within
the subject domain. Explaining clearly to participants why a skill is being
trained, and why it is useful within that context, might help to guard against
implicit unwanted transfer to other tasks. In my own research, I am considering
an inhibitory control training programme within the context of science and
maths. Explaining the mechanism through which inhibitory control impacts
science and maths reasoning, alongside the subject-embedded inhibitory control training programme, might increase participants’ awareness of their strategy
use and allow them to select the appropriate strategy for the task at hand. Considering
the limited transfer effects in the literature, this may be more beneficial
than training inhibitory control in isolation and looking for transfer to
science and maths.
I will bear in mind the possibility that any training could have unintended negative
consequences, and consider what these might be and how they could be measured and
combatted. If we continue to find decreased performance in untrained areas,
this will raise important questions about why and when we should implement
training programmes, weighing up the benefits in one domain against the detriment to others.
Reference for the Matzen paper:
Matzen, L. E., Trumbo, M. C., Haass, M. J., Hunter, M. A., Silva, A., Stevens-Adams, S. M., Buning, M. F., & O’Rourke, P. (2016). Practice makes imperfect: Working memory training can harm recognition memory performance. Memory & Cognition, 44,