Monday, 16 October 2017

EARLI SIG 22 Neuroscience and Education conference

The fifth biennial EARLI SIG22 Neuroscience and Education conference will take place from Monday 4th to Wednesday 6th June 2018 in central London, hosted by the Wellcome Trust.

I am very excited to be on the organising committee for this conference, and we have a strong programme planned. Keynotes include Paul Howard-Jones, Heidi Johansen-Berg, and Robert Plomin. We have also planned plenty of time for discussing new ideas and issues in the field.

Abstract submission for poster presentations has just opened and will remain open until 8th January 2018.

Find out more about the call for abstracts and the conference here.

Latest post on BOLD blog - neuromyths in education

My latest post for the BOLD blog is now live, and this time the topic is neuromyths in education. What exactly are neuromyths, and what impact might they have in the classroom?

Read the post here.

Tuesday, 3 October 2017

New book: "Neuroscience for teachers"

Neuroscience for Teachers: Applying research evidence from brain science is a new book from Richard Churches, Eleanor Dommett, and Ian Devonshire. I was lucky enough to read and review the book before publication, and here are my comments:

"Neuroscience for Teachers provides a comprehensive, up-to-date introduction to the key issues, debates, challenges, methods and research findings in the field of educational neuroscience. It is written accessibly and contains everything that a teacher needs to know about neuroscience, describing where this knowledge comes from. Most fascinating are the tips given to teachers, which are very clearly drawn from the evidence base as it currently stands. This has the added bonus of making Neuroscience for Teachers a useful resource for researchers who carry out related work but may be stumped when considering how their work impacts upon education.

Whether the reader is a teacher or a scientist, they will come away with a deep understanding of the educational neuroscience knowledge base, how we got there, and how we might use this information in the classroom."

Wednesday, 20 September 2017

New educational neuroscience blog series on BOLD

I'm really excited to have the first of a mini-series of blog posts published on the BOLD website. BOLD is the Blog On Learning and Development. In the mini-series I will be posting about evidence in the classroom, and the first post is a short introduction to educational neuroscience. I really enjoyed writing the post and found it quite an eye-opening experience: I realised shortly after I started writing that I was defining educational neuroscience almost entirely by defending it from common criticisms. This must be a habit I've got into, since the field is so widely criticised. I managed to start again and rewrite it in a much more positive way, so I hope my enthusiasm for the field comes across!

You can read the first post on educational neuroscience here.

Tuesday, 19 September 2017

Designing a cognitive training study: Control groups

Previous posts in this series considered the differences between process- and strategy-based training, and what success might look like in a training study. Another key design feature to think about is the inclusion of an adequate control group. Including a control group seems obvious, since we need to make sure that gains are linked to the training rather than to normal development. But designing a study with a good control is challenging.

A control group might be matched to the training group on key characteristics, such as general cognitive ability and age, only differing in that they do not receive the training – a ‘business as usual’ control. While this seems like a sensible option, it is important to consider what the control group is doing while the other group receives training. Is the control group doing normal reading practice while the training group get their reading intervention? Or perhaps the training group are receiving extra reading help while the control group have already left school for the day. This is clearly an important distinction that will affect the conclusions that can be drawn from the results.

One way to counter these challenges is to include an active control group. In this scenario, the control group is again matched on key characteristics to the training group. However, the control group also receive some training, just not in the skill being developed. In this case, the control group could be given something very different from the reading training, such as a maths intervention, with both taking place after school so as not to interfere with normal schooling. This would mean that any gains seen in the reading training group cannot simply be down to taking part in a piece of research, which might involve working on fun computer programmes or with researchers, and so cause a spike in engagement at school.

But is this a fair comparison? Is it very surprising if pupils improve their reading skills after doing some more reading? If we really want to find out what causes the change, we need an even closer match for the control group – for example a similar reading intervention that does not train the key ingredient that is thought to lead to improvement (e.g. phonics). Now if we see an improvement in the training group, we can be fairly sure that there is something special about the phonics training that led to gains.

Different types of control group also bring recruitment challenges that should be taken into account. Unsurprisingly, many teachers and parents are opposed to their children being put into the control group, which can mean that fewer pupils sign up to take part. One clever way around this is to use a cross-over ‘wait list’ control group, where half of the pupils are in the training group and half are in the control group for one phase of the study, and then they switch for the second phase. Everyone receives the training at some point, and it is still possible to compare training to control, as the sketch below illustrates.
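To make the logistics concrete, here is a minimal sketch, in Python, of how pupils might be assigned in such a design. Everything here is hypothetical (the pupil IDs, the function name, the two-phase structure); a real study would also want to stratify the randomisation on the key matching characteristics discussed above.

```python
import random

def assign_waitlist_crossover(pupils, seed=42):
    """Split pupils into two arms of a wait-list crossover design.

    Arm A trains in phase 1 while arm B serves as the control;
    the arms swap roles in phase 2, so everyone is trained eventually.
    """
    rng = random.Random(seed)
    shuffled = pupils[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    arm_a, arm_b = shuffled[:half], shuffled[half:]
    return {
        "phase 1": {"training": arm_a, "control": arm_b},
        "phase 2": {"training": arm_b, "control": arm_a},
    }

schedule = assign_waitlist_crossover([f"pupil_{i}" for i in range(20)])
for phase, arms in schedule.items():
    print(phase, {role: len(group) for role, group in arms.items()})
```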

A final option is to include no control group. This might be appropriate when the aim is to see which individuals respond best to the training. For instance, do those with better working memory improve more with phonics training than those with poorer working memory? To answer this question, a group with a large variation in working memory skills could take part, with no control group. In this example, the outcome will be able to tell us something useful about the mechanisms of learning in the absence of a control group.
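A rough sketch of what that analysis might look like, with invented scores and an invented effect size purely for illustration: every pupil receives the training, and the question is whether baseline working memory predicts who gains most.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# Hypothetical data: baseline working memory scores, and reading
# gains (post-test minus pre-test) after everyone receives training.
working_memory = rng.normal(100, 15, n)
reading_gain = 0.1 * (working_memory - 100) + rng.normal(5, 3, n)

# Does baseline working memory predict who benefits most?
r = np.corrcoef(working_memory, reading_gain)[0, 1]

# Least-squares slope: expected extra gain per one-point
# increase in baseline working memory score.
slope, _ = np.polyfit(working_memory, reading_gain, 1)
print(f"r = {r:.2f}, slope = {slope:.2f} gain points per WM point")
```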

There is no single right answer when it comes to choosing what the control group does. The choice will vary between studies and should be made carefully, in light of the research question, before the study begins.


Part one on types of cognitive training can be found here, and part two on success in cognitive training can be found here.

Tuesday, 5 September 2017

Shaking up the academic conference

Last week I was at a large conference of over 2,000 delegates. While I was there, an article was published in the Guardian lamenting the huge expense and exclusivity of such conferences, which can be too costly for early career researchers to attend. I was lucky enough to have my trip paid for, but I wondered how many people were unable to attend for financial reasons, or how many had forked out their own money to be there.

The Guardian article, written by two academics, highlighted the increasingly extravagant social programmes. If social events are included in the cost of the conference, researchers may wonder why their registration fee was not better spent elsewhere. On the other hand, if the social event is an added extra paid for separately, those with less money (due to any number of factors, including being an early career researcher or coming from a poorer country) may opt out and miss important networking opportunities.

The article also questioned whether or not conferences really deliver what they intend to. A survey of delegates at conferences in the water sector reported that only 2% considered conferences useful and cost-effective. So even when researchers can afford to get to a big conference, is it worth the effort and expense? Last week I found myself wondering what the added value was of attending conference talks compared to reading the latest papers.

I think there are a number of things that can be done to improve these international conferences. Keynote speakers often get their expenses paid, yet they are typically not the ones most in need of financial help. Those who have access to conference funds could be encouraged to pay towards their own expenses, so that money can be directed more towards those in need.

Conferences could offer more in the way of online engagement to reach those who are not present. Conference tweeting is now very common, but it can be hard to follow from afar, so a move towards more formal online discussions and video streaming would be welcome. Conferences could also take place less regularly, particularly where many conferences overlap in their themes. Within my field of research, educational neuroscience, there have been discussions about whether societies that typically hold separate conferences could run a joint event. That way, delegates would not have to choose which conference to attend in a given year.

Finally, conferences should aim to be better value for money. Researchers often attend a conference for a few days and present just one talk or one poster. Multiple submissions could be allowed, particularly to encourage early career researchers to discuss their ideas. Many conferences only accept submissions from those who already have results at the time of submission. This excludes work that is finalised during the intervening months, and prevents discussion of new project ideas that are not yet underway. Opening up discussions to proposed work, which would most likely require different formats of conference session, would enable peers to help shape future research.

I will continue thinking about these issues over the coming months, as I am co-organising an upcoming conference. My aim is to encourage early career researchers to become more involved, and to provide settings for discussing issues and ideas outside the usual talk-and-questions format. While it is implied that these discussions will happen during coffee breaks and social events, I believe they should take a prominent role in conferences. Conferences should capitalise on the expertise in the room, so that researchers can work together to consider how best to address issues and move the research field forward.

Wednesday, 2 August 2017

UK primary school teacher survey

I am working with researchers at the UCL Institute of Education, The University of Sheffield, and The University of Nottingham on a project looking at the skills involved in learning science in primary school. We are interested in finding out the views of primary school teachers on this topic.

If you are a primary school teacher in the UK, please fill in our short survey by following this link. The study has received ethical approval from the UCL Institute of Education (ethics number REC 972).

Please get in touch with me directly if you have any questions about this survey, by email at abrook07@mail.bbk.ac.uk.

Tuesday, 27 June 2017

Broad Inquiry

I am really pleased to be featured on the Broad Inquiry website. The Broad Inquiry project hosts profiles of women in science, technology, engineering, and maths. It aims to showcase the interesting work that women are doing, while providing some information about what life is like as a scientist. Any woman in STEM is eligible to sign up to be featured, and I encourage others to do the same. Find out more here.

Monday, 12 June 2017

The myth of learning styles

In March, an open letter in the Guardian, led by Professor Bruce Hood, aimed to raise awareness of the myth of learning styles. Learning styles refers to the idea that individuals have preferences for learning in certain modalities (auditory or visual, for example), and learn better when information is presented in their preferred modality. A summary of the (lack of) evidence for this approach can be found on the Centre for Educational Neuroscience website, in the centre's series on neuromyths.

The open letter sparked debate in The Psychologist magazine, when Professor Rita Jordan responded. Jordan questioned the evidence presented, championed an individualised approach to education, and suggested that giving lectures to teachers about the myth was not helpful. Hood, on behalf of all co-signatories, responded in turn, emphasising that the original letter referred to a general educational approach, and arguing that giving talks to teachers might help them to recognise pseudoscience.

I decided to write to The Psychologist too. Firstly, I wanted to make clear that those of us who argue against learning styles are not calling for a depersonalised approach to education. Teaching according to learning styles may in fact have important negative effects: students do not get to practise other ways of learning, and they may miss out on material that is better learnt another way. Surely educators should be challenging pupils to improve in all modalities. Arguing against learning styles is therefore not arguing against individualisation; rather, it is arguing against this specific approach, which may be detrimental.

I also wanted to advocate for increased discussion between researchers and teachers. Scientists giving lectures to educators is one way in which knowledge can be exchanged, but of course there are other approaches too. Collaborations between teachers and researchers are increasingly common, and anything that encourages communication between both groups is to be commended and encouraged.

Finally, it's important to remember that the adoption of learning styles is not cost-free. Schools pay out large sums of money to have someone tell them how to utilise this approach in their classrooms. Given the lack of money in schools, this could certainly be spent better elsewhere. As the original open letter argued: "any activity that draws upon resources of time and money that could be better directed to evidence-based practices is costly and should be exposed and rejected".

Thursday, 27 April 2017

Designing a cognitive training study: Success

This is the second in a series of posts that examines the key aspects of designing a cognitive training study. Post one considered the type of training programme that a researcher might design. But what does success look like for a training study? It is important to establish this during the design phase, to ensure appropriate tests are in place.

The obvious answer is that success is seen when there are improvements in performance compared to a control group. Both accuracy and response times might be important here. Improved accuracy is important in determining ability to carry out the task, but response times might be informative about underlying mechanisms. Increased speed might indicate improved automaticity or efficiency, while reduced speed might indicate greater thought prior to a response, or the use of a new strategy.
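As a toy illustration of that comparison, here is a sketch with fabricated accuracy scores. The point of comparing gain scores, rather than raw post-test scores, is that any improvement also seen in the control group (normal development, retest effects) is subtracted out.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pre/post accuracy (% correct) for two matched groups.
pre_train = rng.normal(60, 8, 40)
post_train = rng.normal(68, 8, 40)
pre_ctrl = rng.normal(60, 8, 40)
post_ctrl = rng.normal(62, 8, 40)

# Gain scores: improvement over and above each pupil's starting point.
gain_train = post_train - pre_train
gain_ctrl = post_ctrl - pre_ctrl

# Did the training group gain more than the control group?
t, p = stats.ttest_ind(gain_train, gain_ctrl)
print(f"mean gain: training {gain_train.mean():.1f}, control {gain_ctrl.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")
```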

Gains in performance are most likely to be seen in the task that is being practised throughout the programme. If the training is computerised, performance can be tracked during each session, measuring both accuracy and speed. Plotting a learning curve of performance throughout the training might help to identify the number of sessions that were necessary to elicit meaningful change. We might also look for improvements in a task very similar to the trained task, indicating near transfer.
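For instance, here is one simple (and deliberately naive) way to read a learning curve from per-session accuracy logs. The numbers and the two-point threshold are invented for illustration; a real analysis would fit a proper curve and account for noise.

```python
import numpy as np

# Hypothetical per-session accuracy (% correct) logged by a
# computerised training programme across 12 sessions.
sessions = np.arange(1, 13)
accuracy = np.array([52, 55, 58, 63, 67, 71, 74, 76, 77, 78, 78, 79])

# One rough heuristic: the first session after which the
# session-to-session gain drops below 2 percentage points,
# suggesting performance has begun to level off.
gains = np.diff(accuracy)
plateau = next((s for s, g in zip(sessions[1:], gains) if g < 2), None)
print(f"per-session gains: {gains.tolist()}")
print(f"performance begins to plateau around session {plateau}")
```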

More importantly though, the hope is that improvements occur beyond the task being trained, in academic performance, demonstrating far transfer. Going one step further, we might also find long-term effects, whereby those who underwent training see sustained gains in academic performance. This is the holy grail of cognitive training research, and ultimately, of course, one of the aims of our field: to improve education. But is academic improvement enough to call a training study successful? And what if a study doesn’t produce gains in academic performance?

A training study that shows no improvement in academic performance is not necessarily unsuccessful, as it may inform our cognitive theories. It might tell us that individual differences in that cognitive ability do not affect academic performance in the way we thought they did. This is clearly still a useful outcome, and will lead to new questions and hypotheses. Conversely, a training programme that has led to academic improvement might not be able to tell us anything new if the causal processes have not been considered. This may occur if many approaches have been incorporated into one study, and individual effects can't be teased apart. Training studies should be considered a tool, to help establish underlying mechanisms of learning.

Finally, it is important to think carefully about the precise aim of the study. Perhaps a strategy-based working memory training programme has been designed to improve maths performance. It may be that an overall aim of the project is to improve maths as a means of encouraging more pupils to take up maths-related subjects later in their educational careers. In this case, success might also be measured in terms of maths anxiety. The training programme may not have shown transfer to maths performance, but it may have reduced maths anxiety through providing new strategies, and this in turn might lead to the desired impact of higher enrolment to maths-related courses.

Considering what counts as success, and what success would mean, while designing the programme will help to crystallise the aims and hypotheses of the study. It will also inform the selection of the tests used to measure success, so that the results can speak to our mechanistic understanding.


Part one, on types of cognitive training, can be found here.
Part three, on control groups, can be found here.

This post was informed by this highly recommended article: