Project description
Isn’t it amazing how humans can go from barely understanding a single word as toddlers to being able to read Shakespeare? To untangle the complex processes that underpin reading, researchers have conducted so-called visual word recognition experiments, in which participants have to decide whether a string of letters is a word (e.g. safe) or a non-word (e.g. hafe). The faster and more accurately participants respond, the more easily the word is understood. While this experimental paradigm has enjoyed tremendous success, key questions remain. Chief amongst them is the following: given a string of letters, can we predict how quickly participants respond to it?
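To make the paradigm concrete, the sketch below (in Python, using entirely hypothetical trial data and illustrative response times) shows how lexical decision trials are typically summarised into the two measures mentioned above: per-item accuracy and mean reaction time on correct trials.

```python
# Minimal sketch with hypothetical data: each trial records the letter string
# shown, whether it is a real word, the participant's response, and the
# reaction time in milliseconds.
from statistics import mean

# Hypothetical trials: (string, is_word, response_was_word, rt_ms)
trials = [
    ("safe", True,  True,  512),
    ("safe", True,  True,  547),
    ("hafe", False, False, 604),
    ("hafe", False, True,  689),  # an error: non-word judged a word
]

for item in sorted({t[0] for t in trials}):
    rows = [t for t in trials if t[0] == item]
    correct = [t for t in rows if t[1] == t[2]]
    accuracy = len(correct) / len(rows)
    rt = mean(t[3] for t in correct) if correct else float("nan")
    print(f"{item}: accuracy={accuracy:.2f}, mean correct RT={rt:.0f} ms")
```

A predictive model of visual word recognition would take the letter string (and its properties) as input and try to predict exactly these per-item measures.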
In this project, we will make inroads into answering this question by recognising that visual word recognition is a multilayer network problem. Participants integrate information across different word properties such as spelling and pronunciation to make their decisions. Crucially, these properties do not exist in isolation. Seeing a word, i.e. recognising its spelling, automatically activates its pronunciation. Moreover, it is well documented that seeing a word like cat also activates words with similar spelling and pronunciation, such as hat, car, cap and fat. Put differently, words interact with one another across multiple properties during recognition, which makes visual word recognition a multilayer network problem. Consequently, we can use the powerful machinery of network science and nonlinear dynamical processes to investigate how different mechanisms give rise to experimental data, and in doing so, bring us closer to unravelling the cognitive processes that underpin visual word recognition.
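As an illustration of this framing, here is a minimal sketch of a two-layer lexicon, assuming Python and the networkx library (neither is prescribed by the project; the toy lexicon, its rough phonemic transcriptions, and the one-letter-difference neighbourhood rule are all simplifications chosen for brevity, not the project's actual model):

```python
# A toy two-layer (multiplex) lexicon: the same words appear in both layers,
# but edges in one layer encode similar spelling and edges in the other
# encode similar pronunciation. Illustrative sketch only.
import networkx as nx

# Hypothetical mini-lexicon: word -> rough phonemic transcription.
lexicon = {
    "cat": "kat",
    "hat": "hat",
    "car": "kar",
    "cap": "kap",
    "fat": "fat",
    "kit": "kit",  # phonological neighbour of "cat", but not orthographic
}

def differ_by_one(a: str, b: str) -> bool:
    """Equal-length strings that differ in exactly one position: a simple
    stand-in for the psycholinguistic notion of a 'neighbour'."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

# One graph per layer, both defined over the same node set.
layers = {"orthographic": nx.Graph(), "phonological": nx.Graph()}
for g in layers.values():
    g.add_nodes_from(lexicon)

words = list(lexicon)
for i, w1 in enumerate(words):
    for w2 in words[i + 1:]:
        if differ_by_one(w1, w2):                    # similar spelling
            layers["orthographic"].add_edge(w1, w2)
        if differ_by_one(lexicon[w1], lexicon[w2]):  # similar pronunciation
            layers["phonological"].add_edge(w1, w2)

# Seeing "cat" would co-activate its neighbours in each layer.
for name, g in layers.items():
    print(f"{name} neighbours of 'cat':", sorted(g.neighbors("cat")))
```

Because the two layers share a node set, a dynamical process such as spreading activation can be run over both layers at once; the word kit is included to show that the layers genuinely differ, being a phonological neighbour of cat but not an orthographic one.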
The project will be undertaken in close collaboration with Prof Kathy Conklin in the School of English, Centre for Research in Applied Linguistics.