The Computer Vision Lab (CVL) at Nottingham currently comprises four academic members of staff and a growing number of postdoctoral researchers and PhD students. We have a particular focus on bioimage analysis, plant phenotyping, and understanding health from human behaviour. We hold regular lab/journal club meetings and often collaborate with other researchers and groups from the School and the wider university.


Prof. Andrew French has developed approaches for cell, root, leaf, canopy and field-scale image analysis. He is interested in ML and AI approaches to computer vision; his current research explores the use of synthetic data to increase the power of AI models in plant science, and the use of generative AI to help create that synthetic data. He has experience with multiple imaging modalities, including micro-CT, hyperspectral imaging, and MRI. He is particularly interested in integrating AI into a wider range of domains in the biosciences.

Dr. Michael Pound has developed a broad range of plant image analysis algorithms and tools, based both on classical computer vision and, today, on cutting-edge AI approaches including CNNs, Transformers, and 3D reconstruction. He is also developing AI to build microscopes that automatically adjust to the sample and to optical aberrations. He has extensive training and outreach experience, including being a multi-million-view presenter on the YouTube channel Computerphile.

Dr. Valerio Giuffrida has developed several AI-based approaches to plant image analysis, including an award-winning ML approach to leaf counting in Arabidopsis plants. Other work focuses on the analysis of plant roots and, more recently, of plant cells in microscopy images. Currently, he is developing plant data analysis tools that are accessible to everyone. He is also involved with PhenomUK-DRI as co-lead of the Digital Infrastructure Strand, and is the lead organiser of the CVPPA 2024 workshop, held in conjunction with ECCV 2024.

Dr. Joy Egede’s research focuses on understanding human behaviour through the analysis of audio-visual and biosignal data using machine learning and computer vision techniques. Currently, she works on developing automated methods for detecting and interpreting medical conditions from expressive human behaviour, as well as designing models for user interfaces that adapt content delivery based on social signals read from the user. This includes projects on the automatic, objective assessment of pain and comorbid mental health issues, and on using virtual agents to deliver health advice to mothers and mothers-to-be in sub-Saharan Africa.