School of Computer Science


Ansgar Koene

CaSMa Senior Research Fellow, Faculty of Science



2014 - now: Senior Research Fellow, University of Nottingham, Horizon Digital Economy Research Institute, Nottingham, UK. Project: Citizen Centric Social Media Analysis (CaSMa). Institute Director: Prof. Derek McAuley

2014 - now: Visiting Research Associate, University of Birmingham, Department of Psychology, Birmingham, UK. Projects: (1) CogLaboration (EU FP7); (2) Behavior Informatics project; (3) guest lecturer on data mining for the CN-CR Advanced Computational Methods module. Lab leader: Prof. Alan Wing

2012 - 2014: Research Fellow, University of Birmingham, Department of Psychology, Birmingham, UK. Projects: (1) CogLaboration (EU FP7); (2) Behavior Informatics project; (3) Computational Social Science network. Lab leader: Prof. Alan Wing

2010 - 2012: Research Scientist, RIKEN Brain Science Institute, Wako, Japan. Lab: Integrated Theoretical Neuroscience. Lab leader: Dr. Hiroyuki Nakahara

2008 - 2010: Post-doctoral Research Associate, University of Sheffield, Department of Psychology, Sheffield, UK. Project: ICEA: Integrating Cognition, Emotion and Autonomy (EU FP6). Lab leader: Prof. Tony Prescott

2007 - 2008: Post-doctoral Research Fellow, National Taiwan University, Department of Psychology, Taipei, Taiwan. Project: Context effects on perception and activity in area V4. Lab leader: Prof. Chien-Chung Chen

2006 - 2007: Research Scientist, ATR, Computational Neuroscience Unit, Humanoid Robotics and Computational Neuroscience group, Kyoto, Japan. Project: Biologically inspired multi-modal sensory integration and gaze shift control in a humanoid robot. Lab leader: Dr. Gordon Cheng

2006: Post-doctoral Research Fellow, University College London, Department of Psychology, London, UK. Project: Additivity of visual salience from colour, motion and orientation contrast. Lab leader: Prof. Zhaoping Li

2004 - 2006: Post-doctoral Research Fellow, University College London, Department of Psychology, London, UK. Project: The role of neuronal synchrony in multi-modal integration. Lab leader: Prof. Alan Johnston

2002 - 2003: Post-doctoral Researcher, INSERM Unite 534 "Espace et Action", Lyon, France. Project: Gaze movement modelling: simulation of behavioural and neurophysiological data. Lab leader: Dr. Denis Pelisson

Expertise Summary

Computational Neuroscience

Reinforcement learning models of DA neuron activity during decision making
Systems-level modelling of rate-coded neural networks:
- saccade generation (SC-cerebellum-brainstem)
- size perception (visual cortex V1-V2-V4)
- structure-from-motion (visual cortex MT-IT)
Biomimetic robot navigation using hippocampus, amygdala & basal ganglia models
Modelling of bio-mechanical systems (oculomotor system, 3-link 3D arm)
Robot simulations using Webots (Cyberbotics Inc.)


Experimental skills

Visual & audio-visual psychophysics experiments
Collection and analysis of eye movement data
fMRI experiments on visual perception
Arm & full-body motion capture (Qualisys & Polhemus)
Qualitative behaviour rating (Human-Human & Human-Robot Interaction)
Web-based experiments (questionnaires & dynamic decision-making tasks)
Human-Robot Interaction experiments

Computational skills

Website administration using Content Management Systems (Joomla)
Wiki-site administration
Dynamic web development (web-based experiments) using JavaScript, PHP, HTML5
Matlab, Python, C & C++ programming
Statistical data analysis (Matlab Statistics Toolbox)
Data mining techniques
fMRI data processing (SPM)

Teaching Summary

2013 - 2015: Advanced Computational Methods, three one-hour lectures on Data Mining methods for the Computational Neuroscience and Cognitive Robotics (CN-CR) MSc at the University of Birmingham, Birmingham, UK

Research Summary

The CaSMa project is an ESRC-funded project at the Horizon Digital Economy Research hub of the University of Nottingham, UK. The aim of the project is to develop facilities and approaches that are sensitive to the personal nature of human (social media) data and promote responsible innovation in the capture, analysis and use of human data.

Teaching

2013 - 2015: Advanced Computational Methods, three one-hour lectures on Data Mining methods for the Computational Neuroscience and Cognitive Robotics (CN-CR) MSc at the University of Birmingham, Birmingham, UK

2012 Matlab & Simulink course for the SyMoN lab

2009 Contribution to MSc in Cognitive and Computational Neuroscience teaching, two one-hour lectures on Internal models in Motor Control as part of the "Current Issues in Systems Neuroscience" module, University of Sheffield, Psychology department, Sheffield, UK

2008 Barcelona Cognition, Brain and Technology summer school tutorial on "Internal Models in Motor Control", Barcelona, Spain

2003 Informal introductory course on modeling methods for PhD students and staff of the lab at INSERM Unite 534, Lyon, France

1999 - 2000 Teaching Assistant for General Linear Signal Processing course (ALSV), Physics department, Utrecht University, Utrecht, the Netherlands

1999 - 2000 Teaching Assistant for Digital Image Processing course (BEEL), Physics department, Utrecht University, Utrecht, the Netherlands

Student (co-)supervision

Current Research

The CaSMa project is an ESRC-funded project at the Horizon Digital Economy Research hub of the University of Nottingham, UK. The aim of the project is to develop facilities and approaches that are sensitive to the personal nature of human (social media) data and that promote responsible innovation in the capture, analysis and use of human data. The project draws on social media as a critical and timely source of information for understanding the ways people use social media, what this means for individuals and society, and how social phenomena and events are expressed in social media.

Past Research

CogLaboration: Real World Human-Robot Collaboration

The EU FP7-ICT project "CogLaboration" aimed to develop "Real World Human-Robot Collaboration: From the Cognition of Human-Human Collaboration to the Cognition of Fluent Human-Robot Collaboration". As part of the CogLaboration project at the University of Birmingham SyMoN lab, I used motion capture experiments to measure the forces, movements and interactions between humans performing simple object transfer tasks. Based on the data we collected from Human-Human interactions, our CogLaboration partners produced a prototype robot-arm control system for Robot-Human object transfer. This, in turn, will contribute to the development of service robots capable of safe and direct interaction with humans at home or at work.

Reinforcement learning of temporal context effects

At the Integrated Theoretical Neuroscience lab of the RIKEN Brain Science Institute I worked on the data analysis and computational modelling of dopamine neuron activity recorded in tasks where monkeys learned the temporal pattern of inter-trial reward probability changes (Nakahara et al., Neuron, 41, 269-280, 2004; Bromberg-Martin et al., Neuron, 67, 499-510, 2010).
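The core computation in such reinforcement learning models is the reward prediction error, which phasic dopamine activity is hypothesised to report. A minimal sketch of that idea (my own illustration, not the published model; the learning rate, schedule and function names are made-up):

```python
import random

def td_reward_prediction(reward_probs, alpha=0.1, seed=0):
    """Track expected reward across trials with a TD-style delta rule.

    reward_probs: per-trial reward probability (the temporal context).
    Returns the per-trial prediction errors, the quantity phasic
    dopamine activity is hypothesised to encode.
    """
    rng = random.Random(seed)
    value = 0.5                      # initial reward expectation
    errors = []
    for p in reward_probs:
        reward = 1.0 if rng.random() < p else 0.0
        delta = reward - value       # reward prediction error
        value += alpha * delta       # update expectation toward outcome
        errors.append(delta)
    return errors

# Toy schedule: reward probability switches between trial blocks,
# mimicking a structured inter-trial pattern the animal can learn.
schedule = [0.9] * 50 + [0.1] * 50
errors = td_reward_prediction(schedule)
```

As the expectation tracks each block, the prediction errors shrink within a block and spike again when the reward probability changes.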

ICEA: Integrating Cognition Emotion and Autonomy

As part of the ICEA project at the ABRG I was involved in computational modelling of the basal ganglia circuit for action selection, as well as in combining this circuit with models of the hippocampus and amygdala (developed by other labs in the EU ICEA project) to control a simulated rat (ICEAsim). The different models, written in various programming languages (C, Python, Matlab) and running in the Webots robot simulation environment, were combined using BRAHMS. Short videos of the simulated rat performing a plus-maze goal-directed navigation task are available here: [Observer_POV] [Rat_POV]. The moments when the rat freezes at the end of a maze arm are when it is receiving its reward; the duration of freezing corresponds to the amount of reward received.

Computational model of contour integration

In this project we modelled the lateral interactions between V1 hypercolumns to produce human-like contour integration (i.e. filling in of gaps between collinear line segments). My role in this project was as advisor and proof-reader, while the actual work was done by Zong-En Yu (PhD student at NTU, Taiwan).

Context effects on size perception: Psychophysics

Perceived size depends on a combination of retinotopic size (the size of an object's projection on the retina), prior knowledge about the sizes of that type of object, and the visual context in which the object is presented. Visual context can provide cues about the distance to the object as well as visual references, such as objects of known size, relative to which judgements can be made. In this study we used psychophysical measurements to determine how different texture backgrounds affect the perceived size of a test disk. This study was done at the National Taiwan University with Prof. Chien-Chung Chen.

Computational model of texture context effects on size perception

In this project I developed a model of size perception that combined bottom-up retinotopic size detectors with top-down contextual modulation in order to replicate and explain the data from our psychophysics experiments. This study was done at the National Taiwan University with Prof. Chien-Chung Chen.

fMRI study of background context effects on size perception

Using the same basic stimuli that we used in the psychophysics experiments, we employed an event-related fMRI paradigm to record brain activity related to context induced decrease in perceived stimulus disc size. This study was done at the National Taiwan University with Prof. Chien-Chung Chen.

Computational model of visual attention modes: mode selection and interaction

In this project (a collaboration with Dr. J. Moren at ATR, Japan) we aimed to develop a visual attention system, for use in humanoid robots, that combines 'scene scanning', 'target tracking' and reflexive, panic-like attention modes.

Computational modelling of rapid (saccadic) eye movement control

Development of models of the cerebellar circuit involved in saccade generation based on the experimental data collected by Dr. Denis Pelisson and Dr. Laurent Goffart.

Active audio-visual perception in a humanoid robot: reflex gaze shifts and attention driven saccades

Full awareness of sensory surroundings requires active attentional and behavioural exploration. In visual animals, visual, auditory and tactile stimuli elicit gaze shifts (head and eye movements) to aid visual perception of stimuli. Such gaze shifts can either be top-down attention driven (e.g. visual search) or reflex movements triggered by unexpected changes in the surroundings. We developed an active vision system, focused on multi-sensory integration and the generation of desired gaze-shift commands, as part of the sensory-motor control of the humanoid robot CB.

Salience from combined feature contrasts, evidence for feature specific interaction suggestive of V1 mechanisms

A target can be salient against a background of distractors due to a unique feature such as colour (C), orientation (O) or motion direction (M), or combinations of them. Using subjects' reports comparing saliencies between two stimuli, Nothdurft (Vision Research, 2000, 40:1183-1201) found that combining features increases salience. Since salience serves visual selection rather than discrimination, reaction times (RT) provide a more direct measure. Krummenacher et al. (J. Experimental Psychology: Human Perception & Performance, 2002, 28(6):1303-1322) measured RTs for detecting targets unique in O, C or the combination O+C, revealing that O+C requires shorter RTs than predicted by a race model, which treats RT as the outcome of a race between two independent decision processes on O or C alone. We measured RTs to locate targets unique in O, C or M, or their combinations. Significant (by t-test) violation of the race model by shorter RTs was found in O+M and C+O but not C+M. These results are consistent with some V1 neurons being conjunctively selective to O+M, others to C+O, but almost none to C+M (Horwitz & Albright, Journal of Vision, 2005, 5:525-533; Li, Trends in Cognitive Sciences, 2002, 6:9-16). Comparing the shortest RTs in the single- versus double-feature conditions corroborated this finding.
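The race-model test can be illustrated with a short simulation (a hypothetical sketch with made-up RT distributions, not the study's data). The race model bounds the redundant-target distribution by P(RT_red <= t) <= P(RT_A <= t) + P(RT_B <= t) for every t; a redundant condition that exceeds this bound at any t cannot be explained by two independent races:

```python
import random

def ecdf(samples, t):
    """Empirical P(RT <= t)."""
    return sum(s <= t for s in samples) / len(samples)

def race_model_violated(rt_a, rt_b, rt_red, ts):
    """True if the redundant-target CDF exceeds the race-model bound
    P(RT_red <= t) <= P(RT_a <= t) + P(RT_b <= t) at any tested t."""
    return any(ecdf(rt_red, t) > ecdf(rt_a, t) + ecdf(rt_b, t) for t in ts)

rng = random.Random(1)
rt_o = [rng.gauss(450, 40) for _ in range(500)]  # orientation-only RTs (ms)
rt_c = [rng.gauss(460, 40) for _ in range(500)]  # colour-only RTs (ms)

# A pure race between two independent processes: the faster one responds.
rt_race = [min(a, b) for a, b in zip(rt_o, rt_c)]
# A coactivated condition: systematically faster than any race allows.
rt_coact = [r - 30 for r in rt_race]

ts = range(300, 601, 10)
print(race_model_violated(rt_o, rt_c, rt_race, ts))   # a race respects its own bound
print(race_model_violated(rt_o, rt_c, rt_coact, ts))  # coactivation violates it
```

The race condition can never violate the bound, by construction; only RTs faster than the fastest of the two independent processes (as reported for O+M and C+O) can.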

Cross-modal interactions in audio-visual perception

Audio-visual search experiments revealed that audio-visual synchrony detection follows the pattern of an attentive serial process and not a pre-attentive parallel process. This study was done in collaboration with Dr. S. Nishida and Dr. W. Fujisaki of NTT, Japan.

We investigated whether the apparent enhancement of perceived colour contrast when a visual stimulus is coincident with an auditory tone (Sheth and Shimojo, 2004) is due to sensory-level interaction between the visual and auditory modalities or to interaction at the decision level.

We showed a significant improvement in subjects' signal discrimination ability when the signal was presented in two modalities, compared with uni-modal signals. This bi-modal improvement persisted regardless of whether the signals in the two modalities were presented simultaneously or sequentially. For uni-modal signals, in contrast, there was no improvement in discrimination ability even when presentation of the signal was repeated.

How does the brain integrate signals from different sensory sources within or between modalities to form a coherent percept of the environment?

This work was done in Prof. Alan Johnston's lab as part of the Human Frontier Science Program funded project: 'The role of neural synchrony in multi-modal integration'.

Modelling of bi-stable perception (slant rivalry)

Raymond van Ee's group at the Helmholtz Institute investigated how much depth is perceived when subjects view a slanted plane in which binocular disparity and monocular perspective provide opposite slant information. Using a metrical experimental paradigm, it was found that for small cue conflicts observers perceived the slant of the plane as an average of the perspective- and disparity-specified slants. When the cue conflict was larger, however, observers experienced bi-stability. In a follow-up experiment they measured the time course of percept changes during bi-stability in slant perception and the effect of voluntary control by the subjects. Four conditions were tested: natural, hold perspective, hold disparity and speed up. By comparing the normalised histograms of percept durations under the different instructions, the effect of voluntary control could clearly be seen both in the shift of the peak of the distribution and in the mean percept duration.

Traditional bottom-up competition models of bi-stability assume a 'binary' process in which the percept must choose one of two alternatives. The transition from an averaging regime to a bi-stable regime as a function of cue conflict is therefore inherently incompatible with traditional competition based bi-stability models.

We therefore developed a new model that uses a combination of spatial activity maps (for the averaging) and winner-take-all competition (for the bi-stability). The effect of voluntary control is included in the model as a top-down process that primes the neurons corresponding to the instructed shift in attention, such that they have a heightened response.
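The averaging-versus-competition idea can be sketched as a toy model (my own simplification for illustration, not the published model; the bump width, conflict threshold and attention bias are made-up values):

```python
import math

def gaussian_bump(center, width, axis):
    """Population activity over slant angle, peaked at `center`."""
    return [math.exp(-((x - center) ** 2) / (2 * width ** 2)) for x in axis]

def perceived_slant(persp, disp, attend_perspective=True,
                    conflict_threshold=20.0):
    """Read a slant percept off a 1-D activity map over slant angle."""
    axis = [x * 0.5 for x in range(-180, 181)]      # slant angles (degrees)
    bump_p = gaussian_bump(persp, 8.0, axis)        # perspective-cue activity
    bump_d = gaussian_bump(disp, 8.0, axis)         # disparity-cue activity
    if abs(persp - disp) < conflict_threshold:
        # Small cue conflict: the bumps overlap and sum, so the map's
        # peak falls between the two cue values (averaging regime).
        activity = [p + d for p, d in zip(bump_p, bump_d)]
        return axis[max(range(len(axis)), key=activity.__getitem__)]
    # Large cue conflict: winner-take-all competition. A top-down bias
    # models voluntary control ('hold perspective' vs 'hold disparity')
    # by priming the attended population with a small extra gain.
    bias = 0.1
    strength_p = sum(bump_p) + (bias if attend_perspective else 0.0)
    strength_d = sum(bump_d) + (0.0 if attend_perspective else bias)
    return persp if strength_p > strength_d else disp

print(perceived_slant(10.0, 20.0))                             # averaging: 15.0
print(perceived_slant(-30.0, 30.0, attend_perspective=True))   # winner: -30.0
print(perceived_slant(-30.0, 30.0, attend_perspective=False))  # winner: 30.0
```

A pure competition model would have to pick one cue even for tiny conflicts; letting the activity map sum before readout is what produces the averaging regime the experiments require.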

Mechanics of the oculo-motor system and its consequences for eye movement control

The topic of my PhD research was the control of saccadic eye movements. This work was subdivided into four subtopics:
1. Cause of kinematic differences during centrifugal and centripetal saccades;
2. Quantification of saccadic signal modification as a function of eye orientation;
3. Properties of 3D rotations and their relation to eye movement control;
4. Errors resulting from the use of eye plant models that treat agonist-antagonist muscle pairs as a single muscle.

Fuzzy-Neural Networks for automated rule extraction from data sets

Initialization and structure learning in fuzzy neural networks for data-driven rule-based modelling

Gradient-based optimization was used to fit a fuzzy-neural-network model to data, and a number of techniques were developed to enhance the transparency of the generated rule base: data-driven initialization, similarity analysis for redundancy reduction, and evaluation of rule contributions. The initialization uses flexible hyper-boxes to avoid redundant and irrelevant coverage of the input space. Similarity analysis detects redundant terms, while the contribution evaluation detects irrelevant rules. Both are applied during network training for early pruning of redundant or irrelevant terms and rules, excluding them from further parameter learning. The method was tested on a standard example from the literature, the 'Rosenbrock Valley'.


University of Nottingham
Jubilee Campus
Wollaton Road
Nottingham, NG8 1BB
