
Vision Scientists! 

With ARVO and VSS cancelled and most of us working from home and under strict social distancing guidelines, we would like to invite vision scientists worldwide to join us for a weekly vision science virtual coffee break.

Come and join us on Zoom next Wednesday, 25th March.

Topic: Vision Science in times of social distancing

Time: Mar 25, 2020

08:00 am Pacific

11:00 am Eastern

03:00 pm UK

04:00 pm Central European

Join Zoom Meeting

https://unibas.zoom.us/j/661371046


Posted by Gunnar Schmidtmann

The processing of compound radial frequency patterns

Gunnar Schmidtmann, Frederick Kingdom, Gunter Loffler

Radial frequency (RF) patterns can be combined to construct complex shapes. Previous studies have suggested that such complex shapes may be encoded by multiple, narrowly tuned RF shape channels. To test this hypothesis, thresholds were measured for the detection and discrimination of various combinations of two RF components. Results show evidence of summation: sensitivity for the compounds was better than that for the components, with little effect of the components’ relative phase. If both RF components are processed separately at the point of detection, they would combine by probability summation (PS), resulting in only a small increase in sensitivity for the compound compared to the components. Summation exceeding the prediction of PS suggests a form of additive summation (AS) by a common mechanism. Data were compared to predictions of a winner-take-all model, in which only the strongest component contributes to detection, a single-channel AS model, and multi-channel PS and AS models. The multi-channel PS and AS models were evaluated under both Fixed and Matched Attention Window scenarios, the former assuming a single internal noise source for both components and compounds, the latter assuming different internal noise sources for components and compounds. The winner-take-all and single-channel models could be rejected. Of the remaining models, the best performing was an AS model with a Fixed Attention Window, consistent with detection being mediated by channels that are efficiently combined and limited by a single source of noise for both components and compounds.
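The contrast between winner-take-all, probability summation, and additive summation can be illustrated with a generic Minkowski (Quick) pooling rule. This is a minimal sketch with an assumed exponent and arbitrary sensitivity units, not the model code from the paper.

```python
def compound_sensitivity(s1, s2, beta):
    """Minkowski-pool two component sensitivities with exponent beta.

    beta = 1 approximates linear (additive) summation;
    beta around 3-4 approximates probability summation under
    high-threshold assumptions; as beta grows large the rule
    approaches winner-take-all, i.e. no summation advantage.
    """
    return (s1 ** beta + s2 ** beta) ** (1.0 / beta)

s1 = s2 = 1.0  # equal component sensitivities (arbitrary units)

additive = compound_sensitivity(s1, s2, beta=1.0)  # large gain: 2.0
prob_sum = compound_sensitivity(s1, s2, beta=4.0)  # small gain: ~1.19
winner = max(s1, s2)                               # no gain: 1.0
```

Observing compound sensitivity well above the beta ≈ 4 prediction is the kind of result that favours additive summation by a common mechanism over independent detection of the components.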

Schmidtmann, G., Kingdom, F. A. A., & Loffler, G. (2019). The processing of compound radial frequency patterns. Vision Research, 161, 63–74. [PDF] [PubMed]

Posted by Gunnar Schmidtmann
Cover: Jennings et al. (2019), Vision Research, 161, 12-17

Detection of distortions in images of natural scenes in mild traumatic brain injury patients.

Ben J. Jennings, Gunnar Schmidtmann, Fabien Wehbé, Frederick Kingdom, Reza Farivar

Mild traumatic brain injuries (mTBI) frequently lead to impairment of visual functions, including blurred and/or distorted vision, due to the disruption of visual cortical mechanisms. Previous mTBI studies have focused on specific aspects of visual processing, e.g., stereopsis, using artificial, low-level stimuli (e.g., Gaussian patches and gratings). In the current study we investigated high-level visual processing by employing images of real-world natural scenes as our stimuli. An mTBI group and a control group of healthy observers were tasked with detecting sinusoidal distortions added to the natural scene stimuli as a function of the distorting sinusoid's spatial frequency. The mTBI group was as sensitive to high-frequency distortions as the control group; however, sensitivity decreased more rapidly with decreasing distortion frequency in the mTBI group relative to the controls. These data reflect a deficit in the mTBI group's ability to spatially integrate over larger regions of the scene.
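The basic idea of a sinusoidal distortion can be sketched by remapping image coordinates: each row is shifted horizontally by a sinusoidal function of the row index, where the number of cycles sets the distortion's spatial frequency. This is an illustrative sketch with assumed parameter names, not the stimulus code used in the study.

```python
import numpy as np

def sinusoidal_distort(img, amplitude_px, cycles):
    """Shift each row horizontally by a sinusoid of the row index.

    amplitude_px: peak horizontal displacement in pixels.
    cycles: number of sinusoid cycles over the image height,
            i.e. the distortion's spatial frequency.
    """
    h = img.shape[0]
    rows = np.arange(h)
    shifts = np.round(
        amplitude_px * np.sin(2 * np.pi * cycles * rows / h)
    ).astype(int)
    out = np.empty_like(img)
    for r in range(h):
        out[r] = np.roll(img[r], shifts[r], axis=0)
    return out

scene = np.random.rand(64, 64)  # stand-in for a natural scene image
low_freq = sinusoidal_distort(scene, amplitude_px=4, cycles=1)   # coarse warp
high_freq = sinusoidal_distort(scene, amplitude_px=4, cycles=8)  # fine ripple
```

Detecting the low-cycle warp requires comparing image structure across large distances, which is the kind of spatial integration the abstract suggests is impaired after mTBI.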

Jennings, B. J., Schmidtmann, G., Wehbé, F., Kingdom, F. A. A., & Farivar, R. (2019). Detection of distortions in images of natural scenes in mild traumatic brain injury patients. Vision Research, 161, 12–17. [PDF] [PubMed]

Posted by Gunnar Schmidtmann

I have been awarded the VISTA Distinguished Visiting Scholar Award and will visit York University (Toronto) for a research project from May 24th to June 14th.

Vision: Science to Applications (VISTA) is a collaborative research programme hosted by York University and funded by the Canada First Research Excellence Fund (CFREF, 2016-2023). My host Dr Ingo Fruend (https://www.yorku.ca/ifruend/) and I will be working on aspects of shape perception and the development of an optimisation algorithm for perimetry. I will also attend the Centre for Vision Research International Conference on Predictive Vision (http://www.cvr.yorku.ca/conference2019). 

More information on the VISTA research programme can be found here: http://vista.info.yorku.ca.

Posted by Gunnar Schmidtmann

Abstract

Faces provide not only cues to an individual's identity, age, gender and ethnicity, but also insight into their mental states. The ability to identify the mental states of others is known as Theory of Mind. Here we present results from a study extending our understanding of the temporal dynamics of expression recognition beyond the basic emotions, at short presentation times ranging from 12.5 to 100 ms. We measured the effect of variations in presentation time on identification accuracy for 36 different facial expressions of mental states based on the Reading the Mind in the Eyes test (Baron-Cohen et al., 2001), and compared these results to those for corresponding stimuli from the McGill Face Database, a new set of images depicting mental states portrayed by professional actors. Our results show that subjects are able to identify facial expressions of complex mental states at very brief presentation times. The correct identification of facial expressions of complex mental states at very short presentation times suggests fast, automatic Type-1 cognition.

Schmidtmann, G., Jordan, M., Loong, J. T., Logan, A. J., Carbon, C. C., & Gold, I. Temporal processing of facial expressions of mental states. BioRxiv, 602375; doi: https://doi.org/10.1101/602375 [PDF]

Posted by Gunnar Schmidtmann

Schmidtmann, G., Jennings, B. J., Sandra, D. A., Pollock, J., & Gold, I. (2019). The McGill Face Database: validation and insights into the recognition of facial expressions of complex mental states. BioRxiv, 586453. https://doi.org/10.1101/586453 [PDF]

The McGill Face Database: validation and insights into the recognition of facial expressions of complex mental states

Current databases of facial expressions of mental states typically represent only a small subset of expressions, usually covering the basic emotions (fear, disgust, surprise, happiness, sadness, and anger). To overcome these limitations, we introduce a new database of pictures of facial expressions reflecting the richness of mental states. Ninety-three expressions of mental states were interpreted by two professional actors, and high-quality pictures were taken under controlled conditions in front and side view. The database was validated in two experiments (N=65). First, a four-alternative forced-choice paradigm was employed to test the ability of participants to correctly select a term associated with each expression. In a second experiment, we employed a paradigm that did not rely on any semantic information: the task was to locate each face within a two-dimensional space of valence and arousal (mental state space) using a "point-and-click" paradigm. Results from both experiments demonstrate that subjects can reliably recognize a great diversity of emotional states from facial expressions. Interestingly, while subjects' performance was better for front-view images, the advantage over the side view was not dramatic. To our knowledge, this is the first demonstration of the high degree of accuracy human viewers exhibit when identifying complex mental states from only partially visible facial features. The McGill Face Database provides a wide range of facial expressions that can be linked to mental state terms and can be accurately characterized in terms of arousal and valence.
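The point-and-click responses can be thought of as points in a two-dimensional valence-arousal space, and agreement across observers can be summarized by how tightly their placements cluster. The sketch below is a hypothetical illustration of that idea, with made-up click coordinates; it is not the database's actual analysis code.

```python
import numpy as np

def placement_spread(points):
    """Summarize agreement of valence-arousal placements.

    points: sequence of (valence, arousal) clicks, each in [-1, 1].
    Returns (spread, centroid): mean Euclidean distance of the
    placements from their centroid (smaller = better agreement),
    and the centroid itself.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    spread = float(np.linalg.norm(pts - centroid, axis=1).mean())
    return spread, centroid

# hypothetical placements of one expression by five observers
clicks = [(0.6, 0.4), (0.5, 0.5), (0.7, 0.3), (0.55, 0.45), (0.6, 0.35)]
spread, centre = placement_spread(clicks)
```

The centroid gives the expression's consensus position in valence-arousal space; the spread indicates how reliably observers characterize it without any semantic labels.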

Posted by Gunnar Schmidtmann