Dr. Dietrich Albert is currently Professor of Psychology at the University of Graz (UniGraz), Senior Scientist at the Knowledge Management Institute of Graz University of Technology (TUG-KMI), and Key Researcher at the Know-Center (Graz). At the UniGraz Department of Psychology, he has headed the Cognitive Science Section (CSS) since 1993. In the preceding years he served on the faculties of the Universities of Göttingen, Marburg, Heidelberg, and, in 2001/02, Hiroshima. His research topics cover several areas, including learning and memory, psychometrics, anxiety and performance, psychological decision theory, computer-based tutorial systems, and values and behaviour. D. Albert's current focus in R&D is on knowledge and competence structures, their applications, and empirical research. By working with psychologists, computer scientists, and mathematicians, several academic disciplines are represented within his research team. Besides national activities, his expertise in European R&D projects is documented by several projects since FP5.
Currently he is also the Chair of the Board of Trustees of the Institute for Psychology Information (ZPID), Germany, and a member of several scientific advisory boards.
Microadaptivity - non-invasive competence assessment in complex learning situations
The concept of microadaptivity was introduced in the context of game-based learning, where classical adaptivity, i.e. the personalised selection or sequencing of learning objects, proved insufficient. Microadaptivity uses a learner's observed activities within complex learning situations to immediately update the learner model and, subsequently, to provide better-adapted learning support.
Competence-based Knowledge Space Theory (CbKST) has proven to be a theoretically well-founded basis for adaptive learning in the classical sense. Combined with the concept of a problem space, CbKST provides an equally well-suited basis for realising microadaptivity.
A core component of microadaptivity is the assessment of the learner's competence state. Real-time constraints in the game context necessitated the development of assessment procedures that are computationally much less demanding than those already existing in the knowledge space community.
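The kind of lightweight, non-invasive update such an assessment requires can be illustrated by a simple Bayesian update over competence states. The states, prior probabilities, and likelihoods below are invented for illustration; the actual CbKST assessment procedures are considerably more elaborate.

```python
# Illustrative sketch of updating a probability distribution over
# competence states after observing one in-game learner action.
# All states and numbers are hypothetical, not taken from CbKST.

def update_state_probabilities(probs, likelihoods):
    """Bayesian update of P(competence state) after one observed action.

    probs       -- dict mapping competence state -> prior probability
    likelihoods -- dict mapping competence state -> P(observed action | state)
    """
    posterior = {s: probs[s] * likelihoods[s] for s in probs}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Three toy competence states; the observed action is much more likely
# if the learner possesses competence "c2".
prior = {"{}": 0.4, "{c1}": 0.3, "{c1,c2}": 0.3}
like = {"{}": 0.1, "{c1}": 0.2, "{c1,c2}": 0.8}
posterior = update_state_probabilities(prior, like)
print(posterior)
```

Because the update is a single multiplication and normalisation per state, it is cheap enough to run after every observed action, which is what real-time microadaptivity demands.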
Tamim Asfour is a senior research scientist at the Karlsruhe Institute of Technology (KIT). He received his diploma degree in Electrical Engineering in 1994 and his PhD in Computer Science in 2003 from the University of Karlsruhe. He leads the Humanoid Research Group at the Institute for Anthropomatics at KIT. In 2003 he was awarded the Research Center for Information Technology (FZI) prize for his outstanding Ph.D. thesis on sensorimotor control in humanoid robotics and the development of the humanoid robot ARMAR. He leads the system integration tasks in the German Humanoid Robotics Project (SFB 588) and the development team of the ARMAR robots. He is scientific leader and coordinator of the projects R1 and R6 in the SFB 588 and scientific manager of the European integrated project PACO-PLUS. His major research interests are in humanoid robotics, including action learning from human observation, goal-directed imitation learning, dexterous grasping and manipulation, active vision and active touch, whole-body motion planning, cognitive control architectures, system integration, robot software and hardware control architectures, motor control, and mechatronics.
Actions speak louder than words
Cognitive situated robots, which should be able to learn to operate in the real world and to interact and communicate with humans, must model and reflectively reason about their perceptions and actions in order to learn, act, predict, and react appropriately. Our hypothesis is that such capabilities can only be attained by embodied agents and require the simultaneous consideration of perception and action.
In this talk, we will present the concept of Object-Action Complexes (OACs), introduced by the European project PACO-PLUS (www.paco-plus.org) to emphasize the notion that objects and actions are inseparably intertwined and that categories are therefore determined (and also limited) by the actions a cognitive agent can perform and by the attributes of the world it can perceive. Entities ("things") in the world of a robot (or human) only become semantically useful objects through the actions that the agent can or will perform on them.
Object-Action Complexes (OACs) are proposed as a universal representation enabling efficient planning and execution of purposeful action at all levels of the cognitive architecture. OACs combine the representational and computational efficiency for purposes of search (the frame problem) of STRIPS rules (Fikes 1971) and the object- and situation-oriented concept of affordance (Gibson 1950, Sahin 2007) with the logical clarity of the event calculus (Kowalski et al. 1986, Steedman 2002). Affordance is the relation between a situation, usually including an object of a defined type, and the actions that it allows. While affordances have mostly been analyzed in their purely perceptual aspect, the OAC concept defines them more generally as state-transition functions suited to prediction. Such functions can be used for efficient forward-chaining planning, learning, and execution of actions represented simultaneously at multiple levels in an embodied agent architecture.
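The STRIPS-like, state-transition reading of OACs can be sketched with a toy forward-chaining planner. The predicates, objects, and the two example OACs below are invented for illustration and are not taken from the PACO-PLUS implementation.

```python
# Toy sketch: OACs viewed as STRIPS-like state-transition rules used
# for forward-chaining planning. States are sets of ground predicates;
# an OAC lists preconditions, an add list, and a delete list.

from collections import namedtuple

OAC = namedtuple("OAC", ["name", "preconditions", "add", "delete"])

def applicable(oac, state):
    return oac.preconditions <= state

def apply_oac(oac, state):
    return (state - oac.delete) | oac.add

def forward_plan(state, goal, oacs, depth=5):
    """Depth-limited forward search returning a list of OAC names."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for oac in oacs:
        if applicable(oac, state):
            plan = forward_plan(apply_oac(oac, state), goal, oacs, depth - 1)
            if plan is not None:
                return [oac.name] + plan
    return None

grasp = OAC("grasp(cup)", frozenset({"reachable(cup)", "hand_empty"}),
            frozenset({"holding(cup)"}), frozenset({"hand_empty"}))
place = OAC("place(cup,table)", frozenset({"holding(cup)"}),
            frozenset({"on(cup,table)", "hand_empty"}),
            frozenset({"holding(cup)"}))

plan = forward_plan(frozenset({"reachable(cup)", "hand_empty"}),
                    frozenset({"on(cup,table)"}), [grasp, place])
print(plan)  # ['grasp(cup)', 'place(cup,table)']
```

The point of the sketch is the affordance reading: the `grasp` rule exists only for objects for which the agent has learned a grasping transition, so the symbolic planning level is grounded in what the embodied agent can actually do.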
Results on using OACs within the architecture of humanoid robots ARMAR-IIIa and ARMAR-IIIb operating in human-centered environments will be presented and discussed.
- 2008: PhD in Cognitive Neuroscience, Dartmouth College, USA
- 2004: MSc in Cognitive Psychology, University of Otago, NZ
- 2001: BA in Psychology & Dance, Pomona College, USA
- Academic Appointments
- 2010 - present: Assistant Professor, Behavioral Sciences Institute and Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen
- 2008-10: Postdoctoral Research Fellow with Wolfgang Prinz and Simone Schütz-Bosbach, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
- 2008: Visiting Postdoctoral Fellow with Antonia Hamilton, School of Psychology, University of Nottingham, UK
- Selected Awards
- 2008-10: Alexander von Humboldt Postdoctoral Fellowship
- 2006-08: NIH Predoctoral Research Fellowship
- 2002-03: Fulbright Fellowship to New Zealand
- Selected Publications
- Cross, Mackie, Wolford & Hamilton (in press). Contorted and ordinary static body postures in the human brain. Experimental Brain Research: Special Issue on Body Representation.
- Cross, Hamilton, Kraemer, Kelley & Grafton (2009). Dissociable substrates for body motion and physical experience in the human action observation network. European Journal of Neuroscience, 30(7), 1383-1392.
- Cross, Kraemer, Hamilton, Kelley & Grafton (2009). Sensitivity of the action observation network to physical and observational learning. Cerebral Cortex, 19(2), 315-326.
- Cross, Hamilton & Grafton (2006). Building a motor simulation de novo: Observation of dance by dancers. NeuroImage, 31(3), 1257-1267.
Bending Bodies, Acrobatic Feats, and Dancing Robots: A Look at Complex Action Perception in the Human Brain
When we watch another person reach for a cup of coffee, run to catch a train, or dance the tango, we instantly understand what each person is doing. It is hypothesized that we understand other people's actions by covertly imagining observed actions within our own brains and bodies, an idea known as action resonance. Neurophysiological work with monkeys and neuroimaging work with humans lends additional credence to this notion through the demonstration of a set of brain regions that respond in a similar manner to performed, observed, and imagined actions: the so-called 'mirror system'. One line of research with expert dancers and athletes has demonstrated differential responses within this system when observing movements with which one has a high degree of physical expertise, thus demonstrating the ability of these brain regions to respond dynamically to experience. However, many questions remain unexplored with regard to how we understand actions with which we have no physical or visual experience. Through a series of recent neuroimaging investigations, we explore the flexibility and adaptability of the human mirror system when processing complex human and non-human (robotic) actions. We recorded participants' neural responses with functional magnetic resonance imaging whilst they observed complex gymnastic, robotic toy, human and robot dancing, and manipulated Lego™ movement sequences, testing specific predictions about physical and visual experience and the interaction of form and motion across the different experiments. We found that brain regions both within and beyond the mirror system respond to different features of actions that are unlike what an observer can perform. Overall, our findings suggest that we use similar brain regions for understanding actions that are both like and unlike those we can perform with our own bodies, and that robotic actions are especially salient inputs to the mirror system.
Dr. Dietrich Dörner
Institute for Theoretical Psychology,
University of Bamberg, Germany
- Born: 28.9.1938.
- Studied Psychology (plus Neurophysiology and Logic) in Kiel.
- Professor in Düsseldorf, Giessen, and Bamberg (since 1979).
- Fellow of the 'Wissenschaftskolleg' (Institute for Advanced Studies) in Berlin, 1982/83.
- Leibniz Award of the Deutsche Forschungsgemeinschaft (German Research Council) in 1986.
- Director of the Max Planck Project Group for Cognitive Anthropology, 1989-1991.
- Fellow of the 'Hanse Kolleg' (Institute for Advanced Studies), 2003/2004.
- Research on thinking and problem solving; research on an integrated model of human emotion, motivation, and cognition (the Psi model); computer simulation of this model.
How is it possible to simulate motivations and emotions on a computer, although, as everybody knows, emotions are not computable?
No cognitive process, no thinking and reasoning, happens without being accompanied by emotions, which often have a strong impact on the mode of thinking. Thinking itself mostly generates rather strong emotions (often enough, for instance, anger, or the fear of making a fool of oneself). I shall show that it is possible to formalize emotions and to run them on a computer. For this purpose it is necessary to make assumptions about the human motivational system, as emotions are triggered and modified especially by the need for competence ("self-efficacy"), by affiliative needs, and by the need for certainty.
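A toy illustration of what "formalizing emotions" can mean: emotional modulation expressed as simple functions of need satisfaction. This is not Dörner's actual Psi model; the functions and constants below are invented purely to show that such a formalization is computable.

```python
# Hypothetical sketch: arousal rises as the needs for competence and
# certainty become less satisfied, and arousal in turn lowers the
# "resolution level" of thinking (fast and shallow vs. slow and
# thorough). All functions and numbers are invented for illustration.

def arousal(competence, certainty):
    """Both inputs in [0, 1]; low need satisfaction -> high arousal."""
    return 1.0 - 0.5 * (competence + certainty)

def resolution_level(arousal_value):
    """High arousal -> shallow, fast processing."""
    return max(0.0, 1.0 - arousal_value)

# An agent that feels competent and certain thinks more thoroughly
# than one that feels incompetent and uncertain.
calm = resolution_level(arousal(0.9, 0.8))
stressed = resolution_level(arousal(0.2, 0.1))
print(calm, stressed)  # 0.85 0.15
```

The emotion here is not a symbol labelled "fear" but a configuration of modulating parameters, which is the sense in which such models make emotions computable.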
Marc Ernst studied Physics in Heidelberg and Frankfurt/Main. In 2000 he received his Ph.D. degree at the Max Planck Institute for Biological Cybernetics for investigations of human visuomotor behavior. For this work he was awarded the Attempto Prize (2000) from the University of Tübingen and the Otto-Hahn-Medaille (2001) from the Max Planck Society. Starting in 2000, he spent almost two years as a postdoc at the University of California, Berkeley, working with Prof. Martin Banks on psychophysical experiments and computational models investigating the integration of visual-haptic information. At the end of 2001, Marc Ernst returned to the Max Planck Institute and became principal investigator of the Sensorimotor Lab in the department of Prof. Heinrich Bülthoff. At the beginning of 2007 he became leader of the Independent Max Planck Research Group on Human Multisensory Perception and Action. The group is interested in human multimodal perception, sensorimotor integration, and human-machine interaction. The group participates in several international collaborative grants, including the EU projects ImmerSence and THE, investigating human-human and human-machine interaction, and an HFSP project focusing on perceptual learning. Furthermore, Marc Ernst coordinated the 6th Framework IST European project "CyberWalk", which developed an omnidirectional treadmill to enable natural free walking through Virtual Environments. Marc Ernst is Vice President and a founding member of the Eurohaptics Society and Vice Chair of the IEEE Technical Committee on Haptics. He is a member of the IEEE, the Robotics and Automation Society, and the Vision Sciences Society. Furthermore, he is a member of the Advisory Council of the International Association for the Study of Attention and Performance.
We use all our senses to construct a reliable percept representing the world with which we interact. The view we take in the Independent Max Planck Research Group for Human Multisensory Perception and Action is that in many aspects of behaviour, motor actions and multisensory processing are inseparably linked and therefore have to be studied in a closed action/perception loop. We believe that human perception and action are tailored to the statistics of the natural environment, and that when the environment changes our perceptions follow these changes through the process of adaptation, minimizing potential costs during interaction. In neural processing, such statistics are represented as probability distributions. We follow Hermann von Helmholtz in our belief that human perception is a problem of inference, for which the sensory data are often not sufficient to uniquely determine the percept. Thus, prior knowledge has to be used to constrain the process of inference from ambiguous sensory signals. A principled way to describe the combination of prior knowledge with sensory data is the Bayesian framework. We therefore regularly use this framework to construct "ideal observer" models, i.e. models that use the available information optimally, given some task and cost function. These models can then be used as a benchmark against which human performance can be tested. To do so, in the Multisensory Perception and Action Group we use quantitative psychophysical and neuropsychological methods together with Virtual Reality techniques. Quantitative psychophysical methods are important for determining the relevant perceptual parameters while minimizing uncertainty and unknowns. Virtual Reality is important because it provides us with a tool to precisely control the perceptual situations under investigation, while at the same time allowing the degree of interaction necessary for studying the action/perception loop.
Often, however, today's Virtual Reality techniques and human-computer interaction devices are not sufficiently developed to be readily used in the study of human perception and action. Therefore, some of our work concentrates on the development of human-machine interfaces, mostly in the framework of European projects. For example, the European projects Touch-Hapsys and ImmerSence focus on the development of haptic interaction devices, whereas the European project CyberWalk aims to develop an omnidirectional treadmill enabling near-natural locomotion in Virtual Reality.
Multisensory Perception: From Integration to Remapping
The brain receives information about the environment from all the sensory modalities, including vision, touch and audition. To efficiently interact with the environment, this information must eventually converge to form a reliable and accurate multimodal percept. This process is often complicated by the existence of noise at every level of signal processing, which makes the sensory information derived from the world unreliable and inaccurate. There are several ways in which the nervous system may minimize the negative consequences of noise in terms of reliability and accuracy. Two key strategies are to combine redundant sensory estimates and to use prior knowledge.
In this talk, I elaborate on how these strategies may be used by the nervous system to obtain the best possible estimates from noisy signals. I first describe how weighted averaging can increase the reliability of sensory estimates, which is the benefit of multisensory integration. Then, I point out that integration can also come at a cost of introducing inaccuracy in the sensory estimates. This shows that there is a need to balance the benefits and costs of integration. For this we use the Bayesian approach, which naturally leads to a continuum of integration between fusion and segregation. I further show how this approach can be used to model the breakdown of integration with increasing multisensory discordance. However, if the multisensory signals differ constantly over a period of time, because they may be consistently inaccurate, recalibration of the multisensory estimates will be the result. The rate of recalibration can be described using a Kalman-filter model, which can also be derived from the Bayesian approach. I conclude by proposing how integration and recalibration can be jointly described under this common approach.
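The weighted-averaging step can be made concrete with the standard minimum-variance cue-combination rule: each unbiased sensory estimate is weighted by its inverse variance, and the fused estimate is more reliable than either cue alone. The numbers below are made up for illustration.

```python
# Reliability-weighted averaging of two noisy sensory estimates
# (the standard minimum-variance combination rule).

def fuse(est_a, var_a, est_b, var_b):
    """Combine two unbiased estimates; weights are inverse variances."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    fused = w_a * est_a + w_b * est_b
    fused_var = 1 / (1 / var_a + 1 / var_b)
    return fused, fused_var

# Vision says the object is 10 cm wide (low noise), haptics says
# 12 cm (higher noise): the fused estimate lies closer to vision,
# and its variance is smaller than either single-cue variance.
size, var = fuse(10.0, 1.0, 12.0, 4.0)
print(size, var)  # 10.4 0.8
```

The cost mentioned above also falls out of this rule: if one cue is biased, the fused estimate inherits part of that bias, which is why integration must be balanced against segregation when the cues are discordant.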
Born in Ramat Gan, Israel, Prof. Tamar Flash was awarded B.Sc and M.Sc degrees in Physics from the Tel Aviv University in 1972 and 1976, respectively. In 1978, she was accepted into a Ph.D program in Medical Physics and Medical Engineering run by the Harvard-MIT Division of Health Sciences and Technology, and earned her Ph.D from the Massachusetts Institute of Technology in 1983. She continued her postdoctoral studies at MIT at the Department of Brain and Cognitive Science and the Artificial Intelligence Laboratory until 1985, when she returned to Israel to join the Weizmann Institute's Department of Computer Science and Applied Mathematics. She was appointed associate professor in 1991 and full professor in 1998.
She has been a visiting scientist at MIT (1991-1992), and the College de France, Paris (2002-2003). From 2004-2006, she served as Chair of her Department. She is the incumbent of the Dr. Hymie Moross Professorial Chair.
Her research interests include Computational Neuroscience, Motor Control and Robotics.
Motion planning, perception and compositionality: from behavior to computational models
Behavioral and theoretical studies have focused on identifying the kinematic and temporal characteristics of various movements, ranging from simple reaching to complex drawing and curved motions. These kinematic and temporal features have been quite instrumental in investigating the organizing principles that underlie trajectory formation. Similar kinematic constraints also play a critical role in the visual perception of abstract and biological motion stimuli and in action recognition. In my talk I will review the results of several brain mapping and psychophysical studies aimed at identifying the neural correlates of the behavioral findings. I will also present a new theory of trajectory formation which is inspired by geometrical invariance. The theory proposes that movement duration, timing, and compositionality arise from cooperation among several geometries. Different geometries possess different measures of distance; hence, depending on the selected geometry, movement duration is proportional to the corresponding distance parameter. Expressing these ideas mathematically, the theory has led to concrete predictions concerning the kinematic and temporal features of both drawing and locomotion trajectories. In view of the theory's success in accounting for empirical observations, I will also discuss several of its implications concerning brain representations of motion, time, and speed, and the use of different mixtures of geometries in such representations. Finally, I will also talk about compositionality at the level of joint motions and the use of the developed models to account for the kinematic patterns observed in bodily expressions of emotion.
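The idea that duration scales with a geometry-dependent distance can be made concrete for a planar path: under a Euclidean geometry the relevant distance is ordinary arc length, while under an equi-affine geometry it is arc length weighted by curvature^(1/3) (moving at constant equi-affine speed reproduces the well-known two-thirds power law). The sketch below computes both distances for an illustrative ellipse; the curve and discretization are invented, and this is only a simplified reading of the theory.

```python
# Euclidean vs. equi-affine path length of an ellipse, approximated
# by summing over small chords. Duration under each geometry would be
# proportional to the corresponding distance.

import math

def ellipse_point(a, b, t):
    return a * math.cos(t), b * math.sin(t)

def path_distances(a, b, n=20000):
    """Return (Euclidean length, equi-affine length) of an ellipse."""
    euclid = equi_affine = 0.0
    for i in range(n):
        t0 = 2 * math.pi * i / n
        t1 = 2 * math.pi * (i + 1) / n
        x0, y0 = ellipse_point(a, b, t0)
        x1, y1 = ellipse_point(a, b, t1)
        ds = math.hypot(x1 - x0, y1 - y0)
        t_mid = (t0 + t1) / 2
        # curvature of the ellipse at parameter t_mid
        kappa = a * b / (a**2 * math.sin(t_mid)**2
                         + b**2 * math.cos(t_mid)**2) ** 1.5
        euclid += ds
        equi_affine += kappa ** (1.0 / 3.0) * ds
    return euclid, equi_affine

euclid, equi = path_distances(2.0, 1.0)
print(euclid, equi)
```

For a circle the two distances coincide (curvature is constant), so the geometries only make distinguishable predictions for paths with varying curvature, such as the ellipse used here.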
- Joachim Funke, born 1953 in Düsseldorf.
- Studies in philosophy, psychology, and German literature at the universities of Düsseldorf, Basel, and Trier.
- 1980 diploma in psychology, 1984 doctoral dissertation at Trier University, 1990 habilitation at Bonn University.
- Since 1997 full professor for psychology at Heidelberg University.
- Research interests: All types of higher cognition, like thinking, problem solving, intelligence, creativity.
Abstract: Carola Barth and Joachim Funke
Scientific Computing: Advantages for Psychological Research on Complex Problem Solving
Since the beginning of research on Complex Problem Solving (CPS) in the 1980s, studies have tried to analyse the determinants of CPS performance. To do so, computer-simulated scenarios have been used: they simulate a specific microworld in which the problem solver has, for instance, to manage a small company.
Nevertheless, it has always been problematic to measure CPS in a valid way. A valid CPS indicator must be derived from the optimal solution, and with nonlinear systems this was long assumed to be impossible. Using scientific computing and working together with Sebastian Sager and his team, we calculated the optimal solution for Tailorshop, the Drosophila of problem-solving research and one of the most frequently used CPS scenarios. By deriving a CPS performance indicator from the optimal solution, we obtained some very interesting results. The data of a recent study, for instance, demonstrate that experienced emotions as well as the ability to regulate emotions affect CPS performance. Moreover, we are now analysing CPS data by looking closely at the CPS process. By analysing the CPS process with knowledge of the optimal solution, we can detect when participants behave more optimally and see exactly what they did well. This offers great advantages for CPS research, which we will briefly review.
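One simple way to turn a known optimal solution into a performance indicator is to score each simulated round by the participant's shortfall relative to the optimum. The scoring rule and the data below are invented for illustration; the indicator actually used for Tailorshop is computed differently in detail.

```python
# Hypothetical CPS performance indicator: mean relative shortfall of
# the participant's outcome from the optimal achievable outcome per
# simulated month (0 = optimal play; larger = worse).

def cps_indicator(participant_outcomes, optimal_outcomes):
    """Mean relative shortfall from the optimum."""
    shortfalls = [
        (opt - got) / abs(opt)
        for got, opt in zip(participant_outcomes, optimal_outcomes)
    ]
    return sum(shortfalls) / len(shortfalls)

# Company value (arbitrary units) over four simulated months.
participant = [100.0, 110.0, 105.0, 120.0]
optimal = [100.0, 125.0, 150.0, 180.0]
print(cps_indicator(participant, optimal))
```

Because the score is computed per round, the same idea supports the process-level analysis described above: one can see in which rounds a participant tracked the optimum closely and in which rounds they fell behind.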
Martin A. Giese is Professor for Computational Sensomotorics at the University Clinic of Tübingen, Germany. His institute is part of the Hertie Institute for Clinical Brain Research and the Werner Reichardt Center for Integrative Neuroscience. He studied Electrical Engineering and Psychology at the University of Bochum (PhD 1998). After a postdoc at the Center for Biological and Computational Learning (M.I.T.), he headed the HONDA Boston Research Laboratory. From 2001 to 2007 he headed the Laboratory for Action Representation and Learning (Hertie Institute for Clinical Brain Research, Tübingen). From 2007 to 2008 he was Senior Lecturer at the Department of Psychology, University of Bangor, UK. His research addresses neural mechanisms of movement and action recognition and related technical applications.
Analysis and modelling of dynamic emotional body expression by learning of movement primitives
Body movements are characterized by complex coordinated motor patterns. Even apparently simple behaviors, such as locomotion, can convey information about important social cues, such as the emotional states of others. At the same time, the efficient representation of such patterns is critical for many technical applications, such as computer graphics and robotics. In motor control it has been proposed that complex coordination patterns might be organized in terms of lower-dimensional components (synergies), which depend only on a limited number of degrees of freedom. Based on this idea, a new algorithm for the unsupervised learning of spatio-temporal primitives from complex body movements is presented. It is based on the combination of Independent Component Analysis (ICA) with the theory of time-frequency transformations, and it results in highly compact models of complex body movements. First, such models are used to uncover the critical features that determine the emotional style of body movements. The extracted features are validated in psychophysical studies on the perception of emotions from gait. Second, such models can be exploited for the generation of body expressions in real time by mapping extracted trajectories onto coupled dynamical primitives. The resulting nonlinear dynamical models are suitable for a systematic analysis and design of the system's stability properties using concepts from Contraction Theory.
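The underlying idea, that many joint-angle trajectories can be represented as mixtures of a few temporal primitives, can be demonstrated with a much simpler stand-in for the actual algorithm: here a low-dimensional primitive subspace is recovered with PCA via SVD rather than with ICA plus time-frequency transformations, and all signals are synthetic.

```python
# Simplified stand-in: ten synthetic joint trajectories generated as
# random mixtures of two temporal primitives; SVD recovers a 2-D
# subspace that explains nearly all the variance.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)

# Two "true" primitives and ten joint trajectories mixing them,
# plus a small amount of observation noise.
primitives = np.vstack([np.sin(t), np.sin(2 * t + 0.5)])
mixing = rng.normal(size=(10, 2))
joints = mixing @ primitives + 0.01 * rng.normal(size=(10, 200))

# Center each trajectory over time, then decompose.
centered = joints - joints.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered)
explained = (s**2)[:2].sum() / (s**2).sum()
print(f"variance explained by 2 components: {explained:.3f}")
```

The real algorithm's use of ICA rather than PCA matters for interpretability: independent components tend to align with functionally meaningful primitives rather than merely orthogonal directions of maximal variance.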
Cleotilde (Coty) González is an Associate Research Professor in the Department of Social and Decision Sciences at Carnegie Mellon University. Her research focuses on the study of human decision making in dynamic and complex environments. She is the founding director of the Dynamic Decision Making Laboratory, where researchers conduct behavioral studies on dynamic decision making using decision-making games and create technologies and cognitive computational models to support decision making and training. Coty is affiliated faculty with the Human Computer Interaction Institute, the Center for Cognitive Brain Imaging, and the Center for the Neural Basis of Cognition, all at Carnegie Mellon University, and with the Center for Research on Training at the University of Colorado. She is on the editorial board of the journal Human Factors and is Associate Editor of the Studies in Simulations and Synthetic Environments track of the Journal of Cognitive Engineering and Decision Making.
Instance-Based Learning Models of Decision Making
Instance-Based Learning Theory (IBLT) (Gonzalez, Lerch, & Lebiere, 2003) was developed to describe the cognitive processes and mechanisms involved in dynamic decision making. Dynamic tasks are concerned with controlling a system that changes spontaneously and decays with time, inaction, and inappropriate decisions. Actions within these tasks are interdependent, so that future conditions depend on earlier actions. IBLT models characterize learning by storing in memory a sequence of action-outcome links produced by experienced events. This process increases knowledge and allows decisions to improve as experience accumulates in memory. Particular implementations of IBLT models depend on the task representation, the cues that are important for that task, and the particular actions involved; the structure of a memory instance, the decision process, and the memory functions, however, are generic. IBLT proposes a particular learning process that is free of fabricated, task-specific strategies chosen by a modeler. IBLT also proposes a concrete set of requirements for the representation of information and a subset of mechanisms borrowed from ACT-R, which are used for learning from experience and exploration. Thus, developing models within IBLT reduces the number of decisions a modeler needs to make to determine how to represent the learning processes involved in the execution of a task. I will present the theory and demonstrate its new implementation as a computational tool for all to use.
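The core instance-storage-and-retrieval loop can be sketched very compactly. Note the simplification: real IBLT models use ACT-R's activation equation (with recency, frequency, and noise) and blended retrieval, whereas the stand-in below simply averages stored outcomes per action; everything here is illustrative.

```python
# Minimal instance-based learning sketch: store action-outcome
# instances, evaluate actions by a blended (here: mean) value of
# their stored outcomes, and choose the best-valued action.

from collections import defaultdict

class InstanceMemory:
    def __init__(self):
        self.instances = defaultdict(list)  # action -> observed outcomes

    def store(self, action, outcome):
        self.instances[action].append(outcome)

    def blended_value(self, action, default=0.0):
        outcomes = self.instances[action]
        return sum(outcomes) / len(outcomes) if outcomes else default

    def choose(self, actions):
        return max(actions, key=self.blended_value)

memory = InstanceMemory()
for outcome in (1.0, 0.0, 1.0):   # action "a" paid off twice
    memory.store("a", outcome)
for outcome in (0.0, 0.0):        # action "b" never paid off
    memory.store("b", outcome)
print(memory.choose(["a", "b"]))  # a
```

Even this toy version shows the theory's appeal: the modeler specifies only the task's cues, actions, and outcomes, and the choice behavior emerges from accumulated experience rather than from hand-coded strategies.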
Vincent Hayward received a Diplôme d'Ingénieur from the Ecole Centrale de Nantes in 1978 and a Ph.D. in Computer Science in 1981 from the University of Paris XI. He was a Postdoctoral Fellow and then Visiting Assistant Professor (1982) at Purdue University, and joined CNRS, France, as Chargé de Recherches (1983-86). In 1987, he joined the Department of Electrical and Computer Engineering at McGill University as adjunct, assistant, and associate professor, and then full professor in 2006. He was Director of the McGill Center for Intelligent Machines from 2001 to 2004. Hayward has co-founded spin-off companies and received several best paper and research awards. He is on the editorial boards of the ACM Transactions on Applied Perception and the IEEE Transactions on Haptics, and is a Fellow of the IEEE. As of 2008, he holds the "Chaire internationale d'haptique" at UPMC.
The many facets of the haptic sense
During mechanical interaction with our environment, we derive a perceptual experience which may be compared to that resulting from acoustic or optical stimulation. New mechanical stimulation delivery equipment capable of fine segregation of distinct cues at different length scales and different time scales now allows us to study the many aspects of haptic perception including its physics and mathematics, its biomechanics, and the computations that the nervous system must perform to achieve a perceptual outcome. This knowledge is rich in applications ranging from improved diagnosis of pathologies, to rehabilitation devices, to consumer electronics and virtual reality systems.
Since July 2008, Full Professor in Business and Organizational Psychology at the University of Duisburg-Essen; before that, assistant professor for Psychology at the University of St. Gallen (2002-2008) and research assistant at the Technical University of Aachen (1996-2001). Diploma in Psychology from the Technical University of Aachen and doctorate in Economics and Social Sciences from the University of Kassel, at the department of ergonomics and vocational training.
Research interests: training and learning in organizations; training for knowledge and skill acquisition in complex systems, especially process control tasks; use of Cognitive Task Analysis for training design and simulator training; and development of job performance aids based on cognitive error analysis methods.
Using Hierarchical Task Analysis and Cognitive Error Analysis to design job performance aids for process control tasks
In the management of industrial and commercial operations, a primary objective should be that tasks are carried out by operators according to standards of safety and productivity. Training and non-training operator support in the form of "on the job" aids both serve this objective, as they aim at ensuring operator competence. The studies presented here deal with operator support provided by artefacts such as manuals, procedural guides, and decision aids. Job aids can improve performance by minimizing the cognitive load required to memorize various aspects of the job, especially when the task has to be carried out in stressful and demanding environments where critical items might otherwise be omitted. In the research presented in Heidelberg, a procedural aid and a decision-making aid for fault finding and repair were developed based on a hierarchical task analysis and a cognitive error analysis.
In a first study (n = 40), a newly designed manual serving as a procedural aid for the simulated process control task CAMS (Cabin Air Management System) was developed based on a Hierarchical Task Analysis (HTA). The study showed the advantage of the newly designed manual over the existing one with respect to motivation, self-rated cognitive load, knowledge acquisition, and diagnostic speed in diagnosing novel faults. However, the post hoc analysis also revealed that diagnosing novel faults is still more difficult and leads to more errors than diagnosing practiced faults. We concluded that even the newly designed manual does not sufficiently support the diagnosis of novel faults. Based on the results of the first study, we therefore assumed that the manual design had rested primarily, and too strongly, on the HTA and may not have sufficiently and comprehensively considered the results of the error analysis.
We therefore qualitatively reanalyzed four of our earlier training experiments conducted with CAMS. In all experiments, as in the study presented here, the percentage of diagnostic errors was counted. For the reanalysis of the data, however, we looked more deeply into participants' first choice of a repair option. The guiding question of the reanalysis was what the "nature" of a wrong diagnosis is and whether there is some systematic error behind the likelihood of a wrong diagnosis. We were interested in understanding "how wrong" a diagnosis was and whether participants might have "failed intelligently". The aim was to analyse the quality of the errors made rather than to perform a quantitative analysis. The results of the reanalysis showed that in many cases participants were close to the correct diagnosis but failed to actively diagnose the small differences between certain types of faults. In most cases participants confused one fault with another that shared some indications or cues but differed in relevant aspects. These findings led us to the conclusion that participants failed to actively diagnose the small but important differences between possible fault indications and/or failed to combine several available cues, and thus confused one fault with another. In the second study (n = 40), we therefore assumed that a decision-making aid that supports participants in actively looking for cues and fault indications would support a correct diagnosis and help avoid confusing faults with one another.
The results confirmed our assumption that the decision-making aid developed on the basis of the cognitive error analysis supports operator performance in diagnosing novel faults. Advantages and disadvantages of methods such as HTA and Cognitive Error Analysis for designing job performance aids for complex tasks are discussed.
Katja Mombaur has been Professor at the Interdisciplinary Center for Scientific Computing (IWR) at the University of Heidelberg since 2010, where she leads the research group on Optimization in Robotics & Biomechanics as well as the IWR Robotics Lab. She also holds Associate Researcher status at LAAS-CNRS Toulouse, where she spent two years as a visiting researcher from 2008 to 2010. She studied Aerospace Engineering at the University of Stuttgart, Germany, and the ENSAE in Toulouse, France, and received her Diploma in 1995. For the next two years she worked at IBM Germany. She received her Ph.D. degree in Applied Mathematics from the University of Heidelberg, Germany, in 2001. In 2002, she was a Postdoctoral Researcher at Seoul National University, South Korea. From 2003 to 2008 she worked as a lecturer and researcher at the IWR, University of Heidelberg. Her research interests include the modeling of complex mechanical systems and cognitive processes in biomechanics and robotics, as well as optimization and control techniques for fast motions.
How optimization can help to understand control and perception of human locomotion
Mathematical modeling, numerical simulation, and optimization - i.e. classical tools of scientific computing - have proven very helpful for gaining a better understanding of the complex control and perception mechanisms underlying human locomotion. Human locomotion can be investigated from different perspectives. One is to study locomotion at the joint level, taking into account the relative motions of all segments, which requires detailed multibody system models of the human body. Another is to consider the navigation of the human as a whole in space and the perception of the environment, for which simpler, more global models can be used.
In this talk I give an overview of our current and previous research on model-based locomotion studies from these two perspectives. An underlying assumption of our work is that human motions are optimal as a result of evolution and individual learning. The optimization criterion applied is of course not unique and depends heavily on the specific situation. We have developed inverse optimal control techniques that identify the objective functions of dynamic systems from measurement data.
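The idea behind such inverse optimal techniques can be sketched in a minimal form: observe a trajectory generated by a controller that is optimal for an unknown cost weighting, then search for the weighting whose optimal controller best reproduces the observations. The scalar system and all parameter values below are invented for illustration and are far simpler than the locomotion models discussed in the talk.

```python
import numpy as np

def lqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time Riccati iteration -> optimal feedback gain k (u = -k x)."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# "Observed" behaviour: a trajectory generated by an agent that is optimal
# for an unknown effort weight r_true (state weight fixed to q = 1).
a, b, q, r_true = 1.0, 1.0, 1.0, 4.0
k_true = lqr_gain(a, b, q, r_true)
xs, us, x = [], [], 1.0
for _ in range(20):
    u = -k_true * x
    xs.append(x); us.append(u)
    x = a * x + b * u
xs, us = np.array(xs), np.array(us)

# Inverse problem: find the weight r whose optimal controller best
# reproduces the observed controls (bilevel problem solved by grid search).
candidates = np.arange(0.5, 10.01, 0.25)
errors = [np.sum((us + lqr_gain(a, b, q, r) * xs) ** 2) for r in candidates]
r_hat = candidates[int(np.argmin(errors))]
print(r_hat)  # recovers r_true = 4.0
```

In realistic applications the inner problem is a full trajectory optimization and the outer search is carried out with derivative-based methods; the grid search here only illustrates the bilevel structure.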
We present optimization studies of regular walking and running motions and of emotional body language during locomotion, as well as of navigation problems of humans in empty space and in interaction scenarios. We are also interested in objective functions generating artistic movements.
I currently hold the position of Lecturer in Experimental Cognitive Psychology in the Department of Psychology, School of Biological and Chemical Sciences at Queen Mary University of London, and I am an Honorary Research Fellow of Cognitive, Perceptual and Brain Sciences, Division of Psychology and Language Sciences, University College London, and the Institute of Neurology, University of London. Prior to this I held a position (2007-2009) as Lecturer in Cognitive Psychology at the University of Surrey, and before that I was a Senior Research Fellow (2001-2007) at University College London.
My main research interests concern understanding the mechanisms underlying learning, decision making, and problem solving in complex dynamic environments. Broadly, what these situations have in common is that a number of elements vary from one point in time to another, not always reliably, and not always as a direct consequence of the actions we choose to make. In a recent review, I discuss the characteristics that make these situations complex, along with the psychological armoury we have for responding to the high degree of uncertainty they generate (Osman, 2010). Drawing on a number of recent advances in cognitive psychology, cognitive science, neuropsychology, engineering, and human factors research, much of my recent work (Osman et al., 2008; Osman, 2010; Osman, in press) investigates the general principles these disciplines share in understanding how we control uncertainty.
The effects of positive and negative reward on different forms of dynamic decision making
Multiple Cue Probability Learning (MCPL) tasks and Complex Dynamic Control (CDC) tasks involve uncertain environments in which procedural learning has been proposed as a mechanism by which people learn either to predict (MCPL) or to control (CDC) outcomes. Both tasks involve a reward structure in which feedback from predicting outcomes, or from generating specific outcomes, is used to incrementally improve learning about the task. However, because (a) direct comparisons between prediction and control are rare, and (b) existing studies do not align easily in terms of task instruction or task environment, the question remains: if compared in an identical task environment, do positive and negative reward have the same impact on predictive and control accuracy? The aim of the present study was to investigate separately the effects of positive and negative reward on learning while predicting or controlling outcomes in an identical dynamic decision making task. Overall, the findings suggest that the two types of feedback have differential effects on prediction-based and control-based learning. The implications of these findings are discussed with respect to the Monitoring and Control framework (Osman, 2010a, 2010b) and its proposals about complex decision making under conditions of uncertainty.
Marco Ragni is currently working at the Center for Cognitive Science at the University of Freiburg as an assistant professor (Akademischer Rat). After receiving a diploma in mathematics (Dipl.-Mathematiker) in May 2002, he was a research scientist (from July 2002 to January 2008) in the Research Group on the Foundations of Artificial Intelligence at the University of Freiburg, headed by Professor Bernhard Nebel. He defended his PhD in Computer Science (Dr. rer. nat.) in January 2008 and is currently leading the DFG-funded projects R8-[CSPACE], investigating cognitive complexity in spatial reasoning and planning, and ActivationSpace, analyzing the neural foundations of the human deduction process, within the Transregional Collaborative Research Center on Spatial Cognition (SFB/TR 8). In October 2008 he was elected to the governing board of the German Society for Cognitive Science.
Computational complexity meets cognition: Two case studies.
Computational complexity theory offers important insights into problem difficulty: a problem is considered difficult if it requires large amounts of (space or time) resources independent of any specific algorithm. Because computational complexity theory is concerned with lower bounds, it captures the computational effort necessary to solve a problem, and it aims to classify problems according to whether they can be solved with appropriately restricted resources. Such a theory of cognitive complexity would be most desirable for cognitive science, although there are inherent differences. For instance, different representations of a problem are computationally equivalent - a finding that does not always hold in human reasoning (e.g. the Wason Selection Task).
In this talk I will briefly present, analyze and discuss two problems from the domain of relational reasoning and planning with respect to their computational and cognitive complexity.
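As a toy illustration of how problem structure relates to difficulty in relational reasoning, one can count the arrangements (mental models) consistent with a set of premises; problems whose premises leave several arrangements open are typically harder for human reasoners than determinate ones. The premises below are invented examples, not the case studies from the talk.

```python
from itertools import permutations

def models(premises, objects="ABC"):
    """Return all left-to-right orderings consistent with 'X left-of Y' premises."""
    return ["".join(p) for p in permutations(objects)
            if all(p.index(x) < p.index(y) for x, y in premises)]

# A determinate problem admits exactly one model ...
determinate = models([("A", "B"), ("B", "C")])    # "A left of B", "B left of C"
print(determinate)      # ['ABC']

# ... whereas an indeterminate problem leaves several models open.
indeterminate = models([("A", "B"), ("A", "C")])  # "A left of B", "A left of C"
print(indeterminate)    # ['ABC', 'ACB']
```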
Sebastian Sager is a Junior Research Group leader at the Interdisciplinary Center for Scientific Computing in Heidelberg. His interests as a mathematician are mainly on optimization theory, algorithms, and applications.
- 2006: PhD in Mathematics, Heidelberg
- 2001: Diplom in Mathematics, Heidelberg
- Academic Appointments
- 2008 - present: Junior Research Group Leader, IWR, Heidelberg
- 2007-08: Postdoctoral Fellow, IMDEA Mathematics Madrid
- 2006-07: Postdoctoral Research Fellow, International Research Training Group 710 Heidelberg-Warsaw
- Selected Awards
- 2007: Klaus Tschira Award for Achievements in Public Understanding of Science
- 2006: Dissertation Prize of the German Operations Research Society
- Cognition-related Publications
- Sager, S., Barth, C., Diedam, H., Engelhart, M., Funke, J., Optimization to measure performance in the Tailorshop test scenario - structured MINLPs and beyond, Proceedings EWMINLP10, pp. 261-269, April 12-16, CIRM, Marseille
- Sager, S., Diedam, H., Engelhart, M., Tailorshop: Optimization Based Analysis and data Generation tOol,
- Sager, S., Barth, C., Diedam, H., Engelhart, M., Funke, J., Optimization as an Analysis Tool for Human Complex Problem Solving, submitted.
Optimization as an Analysis Tool for Human Complex Problem Solving
Model-based mathematical optimization is a powerful paradigm for understanding and improving many complex processes, in particular in engineering. Many natural processes also obey optimality conditions. In this talk we want to highlight that this methodology holds great potential for the cognitive sciences as well.
Over the last years, psychological research has increasingly used computer-supported tests, especially in the analysis of complex human decision making and problem solving. Modern optimization methodology can help to address two important questions in this context.
The first concerns an analysis of the exact situations and decisions that led to a good or bad overall performance of test persons. Such an analysis is possible with sensitivity information.
The second important question concerns an objective measure of performance. For many complex scenarios the choices made by humans can only be compared to one another. We propose to compare the performance to the optimal solution of a mathematical model of the test scenario instead.
We will give an overview of Scientific Computing, and optimization in particular, at the IWR, before turning to issues in decision making. We present a mathematical formulation of a test scenario that has been in use in psychology since the eighties, the Tailorshop. Test persons are required to make business decisions to run a tailorshop. We show how this can be formulated mathematically as a discrete-time optimization problem and discuss solutions.
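As a much-simplified sketch of the approach (not the actual Tailorshop model), consider a miniature shop with a single production decision per period; the optimal solution of the discrete-time problem, found here by exhaustive search, provides the objective benchmark against which human decisions could be scored. All numbers below are invented for illustration.

```python
from itertools import product

# Hypothetical miniature stand-in for a Tailorshop-like scenario: in each
# period the player chooses how many shirts to produce; unsold shirts are
# carried over as inventory at a holding cost.
PRICE, UNIT_COST, HOLDING = 10, 4, 1
DEMAND, CAPACITY, T = [1, 3, 2], 2, 3

def profit(plan):
    """Total profit of a production plan (one decision per period)."""
    inventory, total = 0, 0
    for t, produced in enumerate(plan):
        available = inventory + produced
        sales = min(available, DEMAND[t])
        inventory = available - sales
        total += PRICE * sales - UNIT_COST * produced - HOLDING * inventory
    return total

# Exhaustive search over the discrete decision space: the optimal plan is
# the benchmark; a test person's plan can be scored as profit(plan)/optimum.
best_plan = max(product(range(CAPACITY + 1), repeat=T), key=profit)
print(best_plan, profit(best_plan))  # (2, 2, 2) 35
```

The real Tailorshop involves many coupled state variables, which is why structured mixed-integer nonlinear programming methods, rather than enumeration, are needed there.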
Wolfgang Schoppek studied Psychology in Bamberg. He received his doctoral degree from the University of Bayreuth. His early research, which he did with Dietrich Dörner, and later with Wiebke Putz-Osterloh, was concerned with complex problem solving. During a postdoctoral year at George Mason University, Wolfgang Schoppek designed a generic ACT-R model of action in complex task environments, specifically applied to pilots' interaction with the Flight Management System. Recently, he has developed and evaluated the adaptive learning software "Merlins Math Mill", which has proven its value for supporting children, particularly those with learning difficulties, in arithmetic. Wolfgang Schoppek holds a lecturer position at the University of Bayreuth.
Scientific Computing in an Applied Context: Evaluating a Hypothetical Hierarchy of Skills for Helping Children with Dyscalculia
As a basis for adaptive task selection in the software "Merlin's Math Mill", we constructed a hypothetical polyhierarchy of skills in arithmetic (HiSkA). In several training experiments, the HiSkA has indirectly proven its general value for supporting children in the development of their mathematical skills. Puzzled by the finding that a test that was constructed to assess specific aspects of the HiSkA turned out to conform to the unidimensional Rasch model, we wanted to test the validity of the hierarchy more directly. How can a multidimensional network of skills generate unidimensional data? To this end, we combined a number of advanced methods for psychometric analysis with simulations of datasets that were based on different unidimensional and multidimensional models. The simulations were based on principles of the knowledge space theory. Comparisons between results of the analysis of simulated and empirical data with the same methods give interesting insights into the conditions under which multidimensionality can be detected. Regarding the HiSkA, our analyses showed that a general factor explains a substantial amount of variance in the empirical data, but that the HiSkA is capable of explaining much of the residual variance.
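The simulation approach can be illustrated with a minimal knowledge-space-style sketch: a prerequisite hierarchy among skills determines the feasible knowledge states, and dichotomous response data are generated from sampled states with careless-error and lucky-guess probabilities (in the spirit of the basic local independence model). The hierarchy and parameter values below are invented and unrelated to the actual HiSkA.

```python
import random
from itertools import combinations

random.seed(1)

# Toy prerequisite hierarchy among four skills (invented for illustration):
# a is prerequisite for b and c; b and c are prerequisites for d.
SKILLS = "abcd"
PREREQ = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}

def feasible(state):
    """A knowledge state is feasible if it contains all prerequisites of its skills."""
    return all(PREREQ[s] <= state for s in state)

# Enumerate the knowledge structure = all feasible subsets of skills.
states = [set(c) for n in range(len(SKILLS) + 1)
          for c in combinations(SKILLS, n) if feasible(set(c))]
print(sorted("".join(sorted(s)) for s in states))
# ['', 'a', 'ab', 'abc', 'abcd', 'ac']

# Data generation: draw a knowledge state, then flip each item response
# with careless-error (BETA) / lucky-guess (ETA) probabilities.
BETA, ETA = 0.1, 0.1  # assumed values
def respond(state):
    return {s: (random.random() > BETA) if s in state else (random.random() < ETA)
            for s in SKILLS}

sample = [respond(random.choice(states)) for _ in range(5)]
```

Datasets simulated this way from competing hierarchies can then be run through the same psychometric analyses as the empirical data, which is the comparison logic described above.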
Tanja Schultz received her Ph.D. and Masters in Computer Science from the University of Karlsruhe, Germany, in 2000 and 1995 respectively, and obtained a German Staatsexamen in Mathematics, Sports, and Educational Science from the University of Heidelberg in 1990. She joined Carnegie Mellon University in 2000 and became a Research Professor at the Language Technologies Institute. Since 2007 she has also been a Full Professor at the Computer Science Department of the Karlsruhe Institute of Technology (KIT) in Germany. She is the director of the Cognitive Systems Lab, where her research activities focus on human-machine interfaces, with particular expertise in the rapid adaptation of speech processing systems to new domains and languages. She co-edited a book on this subject and received several awards for this work. In 2001 she received the FZI prize for an outstanding Ph.D. thesis. In 2002 she was awarded the Allen Newell Medal for Research Excellence from Carnegie Mellon for her contribution to speech translation, and the ISCA best paper award for her publication on language-independent acoustic modeling. In 2005 she received the Carnegie Mellon Language Technologies Institute Junior Faculty Chair.
Her recent research focuses on human-centered technologies and intuitive human-machine interfaces based on biosignals, by capturing, processing, and interpreting signals such as muscle and brain activities. Her development of a silent speech interface based on myoelectric signals received the Interspeech 2006 Demo award and was selected into the top-ten most important attractions at CeBIT 2010. Tanja Schultz is the author of more than 190 articles published in books, journals, and proceedings. Currently, she is a member of the IEEE Computer Society, the International Speech Communication Association ISCA, the European Language Resource Association, the Society of Computer Science (GI), and serves as elected ISCA Board member, on several program committees, and review panels.
Biosignals and Interfaces
Human communication relies on signals like speech, facial expressions, or gestures, and the interpretation of these signals seems to be innate to humans. In contrast, human interaction with machines, and thus human communication mediated through machines, is far from being natural. To date, it is restricted to a few channels, and the capabilities of machines to interpret human signals are still very limited.
At the Cognitive Systems Lab (CSL) we explore human-centered cognitive systems to improve human-machine interaction as well as machine-mediated human communication. We aim to make better use of the strengths of machines by departing from merely mimicking the human way of communication. Rather, we focus on the full range of biosignals emitted by the human body, such as electrical biosignals from brain and muscle activity. These signals can be directly measured and interpreted by machines, leveraging emerging wearable, small, and wireless sensor technologies. Using these biosignals offers an "inside perspective" on human mental activities, intentions, or needs, and thus complements the traditional way of observing humans from the outside.
In my talk I will discuss ongoing research on biosignals and interfaces at CSL, such as silent speech interfaces that rely on articulatory muscle movement, and interfaces that use brain activity to determine users' mental states, such as task activity, cognitive workload, emotion, and personality. We hope that our research will lead to a new generation of human centered systems, which are completely aware of the users' needs and provide an intuitive, efficient, robust, and adaptive input mechanism to interaction and communication.
Philippe Souères has been Director of Research at LAAS-CNRS (Gepetto group) in Toulouse, France, since 2008. He received the M.S. degree in Mathematics, the PhD in Robotics, and the Habilitation from the University Paul Sabatier of Toulouse in 1990, 1993, and 2001 respectively. From 1993 to 1994 he held a postdoctoral position at the University of California, Berkeley, in the EECS department, under the direction of Professor Shankar S. Sastry. From 1995 to 2000 he worked on different facets of robot perception and control in the Robotics and Artificial Intelligence group of LAAS. In 2003 he started to work in collaboration with neuroscientists on the problem of multisensory and sensorimotor integration. Since then, he has led three interdisciplinary projects involving roboticists and neuroscientists. From 2006 to 2007 he joined the Brain and Cognition Research Center (UMR 5549) as a temporary researcher. He is now a member of the Gepetto team at LAAS. His research interests include nonlinear control, optimal control, robot perception and control (visual servoing, audio perception), wheeled robots, flying robots, humanoid robots, anthropomorphic systems, and neuroscience.
Some insights from robotics into the neurobiology of reaching control
The humanoid robots of today strongly differ from humans in their mechanical structure, their sensing and actuation capabilities, and the way they process data. Nevertheless, interesting links between robotics and neuroscience can already be drawn. This talk will present research on the control of reaching movements recently carried out at LAAS-CNRS in close collaboration with neuroscientists.
In the first part of the talk, we will show that biological principles of motor control can successfully be applied to the control of humanoid robots. In particular, we will bring to the fore that the principle of separating static and dynamic efforts simplifies the computation of optimal trajectories, making it possible to produce realistic movements that have the main kinematic characteristics of human movements. We will then show that a database of reaching movements, either computed from a biological model or recorded with a motion capture system, can be encoded by a small set of motor primitives, as suggested by neuroscientists. Finally, it will be shown that such an approach can easily be generalized in order to produce new realistic movements with low computation times.
The second part of the talk will aim at showing that formalisms and methods from robotics can provide interesting hints to neuroscientists in their modeling work. The question we consider here is: which reference frame does the CNS use to encode a visually guided reaching movement? We will first present computational arguments showing that an eye-centered control scheme is more robust to proprioceptive biases and sensorimotor delays than a body-centered control scheme. Then, we will present a model, based on the visual-servoing formalism from robotics, that explains the gaze-related modulation of premotor neurons previously observed in monkeys during visually guided reaching tasks with prescribed fixation points. We will present experiments on the humanoid robot HRP2 of LAAS-CNRS that resulted from these developments.
Andreas Voß studied psychology at Trier from 1994 to 2000. In 2004, he received his PhD from the University of Trier. He then became an assistant professor in the team of Karl Christoph Klauer at the University of Freiburg. Since 2010, Andreas Voß has been Full Professor for Research Methods at the Psychology Department of the University of Heidelberg. From the beginning of his academic career, quantitative models of decision making have been the focus of his research. Specifically, his work centers on the analysis of cognitive processes in binary decisions with stochastic diffusion models.
Diffusion-Model Analyses of Fast Binary Decisions: An Application to Above-Average-Effects
Diffusion-Model Data Analysis. Diffusion models (Ratcliff, 1978) can be used to analyze the cognitive processes underlying fast binary decisions. It is assumed that information enters the decision process continuously, so that the current state of the process reflects the information encoded so far. The decision process stops as soon as an upper or lower threshold is hit, with the two thresholds representing the alternative decisional outcomes.
A diffusion model analysis is based on the response-time distributions for both decisions. From these distributions, a number of parameters are estimated (Voss, Rothermund, & Voss, 2004). The first parameter is the drift rate, that is, the average gradient of the diffusion process; the drift represents the speed (and direction) of information accumulation. The second diffusion-model parameter is the threshold separation, which is an indicator of decisional style: a small separation reflects a liberal criterion, with fast responses and a high error rate, whereas a large separation reflects a conservative criterion. Thirdly, a bias in criteria (or a response bias) is mapped by the starting point: whenever one decision is preferred, that is, this response is chosen on the basis of little evidence, the starting point moves closer to the threshold representing the preferred decision. Fourthly, non-decisional components of the response time (e.g. the speed of response execution) are mapped onto an additional parameter. Finally, it is also assumed that parameters may vary from trial to trial of an experiment; consequently, a number of additional parameters are needed to account for this inter-trial variability.
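The process described here can be sketched with a minimal simulation of single trials: the diffusion process starts at the starting point, accumulates evidence at the drift rate (plus noise), and terminates at one of two thresholds; the non-decision time is added afterwards. The parameter values below are invented for illustration, and inter-trial variability is omitted.

```python
import random
random.seed(0)

# Minimal sketch of a single diffusion-model trial (parameter values invented):
# drift v, threshold separation a (process runs between 0 and a),
# starting point z (here unbiased, z = a/2), non-decision time t0.
def simulate_trial(v=1.0, a=2.0, z=1.0, t0=0.3, dt=0.001, sigma=1.0):
    """Euler-Maruyama simulation; returns (response, reaction time)."""
    x, t = z, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return ("upper" if x >= a else "lower"), t0 + t

trials = [simulate_trial() for _ in range(2000)]
p_upper = sum(r == "upper" for r, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(round(p_upper, 2), round(mean_rt, 2))
```

With a positive drift, most simulated trials terminate at the upper threshold; fitting software such as fast-dm works in the opposite direction, recovering the parameters from observed response-time distributions.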
Diffusion-model analyses have been successfully used to analyze cognitive processes in many different psychological domains, such as memory retrieval (Spaniol, Madden, & Voss, 2006), motivated perception (Voss, Rothermund, & Brandtstädter, 2008), lexical decisions (Ratcliff, Gomez, & McKoon, 2004), and many others.
Cognitive Processes underlying Above-Average-Effects. In the present study, the diffusion model was used to analyze the cognitive processes underlying so-called above-average effects (Chambers & Windschitl, 2004). Most people believe that they are better than average, that is, that they best their fellows in many domains (e.g. driving), and that they are less prone to negative events (e.g. health problems). This phenomenon has been termed unrealistic optimism or the above-average effect. In the present study, a speeded self-description task was applied: participants had to decide quickly whether presented adjectives described them above average. Results showed the expected optimism: participants considered themselves above average on positive traits more often than on negative traits. A diffusion-model analysis was conducted with the fast-dm software (Voss & Voss, 2007). The results suggest that unrealistic optimism is based on asymmetric response criteria (i.e. the starting point) rather than on information processing (i.e. the drift rate). This may indicate that the effect arises at the level of response selection and does not reflect an (unrealistically) positive self-concept.