The ErgoVR human-computer interaction CAVE immersive virtual simulation laboratory, developed by Kingfar Technology (津发科技), is built from several core components: the ErgoLAB virtual-world human-machine-environment synchronization cloud platform, a CAVE virtual reality system, the ErgoVR ergonomics analysis system, the ErgoHMI human-machine interaction evaluation system, and the WorldViz (USA) head-mounted walking virtual reality system. The CAVE system is a large immersive virtual reality display and interaction environment that supports multiple simultaneous users and delivers wide-field-of-view, high-resolution, high-quality stereoscopic imagery that brings the virtual environment close to the real world. It provides technical services including light-environment and visual simulation, sound-environment and auditory simulation, odor and olfactory simulation, human-computer interaction and haptic-feedback simulation, interaction evaluation, human-machine-environment testing, ergonomics analysis, human-factors design and virtual assembly, virtual exhibition, and virtual training.

The ErgoVR virtual reality synchronization module performs visual, auditory, olfactory, haptic, and interaction simulation. The ErgoLAB human-machine-environment synchronization cloud platform comprises a wearable physiological recording module, a VR eye-tracking module, a wearable EEG measurement module, an interaction behavior observation module, a biomechanical measurement module, an environmental measurement module, and others. When virtual reality is combined with human-machine-environment or psychological and behavioral research, the platform collects quantitative human-machine-environment data in real time, synchronized with changes in the 3D virtual environment (including eye movements, EEG, respiration, heart rate, pulse, skin conductance, skin temperature, ECG, EMG, body motion, joint angles, body pressure, pull force, grip force, and pinch force, plus physical-environment data such as vibration, noise, illumination, atmospheric pressure, temperature, and humidity), and analyzes and evaluates them; the resulting quantitative measures provide objective data support for scientific research.

As the core data acquisition and analysis platform of the system, the ErgoLAB human-machine-environment synchronization platform supports not only virtual reality environments but also field studies in the real world and laboratory-based basic research, collecting multi-source data and performing quantitative evaluation in any experimental setting. (The platform consists of the virtual reality synchronization module, wearable physiological recording module, VR eye-tracking module, wearable EEG measurement module, interaction behavior observation module, biomechanical measurement module, environmental measurement module, and others.)

As the core virtual reality software engine of the system, WorldViz not only supports VR head-mounted displays but also provides users with high-quality application content. Combined with the walking motion tracking system and the virtual human-computer interaction system, users can fully interact with virtual scenes and content.

Application areas
BIM environment-behavior research virtual simulation laboratory solution: affective architectural design, environmental behavior, interior design, human-settlement research, etc.;
Interaction design virtual simulation laboratory solution: virtual planning, virtual design, virtual assembly, virtual review, virtual training, equipment-status visualization, etc.;
Defense and military equipment human-machine-environment virtual simulation laboratory solution: human-machine-environment systems engineering for weapons and equipment, military psychology, military training, military education, combat command, weapons research and development, etc.;
User experience and usability research virtual simulation laboratory solution: game experience, experiential sports, film and entertainment, multi-user entertainment projects;
Virtual shopping and consumer behavior research laboratory solution;
Safety ergonomics and unsafe-behavior virtual simulation laboratory solution;
Driving behavior virtual simulation laboratory solution;
Human factors engineering and work-study virtual simulation laboratory solution.
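The synchronized multi-stream acquisition described above ultimately comes down to aligning sensor samples that arrive at different rates (eye tracking, EEG, physiological signals, environment sensors) onto a common timeline. As an illustration only — this is a minimal nearest-timestamp alignment sketch, not ErgoLAB's actual API, which is not documented here — the core idea can be expressed in a few lines of Python:

```python
from bisect import bisect_left

def align_nearest(reference_ts, stream):
    """For each reference timestamp, pick the sample from `stream`
    (a list of (timestamp, value) pairs sorted by timestamp)
    whose timestamp is closest. Illustrative sketch only."""
    ts = [t for t, _ in stream]
    out = []
    for r in reference_ts:
        i = bisect_left(ts, r)
        # The nearest sample is either just before or just at/after r.
        best = min(
            (j for j in (i - 1, i) if 0 <= j < len(ts)),
            key=lambda j: abs(ts[j] - r),
        )
        out.append(stream[best][1])
    return out

# Example: align a ~4 Hz stream to 2 Hz reference ticks (hypothetical data)
eeg_ticks = [0.0, 0.5, 1.0]
eye = [(0.0, "a"), (0.26, "b"), (0.49, "c"), (0.74, "d"), (1.02, "e")]
print(align_nearest(eeg_ticks, eye))  # ['a', 'c', 'e']
```

Real platforms additionally correct for clock drift between devices and interpolate rather than snap to the nearest sample, but the timeline-alignment principle is the same.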
Its users span many application areas, including education and psychology, training, architectural design, military and aerospace, medicine, entertainment, and graphics modeling. The product is especially competitive in cognition-related research, with more than five hundred installations at universities and research institutes in Europe, North America, and China.

1) Research Center for Virtual Environments and Behavior, University of California, Santa Barbara
The laboratory focuses on cognition-related research in psychology, including social psychology, vision, and spatial cognition, and has published extensively in leading international journals (see its publication list).

2) Psychology and computer science laboratory, Miami University
Research area: human spatial cognition
In his research, Professor David Waller investigates how people learn and mentally represent spatial information about their environment. Wearing a head-mounted display and carrying a laptop-based dual-pipe image generator in a backpack, users can wirelessly walk through extremely large computer-generated virtual environments.
Research project examples:
Specificity of spatial memories. When people learn the locations of objects in a scene, what information gets represented in memory? For example, do people only remember what they saw, or do they commit more abstract information to memory? In two projects, we address these questions by examining how well people recognize perspectives of a scene that are similar but not identical to the views they have learned. In a third project, we examine the reference frames used to code spatial information in memory. In a fourth project, we investigate whether the biases people show in their memory for pictures also occur when they remember three-dimensional scenes.
Nonvisual egocentric spatial updating. When we walk through the environment, we realize that the objects we pass do not cease to exist just because they are out of sight (e.g., behind us). We stay oriented in this way because we spatially update, i.e., keep track of changes in our position and orientation relative to the environment.
3) Department of Psychology, University of Waterloo, Canada
Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT H8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Arrington eye tracker
Research area: behavioral science
Professor Colin Ellard on his research: "I am interested in how the organization and appearance of natural and built spaces affect movement, wayfinding, emotion, and physiology. My approach to these questions is strongly multidisciplinary and is informed by collaborations with architects, artists, planners, and health professionals. Current studies include investigations of the psychology of residential design, wayfinding at the urban scale, restorative effects of exposure to natural settings, and comparative studies of defensive responses. My research methods include both field investigations and studies of human behavior in immersive virtual environments."
Selected publications:
Book:
Colin Ellard (2009). Where Am I? Why We Can Find Our Way to the Moon but Get Lost in the Mall. Toronto: Harper Collins Canada.
Journal articles:
Colin Ellard and Lori Wagar (2008). Plasticity of the association between visual space and action space in a blind-walking task. Perception, 37(7), 1044-1053.
Colin Ellard and Meghan Eller (2009). Spatial cognition in the gerbil: Computing optimal escape routes from visual threats. Animal Cognition, 12(2), 333-345.
Posters:
Kevin Barton and Colin Ellard (2009). Finding your way: The influence of global spatial intelligibility and field-of-view on a wayfinding task. Poster presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.
Brian Garrison and Colin Ellard (2009). The connection effect in the disconnect between peripersonal and extrapersonal space. Poster presented at the 9th annual meeting of the Vision Sciences Society, Naples, FL.
4) Virtual Human Interaction Lab, Stanford University
Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display, Complete Characters avatar package
The mission of the Virtual Human Interaction Lab is to understand the dynamics and implications of interactions among people in immersive virtual reality (VR) simulations and other forms of human digital representation in media, communication systems, and games. Researchers in the lab are most concerned with the social interaction that occurs within VR, and the majority of the work centers on empirical, behavioral-science methodologies for studying people as they interact in these digital worlds. Often, however, it is necessary to develop new gesture-tracking systems, three-dimensional modeling techniques, or agent-behavior algorithms to answer these basic social questions; consequently, the lab also conducts research on new ways to produce VR simulations. Its research programs tend to fall under one of three larger questions:
1. What new social issues arise from the use of immersive VR communication systems?
2. How can VR be used as a basic research tool to study the nuances of face-to-face interaction?
3. How can VR be applied to improve everyday life, such as legal practice and communication systems?
5) Neuroscience laboratory, University of California, San Diego
Equipment: WorldViz Vizard 3D software toolkit, WorldViz PPT X8 optical-inertial hybrid wide-area tracking system, NVIS nVisor SX head-mounted display
The long-range objective of the laboratory is to better understand the neural bases of human sensorimotor control and learning. Our approach is to analyze normal motor control and learning processes, and the nature of the breakdown in those processes in patients with selective failure of specific sensory or motor systems of the brain. Toward this end, we have developed novel methods of imaging and graphic analysis of spatiotemporal patterns inherent in digital records of movement trajectories. We monitor movements of the limbs, body, head, and eyes, both in real environments and in 3D multimodal, immersive virtual environments, and recently have added synchronous recording of high-density EEG. One domain of our studies is Parkinson's disease. Our studies have been dissecting out those elements of sensorimotor processing that may be most impaired in Parkinsonism, and those elements that most crucially depend on basal ganglia function and cannot be compensated for by other brain systems. Since skilled movement and learning may be considered two sides of the same coin, we also investigate learning in Parkinson's disease: how patients learn to adapt their movements in altered sensorimotor environments; how their eye-hand coordination changes over the course of learning sequences; and how their neural dynamics are altered when learning to make decisions based on reward. Finally, we are examining the ability of drug versus deep-brain-stimulation therapies to ameliorate deficits in these functions.