Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2017-10-01
The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors used physical nodules of 5-, 7-, and 10-mm diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C.
Good agreement was found between the virtual nodules generated from the nodule-like object functions A and B of the phantom spheres, suggesting the validity of the nodule-like object functions. The virtual nodules generated from the nodule-like object function A of the phantom spheres were similar to the real images obtained with Scanner C; the root mean square errors (RMSEs) between them were 10.8, 11.1, and 12.5 Hounsfield units (HU) for 5-, 7-, and 10-mm-diameter spheres, respectively. The corresponding RMSEs using the nodule-like object function B were 15.9, 16.8, and 16.5 HU, respectively. These RMSEs were small considering the high contrast between the sphere density and background density (approximately 674 HU). The virtual nodules generated from the nodule-like object functions of the five laboratory-made nodules were similar to the real images obtained with Scanner C; the RMSEs between them ranged from 6.2 to 8.6 HU in five cases. The nodule-like object functions calculated from real nodule images would be effective for generating realistic virtual nodules. The proposed method would be feasible for generating virtual nodules that have the characteristics of the spatial resolution of the CT system used in each institution, allowing for site-specific nodule generation. © 2017 American Association of Physicists in Medicine.
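The core resolution-transfer step described in this abstract can be sketched in one dimension: deconvolve the scanner-A image by scanner A's PSF to estimate the object function, then reconvolve with scanner C's PSF. Below is a minimal NumPy sketch under stated assumptions: illustrative Gaussian PSFs stand in for the measured PSF/SSP, a 1-D box profile with the abstract's ~674 HU contrast stands in for a sphere, and Wiener-style regularisation (`eps`) stabilises the deconvolution; the paper's 3-D processing is not reproduced.

```python
import numpy as np

def gaussian_psf(n, fwhm):
    """Unit-area 1-D Gaussian point spread function (illustrative stand-in)."""
    x = np.arange(n) - n // 2
    sigma = fwhm / 2.355
    psf = np.exp(-x**2 / (2 * sigma**2))
    return psf / psf.sum()

def blur(img, psf):
    """Circular convolution via FFT (PSF assumed centered at n // 2)."""
    return np.real(np.fft.ifft(np.fft.fft(img) * np.fft.fft(np.fft.ifftshift(psf))))

def transfer_resolution(image_a, psf_a, psf_c, eps=1e-3):
    """Deconvolve scanner A's PSF, then reconvolve with scanner C's PSF.

    The regularisation term eps keeps the inverse filter from amplifying
    high frequencies where the PSF spectrum is near zero.
    """
    F_img = np.fft.fft(image_a)
    F_a = np.fft.fft(np.fft.ifftshift(psf_a))
    F_c = np.fft.fft(np.fft.ifftshift(psf_c))
    F_obj = F_img * np.conj(F_a) / (np.abs(F_a)**2 + eps)  # object function
    return np.real(np.fft.ifft(F_obj * F_c))               # virtual nodule

n = 256
# 1-D "nodule": ~674 HU contrast between nodule (60 HU) and lung (-614 HU)
truth = np.where(np.abs(np.arange(n) - n // 2) < 10, 60.0, -614.0)
psf_a, psf_c = gaussian_psf(n, 4.0), gaussian_psf(n, 7.0)
img_a = blur(truth, psf_a)        # what scanner A would measure (noise-free)
img_c_true = blur(truth, psf_c)   # ground truth at scanner C's resolution
virtual_c = transfer_resolution(img_a, psf_a, psf_c)
rmse = float(np.sqrt(np.mean((virtual_c - img_c_true) ** 2)))
```

In this noise-free toy setting the virtual image matches the target-scanner image to well under the RMSE values reported in the abstract; real data adds noise and the 3-D SSP dimension.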
Illusion media: Generating virtual objects using realizable metamaterials
NASA Astrophysics Data System (ADS)
Jiang, Wei Xiang; Ma, Hui Feng; Cheng, Qiang; Cui, Tie Jun
2010-03-01
We propose a class of optical transformation media, illusion media, which render the enclosed object invisible and generate one or more virtual objects as desired. We apply the proposed media to design a microwave device, which transforms an actual object into two virtual objects. Such an illusion device exhibits unusual electromagnetic behavior as verified by full-wave simulations. Different from the published illusion devices which are composed of left-handed materials with simultaneously negative permittivity and permeability, the proposed illusion media have finite and positive permittivity and permeability. Hence the designed device could be realizable using artificial metamaterials.
The specificity of memory enhancement during interaction with a virtual environment.
Brooks, B M; Attree, E A; Rose, F D; Clifford, B R; Leadbetter, A G
1999-01-01
Two experiments investigated differences between active and passive participation in a computer-generated virtual environment in terms of spatial memory, object memory, and object location memory. It was found that active participants, who controlled their movements in the virtual environment using a joystick, recalled the spatial layout of the virtual environment better than passive participants, who merely watched the active participants' progress. Conversely, there were no significant differences between the active and passive participants' recall or recognition of the virtual objects, nor in their recall of the correct locations of objects in the virtual environment. These findings are discussed in terms of subject-performed task research and the specificity of memory enhancement in virtual environments.
Practical system for generating digital mixed reality video holograms.
Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il
2016-07-10
We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphics processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally at free viewing angles and that the occlusion problem is handled well. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, system performance is further assessed through users' subjective evaluations.
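The Z-buffer mixing step described above amounts to a per-pixel depth comparison between the real scene's depth map and the rendered virtual objects' depth map: whichever surface is nearer wins, which resolves occlusion in both directions. A minimal NumPy sketch of that comparison (the `z_merge` function and the toy 2x2 frame are illustrative, not the paper's GPU implementation):

```python
import numpy as np

def z_merge(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion resolution: keep whichever surface is nearer.

    Depth maps hold per-pixel distances from the viewpoint; a smaller
    value wins, so a closer real object correctly occludes a virtual
    one and vice versa.
    """
    nearer_real = (real_depth <= virt_depth)[..., None]  # broadcast over RGB
    return np.where(nearer_real, real_rgb, virt_rgb)

# toy 2x2 frame: the real surface is closer in the left column only
real_rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
virt_rgb = np.full((2, 2, 3), 50, dtype=np.uint8)
real_depth = np.array([[1.0, 3.0], [1.0, 3.0]])
virt_depth = np.array([[2.0, 2.0], [2.0, 2.0]])
merged = z_merge(real_rgb, real_depth, virt_rgb, virt_depth)
```

In the merged frame the left column shows the real pixels and the right column the virtual ones; a GPU implementation performs the same test per fragment.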
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
Newborn chickens generate invariant object representations at the onset of visual object experience
Wood, Justin N.
2013-01-01
To recognize objects quickly and accurately, mature visual systems build invariant object representations that generalize across a range of novel viewing conditions (e.g., changes in viewpoint). To date, however, the origins of this core cognitive ability have not yet been established. To examine how invariant object recognition develops in a newborn visual system, I raised chickens from birth for 2 weeks within controlled-rearing chambers. These chambers provided complete control over all visual object experiences. In the first week of life, subjects’ visual object experience was limited to a single virtual object rotating through a 60° viewpoint range. In the second week of life, I examined whether subjects could recognize that virtual object from novel viewpoints. Newborn chickens were able to generate viewpoint-invariant representations that supported object recognition across large, novel, and complex changes in the object’s appearance. Thus, newborn visual systems can begin building invariant object representations at the onset of visual object experience. These abstract representations can be generated from sparse data, in this case from a visual world containing a single virtual object seen from a limited range of viewpoints. This study shows that powerful, robust, and invariant object recognition machinery is an inherent feature of the newborn brain. PMID:23918372
Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.
Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz
2012-01-01
Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.
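The virtual-object echo described above was produced by convolving each bat call with the object's acoustic impulse response in real time. The principle can be sketched offline with NumPy; the two-glint impulse response and the downward FM sweep below are made-up stand-ins for a measured impulse response and real bat calls.

```python
import numpy as np

def virtual_echo(call, impulse_response):
    """Simulate an object echo: convolve the emitted call with the
    object's acoustic impulse response."""
    return np.convolve(call, impulse_response)

fs = 250_000                      # 250 kHz sampling (illustrative)
t = np.arange(0, 0.002, 1 / fs)   # 2 ms call
# downward FM sweep, ~80 kHz falling to ~40 kHz (toy stand-in for a call)
call = np.sin(2 * np.pi * (80_000 * t - 10_000_000 * t**2))
ir = np.zeros(500)
ir[0], ir[300] = 1.0, 0.4         # two glints: direct + delayed weaker reflection
echo = virtual_echo(call, ir)
```

The echo inherits the call's spectro-temporal structure with a delayed, attenuated copy added by the second glint; as the abstract notes, what such playback cannot reproduce is the spread of arrival angles (sonar aperture) of a physically large object.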
Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users
NASA Technical Reports Server (NTRS)
Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)
2003-01-01
A virtual interactive imaging system allows the displaying of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation of images in any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan or the like. The mesh is generated so as to avoid tears, or holes, in the mesh, providing very high-quality representations of topographical features of the object, particularly at high resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real-time.
ERIC Educational Resources Information Center
Jacob, Laura Beth
2012-01-01
Virtual world environments have evolved from object-oriented, text-based online games to complex three-dimensional immersive social spaces where the lines between reality and computer-generated begin to blur. Educators use virtual worlds to create engaging three-dimensional learning spaces for students, but the impact of virtual worlds in…
Role of virtual reality for cerebral palsy management.
Weiss, Patrice L Tamar; Tirosh, Emanuel; Fehlings, Darcy
2014-08-01
Virtual reality is the use of interactive simulations to present users with opportunities to perform in virtual environments that appear, sound, and, less frequently, feel similar to real-world objects and events. Interactive computer play refers to the use of a game in which a child interacts and plays with virtual objects in a computer-generated environment. Because of their distinctive attributes that provide ecologically realistic and motivating opportunities for active learning, these technologies have been used in pediatric rehabilitation over the past 15 years. The ability of virtual reality to create opportunities for active repetitive motor/sensory practice adds to their potential for supporting neuroplasticity and learning in individuals with neurologic disorders. The objectives of this article are to provide an overview of how virtual reality and gaming are used clinically, to present the results of several example studies that demonstrate their use in research, and to briefly remark on future developments. © The Author(s) 2014.
Virtual Environments Supporting Learning and Communication in Special Needs Education
ERIC Educational Resources Information Center
Cobb, Sue V. G.
2007-01-01
Virtual reality (VR) describes a set of technologies that allow users to explore and experience 3-dimensional computer-generated "worlds" or "environments." These virtual environments can contain representations of real or imaginary objects on a small or large scale (from modeling of molecular structures to buildings, streets, and scenery of a…
Challenges to the development of complex virtual reality surgical simulations.
Seymour, N E; Røtnes, J S
2006-11-01
Virtual reality simulation in surgical training has become more widely used and intensely investigated in an effort to develop safer, more efficient, measurable training processes. The development of virtual reality simulation of surgical procedures has begun, but well-described technical obstacles must be overcome to permit varied training in a clinically realistic computer-generated environment. These challenges include development of realistic surgical interfaces and physical objects within the computer-generated environment, modeling of realistic interactions between objects, rendering of the surgical field, and development of signal processing for complex events associated with surgery. Of these, the realistic modeling of tissue objects that are fully responsive to surgical manipulations is the most challenging. Threats to early success include relatively limited resources for development and procurement, as well as smaller potential for return on investment than in other simulation industries that face similar problems. Despite these difficulties, steady progress continues to be made in these areas. If executed properly, virtual reality offers inherent advantages over other training systems in creating a realistic surgical environment and facilitating measurement of surgeon performance. Once developed, complex new virtual reality training devices must be validated for their usefulness in formative training and assessment of skill to be established.
Sculpting 3D worlds with music: advanced texturing techniques
NASA Astrophysics Data System (ADS)
Greuel, Christian; Bolas, Mark T.; Bolas, Niko; McDowall, Ian E.
1996-04-01
Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer-generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user-definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D 'music videos' and interactive landscapes for live performance.
Measurement Tools for the Immersive Visualization Environment: Steps Toward the Virtual Laboratory.
Hagedorn, John G; Dunkers, Joy P; Satterfield, Steven G; Peskin, Adele P; Kelso, John T; Terrill, Judith E
2007-01-01
This paper describes a set of tools for performing measurements of objects in a virtual reality based immersive visualization environment. These tools enable the use of the immersive environment as an instrument for extracting quantitative information from data representations that hitherto had been used solely for qualitative examination. We provide, within the virtual environment, ways for the user to analyze and interact with the quantitative data generated. We describe results generated by these methods to obtain dimensional descriptors of tissue engineered medical products. We regard this toolbox as our first step in the implementation of a virtual measurement laboratory within an immersive visualization environment.
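As one concrete example of the kind of quantitative tool described above, a linear measurement reduces to summing segment lengths along points the user picks in the 3D scene. A minimal sketch (the `polyline_length` helper is hypothetical, not the authors' toolbox API):

```python
import numpy as np

def polyline_length(points):
    """Length of a polyline traced through user-picked 3D points in the
    virtual environment (a basic linear-measurement tool)."""
    pts = np.asarray(points, dtype=float)
    # sum of Euclidean lengths of consecutive segments
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# two segments: a 3-4-5 triangle leg in-plane, then 12 units along z
length = polyline_length([(0, 0, 0), (3, 4, 0), (3, 4, 12)])
```

The same primitive supports curved-feature measurements by sampling many closely spaced picks along the feature.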
NASA employee utilizes Virtual Reality (VR) equipment
NASA Technical Reports Server (NTRS)
1991-01-01
Bebe Ly of the Information Systems Directorate's Software Technology Branch at JSC gives virtual reality a try. The stereo video goggles and headphones allow her to see and hear in a computer-generated world, and the gloves allow her to move around and grasp objects.
Teleoperation with virtual force feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.J.
1993-08-01
In this paper we describe an algorithm for generating virtual forces in a bilateral teleoperator system. The virtual forces are generated from a world model and are used to provide real-time obstacle avoidance and guidance capabilities. The algorithm requires that the slave's tool and every object in the environment be decomposed into convex polyhedral primitives. Intrusion distance and extraction vectors are then derived at every time step by applying Gilbert's polyhedra distance algorithm, which has been adapted for the task. This information is then used to determine the compression and location of nonlinear virtual spring-dampers whose total force is summed and applied to the manipulator/teleoperator system. Experimental results validate the whole approach, showing that it is possible to compute the algorithm and generate realistic, useful pseudo forces for a bilateral teleoperator system using standard VME bus hardware.
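Once the polyhedra distance computation has produced an intrusion depth and an extraction direction, the per-contact force of the scheme described above can be sketched as a spring-damper pushing the tool back out. The gains and the linear spring law below are illustrative assumptions (the paper's spring-dampers are nonlinear); per-contact forces would be summed over all intruded primitives.

```python
import numpy as np

def virtual_force(intrusion_depth, intrusion_rate, extraction_dir,
                  k=2000.0, c=50.0):
    """Virtual spring-damper force along the extraction direction.

    Spring term resists how far the tool has intruded; damper term
    resists how fast it is still intruding. Zero force when there is
    no contact. k, c are illustrative gains, not the paper's values.
    """
    if intrusion_depth <= 0.0:
        return np.zeros(3)
    magnitude = k * intrusion_depth + c * max(intrusion_rate, 0.0)
    return magnitude * (extraction_dir / np.linalg.norm(extraction_dir))

# tool has penetrated 2 mm into a convex primitive, moving inward at 10 mm/s
f = virtual_force(0.002, 0.010, np.array([0.0, 0.0, 1.0]))
```

Clamping the damper term at zero when the tool is withdrawing avoids the spring-damper "sticking" the tool to the surface as it leaves contact.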
The RoboCup Mixed Reality League - A Case Study
NASA Astrophysics Data System (ADS)
Gerndt, Reinhard; Bohnen, Matthias; da Silva Guerra, Rodrigo; Asada, Minoru
In typical mixed reality systems there is only a one-way interaction from real to virtual. A human user or the physics of a real object may influence the behavior of virtual objects, but real objects usually cannot be influenced by the virtual world. By introducing real robots into the mixed reality system, we allow a true two-way interaction between virtual and real worlds. Our system has been used since 2007 to implement the RoboCup mixed reality soccer games and other applications for research and edutainment. Our framework system is freely programmable to generate any virtual environment, which may then be further supplemented with virtual and real objects. The system allows for control of any real object based on differential drive robots. The robots may be adapted for different applications, e.g., with markers for identification or with covers to change shape and appearance. They may also be “equipped” with virtual tools. In this chapter we present the hardware and software architecture of our system and some applications. The authors believe this can be seen as a first implementation of Ivan Sutherland’s 1965 idea of the ultimate display: “The ultimate display would, of course, be a room within which the computer can control the existence of matter …” (Sutherland, 1965, Proceedings of IFIPS Congress 2:506-508).
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
Self-Assessment Exercises in Continuum Mechanics with Autonomous Learning
ERIC Educational Resources Information Center
Marcé-Nogué, Jordi; Gil, LLuís; Pérez, Marco A.; Sánchez, Montserrat
2013-01-01
The main objective of this work is to generate a set of exercises to improve the autonomous learning in "Continuum Mechanics" through a virtual platform. Students will have to resolve four exercises autonomously related to the subject developed in class and they will post the solutions on the virtual platform within a deadline. Students…
Fisher, J Brian; Porter, Susan M
2002-01-01
This paper describes an application of a display approach which uses chromakey techniques to composite real and computer-generated images allowing a user to see his hands and medical instruments collocated with the display of virtual objects during a medical training simulation. Haptic feedback is provided through the use of a PHANTOM force feedback device in addition to tactile augmentation, which allows the user to touch virtual objects by introducing corresponding real objects in the workspace. A simplified catheter introducer insertion simulation was developed to demonstrate the capabilities of this approach.
True 3D digital holographic tomography for virtual reality applications
NASA Astrophysics Data System (ADS)
Downham, A.; Abeywickrema, U.; Banerjee, P. P.
2017-09-01
Previously, a single CCD camera has been used to record holograms of an object while the object is rotated about a single axis to reconstruct a pseudo-3D image, which does not show detailed depth information from all perspectives. To generate a true 3D image, the object has to be rotated through multiple angles and along multiple axes. In this work, to reconstruct a true 3D image including depth information, a die is rotated along two orthogonal axes, and holograms are recorded using a Mach-Zehnder setup, which are subsequently numerically reconstructed. This allows for the generation of multiple images containing phase (i.e., depth) information. These images, when combined, create a true 3D image with depth information which can be exported to a Microsoft® HoloLens for true 3D virtual reality.
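The numerical reconstruction step mentioned above is commonly performed with the angular spectrum method, which refocuses a recorded complex field by propagating it a distance z. A self-contained sketch under stated assumptions (the square grid, wavelength, pixel pitch, and Gaussian test field are illustrative; this is a standard formulation, not necessarily the authors' exact code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z via the angular spectrum.

    The field's 2-D spectrum is multiplied by the free-space transfer
    function exp(i*kz*z); evanescent components (negative argument under
    the square root) are clamped rather than amplified.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# round trip: propagate a smooth test field forward 1 cm, then back
x = np.linspace(-1.0, 1.0, 128)
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / 0.1).astype(complex)
refocused = angular_spectrum_propagate(
    angular_spectrum_propagate(field, 633e-9, 5e-6, 0.01),
    633e-9, 5e-6, -0.01)
```

Because the transfer function is a pure phase factor, forward-then-backward propagation recovers the original field; in practice each recorded hologram is refocused to its own best-focus plane before the views are fused.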
Lee, Kyung-Min; Uhm, Gi-Soo; Cho, Jin-Hyoung; McNamara, James A.
2013-01-01
Objective The purpose of this study was to evaluate the effectiveness of the use of Reference Ear Plug (REP) during cone-beam computed tomography (CBCT) scan for the generation of lateral cephalograms from CBCT scan data. Methods Two CBCT scans were obtained from 33 adults. One CBCT scan was acquired using conventional methods, and the other scan was acquired with the use of REP. Virtual lateral cephalograms created from each CBCT image were traced and compared with tracings of the real cephalograms obtained from the same subject. Results CBCT scan with REP resulted in a smaller discrepancy between real and virtual cephalograms. In comparing the real and virtual cephalograms, no measurements significantly differed from real cephalogram values in the case of CBCT scan with REP, whereas many measurements significantly differed in the case of CBCT scan without REP. Conclusion Measurements from CBCT-generated cephalograms are more similar to those from real cephalograms when REP are used during CBCT scan. Thus, the use of REP is suggested during CBCT scan to generate accurate virtual cephalograms from CBCT scan data. PMID:23671830
Kurzynski, Marek; Jaskolska, Anna; Marusiak, Jaroslaw; Wolczowski, Andrzej; Bierut, Przemyslaw; Szumowski, Lukasz; Witkowski, Jerzy; Kisiel-Sajewicz, Katarzyna
2017-08-01
One of the biggest problems of upper limb transplantation is the lack of certainty as to whether a patient will be able to control voluntary movements of the transplanted hands. Based on findings of recent research on brain cortex plasticity, a premise can be drawn that mental training supported with visual and sensory feedback can cause structural and functional reorganization of the sensorimotor cortex, which leads to recovery of function associated with the control of movements performed by the upper limbs. In this study, the authors, based on the above observations, propose a computer-aided training (CAT) system which, by generating visual and sensory stimuli, should enhance the effectiveness of mental training applied to humans before upper limb transplantation. The basis for the concept of the computer-aided training system is a virtual hand whose reaching and grasping movements the trained patient can observe on the VR headset screen (visual feedback) and whose contact with virtual objects the patient can feel as a touch (sensory feedback). The computer training system is composed of three main components: (1) a system generating the 3D virtual world in which the patient sees the virtual limb as if it were his/her own hand; (2) sensory feedback transforming information about the interaction of the virtual hand with the grasped object into mechanical vibration; (3) the therapist's panel for controlling the training course. Results of the case study demonstrate that mental training supported with visual and sensory stimuli generated by the computer system leads to a beneficial change in the brain activity related to motor control of reaching in a patient with bilateral upper limb congenital transverse deficiency. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Data Management System for International Space Station Simulation Tools
NASA Technical Reports Server (NTRS)
Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)
2002-01-01
Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering of outdoor Augmented Reality (AR) scenes has been an attractive topic over the last two decades, as the sizeable number of publications in computer graphics attests. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome these obstacles, most of which are restricted to non-real-time rendering. However, the problem remains, especially in outdoor rendering. This paper proposes a new technique to achieve realistic real-time outdoor rendering that takes into account the interaction between sky colours and objects in AR systems, with respect to shadows at any specific location, date, and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing a key problem of realistic AR systems. PMID:25268480
Sumner, Walton; Xu, Jin Zhong; Roussel, Guy; Hagen, Michael D
2007-10-11
The American Board of Family Medicine deployed virtual patient simulations in 2004 to evaluate Diplomates' diagnostic and management skills. A previously reported dynamic process generates general symptom histories from time series data representing baseline values and reactions to medications. The simulator also must answer queries about details such as palliation and provocation. These responses often describe some recurring pattern, such as, "this medicine relieves my symptoms in a few minutes." The simulator can provide a detail stored as text, or it can evaluate a reference to a second query object. The second query object can generate details using a single Bayesian network to evaluate the effect of each drug in a virtual patient's medication list. A new medication option may not require redesign of the second query object if its implementation is consistent with related drugs. We expect this mechanism to maintain realistic responses to detail questions in complex simulations.
Virtual Boutique: a 3D modeling and content-based management approach to e-commerce
NASA Astrophysics Data System (ADS)
Paquet, Eric; El-Hakim, Sabry F.
2000-12-01
The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used. A set of pictures of a real boutique or space is taken and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.
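The color half of such a shape-and-color search can be illustrated with a simple histogram comparison. The sketch below is a generic illustration of content-based color retrieval, not the NRC engine; the bucket quantisation and the histogram-intersection measure are assumptions.

```python
# Content-based retrieval sketch: rank inventory items by color-histogram
# similarity to a query object. Generic illustration only; the NRC engine
# also matches 3D shape, which is not modelled here.
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantise 8-bit RGB pixels into bins**3 buckets and normalise."""
    counts = Counter(
        (r * bins // 256, g * bins // 256, b * bins // 256) for r, g, b in pixels
    )
    total = sum(counts.values())
    return {bucket: n / total for bucket, n in counts.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

def rank_inventory(query_pixels, inventory):
    """Return (score, name) pairs, most similar item first."""
    hq = color_histogram(query_pixels)
    scored = [(histogram_intersection(hq, color_histogram(px)), name)
              for name, px in inventory.items()]
    return sorted(scored, reverse=True)
```

A reddish query object would then rank a reddish item above a blue one, regardless of small pixel-level differences.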
Virtual phantom magnetic resonance imaging (ViP MRI) on a clinical MRI platform.
Saint-Jalmes, Hervé; Bordelois, Alejandro; Gambarota, Giulio
2018-01-01
The purpose of this study was to implement Virtual Phantom Magnetic Resonance Imaging (ViP MRI), a technique that allows for generating reference signals in MR images using radiofrequency (RF) signals, on a clinical MR system and to test newly designed virtual phantoms. MRI experiments were conducted on a 1.5 T MRI scanner. Electromagnetic modelling of the ViP system was done using the principle of reciprocity. The ViP RF signals were generated using a compact waveform generator (dimensions of 26 cm × 18 cm × 16 cm), connected to a homebuilt 25 mm-diameter RF coil. The ViP RF signals were transmitted to the MRI scanner bore, simultaneously with the acquisition of the signal from the object of interest. Different types of MRI data acquisition (2D and 3D gradient-echo) as well as different phantoms, including the Shepp-Logan phantom, were tested. Furthermore, a uniquely designed virtual phantom - in the shape of a grid - was generated; this newly proposed phantom allows for the investigations of the vendor distortion correction field. High quality MR images of virtual phantoms were obtained. An excellent agreement was found between the experimental data and the inverse cube law, which was the expected functional dependence obtained from the electromagnetic modelling of the ViP system. Short-term time stability measurements yielded a coefficient of variation in the signal intensity over time equal to 0.23% and 0.13% for virtual and physical phantom, respectively. MR images of the virtual grid-shaped phantom were reconstructed with the vendor distortion correction; this allowed for a direct visualization of the vendor distortion correction field. Furthermore, as expected from the electromagnetic modelling of the ViP system, a very compact coil (diameter ~ cm) and very small currents (intensity ~ mA) were sufficient to generate a signal comparable to that of physical phantoms in MRI experiments. The ViP MRI technique was successfully implemented on a clinical MR system. 
One of the major advantages of ViP MRI over previous approaches is that the generation and transmission of RF signals can be achieved with a self-contained apparatus. As such, the ViP MRI technique is transposable to different platforms (preclinical and clinical) of different vendors. It is also shown here that ViP MRI could be used to generate signals whose characteristics cannot be reproduced by physical objects. This could be exploited to assess MRI system properties, such as the vendor distortion correction field. © 2017 American Association of Physicists in Medicine.
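The stability figures quoted above (0.23% and 0.13%) are coefficients of variation, i.e. standard deviation over mean. As a minimal illustration of the metric (the sample signal values below are made-up numbers, not ViP MRI data):

```python
# Coefficient of variation (CoV) as used to quantify short-term signal
# stability: CoV = standard deviation / mean, expressed in percent.
import statistics

def cov_percent(signal):
    """Percent CoV of a sequence of signal-intensity samples."""
    return 100.0 * statistics.stdev(signal) / statistics.mean(signal)
```

For example, `cov_percent([9.0, 11.0])` is about 14.1%, while a nearly constant signal yields a CoV close to zero.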
Surface matching for correlation of virtual models: Theory and application
NASA Technical Reports Server (NTRS)
Caracciolo, Roberto; Fanton, Francesco; Gasparetto, Alessandro
1994-01-01
Virtual reality can enable a robot user to generate and test off-line, in a virtual environment, a sequence of operations to be executed by the robot in an assembly cell. Virtual models of objects are to be correlated to the real entities they represent by means of a suitable transformation. A solution to the correlation problem, which is basically a problem of 3-dimensional adjusting, has been found by exploiting surface matching theory. An iterative algorithm has been developed, which matches the geometric surface representing the shape of the virtual model of an object with a set of points measured on the surface in the real world. A peculiar feature of the algorithm is that it works even if there is no one-to-one correspondence between the measured points and those representing the surface model. Furthermore, the problem of avoiding convergence to local minima is solved by defining a starting set of states that ensures convergence to the global minimum. The developed algorithm has been tested by simulation. Finally, this paper proposes a specific application, i.e., correlating a robot cell equipped for biomedical use with its virtual representation.
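The iterative matching idea can be sketched in 2D as an ICP-style loop: repeatedly pair each measured point with its nearest model point, then apply a closed-form rigid fit. This is a generic illustration under simplifying assumptions (2D, point-to-point pairing), not the paper's 3D surface algorithm or its local-minimum-avoiding initialisation.

```python
# ICP-style surface matching sketch in 2D: align measured points to a model
# point set without assuming one-to-one correspondence, by alternating
# (1) nearest-neighbour pairing and (2) a closed-form rigid fit.
import math

def nearest(p, points):
    return min(points, key=lambda q: (p[0] - q[0])**2 + (p[1] - q[1])**2)

def rigid_fit(src, dst):
    """Closed-form 2D rotation + translation minimising squared error."""
    csx = sum(p[0] for p in src) / len(src); csy = sum(p[1] for p in src) / len(src)
    cdx = sum(p[0] for p in dst) / len(dst); cdy = sum(p[1] for p in dst) / len(dst)
    sxx = sum((p[0]-csx)*(q[0]-cdx) + (p[1]-csy)*(q[1]-cdy) for p, q in zip(src, dst))
    sxy = sum((p[0]-csx)*(q[1]-cdy) - (p[1]-csy)*(q[0]-cdx) for p, q in zip(src, dst))
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    return theta, cdx - (c*csx - s*csy), cdy - (s*csx + c*csy)

def icp(measured, model, iterations=20):
    """Iteratively move the measured points onto the model."""
    pts = list(measured)
    for _ in range(iterations):
        pairs = [(p, nearest(p, model)) for p in pts]
        theta, tx, ty = rigid_fit([p for p, _ in pairs], [q for _, q in pairs])
        c, s = math.cos(theta), math.sin(theta)
        pts = [(c*x - s*y + tx, s*x + c*y + ty) for x, y in pts]
    return pts
```

With a good starting pose (as the paper's starting-state construction guarantees), the loop converges to the correlating transformation.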
Applying Virtual Reality to commercial Edutainment
NASA Technical Reports Server (NTRS)
Grissom, F.; Goza, Sharon P.; Goza, S. Michael
1994-01-01
Virtual reality (VR), when defined as a computer-generated, immersive, three-dimensional graphics environment that provides varying degrees of interactivity, remains an expensive, highly specialized application that has yet to find its way into the school, home, or business. As a novel approach to a theme-park-type attraction, though, its use can be justified. This paper describes how a virtual reality 'tour of the human digestive system' was created for the Omniplex Science Museum of Oklahoma City, Oklahoma. The customer's main objectives were: (1) to educate; (2) to entertain; (3) to draw visitors; and (4) to generate revenue. The 'Edutainment' system ultimately delivered met these goals. As more such systems come into existence, the resulting library of licensable programs will greatly reduce development costs for individual institutions.
Crossing the Virtual World Barrier with OpenAvatar
NASA Technical Reports Server (NTRS)
Joy, Bruce; Kavle, Lori; Tan, Ian
2012-01-01
There are multiple standards and formats for 3D models in virtual environments. The problem is that there is no open source platform for generating models out of discrete parts; this results in the process of having to "reinvent the wheel" when new games, virtual worlds and simulations want to enable their users to create their own avatars or easily customize in-world objects. OpenAvatar is designed to provide a framework to allow artists and programmers to create reusable assets which can be used by end users to generate vast numbers of complete models that are unique and functional. OpenAvatar serves as a framework which facilitates the modularization of 3D models allowing parts to be interchanged within a set of logical constraints.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television avoiding adverse psychological effects. To create truly fascinating three-dimensional television programs, a virtual studio is required that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
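The disparity-to-depth step can be illustrated with a one-dimensional sum-of-squared-differences (SSD) match followed by the standard conversion Z = f·B/d. The focal length and baseline below are invented illustration values, not the integral-imaging camera's parameters, and the real colour SSD operates on 2D image blocks rather than a single scanline.

```python
# Depth-from-disparity sketch: locate a block's disparity by SSD matching
# along a scanline, then convert disparity d to depth with Z = f * B / d.
def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_disparity(left, right, x, block=3, max_d=10):
    """Disparity minimising SSD between a block in `left` at x and
    shifted blocks in `right`."""
    ref = left[x:x + block]
    candidates = range(0, min(max_d, x) + 1)
    return min(candidates, key=lambda d: ssd(ref, right[x - d:x - d + block]))

def depth(disparity, focal=50.0, baseline=6.5):
    """Z = f * B / d (units follow those of f and B; illustrative values)."""
    return focal * baseline / disparity
```

A multiple-baseline scheme, as mentioned in the abstract, would repeat the match for several baselines and combine the SSD curves before picking the minimum.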
Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.
Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro
2017-01-01
This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.
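The Hooke's-law rigidity adjustment can be sketched as a simple feedback loop: probe the balloon with a small test displacement, estimate stiffness from the sensed force, and command the (simulated) linear actuator until the target stiffness is reached. The linear volume-to-stiffness balloon model, the gain, and the units are invented for illustration and are not the device's calibration.

```python
# Minimal sketch of rigidity control per Hooke's law (F = k * x): sense the
# restoring force at a test displacement, estimate stiffness k = F / x, and
# adjust the air volume until the target stiffness is reached.
def simulated_balloon_force(volume_ml, displacement_mm):
    stiffness = 0.04 * volume_ml          # assumed: stiffness grows with air volume
    return stiffness * displacement_mm    # Hooke's law

def tune_rigidity(target_stiffness, volume_ml=10.0, gain=5.0, steps=100):
    probe = 2.0                           # test displacement in mm
    measured_k = 0.0
    for _ in range(steps):
        force = simulated_balloon_force(volume_ml, probe)
        measured_k = force / probe                            # pressure-sensor estimate
        volume_ml += gain * (target_stiffness - measured_k)   # actuator command
        volume_ml = max(volume_ml, 0.0)
    return volume_ml, measured_k
```

With these assumed constants the loop is a contraction (each step scales the stiffness error by 0.8), so it settles on the volume that realises the requested rigidity.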
Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.
2013-01-01
Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760
Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung
2017-09-22
Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects in heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. Necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed.
Also, to realize the service modularity, a use case scenario is studied and implemented. PMID:28937590
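The reuse-before-instantiate logic can be sketched as a repository lookup: serve a request from an already-instantiated virtual object when one matches, and create a new one only otherwise. The repository structure and the subset-based matching rule below are illustrative assumptions, not the article's similarity-matching algorithm.

```python
# Sketch of virtual-object reuse: look up an existing virtual object (VO)
# offering the requested functionalities before instantiating a new one.
class VirtualObjectRepository:
    def __init__(self):
        self._objects = []                        # instantiated virtual objects

    def request(self, required_functions):
        """Return a matching existing VO, or instantiate and register one."""
        required = set(required_functions)
        for vo in self._objects:
            if required <= vo["functions"]:       # similarity match: reuse it
                return vo
        vo = {"id": len(self._objects), "functions": required}   # new VO
        self._objects.append(vo)
        return vo
```

Two similar service requests then share one virtual object instead of duplicating it, which is the time and cost saving the article describes.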
ERIC Educational Resources Information Center
Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel
2015-01-01
A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…
Virtual hydrology observatory: an immersive visualization of hydrology modeling
NASA Astrophysics Data System (ADS)
Su, Simon; Cruz-Neira, Carolina; Habib, Emad; Gerndt, Andreas
2009-02-01
The Virtual Hydrology Observatory will provide students with the ability to observe the integrated hydrology simulation with an instructional interface by using a desktop-based or immersive virtual reality setup. It is the goal of the Virtual Hydrology Observatory application to facilitate the introduction of field experience and observational skills into hydrology courses through innovative virtual techniques that mimic activities during actual field visits. The simulation part of the application is developed from the integrated atmospheric forecast model, Weather Research and Forecasting (WRF), and the hydrology model, Gridded Surface/Subsurface Hydrologic Analysis (GSSHA). The outputs from both the WRF and GSSHA models are then used to generate the final visualization components of the Virtual Hydrology Observatory. The visualization data processing techniques provided by VTK include 2D Delaunay triangulation and data optimization. Once all the visualization components are generated, they are integrated with the simulation data using VRFlowVis and the VR Juggler software toolkit. VR Juggler is used primarily to provide the Virtual Hydrology Observatory application with a fully immersive, real-time 3D interaction experience, while VRFlowVis provides the integration framework for the hydrologic simulation data, graphical objects and user interaction. A six-sided CAVE™-like system is used to run the Virtual Hydrology Observatory to provide the students with a fully immersive experience.
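To illustrate how hydrology samples become a renderable surface, the sketch below triangulates a regular grid of vertices (two triangles per cell). This is the simpler structured case; the application itself uses VTK's 2D Delaunay filter, which also handles scattered, irregular points.

```python
# Structured-grid triangulation sketch: convert an nx-by-ny grid of terrain
# vertices (indexed row-major) into triangle index triples for rendering.
def grid_triangles(nx, ny):
    """Triangle index triples covering an nx-by-ny vertex grid."""
    tris = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            v = j * nx + i                            # lower-left vertex of cell
            tris.append((v, v + 1, v + nx))           # lower triangle
            tris.append((v + 1, v + nx + 1, v + nx))  # upper triangle
    return tris
```

A grid of (nx-1)·(ny-1) cells yields twice that many triangles, which is the mesh a toolkit like VTK would hand to the renderer.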
Creating technical heritage object replicas in a virtual environment
NASA Astrophysics Data System (ADS)
Egorova, Olga; Shcherbinin, Dmitry
2016-03-01
The paper presents innovative informatics methods for creating virtual technical heritage replicas, which are of significant scientific and practical importance not only to researchers but to the public in general. By performing 3D modeling and animation of aircraft, spaceships, architectural-engineering buildings, and other technical objects, learning is fostered while the replicas are preserved for future generations. Modern approaches based on the wide usage of computer technologies attract a greater number of young people to explore the history of science and technology and renew their interest in the field of mechanical engineering.
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Büyüksalih, G.; Tschirschwitz, F.; Kan, T.; Deggim, S.; Kaya, Y.; Baskaraca, A. P.
2017-05-01
Recent advances in contemporary Virtual Reality (VR) technologies are going to have a significant impact on everyday life. Through VR it is possible to virtually explore a computer-generated environment as a different reality, and to immerse oneself into the past or in a virtual museum without leaving the current real-life situation. For the ultimate VR experience, the user should see only the virtual world. Currently, the user must wear a VR headset which fits around the head and over the eyes to visually separate themselves from the physical world. Via the headset, images are fed to the eyes through two small lenses. Cultural heritage monuments are ideally suited both for thorough multi-dimensional geometric documentation and for realistic interactive visualisation in immersive VR applications. Additionally, the game industry offers tools for interactive visualisation of objects to motivate users to virtually visit objects and places. In this paper the generation of a virtual 3D model of the Selimiye mosque in the city of Edirne, Turkey, and its processing for data integration into the game engine Unity is presented. The project has been carried out as a co-operation between BİMTAŞ, a company of the Greater Municipality of Istanbul, Turkey, and the Photogrammetry & Laser Scanning Lab of the HafenCity University Hamburg, Germany, to demonstrate an immersive and interactive visualisation using the new VR system HTC Vive. The workflow from data acquisition to VR visualisation, including the necessary programming for navigation, is described. Furthermore, the possible use of such a VR visualisation (including simultaneous multi-user environments) for a CH monument is discussed in this contribution.
NASA Technical Reports Server (NTRS)
1990-01-01
While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered (telepresence, it is called) or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.
Interactions with Virtual People: Do Avatars Dream of Digital Sheep?. Chapter 6
NASA Technical Reports Server (NTRS)
Slater, Mel; Sanchez-Vives, Maria V.
2007-01-01
This paper explores another form of artificial entity, one without physical embodiment. We use virtual characters as the name for a type of interactive object that has become familiar in computer games and within virtual reality applications. We refer to these as avatars: three-dimensional graphical objects in more-or-less human form which can interact with humans. Sometimes such avatars will be representations of real humans who are interacting together within a shared networked virtual environment; at other times the representations will be of entirely computer-generated characters. Unlike other authors, who reserve the term agent for entirely computer-generated characters and avatar for virtual embodiments of real people, the same term is used here for both. This is because avatars and agents are on a continuum. The question is: where does their behaviour originate? At the extremes the behaviour is either completely computer generated or comes only from tracking of a real person. However, not every aspect of a real person can be tracked: every eyebrow move, every blink, every breath. Rather, real tracking data would be supplemented by inferred behaviours, which are programmed based on the available information as to what the real human is doing and her/his underlying emotional and psychological state. Hence there is always some programmed behaviour; it is only a matter of how much. In any case, the same underlying problem remains: how can the human character be portrayed in such a manner that its actions are believable and have an impact on the real people with whom it interacts? This paper has three main parts. In the first part we will review some evidence that suggests that humans react with appropriate affect in their interactions with virtual human characters, or with other humans who are represented as avatars. This is so in spite of the fact that the representational fidelity is relatively low.
Our evidence will be from the realm of psychotherapy, where virtual social situations are created that test whether people react appropriately within them. We will also consider some experiments on face-to-face virtual communications between people in the same shared virtual environments. The second part will try to give some clues about why this might happen, taking into account modern theories of perception from neuroscience. The third part will include some speculations about the future development of the relationship between people and virtual people. We will suggest that a more likely scenario than the world becoming populated by physically embodied virtual people (robots, androids) is that in the relatively near future we will interact more and more in our everyday lives with virtual people: bank managers, shop assistants, instructors, and so on. What is happening in the movies with computer-generated individuals and entire crowds may move into the space of everyday life.
Use of 3D techniques for virtual production
NASA Astrophysics Data System (ADS)
Grau, Oliver; Price, Marc C.; Thomas, Graham A.
2000-12-01
Virtual production for broadcast is currently mainly used in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material which make use of 3D technology. The applications range from the analysis of sport scenes and 3DTV up to the creation of fully immersive content. In a virtual studio, a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma keying techniques. The isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. The resulting shape description of the actors is so far 2D. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements of shape accuracy, and the kind of representation, differ in accordance with the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R&D department. An enhanced virtual studio for 3D programs is proposed that covers a range of applications for virtual production.
A 3D visualization and simulation of the individual human jaw.
Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo
2003-01-01
A new biomechanical three-dimensional (3D) model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation offers movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces acting through the occlusal and condylar surfaces are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw with the force level necessary for chewing, providing a kind of mandible balance that prevents dislocation and loading of non-articular tissues. The work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), combined in a single low-cost, easy-to-operate object package.
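The equilibrium condition used by such a model (each force component summing to zero in the chosen coordinate system) can be checked directly. The force vectors in the example are arbitrary illustrative values, not anatomical data.

```python
# 3D static equilibrium check: resolve each force into (Fx, Fy, Fz)
# components and require that every component sums to (approximately) zero.
def in_equilibrium(forces, tol=1e-9):
    """forces: iterable of (Fx, Fy, Fz) tuples acting on the mandible."""
    totals = [sum(f[axis] for f in forces) for axis in range(3)]
    return all(abs(t) < tol for t in totals)
```

In the jaw model, the same test applies with muscular, occlusal and condylar reaction forces as the entries; a nonzero residual would indicate an unbalanced (dislocating) load.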
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results show that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
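The Sun-position step described in this abstract (computed from user location and time of day) can be sketched with a standard simplified solar-position approximation. This is an illustrative stand-in under stated assumptions (cosine-fit declination, local solar time in place of clock time, equation of time ignored), not the authors' implementation:

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth in degrees.

    Assumptions: declination from a simple cosine fit, hour angle from
    local solar time (15 degrees per hour), equation of time ignored.
    """
    decl = math.radians(-23.44) * math.cos(
        math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))
    lat = math.radians(lat_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elev = math.asin(max(-1.0, min(1.0, sin_elev)))
    # Azimuth measured clockwise from north; mirror for afternoon hours
    cos_az = ((math.sin(decl) - math.sin(elev) * math.sin(lat))
              / max(1e-12, math.cos(elev) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if solar_hour > 12.0:
        az = 2.0 * math.pi - az
    return math.degrees(elev), math.degrees(az)
```

An AR renderer would feed the resulting elevation/azimuth into its directional light so that virtual shadows fall consistently with the real scene.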
Virtual reality training improves balance function.
Mao, Yurong; Chen, Peiming; Li, Le; Huang, Dongfeng
2014-09-01
Virtual reality is a new technology that simulates a three-dimensional virtual world on a computer and enables the generation of visual, audio, and haptic feedback for the full immersion of users. Users can interact with and observe objects in three-dimensional visual space without limitation. At present, virtual reality training has been widely used in rehabilitation therapy for balance dysfunction. This paper summarizes related articles and other articles suggesting that virtual reality training can improve balance dysfunction in patients after neurological diseases. When patients perform virtual reality training, the prefrontal, parietal cortical areas and other motor cortical networks are activated. These activations may be involved in the reconstruction of neurons in the cerebral cortex. Growing evidence from clinical studies reveals that virtual reality training improves the neurological function of patients with spinal cord injury, cerebral palsy and other neurological impairments. These findings suggest that virtual reality training can activate the cerebral cortex and improve the spatial orientation capacity of patients, thus facilitating the cortex to control balance and increase motion function.
Shared virtual environments for aerospace training
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Voss, Mark
1994-01-01
Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
On-line interactive virtual experiments on nanoscience
NASA Astrophysics Data System (ADS)
Kadar, Manuella; Ileana, Ioan; Hutanu, Constantin
2009-01-01
This paper is an overview of the next-generation web, which allows students to carry out virtual experiments on nanoscience, physics devices, processes and processing equipment. Virtual reality is used to support a real university lab in which a student can take part in real lab sessions. The web material is presented in an intuitive and highly visual 3D form that is accessible to a diverse group of students. This type of laboratory provides opportunities for professional and practical education for a wide range of users. The expensive equipment and apparatuses that make up the experimental stage in a standard laboratory are used to create virtual educational research laboratories. Students learn how to prepare the apparatuses and facilities for the experiment. The online-experiments metadata schema is the format for describing online experiments, much like the schema behind a library catalogue used to describe the books in a library. As an online experiment is a special kind of learning object, its schema is specified as an extension to an established metadata schema for learning objects. The course content, meta-information, readings and user data are saved on the server in a database as XML objects.
Realistic terrain visualization based on 3D virtual world technology
NASA Astrophysics Data System (ADS)
Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai
2009-09-01
The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on this foundation of realistic terrain visualization in virtual environments.
Plank, Markus; Snider, Joseph; Kaestner, Erik; Halgren, Eric; Poizner, Howard
2015-02-01
Using a novel, fully mobile virtual reality paradigm, we investigated the EEG correlates of spatial representations formed during unsupervised exploration. On day 1, subjects implicitly learned the location of 39 objects by exploring a room and popping bubbles that hid the objects. On day 2, they again popped bubbles in the same environment. In most cases, the objects hidden underneath the bubbles were in the same place as on day 1. However, a varying third of them were misplaced in each block. Subjects indicated their certainty that the object was in the same location as the day before. Compared with bubble pops revealing correctly placed objects, bubble pops revealing misplaced objects evoked a decreased negativity starting at 145 ms, with scalp topography consistent with generation in medial parietal cortex. There was also an increased negativity starting at 515 ms to misplaced objects, with scalp topography consistent with generation in inferior temporal cortex. Additionally, misplaced objects elicited an increase in frontal midline theta power. These findings suggest that the successive neurocognitive stages of processing allocentric space may include an initial template matching, integration of the object within its spatial cognitive map, and memory recall, analogous to the processing negativity N400 and theta that support verbal cognitive maps in humans. Copyright © 2015 the American Physiological Society.
Creating objects and object categories for studying perception and perceptual learning.
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-11-02
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties. Many innovative and useful methods currently exist for creating novel objects and object categories (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis.
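The morphing step mentioned above, used to generate systematic variations of shape characteristics, can be illustrated with a minimal vertex-interpolation sketch. It assumes two shapes given as vertex arrays with one-to-one correspondence; the morphing methods the abstract refers to are considerably richer:

```python
import numpy as np

def morph(shape_a, shape_b, t):
    """Linear morph between two shapes given as (N, 3) vertex arrays
    with vertex-to-vertex correspondence; t=0 gives shape_a, t=1 shape_b."""
    a = np.asarray(shape_a, dtype=float)
    b = np.asarray(shape_b, dtype=float)
    assert a.shape == b.shape, "shapes need corresponding vertices"
    return (1.0 - t) * a + t * b

# A tiny morph sequence between two 3-vertex toy 'shapes'
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
sequence = [morph(a, b, t) for t in np.linspace(0.0, 1.0, 5)]
```

Sweeping `t` yields a graded family of intermediate shapes, which is the kind of parametric, measurable variation the authors argue is needed for quantitative studies of object perception.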
Teachers' Perspectives on Online Virtual Labs vs. Hands-On Labs in High School Science
NASA Astrophysics Data System (ADS)
Bohr, Teresa M.
This study of online science teachers' opinions addressed the use of virtual labs in online courses. A growing number of schools use virtual labs that must meet mandated laboratory standards to ensure they provide learning experiences comparable to hands-on labs, which are an integral part of science curricula. The purpose of this qualitative case study was to examine teachers' perceptions of the quality and effectiveness of high school virtual labs. The theoretical foundation was constructivism, as labs provide student-centered activities for problem solving, inquiry, and exploration of phenomena. The research questions focused on experienced teachers' perceptions of the quality of virtual vs. hands-on labs. Data were collected through survey questions derived from the lab objectives of The Next Generation Science Standards. Eighteen teachers rated the degree of importance of each objective and also rated how they felt virtual labs met these objectives; these ratings were reported using descriptive statistics. Responses to open-ended questions were few and served to illustrate the numerical results. Many teachers stated that virtual labs are valuable supplements but could not completely replace hands-on experiences. Studies on the quality and effectiveness of high school virtual labs are limited despite widespread use. Comprehensive studies will ensure that online students have equal access to quality labs. School districts need to define lab requirements, and colleges need to specify the lab experience they require. This study has potential to inspire positive social change by assisting science educators, including those in the local school district, in evaluating and selecting courseware designed to promote higher order thinking skills, real-world problem solving, and development of strong inquiry skills, thereby improving science instruction for all high school students.
Rapid prototyping 3D virtual world interfaces within a virtual factory environment
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
Ongoing work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.
Compression of computer generated phase-shifting hologram sequence using AVC and HEVC
NASA Astrophysics Data System (ADS)
Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic
2013-09-01
With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading technique of video coding. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-shifted reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed with AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rates and reconstruction quality can be obtained at bitrates above 15,000 kbps.
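The phase-shifting simulation the authors describe can be sketched in a few lines: four interference patterns are formed between an object wave and a reference wave whose phase is stepped by π/2, and the standard 4-step formula recovers the complex object field. This is a noise-free toy; `obj` here is just a random field standing in for a wavefront propagated from a 3D virtual object:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy complex object wavefront at the hologram plane (64 x 64 field)
obj = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
r = 1.0  # reference wave amplitude

# Four interference patterns, reference phase stepped by pi/2:
#   I_k = |O + r * exp(i * k * pi/2)|^2
patterns = [np.abs(obj + r * np.exp(1j * k * np.pi / 2)) ** 2
            for k in range(4)]

# 4-step reconstruction: O = ((I0 - I2) + i (I1 - I3)) / (4 r)
recon = ((patterns[0] - patterns[2])
         + 1j * (patterns[1] - patterns[3])) / (4.0 * r)
```

A hologram sequence for compression is then just such pattern sets recomputed frame by frame as the virtual object moves.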
Evaluation of Wearable Haptic Systems for the Fingers in Augmented Reality Applications.
Maisto, Maurizio; Pacchierotti, Claudio; Chinello, Francesco; Salvietti, Gionata; De Luca, Alessandro; Prattichizzo, Domenico
2017-01-01
Although Augmented Reality (AR) has been around for almost five decades, only recently have we witnessed AR systems and applications entering our everyday life. Representative examples of this technological revolution are the smartphone games "Pokémon GO" and "Ingress" or the Google Translate real-time sign interpretation app. Even if AR applications are already quite compelling and widespread, users are still not able to physically interact with the computer-generated reality. In this respect, wearable haptics can provide the compelling illusion of touching the superimposed virtual objects without constraining the motion or the workspace of the user. In this paper, we present the experimental evaluation of two wearable haptic interfaces for the fingers in three AR scenarios, enrolling 38 participants. In the first experiment, subjects were requested to write on a virtual board using a real chalk. The haptic devices provided the interaction forces between the chalk and the board. In the second experiment, subjects were asked to pick and place virtual and real objects. The haptic devices provided the interaction forces due to the weight of the virtual objects. In the third experiment, subjects were asked to balance a virtual sphere on a real cardboard. The haptic devices provided the interaction forces due to the weight of the virtual sphere rolling on the cardboard. Providing haptic feedback through the considered wearable device significantly improved the performance of all the considered tasks. Moreover, subjects significantly preferred conditions providing wearable haptic feedback.
Virtual Reality at the PC Level
NASA Technical Reports Server (NTRS)
Dean, John
1998-01-01
The main objective of my research has been to incorporate virtual reality at the desktop level; i.e., create virtual reality software that can be run fairly inexpensively on standard PC's. The standard language used for virtual reality on PC's is VRML (Virtual Reality Modeling Language). It is a new language so it is still undergoing a lot of changes. VRML 1.0 came out only a couple years ago and VRML 2.0 came out around last September. VRML is an interpreted language that is run by a web browser plug-in. It is fairly flexible in terms of allowing you to create different shapes and animations. Before this summer, I knew very little about virtual reality and I did not know VRML at all. I learned the VRML language by reading two books and experimenting on a PC. The following topics are presented: CAD to VRML, VRML 1.0 to VRML 2.0, VRML authoring tools, VRML browsers, finding virtual reality applications, the AXAF project, the VRML generator program, web communities and future plans.
Interactive visuo-motor therapy system for stroke rehabilitation.
Eng, Kynan; Siekierka, Ewa; Pyk, Pawel; Chevrier, Edith; Hauser, Yves; Cameirao, Monica; Holper, Lisa; Hägni, Karin; Zimmerli, Lukas; Duff, Armin; Schuster, Corina; Bassetti, Claudio; Verschure, Paul; Kiper, Daniel
2007-09-01
We present a virtual reality (VR)-based motor neurorehabilitation system for stroke patients with upper limb paresis. It is based on two hypotheses: (1) observed actions correlated with self-generated or intended actions engage cortical motor observation, planning and execution areas ("mirror neurons"); (2) activation in damaged parts of motor cortex can be enhanced by viewing mirrored movements of non-paretic limbs. We postulate that our approach, applied during the acute post-stroke phase, facilitates motor re-learning and improves functional recovery. The patient controls a first-person view of virtual arms in tasks varying from simple (hitting objects) to complex (grasping and moving objects). The therapist adjusts weighting factors in the non-paretic limb to move the paretic virtual limb, thereby stimulating the mirror neuron system and optimizing patient motivation through graded task success. We present the system's neuroscientific background, technical details and preliminary results.
Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk
2018-06-01
Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of thresholding to the parameters of the generated test image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution once the voxel size exceeded 1/10 of the typical object size, which indicates the smallest details that can still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
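The calibration idea, comparing estimates from image data against known ground-truth morphometry, can be illustrated with a toy stand-in for the generated stacks: rasterize a sphere of known radius and compare the analytic volume with the voxel-counted one. This is illustrative only; TeIGen itself generates tubes, porosities and realistic noise:

```python
import numpy as np

def voxelized_sphere_volume(radius, voxel_size, grid=128):
    """Rasterize a centered sphere into a cubic voxel grid and return
    (analytic_volume, voxel_counted_volume)."""
    coords = (np.arange(grid) - grid / 2 + 0.5) * voxel_size
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    inside = x**2 + y**2 + z**2 <= radius**2
    true_vol = 4.0 / 3.0 * np.pi * radius**3
    est_vol = float(inside.sum()) * voxel_size**3
    return true_vol, est_vol

true_vol, est_vol = voxelized_sphere_volume(10.0, 0.25)
rel_err = abs(est_vol - true_vol) / true_vol  # shrinks as voxels get finer
```

Re-running with coarser voxels (or added noise before thresholding) reproduces, in miniature, the resolution and noise sensitivity the study quantifies.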
Design of virtual simulation experiment based on key events
NASA Astrophysics Data System (ADS)
Zhong, Zheng; Zhou, Dongbo; Song, Lingxiu
2018-06-01
Considering the complex content of, and lack of guidance in, virtual simulation experiments, the key-event technique from VR narrative theory is introduced into virtual simulation experiments to enhance the fidelity and vividness of the process. Based on VR narrative technology, an event transition structure was designed to meet the needs of the experimental operation process, and an interactive event processing model was used to generate key events in the interactive scene. The experiment "margin value of bees foraging," based on biological morphology, was taken as an example, and many objects, behaviors and other contents were reorganized. The results show that this method can enhance the user's experience and ensure that the experimental process is completed effectively.
NASA Astrophysics Data System (ADS)
Berthier, J.; Carry, B.; Vachier, F.; Eggl, S.; Santerne, A.
2016-05-01
All the fields of the extended space mission Kepler/K2 are located within the ecliptic. Many Solar system objects thus cross the K2 stellar masks on a regular basis. We aim to provide the entire community with a simple tool to search for and identify Solar system objects serendipitously observed by Kepler. The sky body tracker (SkyBoT) service hosted at Institut de mécanique céleste et de calcul des éphémérides provides a Virtual Observatory compliant cone search that lists all Solar system objects present within a field of view at a given epoch. To generate such a list in a timely manner, ephemerides are pre-computed, updated weekly, and stored in a relational database to ensure fast access. The SkyBoT web service can now be used with Kepler. Solar system objects within a small (few arcminutes) field of view are identified and listed in less than 10 s. Generating object data for the entire K2 field of view (14°) takes about a minute. This extension of the SkyBoT service opens new possibilities with respect to mining K2 data for Solar system science, as well as removing Solar system objects from stellar photometric time series.
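The cone search described here reduces, per epoch, to an angular-separation filter over the pre-computed ephemerides. A minimal sketch follows; the `ra`/`dec` dictionary fields and object names are illustrative toys, not SkyBoT's actual schema:

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two sky positions given
    in degrees, using the haversine form (stable at small separations)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2.0) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a)))

def cone_search(ephemerides, ra, dec, radius_deg):
    """Keep objects whose pre-computed position lies inside the cone."""
    return [obj for obj in ephemerides
            if angular_sep(ra, dec, obj["ra"], obj["dec"]) <= radius_deg]

# Toy pre-computed positions for one epoch
ephemerides = [{"name": "2000 AB1", "ra": 10.5, "dec": 0.0},
               {"name": "2001 CD2", "ra": 20.0, "dec": 5.0}]
hits = cone_search(ephemerides, ra=10.0, dec=0.0, radius_deg=1.0)
```

In the real service the heavy lifting is the weekly ephemeris pre-computation and database indexing; the per-query filter stays this cheap, which is what makes sub-10 s responses possible.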
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.
1991-01-01
The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
Creating Objects and Object Categories for Studying Perception and Perceptual Learning
Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay
2012-01-01
In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. 
Objects and object categories created by these simulations can be further manipulated by various morphing methods to generate systematic variations of shape characteristics [15,16]. The VP and morphing methods can also be applied, in principle, to novel virtual objects other than digital embryos, or to virtual versions of real-world objects [9,13]. Virtual objects created in this fashion can be rendered as visual images using a conventional graphical toolkit, with desired manipulations of surface texture, illumination, size, viewpoint and background. The virtual objects can also be 'printed' as haptic objects using a conventional 3-D prototyper. We also describe some implementations of these computational algorithms to help illustrate the potential utility of the algorithms. It is important to distinguish the algorithms from their implementations. The implementations are demonstrations offered solely as a 'proof of principle' of the underlying algorithms. It is important to note that, in general, an implementation of a computational algorithm often has limitations that the algorithm itself does not have. Together, these methods represent a set of powerful and flexible tools for studying object recognition and perceptual learning by biological and computational systems alike. With appropriate extensions, these methods may also prove useful in the study of morphogenesis and phylogenesis. PMID:23149420
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi
Virtual Reality aims at creating an artificial environment that can be perceived as a substitute to a real setting. Much effort in research and development goes into the creation of virtual environments that in their majority are perceivable only by eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory believable virtual environment to a user, we could make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality - denoting the goal, not the technology - shifts from a core virtual reality to an “enriched” reality, technologically encompassing both the computer generated and the real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and are later formed into an experience by the user's mind.
Cosmology of Universe Particles and Beyond
NASA Astrophysics Data System (ADS)
Xu, Wei
2016-06-01
For the first time in history, all properties of cosmology particles are uncovered and described concisely and systematically, known as the elementary particles in contemporary physics. Aligning with the synthesis of the virtual and physical worlds in a hierarchical taxonomy of the universe, this theory refines the topology framework of cosmology, and presents a new perspective of the Yin Yang natural laws that, through the processes of creation and reproduction, the fundamental elements generate an infinite series of circular objects and a Yin Yang duality of dynamic fields that are sequenced and transformed states of matter between the virtual and physical worlds. Once virtual objects are transformed, they embody various enclaves of energy states, known as dark energy, quarks, leptons, bosons, protons, and neutrons, characterized by their incentive oscillations of timestate variables in a duality of virtual realities: energy and time, spin and charge, mass and space, symmetry and antisymmetry. As a consequence, it derives the fully-scaled quantum properties of physical particles in accordance with numerous historical experiments, and has overcome the limitations of uncertainty principle and the Standard Model, towards concisely exploring physical nature and beyond...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, MM; University of California San Diego, La Jolla, California; Long, T
Purpose: To provide a tool to generate large sets of realistic virtual patient geometries and beamlet doses for treatment optimization research. This tool enables countless studies exploring the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions for various algorithms and modalities. Methods: Generating realistic virtual patient geometries requires a small set of real patient data. We developed a normalized patient shape model (PSM) which captures organ and target contours in a correspondence-preserving manner. Using PSM-processed data, we perform principal component analysis (PCA) to extract major modes of variation from the population. These PCA modes can be shared without exposing patient information. The modes are re-combined with different weights to produce sets of realistic virtual patient contours. Because virtual patients lack imaging information, we developed a shape-based dose calculation (SBD) relying on the assumption that the region inside the body contour is water. SBD utilizes a 2D fluence-convolved scatter kernel, derived from Monte Carlo simulations, and can either compute the full dose for a given set of fluence maps or produce a dose matrix (dose per fluence pixel) for many modalities. Combining the shape model with SBD provides the data needed for treatment plan optimization research. Results: We used PSM to capture organ and target contours for 96 prostate cases, extracted the first 20 PCA modes, and generated 2048 virtual patient shapes by randomly sampling mode scores. Nearly half of the shapes were thrown out for failing anatomical checks; the remaining 1124 were used in computing dose matrices via SBD and a standard 7-beam protocol. As a proof of concept, and to generate data for later study, we performed fluence map optimization emphasizing PTV coverage.
Conclusions: We successfully developed and tested a tool for creating customizable sets of virtual patients suitable for large-scale radiation therapy optimization research.
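The PCA step described in this abstract (extract modes of shape variation from corresponded contours, then resample mode scores to synthesize new patients) can be sketched in a few lines. The function names, the stand-in random "population", and the vector sizes below are illustrative, not from the paper; the correspondence-preserving contour capture and the anatomical sanity checks are assumed to happen elsewhere.

```python
import numpy as np

def build_shape_model(shapes, n_modes=20):
    """PCA over corresponded contour vectors (one row per patient).

    Each column index is assumed to refer to the same anatomical
    location in every patient, as the PSM preprocessing guarantees.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes of variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    std = s[:n_modes] / np.sqrt(len(shapes) - 1)  # per-mode std deviation
    return mean, vt[:n_modes], std

def sample_virtual_patients(mean, modes, std, n, rng):
    """Draw random mode scores to synthesize new, plausible contour sets."""
    scores = rng.standard_normal((n, len(std))) * std
    return mean + scores @ modes

rng = np.random.default_rng(0)
# Stand-in for 96 corresponded contour vectors of length 40.
population = rng.standard_normal((96, 40))
mean, modes, std = build_shape_model(population, n_modes=20)
virtual = sample_virtual_patients(mean, modes, std, 2048, rng)
# Real use would now reject anatomically implausible samples.
```

Because only the mean and the mode vectors are shared, no single patient's geometry is exposed, which is the privacy property the abstract highlights.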
A Low-cost System for Generating Near-realistic Virtual Actors
NASA Astrophysics Data System (ADS)
Afifi, Mahmoud; Hussain, Khaled F.; Ibrahim, Hosny M.; Omar, Nagwa M.
2015-06-01
Generating virtual actors is one of the most challenging fields in computer graphics. The reconstruction of realistic virtual actors has received attention from both academic research and the film industry, with the goal of generating human-like virtual actors. Many movies have featured human-like virtual actors that the audience cannot distinguish from real ones. Synthesizing a realistic virtual actor is a complex process, and most existing techniques require expensive hardware. In this paper, a low-cost system that generates near-realistic virtual actors is presented. The facial features of the real actor are blended with a virtual head that is attached to the actor's body. Compared with other techniques for generating virtual actors, the proposed system is low-cost: it requires only a single camera recording the scene, without any expensive hardware. The results show that the system generates convincing near-realistic virtual actors that can be used in many applications.
Active tactile exploration using a brain-machine-brain interface.
O'Doherty, Joseph E; Lebedev, Mikhail A; Ifft, Peter J; Zhuang, Katie Z; Shokur, Solaiman; Bleuler, Hannes; Nicolelis, Miguel A L
2011-10-05
Brain-machine interfaces use neuronal activity recorded from the brain to establish direct communication with external actuators, such as prosthetic arms. It is hoped that brain-machine interfaces can be used to restore the normal sensorimotor functions of the limbs, but so far they have lacked tactile sensation. Here we report the operation of a brain-machine-brain interface (BMBI) that both controls the exploratory reaching movements of an actuator and allows signalling of artificial tactile feedback through intracortical microstimulation (ICMS) of the primary somatosensory cortex. Monkeys performed an active exploration task in which an actuator (a computer cursor or a virtual-reality arm) was moved using a BMBI that derived motor commands from neuronal ensemble activity recorded in the primary motor cortex. ICMS feedback occurred whenever the actuator touched virtual objects. Temporal patterns of ICMS encoded the artificial tactile properties of each object. Neuronal recordings and ICMS epochs were temporally multiplexed to avoid interference. Two monkeys operated this BMBI to search for and distinguish one of three visually identical objects, using the virtual-reality arm to identify the unique artificial texture associated with each. These results suggest that clinical motor neuroprostheses might benefit from the addition of ICMS feedback to generate artificial somatic perceptions associated with mechanical, robotic or even virtual prostheses.
DOT National Transportation Integrated Search
2016-05-01
As driving becomes more automated, vehicles are being equipped with more sensors generating even higher data rates. Radars (RAdio Detection and Ranging) are used for object detection, visual cameras as virtual mirrors, and LIDARs (LIght Detection and...
Manually locating physical and virtual reality objects.
Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G
2014-09-01
In this study, we compared how users locate physical objects and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand, to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise of flexibly simulating arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets, but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting the possible influence of cues other than binocular vision. Reaching for and manually handling virtual objects in a CAVE, as in simulation, training, and prototyping applications, is more accurate than predicted when locating farther objects.
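The abstract does not reproduce the geometric error model itself, but one ingredient such a stereo-projection model can include is the depth distortion that appears when the interpupillary distance (IPD) assumed by the renderer differs from the viewer's actual IPD. A minimal sketch of that single effect, with all numbers illustrative:

```python
def screen_disparity(e, d, z):
    """On-screen separation of the two eye projections of a point at
    depth z, for IPD e and eye-to-screen distance d (all in metres).
    Positive means uncrossed disparity (point behind the screen plane)."""
    return e * (1.0 - d / z)

def perceived_depth(e, d, s):
    """Invert the projection: the depth at which a viewer with IPD e
    fuses an on-screen separation s."""
    return d * e / (e - s)

# Imagery rendered for a 65 mm IPD but viewed with a 60 mm IPD:
d = 1.2                                  # eye-to-screen distance
s = screen_disparity(0.065, d, 2.0)      # point rendered 2.0 m away
z_seen = perceived_depth(0.060, d, s)    # depth the mismatched viewer sees
error = z_seen - 2.0                     # purely geometric depth error
```

With a smaller-than-rendered IPD the same disparity is fused farther away, so the point appears beyond its intended depth; the error grows with target distance, consistent with the paper's finding that geometry matters more for farther targets.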
The Development of the Virtual Learning Media of the Sacred Object Artwork
ERIC Educational Resources Information Center
Nuanmeesri, Sumitra; Jamornmongkolpilai, Saran
2018-01-01
This research aimed to develop the virtual learning media of the sacred object artwork by applying the concept of the virtual technology in order to publicize knowledge on the cultural wisdom of the sacred object artwork. It was done by designing and developing the virtual learning media of the sacred object artwork for the virtual presentation.…
Evaluating Remapped Physical Reach for Hand Interactions with Passive Haptics in Virtual Reality.
Han, Dustin T; Suhail, Mohamed; Ragan, Eric D
2018-04-01
Virtual reality often uses motion tracking to incorporate physical hand movements into interaction techniques for selection and manipulation of virtual objects. To increase realism and allow direct hand interaction, real-world physical objects can be aligned with virtual objects to provide tactile feedback and physical grasping. However, unless a physical space is custom configured to match a specific virtual reality experience, the ability to perfectly match the physical and virtual objects is limited. Our research addresses this challenge by studying methods that allow one physical object to be mapped to multiple virtual objects that can exist at different virtual locations in an egocentric reference frame. We study two such techniques: one that introduces a static translational offset between the virtual and physical hand before a reaching action, and one that dynamically interpolates the position of the virtual hand during a reaching motion. We conducted two experiments to assess how the two methods affect reaching effectiveness, comfort, and ability to adapt to the remapping techniques when reaching for objects with different types of mismatches between physical and virtual locations. We also present a case study to demonstrate how the hand remapping techniques could be used in an immersive game application to support realistic hand interaction while optimizing usability. Overall, the translational technique performed better than the interpolated reach technique and was more robust for situations with larger mismatches between virtual and physical objects.
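The two remapping techniques described above reduce to simple geometry. The paper's exact mapping functions are not given in the abstract, so the sketch below uses hypothetical names: a static translational offset applied before the reach, and a dynamic interpolation that blends the virtual hand toward the virtual target in proportion to reach progress.

```python
def translational_remap(physical_hand, offset):
    """Static technique: a fixed offset between physical and virtual
    hand, introduced before the reaching action begins."""
    return tuple(p + o for p, o in zip(physical_hand, offset))

def interpolated_remap(physical_hand, start, physical_target, virtual_target):
    """Dynamic technique: physical and virtual hands coincide at the
    start of the reach and drift apart with progress, so each lands on
    its own target at the end of the motion."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    total = dist(start, physical_target)
    progress = min(1.0, dist(start, physical_hand) / total) if total else 1.0
    offset = tuple(v - p for v, p in zip(virtual_target, physical_target))
    return tuple(p + progress * o for p, o in zip(physical_hand, offset))

# At the start of the reach the virtual hand matches the physical hand;
# at the physical target it has been remapped onto the virtual target.
at_start = interpolated_remap((0.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                              (0.4, 0.0, 0.0), (0.4, 0.1, 0.0))
at_end = interpolated_remap((0.4, 0.0, 0.0), (0.0, 0.0, 0.0),
                            (0.4, 0.0, 0.0), (0.4, 0.1, 0.0))
```

The interpolated variant avoids a visible hand jump before the reach, at the cost of a velocity mismatch during it, which is one plausible reading of why the static offset proved more robust for large mismatches.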
Validation of smoking-related virtual environments for cue exposure therapy.
García-Rodríguez, Olaya; Pericot-Valverde, Irene; Gutiérrez-Maldonado, José; Ferrer-García, Marta; Secades-Villa, Roberto
2012-06-01
Craving is considered one of the main factors responsible for relapse after smoking cessation. Cue exposure therapy (CET) consists of controlled and repeated exposure to drug-related stimuli in order to extinguish associated responses. The main objective of this study was to assess the validity of 7 virtual reality environments for producing craving in smokers that can be used within the CET paradigm. Forty-six smokers and 44 never-smokers were exposed to 7 complex virtual environments with smoking-related cues that reproduce typical situations in which people smoke, and to a neutral virtual environment without smoking cues. Self-reported subjective craving and psychophysiological measures were recorded during the exposure. All virtual environments with smoking-related cues were able to generate subjective craving in smokers, while no increase was observed for the neutral environment. The most sensitive psychophysiological variable to craving increases was heart rate. The findings provide evidence of the utility of virtual reality for simulating real situations capable of eliciting craving. We also discuss how CET for smoking cessation can be improved through these virtual tools.
Ferrer-García, Marta; García-Rodríguez, Olaya; Gutiérrez-Maldonado, José; Pericot-Valverde, Irene; Secades-Villa, Roberto
2010-01-01
Virtual Reality environments that reproduce typical contexts associated with tobacco use may be useful for aiding smoking cessation. The main objective of this study was to assess the capacity of eight environments to produce the craving to smoke and determine the relation of craving to nicotine dependence and level of presence. The results show that all the environments were able to generate the desire to smoke; a direct relation was found between sense of presence and craving.
NASA employee utilizes Virtual Reality (VR) equipment
1991-10-28
S91-50404 (1 Nov 1991) --- Bebe Ly of the Information Systems Directorate's (ISD) Software Technology Branch at the Johnson Space Center (JSC) gives virtual reality a try. The stereo video goggles and headphones allow her to see and hear in a computer-generated world, and the gloves allow her to move around and grasp objects. Ly is a member of the team that developed the C Language Integrated Production System (CLIPS), which has been instrumental in developing several of the systems to be demonstrated in an upcoming Software Technology Exposition at JSC.
Holographic video at 40 frames per second for 4-million object points.
Tsang, Peter; Cheung, W-K; Poon, T-C; Zhou, C
2011-08-01
We propose a fast method for generating digital Fresnel holograms based on an interpolated wavefront-recording plane (IWRP) approach. Our method can be divided into two stages. First, a small, virtual IWRP is derived in a computation-free manner. Second, the IWRP is expanded into a Fresnel hologram with a pair of fast Fourier transform processes, which are realized with the graphics processing unit (GPU). We demonstrate state-of-the-art experimental results, capable of generating a 2048 × 2048 Fresnel hologram of around 4 × 10^6 object points at a rate of over 40 frames per second.
Virtual 3d City Modeling: Techniques and Applications
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2013-08-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city that contains graphic representations of buildings and other objects in 2.5D or 3D. Three main geomatics approaches are generally used for virtual 3D city model generation: in the first, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; in the third, many researchers use terrestrial images, applying close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data-input techniques (photogrammetry or laser techniques). The paper then gives an overview of techniques for generating virtual 3D city models using geomatics, the applications of virtual 3D city models, and conclusions with a short analysis of present trends in 3D city modeling. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has advantages and drawbacks. Point cloud models are a modern trend for virtual 3D city models.
Photo-realistic, scalable, geo-referenced virtual 3D city models are useful for many kinds of applications, such as planning, navigation, tourism, disaster management, transportation, municipal administration, urban environmental management, and the real-estate industry. The construction of virtual 3D city models has therefore been one of the most interesting research topics in recent years.
Valdés, Julio J; Barton, Alan J
2007-05-01
A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
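The key step this abstract relies on, selecting solutions along the Pareto front of the two objectives (classification error and structure-preservation error), is a generic non-dominated filter. A minimal sketch, with illustrative objective values; the NDA networks and genetic search that produce the candidate solutions are not shown:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions; in the paper's setting,
    each surviving solution yields one virtual reality space."""
    return [s for s in solutions if not any(dominates(t, s) for t in solutions)]

# Objective tuples: (classification error, structure-preservation error).
candidates = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.25), (0.25, 0.40)]
front = pareto_front(candidates)
# (0.25, 0.40) is dominated by (0.10, 0.30) and is dropped.
```

Constructing a family of spaces from this front, rather than a single space from one scalarized objective, is exactly the conceptual improvement the abstract claims over single-objective optimization.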
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback
NASA Astrophysics Data System (ADS)
Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve
2011-03-01
Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it was a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine as well as other tasks that require hand-eye coordination. One of the most unique characteristics of HUVR is that a user can place their hands inside of the virtual environment without occluding the 3D image. Built using open-source software and consumer level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.
An artificial reality environment for remote factory control and monitoring
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.
V-Man Generation for 3-D Real Time Animation. Chapter 5
NASA Technical Reports Server (NTRS)
Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang
2007-01-01
The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms from virtual reality, physical simulation, computer vision, robotics and artificial intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique set of skills manufactured during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate first animatable body meshes, second 3D garments and finally skinned body meshes.
An efficient hole-filling method based on depth map in 3D view generation
NASA Astrophysics Data System (ADS)
Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
A new virtual view is synthesized through depth-image-based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. First, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, using the "z-buffer" algorithm to solve the overlap problem. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses the information in the depth map to handle the image after DIBR. To improve the accuracy of the virtual image, inpainting starts from the background side. In the calculation of the priority, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the accuracy of the search. Experimental results show that the proposed method can effectively improve the quality of the 3D virtual view both subjectively and objectively.
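The abstract states that a depth term is added to Criminisi's confidence and data terms but does not give the weighting, so the sketch below uses an illustrative multiplicative combination that favors far (background) pixels, which is what makes inpainting proceed from the background side:

```python
def patch_priority(confidence, data_term, depth, max_depth):
    """Fill-front priority: Criminisi's C(p) * D(p), extended with a
    depth term that is larger for background (far) pixels. The exact
    form of the depth term here is an assumption, not the paper's."""
    depth_term = depth / max_depth if max_depth else 1.0
    return confidence * data_term * depth_term

# Fill-front pixels as (confidence, data term, depth); larger depth
# means farther from the camera, i.e. background.
front = [(0.9, 0.5, 30.0),   # strong patch, but foreground
         (0.9, 0.5, 90.0),   # same patch quality, background
         (0.6, 0.8, 90.0)]   # background with strong structure
best = max(front, key=lambda p: patch_priority(*p, max_depth=100.0))
```

With this weighting, background pixels on the fill front are processed first even when a foreground patch has equal confidence, so disoccluded holes are filled with background content rather than smeared foreground.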
Virtual reality, disability and rehabilitation.
Wilson, P N; Foreman, N; Stanton, D
1997-06-01
Virtual reality, or virtual environment computer technology, generates simulated objects and events with which people can interact. Existing and potential applications for this technology in the field of disability and rehabilitation are discussed. The main benefits identified for disabled people are that they can engage in a range of activities in a simulator relatively free from the limitations imposed by their disability, and they can do so in safety. Evidence that the knowledge and skills acquired by disabled individuals in simulated environments can transfer to the real world is presented. In particular, spatial information and life skills learned in a virtual environment have been shown to transfer to the real world. Applications for visually impaired people are discussed, and the potential for medical interventions and the assessment and treatment of neurological damage are considered. Finally some current limitations of the technology, and ethical concerns in relation to disability, are discussed.
Virtual hand: a 3D tactile interface to virtual environments
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Borrel, Paul
2008-02-01
We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
RandomSpot: A web-based tool for systematic random sampling of virtual slides.
Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E
2015-01-01
This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
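The core of systematic random sampling as described above, a single uniformly random offset followed by an equidistant grid across the region of interest, is compact enough to sketch directly. Function and parameter names are illustrative, not from RandomSpot:

```python
import random

def srs_points(x0, y0, width, height, dx, dy, rng=random):
    """Systematic random sampling: choose one random phase within a
    grid cell, then place equidistant points over the whole region.
    The random phase is what keeps the estimate unbiased."""
    ox = rng.uniform(0, dx)
    oy = rng.uniform(0, dy)
    points = []
    y = y0 + oy
    while y < y0 + height:
        x = x0 + ox
        while x < x0 + width:
            points.append((x, y))
            x += dx
        y += dy
    return points

# 100-unit spacing over a 1000 x 800 region of interest.
pts = srs_points(0, 0, 1000, 800, 100, 100, rng=random.Random(1))
```

Each point would then be shown to a pathologist for classification; class ratios such as tumor-to-stroma follow directly from the counts, with the number of required observations controlled by the grid spacing.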
Geovisualisation of relief in a virtual reality system on the basis of low-level aerial imagery
NASA Astrophysics Data System (ADS)
Halik, Łukasz; Smaczyński, Maciej
2017-12-01
The aim of the following paper was to present the geomatic process of transforming low-level aerial imagery obtained with unmanned aerial vehicles (UAVs) into a digital terrain model (DTM) and implementing the model in a virtual reality (VR) system. The object of the study was a natural aggregate heap of an irregular shape, with denivelations of up to 11 m. Based on the obtained photos, three point clouds (varying in level of detail) were generated for the 20,000 m² area. For further analyses, the researchers selected the point cloud with the best ratio of accuracy to output file size. This choice was made based on seven control points of the heap surveyed in the field and the corresponding points in the generated 3D model. The obtained differences of several centimetres between the control points in the field and those from the model might testify to the usefulness of the described algorithm for creating large-scale DTMs for engineering purposes. Finally, the chosen model was implemented in the VR system, which enables lifelike exploration of 3D terrain plasticity in real time thanks to the first-person-view (FPV) mode. In this mode, the user observes an object with the aid of a head-mounted display (HMD), experiencing the geovisualisation from the inside and virtually analysing the terrain as a direct animator of the observations.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Bucher, Urs J.; Statler, Irving C. (Technical Monitor)
1994-01-01
The influence of physically presented background stimuli on the perceived depth of optically overlaid, stereoscopic virtual images has been studied using head-mounted stereoscopic virtual image displays. These displays allow presentation of physically unrealizable stimulus combinations. Positioning an opaque physical object either at the initial perceived depth of the virtual image or at a position substantially in front of the virtual image causes the virtual image to perceptually move closer to the observer. In the case of objects positioned substantially in front of the virtual image, subjects often perceive the opaque object to become transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not due to occlusion cues. Accordingly, it may have an alternative cause, such as variation in the binocular vergence position of the eyes caused by introduction of the physical object. This effect may complicate the design of overlaid virtual image displays for near objects and appears to be related to the relative conspicuousness of the overlaid virtual image and the background. Consequently, it may be related to earlier analyses by John Foley, which modeled open-loop pointing errors to stereoscopically presented points of light in terms of errors in determining a reference point for interpreting observed retinal disparities. Implications for the design of see-through displays for manufacturing will be discussed.
Using virtual reality to test the regularity priors used by the human visual system
NASA Astrophysics Data System (ADS)
Palmer, Eric; Kwon, TaeKyu; Pizlo, Zygmunt
2017-09-01
Virtual reality applications provide an opportunity to test human vision in well-controlled scenarios that would be difficult to generate in real physical spaces. This paper presents a study intended to evaluate the importance of the regularity priors used by the human visual system. Using a CAVE simulation, subjects viewed virtual objects in a variety of experimental manipulations. In the first experiment, the subject was asked to count the objects in a scene that was viewed either right-side-up or upside-down for 4 seconds. The subject counted more accurately in the right-side-up condition regardless of the presence of binocular disparity or color. In the second experiment, the subject was asked to reconstruct the scene from a different viewpoint. Reconstructions were accurate, but the position and orientation error was twice as high when the scene was rotated by 45°, compared to 22.5°. As in the first experiment, there was little difference between monocular and binocular viewing. In the third experiment, the subject was asked to adjust the position of one object to match the depth extent to the frontal extent among three objects. Performance was best with symmetrical objects and became poorer with asymmetrical objects and poorest with only small circular markers on the floor. Finally, in the fourth experiment, we demonstrated reliable performance in monocular and binocular recovery of 3D shapes of objects standing naturally on the simulated horizontal floor. Based on these results, we conclude that gravity, horizontal ground, and symmetry priors play an important role in veridical perception of scenes.
Validation of virtual learning object to support the teaching of nursing care systematization.
Salvador, Pétala Tuani Candido de Oliveira; Mariz, Camila Maria Dos Santos; Vítor, Allyne Fortes; Ferreira Júnior, Marcos Antônio; Fernandes, Maria Isabel Domingues; Martins, José Carlos Amado; Santos, Viviane Euzébia Pereira
2018-01-01
Objective: to describe the content validation process of a Virtual Learning Object to support the teaching of nursing care systematization to nursing professionals. Method: a methodological study with a quantitative approach, developed according to the methodological framework of Pasquali's psychometrics and conducted from March to July 2016 using a two-stage Delphi procedure. Results: in the Delphi 1 stage, eight judges evaluated the Virtual Object; in the Delphi 2 stage, seven judges evaluated it. The seven screens of the Virtual Object were analyzed for the suitability of their contents. The Virtual Learning Object to support the teaching of nursing care systematization was considered valid in its content, with a Total Content Validity Coefficient of 0.96. Conclusion: it is expected that the Virtual Object can support the teaching of nursing care systematization in light of appropriate and effective pedagogical approaches.
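A Content Validity Coefficient of the kind reported (0.96) is commonly computed per item as the mean judge rating over the scale maximum, minus a small chance-agreement correction of (1/J) to the power J for J judges (the Hernández-Nieto correction often used within Pasquali's framework). A sketch under that assumption; the scale and ratings below are illustrative, not the study's data.

```python
def cvc(ratings, v_max=5):
    # Mean judge rating relative to the scale maximum, corrected for
    # chance agreement by (1/J)**J, where J is the number of judges.
    j = len(ratings)
    mean = sum(ratings) / j
    return mean / v_max - (1.0 / j) ** j

# Seven hypothetical judges rating one screen on a 1-5 relevance scale.
item_cvc = cvc([5, 5, 4, 5, 5, 4, 5])
```

The total coefficient for the object would then be the mean of the per-screen (or per-item) values.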
The Development of the Learning Object Standard Using a Pedagogic Approach: A Comparative Study.
ERIC Educational Resources Information Center
Yahya, Yazrina; Jenkins, John; Yusoff, Mohammed
Education is moving towards revenue generation from such channels as electronic learning, distance learning and virtual education. Hence learning technology standards are critical to the sector's success. Existing learning technology standards have focused on various topics such as metadata, question and test interoperability and others. However,…
Tangible display systems: direct interfaces for computer-based studies of surface appearance
NASA Astrophysics Data System (ADS)
Darling, Benjamin A.; Ferwerda, James A.
2010-02-01
When evaluating the surface appearance of real objects, observers engage in complex behaviors involving active manipulation and dynamic viewpoint changes that allow them to observe the changing patterns of surface reflections. We are developing a class of tangible display systems to provide these natural modes of interaction in computer-based studies of material perception. A first-generation tangible display was created from an off-the-shelf laptop computer containing an accelerometer and webcam as standard components. Using these devices, custom software estimated the orientation of the display and the user's viewing position. This information was integrated with a 3D rendering module so that rotating the display or moving in front of the screen would produce realistic changes in the appearance of virtual objects. In this paper, we consider the design of a second-generation system to improve the fidelity of the virtual surfaces rendered to the screen. With a high-quality display screen and enhanced tracking and rendering capabilities, a second-generation system will be better able to support a range of appearance perception applications.
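The display-orientation estimate from the built-in accelerometer can be sketched as a static tilt computation from the gravity vector. The axis convention and function name below are assumptions for illustration, not the authors' actual software.

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    # Pitch and roll (radians) from a static accelerometer reading of
    # gravity: the kind of orientation estimate used to re-render the
    # virtual surface as the laptop is tilted.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Device lying flat: gravity entirely on the z axis -> zero pitch/roll.
pitch, roll = tilt_from_accelerometer(0.0, 0.0, 9.81)
```

In the real system this estimate would be combined with the webcam's head-position estimate before each render, so both tilting the display and moving the head change the rendered reflections.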
Novel interactive virtual showcase based on 3D multitouch technology
NASA Astrophysics Data System (ADS)
Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian
2009-11-01
A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch virtual objects floating in the air from all four sides and interact with them by touching the four surfaces of the virtual showcase. Unlike traditional multitouch systems, this system can not only realize multitouch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing multitouch input captured simultaneously from the four planes. Experimental results show the potential of the proposed system for the exhibition of historical relics and other precious goods.
Heterogeneous Deformable Modeling of Bio-Tissues and Haptic Force Rendering for Bio-Object Modeling
NASA Astrophysics Data System (ADS)
Lin, Shiyong; Lee, Yuan-Shin; Narayan, Roger J.
This paper presents a novel technique for modeling soft biological tissues as well as the development of an innovative interface for bio-manufacturing and medical applications. Heterogeneous deformable models may be used to represent the actual internal structures of deformable biological objects, which possess multiple components and nonuniform material properties. Both heterogeneous deformable object modeling and accurate haptic rendering can greatly enhance the realism and fidelity of virtual reality environments. In this paper, a tri-ray node snapping algorithm is proposed to generate a volumetric heterogeneous deformable model from a set of object interface surfaces between different materials. A constrained local static integration method is presented for simulating deformation and accurate force feedback based on the material properties of a heterogeneous structure. Biological soft tissue modeling is used as an example to demonstrate the proposed techniques. By integrating the heterogeneous deformable model into a virtual environment, users can both observe the different materials inside a deformable object and interact with it by touching it with a haptic device. The presented techniques can be used for surgical simulation, bio-product design, bio-manufacturing, and medical applications.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Menges, Brian M.
1998-01-01
Errors in the localization of nearby virtual objects presented via see-through, helmet mounted displays are examined as a function of viewing conditions and scene content in four experiments using a total of 38 subjects. Monocular, biocular or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. These errors apparently arise from the occlusion of the physical background by the optically superimposed virtual objects. But they are modified by subjects' accommodative competence and specific viewing conditions. The apparent physical size and transparency of the virtual objects and physical surfaces respectively are influenced by their relative position when superimposed. The design implications of the findings are discussed in a concluding section.
A standardized set of 3-D objects for virtual reality research and applications.
Peeters, David
2018-06-01
The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow for science to move forward more quickly.
Analysis Methodology for Balancing Authority Cooperation in High Penetration of Variable Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, Yuri V.; Etingov, Pavel V.; Zhou, Ning
2010-02-01
With the rapidly growing penetration level of wind and solar generation, the challenges of managing the variability and uncertainty of intermittent renewable generation become more and more significant. The problem of power variability and uncertainty is exacerbated when each balancing authority (BA) works locally and separately to balance its own subsystem. The virtual BA concept encompasses various forms of collaboration between individual BAs to manage power variability and uncertainty. The virtual BA will have a wide-area control capability for managing its operational balancing requirements in different time frames. This coordination results in improved efficiency and reliability of power system operation while facilitating the high-level integration of green, intermittent energy resources. Several strategies for virtual BA implementation, such as ACE diversity interchange (ADI), wind-only BA, BA consolidation, dynamic scheduling, regulation and load-following sharing, and extreme-event impact studies, are discussed in this report. The objective of such strategies is to allow individual BAs within a large power grid to help each other deal with power variability. Innovative methods have been developed to simulate the balancing operation of BAs. These methods assess BA operation through a number of metrics, such as capacity, ramp rate, ramp duration, energy and cycling requirements, to compare the performance of different virtual BA strategies. The report builds a systematic framework for evaluating BA consolidation and coordination. Results for case studies show that significant economic and reliability benefits can be gained. The merits and limitations of each virtual BA strategy are investigated. The report provides guidelines for the power industry to evaluate the coordination or consolidation method. The application of the developed strategies in cooperation with several regional BAs is in progress in several off-spring projects.
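Two of the balancing metrics mentioned above, capacity and ramp-rate requirements, can be illustrated on a toy net-load (load minus wind and solar) series. A minimal sketch, not the report's methodology, which first separates the series into regulation and load-following components before computing such metrics.

```python
def ramp_requirements(net_load, dt_min=1.0):
    # Capacity envelope and the steepest up/down ramps (per minute)
    # that a balancing authority would have to cover for this series.
    ramps = [(b - a) / dt_min for a, b in zip(net_load, net_load[1:])]
    return {
        "capacity_up": max(net_load),
        "capacity_down": min(net_load),
        "max_ramp_up": max(ramps),
        "max_ramp_down": min(ramps),
    }

# Hypothetical 5-sample net-load series in MW, 1-minute resolution.
req = ramp_requirements([100.0, 104.0, 101.0, 98.0, 103.0])
```

Computing these metrics for each BA alone and then for a consolidated (virtual) BA shows the sharing benefit: the consolidated requirements are typically smaller than the sum of the individual ones, because local variations partially cancel.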
Wallmeier, Ludwig; Kish, Daniel; Wiegrebe, Lutz; Flanagin, Virginia L
2015-03-01
Some blind humans have developed the remarkable ability to detect and localize objects through the auditory analysis of self-generated tongue clicks. These echolocation experts show a corresponding increase in 'visual' cortex activity when listening to echo-acoustic sounds. Echolocation in real-life settings involves multiple reflections as well as active sound production, neither of which has been systematically addressed. We developed a virtualization technique that allows participants to actively perform such biosonar tasks in virtual echo-acoustic space during magnetic resonance imaging (MRI). Tongue clicks, emitted in the MRI scanner, are picked up by a microphone, convolved in real time with the binaural impulse responses of a virtual space, and presented via headphones as virtual echoes. In this manner, we investigated the brain activity during active echo-acoustic localization tasks. Our data show that, in blind echolocation experts, activations in the calcarine cortex are dramatically enhanced when a single reflector is introduced into otherwise anechoic virtual space. A pattern-classification analysis revealed that, in the blind, calcarine cortex activation patterns could discriminate left-side from right-side reflectors. This was found in both blind experts, but the effect was significant for only one of them. In sighted controls, 'visual' cortex activations were insignificant, but activation patterns in the planum temporale were sufficient to discriminate left-side from right-side reflectors. Our data suggest that blind and echolocation-trained, sighted subjects may recruit different neural substrates for the same active-echolocation task. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a virtual object is projected into the real world, with which researchers can interact. There are several limitations to a purely VR or AR application within the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image processing techniques to generate 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information, i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world, while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real time into the virtual environment.
Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
Simulation of Physical Experiments in Immersive Virtual Environments
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Wasfy, Tamer M.
2001-01-01
An object-oriented, event-driven immersive virtual environment is described for the creation of virtual labs (VLs) for simulating physical experiments. Discussion focuses on a number of aspects of the VLs, including interface devices, software objects, and various applications. The VLs interface with output devices, including immersive stereoscopic screen(s) and stereo speakers, and a variety of input devices, including body tracking (head and hands), haptic gloves, wand, joystick, mouse, microphone, and keyboard. The VL incorporates the following types of primitive software objects: interface objects, support objects, geometric entities, and finite elements. Each object encapsulates a set of properties, methods, and events that define its behavior, appearance, and functions. A container object allows grouping of several objects. Applications of the VLs include viewing the results of the physical experiment, viewing a computer simulation of the physical experiment, simulation of the experiment's procedure, computational steering, and remote control of the physical experiment. In addition, the VL can be used as a risk-free (safe) environment for training. The implementation of virtual structures testing machines, virtual wind tunnels, and a virtual acoustic testing facility is described.
Designing 3 Dimensional Virtual Reality Using Panoramic Image
NASA Astrophysics Data System (ADS)
Wan Abd Arif, Wan Norazlinawati; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Abdullah, Azrai; Sivapalan, Subarna
Demand to improve the quality of presentation in the knowledge-sharing field is high, driven by rapidly growing technology. The need for technology-based learning and training led to the idea of developing an Oil and Gas Plant Virtual Environment (OGPVE) for the benefit of our future. A panoramic virtual reality learning environment is essential to help educators overcome the limitations of traditional technical writing lessons. Virtual reality helps users understand better by providing simulations of real-world and hard-to-reach environments with a high degree of realism and interactivity. Thus, in order to create courseware that achieves this objective, accurate images of the intended scenarios must be acquired. The panorama shows the OGPVE and helps users generate ideas about what they have learnt. This paper discusses part of the development of panoramic virtual reality. The important phases in developing a successful panoramic image are image acquisition and image stitching or mosaicing. In this paper, the combination of wide field-of-view (FOV) and close-up images used in this panoramic development is also discussed.
Levy
1996-08-01
New interactive computer technologies are having a significant influence on medical education, training, and practice. The newest innovation in computer technology, virtual reality, allows an individual to be immersed in a dynamic computer-generated, three-dimensional environment and can provide realistic simulations of surgical procedures. A new virtual reality hysteroscope passes through a sensing device that synchronizes movements with a three-dimensional model of a uterus. Force feedback is incorporated into this model, so the user actually experiences the collision of an instrument against the uterine wall or the sensation of the resistance or drag of a resectoscope as it cuts through a myoma in a virtual environment. A variety of intrauterine pathologies and procedures are simulated, including hyperplasia, cancer, resection of a uterine septum, polyp, or myoma, and endometrial ablation. This technology will be incorporated into comprehensive training programs that will objectively assess hand-eye coordination and procedural skills. It is possible that by incorporating virtual reality into hysteroscopic training programs, a decrease in the learning curve and the number of complications presently associated with the procedures may be realized. Prospective studies are required to assess these potential benefits.
Learning Anatomy via Mobile Augmented Reality: Effects on Achievement and Cognitive Load
ERIC Educational Resources Information Center
Küçük, Sevda; Kapakin, Samet; Göktas, Yüksel
2016-01-01
Augmented reality (AR), a new generation of technology, has attracted the attention of educators in recent years. In this study, a MagicBook was developed for a neuroanatomy topic by using mobile augmented reality (mAR) technology. This technology integrates virtual learning objects into the real world and allow users to interact with the…
The detection of 'virtual' objects using echoes by humans: Spectral cues.
Rowan, Daniel; Papadopoulos, Timos; Archer, Lauren; Goodhew, Amanda; Cozens, Hayley; Lopez, Ricardo Guzman; Edwards, David; Holmes, Hannah; Allen, Robert
2017-07-01
Some blind people use echoes to detect discrete, silent objects to support their spatial orientation/navigation, independence, safety and wellbeing. The acoustical features that people use for this are not well understood. Listening to changes in spectral shape due to the presence of an object could be important for object detection and avoidance, especially at short range, although it was not previously known whether this is possible with echolocation-related sounds. Bands of noise were convolved with recordings of binaural impulse responses of objects in an anechoic chamber to create 'virtual objects', which were analysed and played to sighted and blind listeners inexperienced in echolocation. The sounds were also manipulated to remove cues unrelated to spectral shape. Most listeners could accurately detect hard flat objects using changes in spectral shape. The useful spectral changes for object detection occurred above approximately 3 kHz, as with object localisation. However, energy in the sounds below 3 kHz was required to exploit changes in spectral shape for object detection, whereas energy below 3 kHz impaired object localisation. Further recordings showed that the spectral changes were diminished by room reverberation. While good high-frequency hearing is generally important for echolocation, the optimal echo-generating stimulus will probably depend on the task. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
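The stimulus-generation step above, convolving a noise band with the binaural impulse responses recorded with the object present, can be sketched directly. The impulse responses below are toy stand-ins (a direct sound plus a weak delayed echo), not the anechoic-chamber recordings, and the real stimuli were further manipulated to remove non-spectral cues.

```python
import random

def convolve(x, h):
    # Direct discrete convolution: y[n] = sum_k x[k] * h[n - k].
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def make_virtual_object(noise, left_ir, right_ir):
    # Binaural 'virtual object': the noise band filtered through the
    # left- and right-ear impulse responses of the object scene.
    return convolve(noise, left_ir), convolve(noise, right_ir)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(500)]
# Hypothetical impulse responses: direct sound plus a weak object echo.
ir_l = [1.0] + [0.0] * 149 + [0.3]
ir_r = [1.0] + [0.0] * 119 + [0.2]
left, right = make_virtual_object(noise, ir_l, ir_r)
```

The object's presence alters the spectral shape of the resulting binaural signal (comb-filter-like interference between direct sound and echo), which is the cue the listening experiments isolate.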
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
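The Point-Kernel step described above can be illustrated as a sum over kernels of source strength, exponential attenuation, a buildup factor for scatter, and geometric spreading over 4πr². The buildup term below is a crude linear stand-in for the Geometric-Progression fit used in the paper, and all constants and coordinates are illustrative.

```python
import math

def dose_rate(point_kernels, detector, mu=0.06):
    # Sum each kernel's contribution: strength * buildup * attenuation,
    # spread over the sphere of radius r around the kernel.
    total = 0.0
    for (x, y, z, strength) in point_kernels:
        r = math.dist((x, y, z), detector)
        buildup = 1.0 + mu * r  # placeholder for the G-P buildup formula
        total += strength * buildup * math.exp(-mu * r) / (4 * math.pi * r * r)
    return total

# Two hypothetical kernels from a simplified point-cloud model (cm, arbitrary units).
rate = dose_rate([(0, 0, 0, 1.0), (1, 0, 0, 1.0)], (3.0, 0.0, 0.0))
```

The adaptive part of the method then controls how many kernels represent the point-cloud model: fewer, coarser kernels far from the detector, finer ones nearby, trading computation time against accuracy.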
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timothy Shaw; Anthony Baratta; Vaughn Whisker
2005-02-28
Task 4 report of a 3-year DOE NERI-sponsored effort evaluating immersive virtual reality (CAVE) technology for design review, construction planning, and maintenance planning and training for next-generation nuclear power plants. The program covers the development of full-scale virtual mockups generated from 3D CAD data and presented in a CAVE visualization facility. This report focuses on using full-scale virtual mockups for nuclear power plant training applications.
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real-world space. The interior simulator is developed as an example AR application of the proposed method. Using it, users can visually simulate the placement of virtual furniture and articles in a living room, so that they can easily design the room interior without placing real furniture and articles, viewing it from many different locations and orientations in real time. In our system, two base images of a real-world space are captured from two different views to define a projective coordinate frame for the 3D space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real-world space is captured with a hand-held camera while tracking non-metric feature points, and the virtual object is overlaid onto the sequence using the projective relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).
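Transferring a registered virtual object between views of a planar scene region reduces to mapping image points through a 3×3 homography. A minimal sketch of that mapping with a hypothetical matrix; the authors' full projective-registration pipeline, built on tracked feature points, is not reproduced here.

```python
def apply_homography(H, x, y):
    # Map an image point through a 3x3 planar homography in homogeneous
    # coordinates, then divide by w to return to pixel coordinates.
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

# Hypothetical homography: identity plus a translation of (10, 5) pixels.
H = [[1, 0, 10], [0, 1, 5], [0, 0, 1]]
u, v = apply_homography(H, 100.0, 50.0)
```

In the actual system such a relationship, estimated frame-to-frame from the tracked feature points, is what lets the virtual furniture stay anchored in the scene without any magnetic tracker.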
Fast in-situ tool inspection based on inverse fringe projection and compact sensor heads
NASA Astrophysics Data System (ADS)
Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard
2016-11-01
Inspection of machine elements is an important task in production processes in order to ensure the quality of produced parts and to gather feedback for the continuous improvement process. A new measuring system is presented, which is capable of performing the inspection of critical tool geometries, such as gearing elements, inside the forming machine. To meet the constraints on sensor head size and inspection time imposed by the limited space inside the machine and the cycle time of the process, the measuring device employs a combination of endoscopy techniques with the fringe projection principle. Compact gradient index lenses enable a compact design of the sensor head, which is connected to a CMOS camera and a flexible micro-mirror based projector via flexible fiber bundles. Using common fringe projection patterns, the system achieves measuring times of less than five seconds. To further reduce the time required for inspection, the generation of inverse fringe projection patterns has been implemented for the system. Inverse fringe projection speeds up the inspection process by employing object-adapted patterns, which enable the detection of geometry deviations in a single image. Two different approaches to generate object adapted patterns are presented. The first approach uses a reference measurement of a manufactured tool master to generate the inverse pattern. The second approach is based on a virtual master geometry in the form of a CAD file and a ray-tracing model of the measuring system. Virtual modeling of the measuring device and inspection setup allows for geometric tolerancing for free-form surfaces by the tool designer in the CAD-file. A new approach is presented, which uses virtual tolerance specifications and additional simulation steps to enable fast checking of metric tolerances. Following the description of the pattern generation process, the image processing steps required for inspection are demonstrated on captures of gearing geometries.
AR Feels "Softer" than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality.
Gaffary, Yoren; Le Gouis, Benoit; Marchal, Maud; Argelaguet, Ferran; Arnaldi, Bruno; Lecuyer, Anatole
2017-11-01
Does it feel the same when you touch an object in Augmented Reality (AR) or in Virtual Reality (VR)? In this paper we study and compare the haptic perception of stiffness of a virtual object in two situations: (1) a purely virtual environment versus (2) a real and augmented environment. We have designed an experimental setup based on a Microsoft HoloLens and a haptic force-feedback device, enabling to press a virtual piston, and compare its stiffness successively in either Augmented Reality (the virtual piston is surrounded by several real objects all located inside a cardboard box) or in Virtual Reality (the same virtual piston is displayed in a fully virtual scene composed of the same other objects). We have conducted a psychophysical experiment with 12 participants. Our results show a surprising bias in perception between the two conditions. The virtual piston is on average perceived stiffer in the VR condition compared to the AR condition. For instance, when the piston had the same stiffness in AR and VR, participants would select the VR piston as the stiffer one in 60% of cases. This suggests a psychological effect as if objects in AR would feel "softer" than in pure VR. Taken together, our results open new perspectives on perception in AR versus VR, and pave the way to future studies aiming at characterizing potential perceptual biases.
Linkenauger, Sally A.; Leyrer, Markus; Bülthoff, Heinrich H.; Mohler, Betty J.
2013-01-01
The notion of body-based scaling suggests that our body and its action capabilities are used to scale the spatial layout of the environment. Here we present four studies supporting this perspective by showing that the hand acts as a metric which individuals use to scale the apparent sizes of objects in the environment. To test this, however, one must be able to manipulate the size and/or dimensions of the perceiver's hand, which is difficult in the real world given the fixed dimensions of real hands. To overcome this limitation, we used virtual reality to manipulate the dimensions of participants' fully tracked virtual hands and investigate the influence on the perceived size and shape of virtual objects. In a series of experiments, using several measures, we show that individuals' estimates of the sizes of virtual objects differ depending on the size of their virtual hand, in the direction consistent with the body-based scaling hypothesis. Additionally, we found that these effects were specific to participants' virtual hands rather than another avatar's hands or a salient familiar-sized object. While these studies provide support for a body-based approach to the scaling of spatial layout, they also demonstrate the influence of virtual bodies on the perception of virtual environments. PMID:23874681
Object Creation and Human Factors Evaluation for Virtual Environments
NASA Technical Reports Server (NTRS)
Lindsey, Patricia F.
1998-01-01
The main objective of this project is to provide test objects for simulated environments utilized by the recently established Army/NASA Virtual Innovations Lab (ANVIL) at Marshall Space Flight Center, Huntsville, AL. The objective of the ANVIL lab is to provide virtual reality (VR) models and environments and to provide visualization and manipulation methods for the purpose of training and testing. Visualization equipment used in the ANVIL lab includes head-mounted and boom-mounted immersive virtual reality display devices. Objects in the environment are manipulated using a data glove, hand controller, or mouse. These simulated objects are solid or surfaced three-dimensional models. They may be viewed or manipulated from any location within the environment and may be viewed on-screen or via immersive VR. The objects are created using various CAD modeling packages and are converted into the virtual environment using dVise. This enables the object or environment to be viewed from any angle or distance for training or testing purposes.
Operator Localization of Virtual Objects
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Menges, Brian M.; Null, Cynthia H. (Technical Monitor)
1998-01-01
Errors in the localization of nearby virtual objects presented via see-through, helmet-mounted displays are examined as a function of viewing conditions and scene content. Monocular, biocular or stereoscopic presentation of the virtual objects, accommodation (required focus), subjects' age, and the position of physical surfaces are examined. Nearby physical surfaces are found to introduce localization errors that differ depending upon the other experimental factors. The apparent physical size and transparency of the virtual objects and physical surfaces, respectively, are also influenced by their relative position when superimposed. Design implications are discussed.
Duffy, Fergal J; Verniere, Mélanie; Devocelle, Marc; Bernard, Elise; Shields, Denis C; Chubb, Anthony J
2011-04-25
We introduce CycloPs, software for the generation of virtual libraries of constrained peptides including natural and nonnatural commercially available amino acids. The software is written in the cross-platform Python programming language, and features include generating virtual libraries in one-dimensional SMILES and three-dimensional SDF formats, suitable for virtual screening. The stand-alone software is capable of filtering the virtual libraries using empirical measurements, including peptide synthesizability by standard peptide synthesis techniques, stability, and the druglike properties of the peptide. The software and accompanying Web interface is designed to enable the rapid generation of large, structurally diverse, synthesizable virtual libraries of constrained peptides quickly and conveniently, for use in virtual screening experiments. The stand-alone software, and the Web interface for evaluating these empirical properties of a single peptide, are available at http://bioware.ucd.ie .
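The core enumeration step, building linear peptide SMILES strings from residue fragments and expanding them into a combinatorial library, can be sketched in a few lines of Python. The fragment table and function names below are illustrative, not CycloPs's actual API, and cyclization and synthesizability filters are omitted:

```python
from itertools import product

# Backbone SMILES fragments for a small, hypothetical residue subset
# (CycloPs itself covers natural and nonnatural commercial amino acids).
RESIDUES = {
    "G": "NCC(=O)",                   # glycine
    "A": "N[C@@H](C)C(=O)",           # L-alanine
    "F": "N[C@@H](Cc1ccccc1)C(=O)",   # L-phenylalanine
}

def peptide_smiles(sequence):
    """Chain residue fragments N-to-C and cap the C-terminus with -OH."""
    return "".join(RESIDUES[aa] for aa in sequence) + "O"

def enumerate_library(alphabet, length):
    """All linear peptides of a given length over the residue alphabet."""
    return {"".join(seq): peptide_smiles(seq)
            for seq in product(alphabet, repeat=length)}

lib = enumerate_library("GA", 2)
# e.g. the dipeptide Gly-Ala: lib["GA"] == "NCC(=O)N[C@@H](C)C(=O)O"
```

Real use would pass the SMILES on to a cheminformatics toolkit for 3D embedding and property filtering, as the abstract describes.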
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
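Phase-shifting digital holography of the kind mentioned above recovers the complex object field from several interferograms with a stepped reference phase. A minimal four-step sketch, assuming a unit-amplitude plane-wave reference (the paper's actual synthetic-aperture, three-wavelength capture is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic complex object field on a small sensor grid.
obj = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
r = 1.0  # assumed reference-wave amplitude

def intensity(delta):
    """Interferogram recorded with reference phase shift delta."""
    ref = r * np.exp(1j * delta)
    return np.abs(ref + obj) ** 2

I0, I90, I180, I270 = (intensity(d) for d in
                       (0, np.pi / 2, np.pi, 3 * np.pi / 2))

# Standard four-step reconstruction: the cross terms isolate
# Re(obj) and Im(obj), so the complex field is recovered exactly.
recovered = ((I0 - I180) + 1j * (I90 - I270)) / (4 * r)
```

Repeating the capture at three wavelengths and stitching sensor positions (the synthetic aperture) yields the wide-area, full-color object field used for the CGH.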
Kosterhon, Michael; Gutenberg, Angelika; Kantelhardt, Sven R; Conrad, Jens; Nimer Amr, Amr; Gawehn, Joachim; Giese, Alf
2017-08-01
A feasibility study. To develop a method based on the DICOM standard which transfers complex 3-dimensional (3D) trajectories and objects from external planning software to any navigation system for planning and intraoperative guidance of complex spinal procedures. There have been many reports about navigation systems with embedded planning solutions but only few on how to transfer planning data generated in external software. Patients' computed tomography (CT) and/or magnetic resonance volume data sets of the affected spinal segments were imported into Amira software, reconstructed into 3D images and fused with magnetic resonance data for soft-tissue visualization, resulting in a virtual patient model. Objects needed for surgical plans or surgical procedures, such as trajectories, implants or surgical instruments, were either digitally constructed or CT-scanned and virtually positioned within the 3D model as required. As a crucial step of this method, these objects were fused with the patient's original diagnostic image data, resulting in a single DICOM sequence containing all preplanned information necessary for the operation. By this step it was possible to import complex surgical plans into any navigation system. We applied this method not only to intraoperatively adjustable implants and objects under experimental settings, but also planned and successfully performed surgical procedures, such as the percutaneous lateral approach to the lumbar spine following preplanned trajectories and a thoracic tumor resection including intervertebral body replacement, using an optical navigation system. To demonstrate the versatility and compatibility of the method with an entirely different navigation system, virtually preplanned lumbar transpedicular screw placement was performed with a robotic guidance system.
The presented method not only allows virtual planning of complex surgical procedures, but also allows objects and surgical plans to be exported to any navigation or guidance system able to read DICOM data sets, expanding the possibilities of embedded planning software.
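The fusion idea, voxelizing planned objects into the patient's own image volume so any DICOM-capable system can display them, can be illustrated with a toy array-level sketch. All names, sizes, and the burned-in value are illustrative; real export would go through a DICOM toolkit, which is omitted here:

```python
import numpy as np

def burn_trajectory(volume, start, end, value=3000, n=200):
    """Rasterize a planned straight trajectory into a CT-like voxel
    volume as high-density voxels -- a simplified stand-in for fusing
    planning objects into the patient's diagnostic image series."""
    vol = volume.copy()
    pts = np.linspace(start, end, n)                 # sample along the line
    idx = np.round(pts).astype(int)                  # nearest voxel indices
    idx = np.clip(idx, 0, np.array(vol.shape) - 1)   # stay inside the volume
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = value
    return vol

ct = np.zeros((32, 32, 32), dtype=np.int16)          # placeholder CT volume
planned = burn_trajectory(ct, (2, 2, 2), (30, 30, 30))
```

Because the result is just a modified image volume, it can be written back as an ordinary DICOM series, which is what makes the plan readable by any navigation system.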
Virtual reality simulators: valuable surgical skills trainers or video games?
Willis, Ross E; Gomez, Pedro Pablo; Ivatury, Srinivas J; Mitra, Hari S; Van Sickle, Kent R
2014-01-01
Virtual reality (VR) and physical model (PM) simulators differ in terms of whether the trainee is manipulating actual 3-dimensional objects (PM) or computer-generated 3-dimensional objects (VR). Much like video games (VG), VR simulators utilize computer-generated graphics. These differences may have profound effects on the utility of VR and PM training platforms. In this study, we aimed to determine whether a relationship exists between VR, PM, and VG platforms. VR and PM simulators for laparoscopic camera navigation ([LCN], experiment 1) and flexible endoscopy ([FE] experiment 2) were used in this study. In experiment 1, 20 laparoscopic novices played VG and performed 0° and 30° LCN exercises on VR and PM simulators. In experiment 2, 20 FE novices played VG and performed colonoscopy exercises on VR and PM simulators. In both experiments, VG performance was correlated with VR performance but not with PM performance. Performance on VR simulators did not correlate with performance on respective PM models. VR environments may be more like VG than previously thought. © 2013 Published by Association of Program Directors in Surgery on behalf of Association of Program Directors in Surgery.
NASA Astrophysics Data System (ADS)
Starodubtsev, Illya
2017-09-01
The paper describes the implementation of a gesture-based system for interaction with virtual objects. It also discusses common problems of interacting with virtual objects and specific requirements for virtual and augmented reality interfaces.
ERIC Educational Resources Information Center
Paulsson, Fredrik; Naeve, Ambjorn
2006-01-01
Based on existing Learning Object taxonomies, this article suggests an alternative Learning Object taxonomy, combined with a general Service Oriented Architecture (SOA) framework, aiming to transfer the modularized concept of Learning Objects to modularized Virtual Learning Environments. The taxonomy and SOA-framework exposes a need for a clearer…
A convertor and user interface to import CAD files into worldtoolkit virtual reality systems
NASA Technical Reports Server (NTRS)
Wang, Peter Hor-Ching
1996-01-01
Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, a Polhemus Isotrak sensor, two Polhemus Fastrak sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for users to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats.
WTK functions are object-oriented in their naming convention, are grouped into classes, and provide easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
High-level virtual reality simulator for endourologic procedures of lower urinary tract.
Reich, Oliver; Noll, Margarita; Gratzke, Christian; Bachmann, Alexander; Waidelich, Raphaela; Seitz, Michael; Schlenker, Boris; Baumgartner, Reinhold; Hofstetter, Alfons; Stief, Christian G
2006-06-01
To analyze the limitations of existing simulators for urologic techniques, and then test and evaluate a novel virtual reality (VR) simulator for endourologic procedures of the lower urinary tract. Surgical simulation using VR has the potential to have a tremendous impact on surgical training, testing, and certification. Endourologic procedures seem to be an ideal target for VR systems. The URO-Trainer features genuine VR, obtained from digital video footage of more than 400 endourologic diagnostic and therapeutic procedures, as well as data from cross-sectional imaging. The software offers infinite random variations of the anatomy and pathologic features for diagnosis and surgical intervention. An advanced haptic force feedback is incorporated. Virtual cystoscopy and resection of bladder tumors were evaluated by 24 medical students and 12 residents at our department. The system was assessed by more than 150 international urologists with varying experience at different conventions and workshops from March 2003 to September 2004. Because of these evaluations and constant evolutions, the final version provides a genuine representation of endourologic procedures. Objective data are generated by a tutoring system that has documented evident teaching benefits for medical students and residents in cystoscopy and treatment of bladder tumors. The URO-Trainer represents the latest generation of endoscopy simulators. Authentic visual and haptic sensations, unlimited virtual cases, and an intelligent tutoring system make this modular system an important improvement in computer-based training and quality control in urology.
NASA Astrophysics Data System (ADS)
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
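A conventional pairwise-hyperboloid objective of the kind the VFOM builds on can be sketched as follows. The square sensor layout, unit wave speed, and brute-force grid search are illustrative assumptions, not the paper's actual optimizer:

```python
import numpy as np
from itertools import combinations

v = 1.0  # assumed wave speed (arbitrary units)
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_src = np.array([3.0, 7.0])

# Noise-free arrival times; real picks would carry picking errors.
arrivals = np.linalg.norm(sensors - true_src, axis=1) / v

def objective(p):
    """Sum of squared hyperbolic residuals over all sensor pairs:
    each pair constrains the source to one sheet of a hyperbola."""
    d = np.linalg.norm(sensors - p, axis=1)
    return sum((d[i] - d[j] - v * (arrivals[i] - arrivals[j])) ** 2
               for i, j in combinations(range(len(sensors)), 2))

# Coarse grid search over the monitored area; the minimum sits at the
# common intersection of the pairwise hyperbolas.
xs = np.linspace(0, 10, 101)
best = min(((objective(np.array([x, y])), (x, y))
            for x in xs for y in xs))[1]
```

The VFOM's contribution is an objective shaped to stay well-behaved when some arrival-time differences are badly wrong; this sketch shows only the shared pairwise-intersection idea.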
The Virtual Wave Observatory (VWO): A Portal to Heliophysics Wave Data
NASA Technical Reports Server (NTRS)
Fung, Shing F.
2010-01-01
The Virtual Wave Observatory (VWO) is one of the discipline-oriented virtual observatories that help form the nascent NASA Heliophysics Data Environment to support heliophysics research. It focuses on supporting the searching and accessing of distributed heliophysics wave data and information that are available online. Since the occurrence of a natural wave phenomenon often depends on the underlying geophysical (i.e., context) conditions under which the waves are generated and propagate, and the observed wave characteristics can also depend on the location of observation, VWO will implement wave-data search by context conditions and location, in addition to searching by time and observing platforms (both space-based and ground-based). This paper describes the VWO goals, the basic design objectives, and the key VWO functionality to be expected. Members of the heliophysics community are invited to participate in VWO development in order to ensure its usefulness and success.
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to display the real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
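The merging step, mapping each depth camera's cloud into a shared world frame before stacking them into one model, reduces to a rigid transform per camera. A minimal sketch; the extrinsics below are hard-coded placeholders standing in for the paper's registration method:

```python
import numpy as np

def rot_z(theta):
    """Rotation about the vertical axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical extrinsics for three depth cameras placed around the
# object; a calibrated rig would supply these from registration.
extrinsics = [(rot_z(a), np.array([np.cos(a), np.sin(a), 0.0]))
              for a in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]

def to_world(points_cam, R, t):
    """Map camera-frame points (N x 3) into the shared world frame."""
    return points_cam @ R.T + t

def merge(clouds):
    """Transform each camera's cloud and stack into one synthetic model."""
    return np.vstack([to_world(c, R, t)
                      for c, (R, t) in zip(clouds, extrinsics)])
```

The merged cloud is then what the elemental-image generation step consumes.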
Zenner, André; Krüger, Antonio
2017-04-01
We define the concept of Dynamic Passive Haptic Feedback (DPHF) for virtual reality by introducing the weight-shifting physical DPHF proxy object Shifty. This concept combines actuators known from active haptics and physical proxies known from passive haptics to construct proxies that automatically adapt their passive haptic feedback. We describe the concept behind our ungrounded weight-shifting DPHF proxy Shifty and the implementation of our prototype. We then investigate how Shifty can, by automatically changing its internal weight distribution, enhance the user's perception of virtual objects interacted with in two experiments. In a first experiment, we show that Shifty can enhance the perception of virtual objects changing in shape, especially in length and thickness. Here, Shifty was shown to increase the user's fun and perceived realism significantly, compared to an equivalent passive haptic proxy. In a second experiment, Shifty is used to pick up virtual objects of different virtual weights. The results show that Shifty enhances the perception of weight and thus the perceived realism by adapting its kinesthetic feedback to the picked-up virtual object. In the same experiment, we additionally show that specific combinations of haptic, visual and auditory feedback during the pick-up interaction help to compensate for visual-haptic mismatch perceived during the shifting process.
Grasping trajectories in a virtual environment adhere to Weber's law.
Ozana, Aviad; Berman, Sigal; Ganel, Tzvi
2018-06-01
Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to the object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements to a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement, and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment can differ from those performed in real space, and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are further discussed.
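Adherence to Weber's law can be checked by asking whether variability in maximum grip aperture (the JND proxy) grows in proportion to object size, i.e., whether the Weber fraction stays roughly constant. A synthetic sketch with an assumed 5% Weber fraction; the sizes and sample counts are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = np.array([20.0, 30.0, 40.0, 50.0])  # object widths, mm
weber_fraction = 0.05                       # assumed constant fraction

# Simulated maximum grip apertures: noise SD proportional to size,
# which is what Weber's law predicts for perceptual size processing.
apertures = {s: s + rng.normal(0.0, weber_fraction * s, size=200)
             for s in sizes}

# Empirical test: does within-size variability scale with size?
jnds = np.array([apertures[s].std() for s in sizes])
fractions = jnds / sizes
# Weber's law: fractions roughly constant (~0.05 here). Analytic 3D
# grasping would instead show jnds flat across sizes.
```

Plotting `jnds` against `sizes` makes the diagnostic visual: a line through the origin indicates Weber-law behavior, a flat line indicates analytic processing.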
Towards control of dexterous hand manipulations using a silicon Pattern Generator.
Russell, Alexander; Tenore, Francesco; Singhal, Girish; Thakor, Nitish; Etienne-Cummings, Ralph
2008-01-01
This work demonstrates how an in-silicon Pattern Generator (PG) can be used as a low-power control system for rhythmic hand movements in an upper-limb prosthesis. Neural spike patterns, which encode rotation of a cylindrical object, were implemented in a custom Very Large Scale Integration chip. PG control was tested by using the decoded control signals to actuate the fingers of a virtual prosthetic arm. This system provides a framework for prototyping and controlling dexterous hand manipulation tasks in a compact and efficient solution.
Chen, Karen B; Ponto, Kevin; Tredinnick, Ross D; Radwin, Robert G
2015-06-01
This study was a proof of concept for virtual exertions, a novel method that involves the use of body tracking and electromyography for grasping and moving projections of objects in virtual reality (VR). The user views objects in his or her hands during rehearsed co-contractions of the same agonist-antagonist muscles normally used for the desired activities to suggest exerting forces. Unlike physical objects, virtual objects are images and lack mass. There is currently no practical physically demanding way to interact with virtual objects to simulate strenuous activities. Eleven participants grasped and lifted similar physical and virtual objects of various weights in an immersive 3-D Cave Automatic Virtual Environment. Muscle activity, localized muscle fatigue, ratings of perceived exertions, and NASA Task Load Index were measured. Additionally, the relationship between levels of immersion (2-D vs. 3-D) was studied. Although the overall magnitude of biceps activity and workload were greater in VR, muscle activity trends and fatigue patterns for varying weights within VR and physical conditions were the same. Perceived exertions for varying weights were not significantly different between VR and physical conditions. Perceived exertion levels and muscle activity patterns corresponded to the assigned virtual loads, which supported the hypothesis that the method evoked the perception of physical exertions and showed that the method was promising. Ultimately this approach may offer opportunities for research and training individuals to perform strenuous activities under potentially safer conditions that mimic situations while seeing their own body and hands relative to the scene. © 2014, Human Factors and Ergonomics Society.
Can hazard risk be communicated through a virtual experience?
Mitchell, J T
1997-09-01
Cyberspace, defined by William Gibson as a consensual hallucination, now refers to all computer-generated interactive environments. Virtual reality, one of a class of interactive cyberspaces, allows us to create and interact directly with objects not available in the everyday world. Despite successes in the entertainment and aviation industries, this technology has been called a 'solution in search of a problem'. The purpose of this commentary is to suggest such a problem: the inability to acquire experience with a hazard to motivate mitigation. Direct experience with a hazard has been demonstrated as a powerful incentive to adopt mitigation measures. While we lack the ability to summon hazard events at will in order to gain access to that experience, a virtual environment can provide an arena where potential victims are exposed to a hazard's effects. Immersion as an active participant within the hazard event through virtual reality may stimulate users to undertake mitigation steps that might otherwise remain undone. This paper details the possible direction in which virtual reality may be applied to hazards mitigation through a discussion of the technology, the role of hazard experience, the creation of a hazard simulation and the issues constraining implementation.
A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment
Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae
2015-01-01
User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609
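The composite-virtual-object idea, combining member objects' capabilities under service rules that fire on context, can be sketched as a small class model. The class, rule, and device names below are illustrative, not the Web of Object platform's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VirtualObject:
    """Virtual counterpart of a physical device (names illustrative)."""
    name: str
    capabilities: Dict[str, Callable[[], object]]

@dataclass
class CompositeVirtualObject:
    """Combines virtual objects with service rules keyed on context,
    so a context change reconfigures which capabilities run."""
    members: List[VirtualObject]
    rules: Dict[str, List[str]] = field(default_factory=dict)

    def on_context(self, context: str):
        """Invoke every member capability the rule binds to this context."""
        wanted = self.rules.get(context, [])
        return [vo.capabilities[c]() for vo in self.members
                for c in wanted if c in vo.capabilities]

# Toy emergency-service scenario echoing the paper's use case.
smoke = VirtualObject("smoke_sensor", {"read": lambda: 0.7})
siren = VirtualObject("siren", {"alert": lambda: "sounding"})
cvo = CompositeVirtualObject([smoke, siren],
                             rules={"emergency": ["read", "alert"]})
# cvo.on_context("emergency") -> [0.7, "sounding"]
```

In the paper this wiring is expressed declaratively in a semantic ontology rather than in code; the sketch only shows the runtime composition pattern.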
A Standard-Compliant Virtual Meeting System with Active Video Object Tracking
NASA Astrophysics Data System (ADS)
Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting
2002-12-01
This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
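Chroma-key extraction of the kind used for the video objects can be sketched as a per-pixel green-dominance test; the margin threshold is an illustrative choice, and the paper's mosaic-based tracking is far more involved:

```python
import numpy as np

def chroma_key_mask(rgb, margin=40):
    """Foreground mask: True where a pixel is NOT chroma green,
    i.e., where green does not dominate both red and blue by `margin`."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    background = (g > r + margin) & (g > b + margin)
    return ~background

# Tiny synthetic frame: green backdrop with one reddish "object" pixel.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[..., 1] = 255          # pure green everywhere
frame[0, 0] = (200, 30, 40)  # the foreground pixel
mask = chroma_key_mask(frame)
```

The mask is what lets the extracted object be composited, scaled, and repositioned inside the virtual meeting scene.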
Altered sense of Agency in children with spastic cerebral palsy
2011-01-01
Background Children diagnosed with spastic Cerebral Palsy (CP) often show perceptual and cognitive problems, which may contribute to their functional deficit. Here we investigated whether an altered ability to determine if an observed movement is performed by themselves (sense of agency) contributes to the motor deficit in children with CP. Methods Three groups, (1) CP children, (2) healthy peers, and (3) healthy adults, produced straight drawing movements on a pen-tablet that was not visible to the subjects. The produced movement was presented as a virtual moving object on a computer screen. Subjects had to evaluate after each trial whether the movement of the object on the computer screen was generated by themselves or by a computer program which randomly manipulated the visual feedback by angling the trajectories 0, 5, 10, 15 or 20 degrees away from the target. Results Healthy adults executed the movements in 3.10 seconds, whereas healthy children and especially CP children were significantly slower (p < 0.002) (on average 4.56 seconds and 5.43 seconds respectively). There was also a statistical difference between the healthy and age-matched CP children (p = 0.037). When the trajectory of the object generated by the computer corresponded to the subject's own movements, all three groups reported that they were responsible for the movement of the object. When the trajectory of the object deviated by more than 10 degrees from the target, healthy adults and children more frequently than CP children reported that the computer was responsible for the movement of the object. CP children consequently also attempted to compensate more frequently for the perturbation generated by the computer. Conclusions We conclude that CP children have a reduced ability to determine whether movement of a virtual moving object is caused by themselves or by an external source.
We suggest that this may be related to a poor integration of their intention of movement with visual and proprioceptive information about the performed movement and that altered sense of agency may be an important functional problem in children with CP. PMID:22129483
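The feedback manipulation, angling the drawn trajectory away from the target by a fixed number of degrees, amounts to a 2D rotation of the trajectory about its start point. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def perturb_trajectory(points, angle_deg):
    """Rotate a drawn 2D trajectory (N x 2) about its start point,
    mimicking the 0-20 degree feedback offsets used in the study."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    origin = points[0]
    return (points - origin) @ R.T + origin
```

At 0 degrees the displayed object tracks the pen exactly; larger angles make the on-screen object veer from the target, which is what subjects had to attribute to themselves or to the computer.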
SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, M; Tobias, R; Pankuch, M
Purpose: The objective was to develop a method for dose distribution calculation of spatially-fractionated-GRID-radiotherapy (SFGRT) in the Eclipse treatment-planning-system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the iso-center level together with matching beam geometries to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to the measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence in the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially-fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: The method to create the virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well to the measured OFs for SFGRT clinical use.
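A virtual GRID block acts as a patterned fluence mask in front of the beam. A simplified, non-divergent square-lattice sketch; hole pitch, radius, and block transmission are illustrative values, not the commercial brass block's geometry (which, as the abstract notes, must also follow beam divergence):

```python
import numpy as np

def grid_block_mask(shape, pitch, hole_radius, transmission=0.05):
    """Fluence map of an idealized GRID block: 1.0 under the holes,
    a small residual transmission under the blocked regions."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    # Offset to the nearest hole centre on a square lattice.
    dy = (yy % pitch) - pitch / 2
    dx = (xx % pitch) - pitch / 2
    open_beam = dy**2 + dx**2 <= hole_radius**2
    return np.where(open_beam, 1.0, transmission)

fluence = grid_block_mask((50, 50), pitch=10, hole_radius=3)
```

Multiplying an open-field dose plane by such a mask gives a first-order picture of the peak-and-valley pattern; the actual TPS calculation additionally models scatter and the loss of charged-particle equilibrium in the small openings.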
Advanced Technology for Portable Personal Visualization.
1992-06-01
Progress Report, January-June 1992. Topics include interactive radiosity and Virtual-Environment Ultrasound. Planned work: extend the system, with support for textures, model partitioning, more complex radiosity emitters, and the replacement of model parts with objects from our … model libraries; add real-time, interactive radiosity to the display program on Pixel-Planes 5; and move the real-time model mesh-generation to the …
NASA Astrophysics Data System (ADS)
Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.
2016-05-01
The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. 
We describe experiments using both static and moving objects.
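The fixation-selection step described above (comparing the real and virtual camera images with local Gaussians to build an error mask) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; a zero-padded box mean stands in for the Gaussian smoothing.

```python
def local_mean(img, r=1):
    """Zero-padded box mean (a cheap stand-in for local Gaussian smoothing)."""
    h, w = len(img), len(img[0])
    k = (2 * r + 1) ** 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = sum(img[yy][xx]
                            for yy in range(max(0, y - r), min(h, y + r + 1))
                            for xx in range(max(0, x - r), min(w, x + r + 1))) / k
    return out

def next_fixation(real, virtual):
    """Error mask = smoothed |real - virtual|; fixate where the mismatch peaks."""
    diff = [[abs(a - b) for a, b in zip(ra, va)] for ra, va in zip(real, virtual)]
    mask = local_mean(diff)
    return max(((mask[y][x], (y, x))
                for y in range(len(mask)) for x in range(len(mask[0]))))[1]

# The virtual world is missing a bright 3x3 patch centered at (2, 2),
# so the system should saccade there next.
real = [[9 if 1 <= y <= 3 and 1 <= x <= 3 else 0 for x in range(5)] for y in range(5)]
virtual = [[0] * 5 for _ in range(5)]
fix = next_fixation(real, virtual)
```

The returned coordinate is where the expensive local modeling (depth, shape, texture, motion) would be applied next.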
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI, allowing CAR-compliant components to be reused in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, with no JNI bridging code required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Z; Greskovich, J; Xia, P
Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) were used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and were clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline.
Z Shen: None; K Bzdusek: an employee of Philips Healthcare; J Greskovich: None; P Xia: received research grants from Philips Healthcare and Siemens Healthcare.
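The geometric metrics used in this comparison can be illustrated with a minimal sketch: Dice coefficient and Hausdorff distance computed on toy voxel sets. This is not the authors' implementation, only the standard definitions.

```python
import math

def dice(a, b):
    """Dice coefficient between two ROIs given as sets of voxel indices."""
    return 2.0 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two small point sets (Euclidean)."""
    def directed(p, q):
        return max(min(math.dist(u, v) for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

# A 10x10x10 ROI shifted by one voxel along x: Dice = 2*900/2000 = 0.9.
ref = {(x, y, z) for x in range(10) for y in range(10) for z in range(10)}
warped = {(x, y, z) for x in range(1, 11) for y in range(10) for z in range(10)}
d = dice(ref, warped)
h = hausdorff({(0, 0, 0)}, {(3, 4, 0)})  # a 3-4-5 triangle
```

In the study, such metrics are computed between ROIs mapped by each DIR and the same ROIs mapped by the known reference DVF.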
Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model
NASA Astrophysics Data System (ADS)
Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man
2017-03-01
Computer generated hologram (CGH) is becoming increasingly important for 3D displays in various applications, including virtual reality. In CGH, holographic fringe patterns are generated by numerical calculation on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of a 3D object. This paper proposes a new fast CGH generation method based on the sparsity of the CGH of a 3D point cloud model. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane using a sparse FFT (sFFT). We observe that the CGH of a layer of a 3D object is sparse, so the dominant CGH can be rapidly generated from a small set of signals by sFFT. Experimental results show that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
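The sparsity idea can be illustrated with a toy one-dimensional sketch: compute the spectrum of a layer signal, keep only the dominant coefficients, and reconstruct. A naive DFT stands in for the paper's sFFT, and the single-frequency test signal is an assumption for illustration.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def sparse_approx(x, keep):
    """Zero all but the `keep` largest-magnitude spectral coefficients, then
    invert: the kind of sparsity an sFFT exploits to skip most of the work."""
    X = dft(x)
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return idft([X[k] if k in kept else 0.0 for k in range(len(X))])

# A single-frequency "layer" signal is recovered exactly from just 2 coefficients.
x = [math.cos(2 * math.pi * 3 * t / 32) for t in range(32)]
y = sparse_approx(x, keep=2)
```

A real hologram layer is 2D and complex-valued, but the principle is the same: when few coefficients dominate, most of the transform need not be computed.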
3D Modelling of Kizildag Monument
NASA Astrophysics Data System (ADS)
Karauguz, Güngör; Kalayci, İbrahim; Öğütcü, Sermet
2016-10-01
The most important cultural property that nations possess is their historical accumulation, and bringing it to light, taking measures to preserve it, or at least maintaining the continuity of transferring it to the next generations by means of current techniques and technology ought to be the business of present generations. Although intensive documentation and archiving studies are nowadays done by means of classical techniques, alongside studies aimed at preserving historical objects, one-to-one or scaled modelling was not possible until recently. Computing devices and their ongoing reflection, acknowledged as digital technology, are widely used in many areas and make it possible to document and archive historical works. Even virtual forms in digital environments can be transferred to the next generations as scaled, one-to-one models. Within this scope, every single artefact category belonging to any era or civilization present in our country can be considered a separate study area, and any individual work can likewise be evaluated in a separate category. It is also possible to construct travelable virtual 3D museums in which these artefacts can be visited. Under the auspices of these technologies, it is quite possible to construct single virtual indoor museums or, at the final stage, a travelable 3D open-air museum: a platform or, more precisely, a data system that spreads all over the country on a broad spectrum. With a long-term, significant and extensive study and a substantial organization, such a data system can be established, which would also serve as a serious infrastructure for alternative tourism. Located beside a stepped altar and right above the Kizildag IV inscription, the offering pot has been destroyed and has rolled a few meters down the south slope of the mound.
Every time we visit these artefacts with our undergraduate students, unfortunately, we observe further destruction. This case study aims to construct the extensive data system mentioned above; its first stage, in the context of historical artefacts, is gathering information about the Kizildag findings using the previously mentioned technologies. This paper explains how the geometry and texture of historical objects can be automatically constructed, modelled and visualized with digital image processing software. In this context, a second study was conducted, aimed at obtaining visuals of the Hittite hieroglyph inscriptions located at Kizildag using the digital photogrammetry technique. After the visuals are obtained, they are evaluated in photogrammetric software, which endows the final 3D virtual product with its original texture. In this way, the currently damaged artefacts mentioned above can be handed down to the next generations in the form of scaled virtual models. We consider this to be of particular importance.
Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.
Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A
2013-01-01
Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
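The first of the four steps, color-based segmentation of the hand, might be sketched as below. The RGB skin rule is a generic heuristic and an assumption here, not the authors' calibrated, per-user classifier.

```python
def segment_hand(img, rule=None):
    """Color-based hand segmentation: flag pixels matching a skin-color rule.
    The default rule is a common RGB heuristic (an assumption); a real system
    would calibrate it per user and per lighting condition."""
    if rule is None:
        rule = lambda r, g, b: r > 95 and g > 40 and b > 20 and r > g and r > b
    return [[1 if rule(*px) else 0 for px in row] for row in img]

# Left column resembles skin tones; right column does not.
img = [[(200, 120, 90), (10, 10, 10)],
       [(180, 100, 80), (0, 0, 255)]]
mask = segment_hand(img)
```

In the full pipeline, this mask is composited over the repainted background so the real hand occludes the virtual tool without misalignment.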
One New Method to Generate 3-Dimensional Virtual Mannequin
NASA Astrophysics Data System (ADS)
Xiu-jin, Shi; Zhi-jun, Wang; Jia-jin, Le
The personal virtual mannequin is very important in electronic made-to-measure (eMTM) systems. We present a simple new method to generate a personal virtual mannequin. First, the characteristic information of the customer's body is obtained from two photos. Secondly, human body part templates corresponding to the customer are selected from a template library. Thirdly, these templates are modified and assembled according to certain rules to generate a personalized 3-dimensional human model, realizing the virtual mannequin. Experimental results show that the method is easy and feasible.
Direct manipulation of virtual objects
NASA Astrophysics Data System (ADS)
Nguyen, Long K.
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). 
This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
A microbased shared virtual world prototype
NASA Technical Reports Server (NTRS)
Pitts, Gerald; Robinson, Mark; Strange, Steve
1993-01-01
Virtual reality (VR) allows sensory immersion and interaction with a computer-generated environment. The user adopts a physical interface with the computer, through Input/Output devices such as a head-mounted display, data glove, mouse, keyboard, or monitor, to experience an alternate universe. What this means is that the computer generates an environment which, in its ultimate extension, becomes indistinguishable from the real world. 'Imagine a wraparound television with three-dimensional programs, including three-dimensional sound, and solid objects that you can pick up and manipulate, even feel with your fingers and hands.... 'Imagine that you are the creator as well as the consumer of your artificial experience, with the power to use a gesture or word to remold the world you see and hear and feel. That part is not fiction... three-dimensional computer graphics, input/output devices, computer models that constitute a VR system make it possible, today, to immerse yourself in an artificial world and to reach in and reshape it.' Our research's goal was to propose a feasibility experiment in the construction of a networked virtual reality system, making use of current personal computer (PC) technology. The prototype was built using Borland C compiler, running on an IBM 486 33 MHz and a 386 33 MHz. Each game currently is represented as an IPX client on a non-dedicated Novell server. We initially posed the two questions: (1) Is there a need for networked virtual reality? (2) In what ways can the technology be made available to the most people possible?
NASA Technical Reports Server (NTRS)
Nguyen, Lac; Kenney, Patrick J.
1993-01-01
Development of interactive virtual environments (VE) has typically consisted of three primary activities: model (object) development, model relationship tree development, and environment behavior definition and coding. The model and relationship tree development activities are accomplished with a variety of well-established graphic library (GL) based programs - most utilizing graphical user interfaces (GUI) with point-and-click interactions. Because of this GUI format, little programming expertise on the part of the developer is necessary to create the 3D graphical models or to establish interrelationships between the models. However, the third VE development activity, environment behavior definition and coding, has generally required the greatest amount of time and programmer expertise. Behaviors, characteristics, and interactions between objects and the user within a VE must be defined via command-line C coding prior to rendering the environment scenes. In an effort to simplify this environment behavior definition phase for non-programmers, and to provide easy access to model and tree tools, a graphical interface and development tool has been created. The principal thrust of this research is to effect rapid development and prototyping of virtual environments. This presentation will discuss the 'Visual Interface for Virtual Interaction Development' (VIVID) tool, an X-Windows-based system employing drop-down menus for user selection of program access, models and trees, behavior editing, and code generation. Examples of these selections will be highlighted in this presentation, as will the currently available program interfaces. The functionality of this tool allows non-programming users access to all facets of VE development while providing experienced programmers with a collection of pre-coded behaviors. In conjunction with its existing interfaces and predefined suite of behaviors, future development plans for VIVID will be described.
These include incorporation of dual user virtual environment enhancements, tool expansion, and additional behaviors.
A review of virtual cutting methods and technology in deformable objects.
Wang, Monan; Ma, Yuzheng
2018-06-05
Virtual cutting of deformable objects has been a research topic for more than a decade and has been used in many areas, especially in surgery simulation. We refer to the relevant literature and briefly describe the related research. The virtual cutting method is introduced, and we discuss the benefits and limitations of these methods and explore possible research directions. Virtual cutting is a category of object deformation. It needs to represent the deformation of models in real time as accurately, robustly and efficiently as possible. To accurately represent models, the method must be able to: (1) model objects with different material properties; (2) handle collision detection and collision response; and (3) update the geometry and topology of the deformable model that is caused by cutting. Virtual cutting is widely used in surgery simulation, and research of the cutting method is important to the development of surgery simulation. Copyright © 2018 John Wiley & Sons, Ltd.
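Requirement (3), updating the geometry and topology of the model under cutting, can be illustrated with the simplest scheme in this family, element removal: elements straddling the cutting plane are deleted and the mesh separates into pieces. This sketch is illustrative only; the practical methods surveyed re-mesh or split elements rather than delete them.

```python
def cut_by_plane(triangles, normal, offset):
    """Element-removal cutting: drop triangles the cutting plane passes
    through, and return the two resulting pieces of the mesh."""
    def side(p):
        s = sum(a * b for a, b in zip(p, normal)) - offset
        return (s > 0) - (s < 0)
    positive, negative = [], []
    for tri in triangles:
        sides = {side(p) for p in tri}
        if sides == {1}:
            positive.append(tri)
        elif sides == {-1}:
            negative.append(tri)
        # triangles straddling (or touching) the plane are removed: the "cut"
    return positive, negative

tris = [((-2, 0, 0), (-1, 1, 0), (-1, -1, 0)),   # entirely left of the plane
        ((2, 0, 0), (1, 1, 0), (1, -1, 0)),      # entirely right of the plane
        ((-1, 0, 0), (1, 0, 0), (0, 1, 0))]      # straddles the plane x = 0
positive, negative = cut_by_plane(tris, normal=(1, 0, 0), offset=0)
```

Element removal loses volume, which is why the surveyed literature prefers subdivision and re-meshing schemes for surgery simulation; the topological bookkeeping, however, is the same in spirit.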
ERIC Educational Resources Information Center
Chang, Hsin-Yi; Wu, Hsin-Kai; Hsu, Ying-Shao
2013-01-01
virtual objects or information overlaying physical objects or environments, resulting in a mixed reality in which virtual objects and real environments coexist in a meaningful way to augment learning…
Automated recycling of chemistry for virtual screening and library design.
Vainio, Mikko J; Kogej, Thierry; Raubacher, Florian
2012-07-23
An early stage drug discovery project needs to identify a number of chemically diverse and attractive compounds. These hit compounds are typically found through high-throughput screening campaigns. The diversity of the chemical libraries used in screening is therefore important. In this study, we describe a virtual high-throughput screening system called Virtual Library. The system automatically "recycles" validated synthetic protocols and available starting materials to generate a large number of virtual compound libraries, and allows for fast searches in the generated libraries using a 2D fingerprint based screening method. Virtual Library links the returned virtual hit compounds back to experimental protocols to quickly assess the synthetic accessibility of the hits. The system can be used as an idea generator for library design to enrich the screening collection and to explore the structure-activity landscape around a specific active compound.
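The 2D fingerprint screening step might look like the following sketch, assuming fingerprints stored as sets of on-bits and Tanimoto similarity; the Virtual Library system's actual fingerprint type and similarity cutoff are not specified in the abstract, so those details are assumptions.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints stored as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

def screen(query, library, threshold=0.5):
    """Rank library members by similarity to the query, keeping hits above
    the threshold; names could then be traced back to synthetic protocols."""
    hits = [(tanimoto(query, fp), name) for name, fp in library.items()]
    return sorted((h for h in hits if h[0] >= threshold), reverse=True)

query = {1, 5, 9, 12}
library = {"virt-001": {1, 5, 9, 12, 20},
           "virt-002": {2, 3},
           "virt-003": {1, 5}}
hits = screen(query, library)
```

Linking each hit name back to its generating protocol and starting materials is what makes the returned virtual compounds synthetically accessible.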
3D augmented reality with integral imaging display
NASA Astrophysics Data System (ADS)
Shen, Xin; Hua, Hong; Javidi, Bahram
2016-06-01
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.
2017-03-01
Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must be on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of left and right lungs along pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
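A minimal sketch of the recursive subdivision idea is given below. For brevity it splits along the highest-variance coordinate axis, a stand-in for the PCA principal axis, and emits region centroids as the virtual landmarks; the actual method runs full PCA on the object region at each level.

```python
def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

def virtual_landmarks(pts, depth):
    """Recursively split along the highest-variance coordinate axis (a
    stand-in for the principal axis) and emit centroids as landmarks.
    Note the landmarks need not lie on, or even inside, the object."""
    marks = [centroid(pts)]
    if depth == 0 or len(pts) < 2:
        return marks
    dims = len(pts[0])
    var = [sum((p[i] - marks[0][i]) ** 2 for p in pts) for i in range(dims)]
    axis = var.index(max(var))
    lo = [p for p in pts if p[axis] <= marks[0][axis]]
    hi = [p for p in pts if p[axis] > marks[0][axis]]
    for half in (lo, hi):
        if half:
            marks += virtual_landmarks(half, depth - 1)
    return marks

# Two clusters: the global centroid (a landmark outside both clusters)
# precedes the per-cluster centroids found at the next level.
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
marks = virtual_landmarks(pts, depth=1)
```

Because each landmark is defined relative to the recursively computed regions, the construction is translation-invariant by design, matching the invariance claim above.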
Haptic interfaces: Hardware, software and human performance
NASA Technical Reports Server (NTRS)
Srinivasan, Mandayam A.
1995-01-01
Virtual environments are computer-generated synthetic environments with which a human user can interact to perform a wide variety of perceptual and motor tasks. At present, most of the virtual environment systems engage only the visual and auditory senses, and not the haptic sensorimotor system that conveys the sense of touch and feel of objects in the environment. Computer keyboards, mice, and trackballs constitute relatively simple haptic interfaces. Gloves and exoskeletons that track hand postures have more interaction capabilities and are available in the market. Although desktop and wearable force-reflecting devices have been built and implemented in research laboratories, the current capabilities of such devices are quite limited. To realize the full promise of virtual environments and teleoperation of remote systems, further developments of haptic interfaces are critical. In this paper, the status and research needs in human haptics, technology development and interactions between the two are described. In particular, the excellent performance characteristics of Phantom, a haptic interface recently developed at MIT, are highlighted. Realistic sensations of single point of contact interactions with objects of variable geometry (e.g., smooth, textured, polyhedral) and material properties (e.g., friction, impedance) in the context of a variety of tasks (e.g., needle biopsy, switch panels) achieved through this device are described and the associated issues in haptic rendering are discussed.
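Single-point-of-contact haptic rendering of a simple geometry can be sketched with a penalty-based force law; the sphere primitive and stiffness value below are assumptions for illustration, not the Phantom's actual rendering algorithm.

```python
def contact_force(probe, center, radius, stiffness=800.0):
    """Penalty-based rendering of a single point of contact with a sphere:
    force = stiffness * penetration depth, along the outward surface normal.
    Stiffness in N/m and coordinates in meters are assumed units."""
    d = [p - c for p, c in zip(probe, center)]
    dist = sum(v * v for v in d) ** 0.5
    if dist >= radius or dist == 0.0:
        return (0.0, 0.0, 0.0)  # no contact (or degenerate center hit)
    penetration = radius - dist
    return tuple(stiffness * penetration * (v / dist) for v in d)

free = contact_force((0.0, 0.0, 0.2), (0.0, 0.0, 0.0), radius=0.1)
touching = contact_force((0.0, 0.0, 0.09), (0.0, 0.0, 0.0), radius=0.1)
```

Rendering friction, texture, and impedance, as described above, layers additional tangential and velocity-dependent terms onto this basic normal-force loop, which must run at roughly kilohertz rates for stable feel.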
Guided exploration in virtual environments
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas
2001-06-01
We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field-based camera data generating system. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.
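The potential-field idea might be sketched as follows: an attractive well at the current object of interest plus repulsive terms at obstacles, with the camera stepped downhill by finite-difference gradients. The gains, step size, and quadratic/inverse functional forms are assumed for illustration and are not taken from the paper.

```python
def potential(p, goal, obstacles, k_att=1.0, k_rep=0.5):
    """Attractive well at the object of interest plus repulsive bumps at
    obstacles; regenerating this field accommodates changing user interest."""
    d2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    u = k_att * d2(p, goal)
    for o in obstacles:
        u += k_rep / (d2(p, o) + 1e-6)
    return u

def guide_camera(start, goal, obstacles, step=0.05, iters=400):
    """Step the camera downhill on the potential field, generating a path
    toward the object of interest without any precalculated route."""
    p, eps = list(start), 1e-4
    for _ in range(iters):
        for i in range(len(p)):
            q = p[:]
            q[i] += eps
            g = (potential(q, goal, obstacles) - potential(p, goal, obstacles)) / eps
            p[i] -= step * g
    return tuple(p)

end = guide_camera(start=(4.0, 0.0), goal=(0.0, 0.0), obstacles=[(2.0, 1.5)])
```

Because the field can be rebuilt whenever the user's declared interest changes, no path needs to be precalculated, which is the property the abstract emphasizes.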
Integration of the virtual 3D model of a control system with the virtual controller
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2015-11-01
Nowadays the design process includes simulation analysis of the different components of a constructed object. It involves the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the adopted object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created. For this purpose, VR (Virtual Reality) class software was applied. In the elaborated interactive application, procedures were created to control the translatory-motion drive system, the rotary-motion drive system and the drive system of the manipulator. Additionally, a procedure was created for turning the output crushing head, mounted on the last element of the manipulator, on and off. In the interactive application, procedures were also established for receiving input data from external software on the basis of dynamic data exchange (DDE), which allow controlling the actuators of the particular control systems of the considered machine. In the next stage of the work, the program for the virtual controller was created in the ladder diagram (LD) language. The control program was developed on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine for trenchless work with the virtual controller is an application written in a high-level language (Visual Basic).
In the developed application, procedures were created that collect data from the virtual controller running in simulation mode and transfer them to the interactive application, in which the operation of the adopted research object is verified. The work carried out allowed for the integration of the virtual model of the control system of the tunneling machine with the virtual controller, enabling the verification of its operation.
Chen, H F; Dong, X C; Zen, B S; Gao, K; Yuan, S G; Panaye, A; Doucet, J P; Fan, B T
2003-08-01
An efficient virtual and rational drug design method is presented. It combines virtual bioactive compound generation with a 3D-QSAR model and docking. Using this method, it is possible to generate a large number of highly diverse molecules and find virtual active lead compounds. The method was validated by the study of a set of anti-tumor drugs. With the pharmacophore constraints obtained by DISCO implemented in SYBYL 6.8, 97 virtual bioactive compounds were generated, and their anti-tumor activities were predicted by CoMFA. Eight structures with high activity were selected and screened by the 3D-QSAR model. The most active generated structure was further investigated by modifying its structure in order to increase the activity. A comparative docking study with the telomeric receptor was carried out, and the results showed that the generated structures could form more stable complexes with the receptor than the reference compound selected from experimental data. This investigation showed that the proposed method is a feasible way to perform rational drug design with high screening efficiency.
NASA Astrophysics Data System (ADS)
Tan, Kian Lam; Lim, Chen Kim
2017-10-01
In the last decade, cultural heritage, including historical sites, has been reconstructed as digital heritage. UNESCO defines digital heritage as "cultural, educational, scientific and administrative resources, as well as technical, legal, medical and other kinds of information created digitally, or converted into digital form from existing analogue resources". In addition, digital heritage is doubling in size every two years and is expected to grow tenfold between 2013 and 2020. In order to attract and stir the interest of younger generations in digital heritage, gamification has been widely promoted. In this research, a virtual walkthrough combined with gamification is proposed for learning about and exploring historical places in Malaysia using a mobile device. In conjunction with the Visit Perak 2017 Campaign, the virtual walkthrough is proposed for Kellie's Castle in Perak. The objectives of this research are twofold: 1) to model and design an innovative mobile game for the virtual walkthrough application, and 2) to attract tourists to explore and learn about historical places using sophisticated graphics from Augmented Reality. The efficiency and effectiveness of the mobile virtual walkthrough will be assessed by international and local tourists. In conclusion, this research is expected to pervasively improve the cultural and historical knowledge of learners.
Kobayashi, Hajime; Ohkubo, Masaki; Narita, Akihiro; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Sone, Shusuke
2017-01-01
Objective: We propose the application of virtual nodules to evaluate the performance of computer-aided detection (CAD) of lung nodules in cancer screening using low-dose CT. Methods: The virtual nodules were generated based on the spatial resolution measured for a CT system used in an institution providing cancer screening and were fused into clinical lung images obtained at that institution, allowing site specificity. First, we validated virtual nodules as an alternative to artificial nodules inserted into a phantom. In addition, we compared the results of CAD analysis between the real nodules (n = 6) and the corresponding virtual nodules. Subsequently, virtual nodules of various sizes and contrasts between nodule density and background density (ΔCT) were inserted into clinical images (n = 10) and submitted for CAD analysis. Results: In the validation study, 46 of 48 virtual nodules had the same CAD results as artificial nodules (kappa coefficient = 0.913). Real nodules and the corresponding virtual nodules showed the same CAD results. The detection limits of the tested CAD system were determined in terms of size and density of peripheral lung nodules; we demonstrated that a nodule with a 5-mm diameter was detected when the nodule had a ΔCT > 220 HU. Conclusion: Virtual nodules are effective in evaluating CAD performance using site-specific scan/reconstruction conditions. Advances in knowledge: Virtual nodules can be an effective means of evaluating site-specific CAD performance. The methodology for guiding the detection limit for nodule size/density might be a useful evaluation strategy. PMID:27897029
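The core of the virtual-nodule methodology, convolving a nodule object function with the target scanner's spatial-resolution kernels, can be illustrated in 2D. The sketch below blurs a uniform disc of a given ΔCT with an in-plane Gaussian PSF of a given FWHM; the Gaussian PSF shape, all parameter values, and the omission of the through-plane SSP are simplifying assumptions for illustration only.

```python
# Illustrative 2D sketch of generating a virtual nodule (disc object
# function blurred by a target scanner's PSF); parameters are assumptions.
import numpy as np

def gaussian_psf(size, fwhm_px):
    """Normalized 2D Gaussian PSF patch with the given FWHM in pixels."""
    sigma = fwhm_px / 2.3548          # FWHM = 2*sqrt(2*ln 2)*sigma
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def virtual_nodule(diameter_px, delta_ct, fwhm_px, size=33):
    """Uniform disc of contrast delta_ct convolved with the target PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    disc = (xx**2 + yy**2 <= (diameter_px / 2) ** 2).astype(float) * delta_ct
    psf = gaussian_psf(size, fwhm_px)
    # circular FFT convolution is adequate while the disc sits well
    # inside the patch; ifftshift moves the PSF center to the origin
    return np.real(np.fft.ifft2(np.fft.fft2(disc)
                                * np.fft.fft2(np.fft.ifftshift(psf))))
```

The resulting patch can then be fused into a clinical background image by voxel-wise addition, which is conceptually how site-specific test cases for CAD evaluation are assembled.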
Virtual cathode microwave generator having annular anode slit
Kwan, Thomas J. T.; Snell, Charles M.
1988-01-01
A microwave generator is provided for generating microwaves substantially from virtual cathode oscillation. Electrons are emitted from a cathode and accelerated to an anode which is spaced apart from the cathode. The anode has an annular slit therethrough effective to form the virtual cathode. The anode is at least one range thickness relative to electrons reflecting from the virtual cathode. A magnet is provided to produce an optimum magnetic field having the field strength effective to form an annular beam from the emitted electrons in substantial alignment with the annular anode slit. The magnetic field, however, does permit the reflected electrons to axially diverge from the annular beam. The reflected electrons are absorbed by the anode in returning to the real cathode, such that substantially no reflexing electrons occur. The resulting microwaves are produced with a single dominant mode and are substantially monochromatic relative to conventional virtual cathode microwave generators.
Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja
2016-01-01
The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153
Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja
2016-01-01
The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.
Allen, R J; Rieger, T R; Musante, C J
2016-03-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
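One simple way to realize "selection without weighting" is acceptance sampling: keep each virtual patient with probability proportional to the ratio of the observed (target) density to the density the patients were generated from, so the accepted subset follows the observed distribution with every member carrying unit weight. The 1-D Gaussian setup below is a toy illustration of that idea, not the authors' algorithm; the densities and the scalar "biomarker" are assumptions.

```python
# Toy sketch: select a virtual population by acceptance sampling so the
# accepted patients match a target distribution without weighting.
import numpy as np

rng = np.random.default_rng(0)

def select_virtual_population(patients, proposal_pdf, target_pdf):
    """Accept each patient with probability proportional to target/proposal."""
    p = np.asarray(patients, float)
    ratio = target_pdf(p) / proposal_pdf(p)
    accept = rng.random(p.size) < ratio / ratio.max()  # max prob normalized to 1
    return p[accept]

def proposal_pdf(x):   # virtual patients sampled broadly: N(0, 2)
    return np.exp(-x**2 / 8.0) / np.sqrt(8.0 * np.pi)

def target_pdf(x):     # observed clinical cohort is narrower: N(0, 1)
    return np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)

vps = rng.normal(0.0, 2.0, 20000)                      # virtual patients
pop = select_virtual_population(vps, proposal_pdf, target_pdf)
```

Because acceptance thins the ensemble, the virtual-patient generator must oversample relative to the desired population size; that trade-off is one motivation for generating virtual patients efficiently in the first place.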
a New ER Fluid Based Haptic Actuator System for Virtual Reality
NASA Astrophysics Data System (ADS)
Böse, H.; Baumann, M.; Monkman, G. J.; Egersdörfer, S.; Tunayar, A.; Freimuth, H.; Ermert, H.; Khaled, W.
The concept and some steps in the development of a new actuator system which enables the haptic perception of mechanically inhomogeneous virtual objects are introduced. The system consists of a two-dimensional planar array of actuator elements containing an electrorheological (ER) fluid. When a user presses his fingers onto the surface of the actuator array, he perceives locally variable resistance forces generated by vertical pistons which slide in the ER fluid through the gaps between electrode pairs. The voltage in each actuator element can be individually controlled by a novel sophisticated switching technology based on optoelectric gallium arsenide elements. The haptic information which is represented at the actuator array can be transferred from a corresponding sensor system based on ultrasonic elastography. The combined sensor-actuator system may serve as a technology platform for various applications in virtual reality, like telemedicine where the information on the consistency of tissue of a real patient is detected by the sensor part and recorded by the actuator part at a remote location.
NASA Astrophysics Data System (ADS)
Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay
2017-12-01
Road safety and driving in dense traffic flows pose some challenges in receiving information about surrounding moving objects, some of which can be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in a current road scene via a system with a multitude of cooperating smart vehicles exchanging information. It also describes the intelligent agent model, and provides methods and algorithms for identifying and evaluating various characteristics of moving objects in a video flow. The authors also suggest ways of integrating the information from the technical vision system into the model, with further expansion of virtual monitoring to the system's objects. Implementation of this approach can help to expand the virtual field of view of a technical vision system.
Rosales, Jonathan-Hernando; Cervantes, José-Antonio
2017-01-01
Emotion regulation is a process by which human beings control emotional behaviors. From neuroscientific evidence, this mechanism is the product of conscious or unconscious processes. In particular, the mechanism generated by a conscious process needs a priori components to be computed. The behaviors generated by previous experiences are among these components. These behaviors need to be adapted to fulfill the objectives in a specific situation. The problem we address is how to endow virtual creatures with emotion regulation in order to compute an appropriate behavior in a specific emotional situation. This problem is clearly important and we have not identified ways to solve this problem in the current literature. In our proposal, we show a way to generate the appropriate behavior in an emotional situation using a learning classifier system (LCS). We illustrate the function of our proposal in unknown and known situations by means of two case studies. Our results demonstrate that it is possible to converge to the appropriate behavior even in the first case; that is, when the system does not have previous experiences and in situations where some previous information is available our proposal proves to be a very powerful tool. PMID:29209362
ERIC Educational Resources Information Center
Auld, Lawrence W. S.; Pantelidis, Veronica S.
1994-01-01
Describes the Virtual Reality and Education Lab (VREL) established at East Carolina University to study the implications of virtual reality for elementary and secondary education. Highlights include virtual reality software evaluation; hardware evaluation; computer-based curriculum objectives which could use virtual reality; and keeping current…
NASA Astrophysics Data System (ADS)
Moazami Goodarzi, Hamed; Kazemi, Mohammad Hosein
2018-05-01
Microgrid (MG) clustering is regarded as an important driver in improving the robustness of MGs. However, little research has been conducted on providing appropriate MG clustering. This article addresses this shortfall. It proposes a novel multi-objective optimization approach for finding optimal clustering of autonomous MGs by focusing on variables such as distributed generation (DG) droop parameters, the location and capacity of DG units, renewable energy sources, capacitors and powerline transmission. Power losses are minimized and voltage stability is improved while virtual cut-set lines with minimum power transmission for clustering MGs are obtained. A novel chaotic grey wolf optimizer (CGWO) algorithm is applied to solve the proposed multi-objective problem. The performance of the approach is evaluated by utilizing a 69-bus MG in several scenarios.
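As a rough illustration of the optimizer class used above, the sketch below implements a grey wolf optimizer in which one stochastic coefficient is driven by a chaotic logistic map, applied to a toy single-objective function. The particular chaotic map, its seed, and the reduction to a single objective are assumptions for illustration; the article's multi-objective handling is not reproduced.

```python
# Sketch of a chaotic grey wolf optimizer (CGWO) on a toy objective;
# the logistic map replacing one random GWO coefficient is an assumption.
import numpy as np

rng = np.random.default_rng(1)

def cgwo(obj, dim, n_wolves=20, iters=200, lo=-5.0, hi=5.0):
    X = rng.uniform(lo, hi, (n_wolves, dim))
    c = 0.7                                    # chaotic logistic-map state
    for t in range(iters):
        fit = np.apply_along_axis(obj, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]   # three best wolves
        a = 2.0 * (1.0 - t / iters)            # linearly decreasing coefficient
        new = np.empty_like(X)
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                c = 4.0 * c * (1.0 - c)        # logistic map, chaotic in (0,1)
                A = 2.0 * a * c - a            # chaotic coefficient replaces r1
                C = 2.0 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                pos += leader - A * D
            new[i] = np.clip(pos / 3.0, lo, hi)
        X = new
    fit = np.apply_along_axis(obj, 1, X)
    return X[np.argmin(fit)], float(fit.min())

best, val = cgwo(lambda x: float(np.sum(x**2)), dim=3)
```

In the article's setting the objective would instead aggregate power losses, voltage stability and the cut-set transmission criterion, with decision variables covering DG droop parameters, unit siting and sizing.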
Lui, Justin T; Hoy, Monica Y
2017-06-01
Background The increasing prevalence of virtual reality simulation in temporal bone surgery warrants an investigation to assess training effectiveness. Objectives To determine if temporal bone simulator use improves mastoidectomy performance. Data Sources Ovid Medline, Embase, and PubMed databases were systematically searched per the PRISMA guidelines. Review Methods Inclusion criteria were peer-reviewed publications that utilized quantitative data of mastoidectomy performance following the use of a temporal bone simulator. The search was restricted to human studies published in English. Studies were excluded if they were in non-peer-reviewed format, were descriptive in nature, or failed to provide surgical performance outcomes. Meta-analysis calculations were then performed. Results A meta-analysis based on the random-effects model revealed an improvement in overall mastoidectomy performance following training on the temporal bone simulator. A standardized mean difference of 0.87 (95% CI, 0.38-1.35) was generated in the setting of a heterogeneous study population (I² = 64.3%, P < .006). Conclusion In the context of a diverse population of virtual reality simulation temporal bone surgery studies, meta-analysis calculations demonstrate an improvement in trainee mastoidectomy performance with virtual simulation training.
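Pooled figures like the standardized mean difference and I² above are typically produced by a DerSimonian-Laird random-effects model, computed directly from per-study effect sizes and variances. The sketch below is a generic implementation of that standard computation, not the review's code, and any example numbers used with it are made up.

```python
# Generic DerSimonian-Laird random-effects pooling (illustrative).
import numpy as np

def random_effects(y, v):
    """Pool effect sizes y with within-study variances v.

    Returns (pooled mean, 95% CI, I^2 heterogeneity fraction).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)             # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wr = 1.0 / (v + tau2)                        # random-effects weights
    mu = np.sum(wr * y) / np.sum(wr)
    se = np.sqrt(1.0 / np.sum(wr))
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return mu, (mu - 1.96 * se, mu + 1.96 * se), i2
```

When the studies are homogeneous, tau² collapses to zero and the result reduces to the fixed-effect estimate; substantial I² (as in this review) inflates the between-study variance and widens the confidence interval.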
EMG and Kinematic Responses to Unexpected Slips After Slip Training in Virtual Reality
Parijat, Prakriti; Lockhart, Thurmon E.
2015-01-01
The objective of the study was to design a virtual reality (VR) training to induce perturbation in older adults similar to a slip and examine the effect of the training on kinematic and muscular responses in older adults. Twenty-four older adults were involved in a laboratory study and randomly assigned to two groups (virtual reality training and control). Both groups went through three sessions including baseline slip, training, and transfer of training on slippery surface. The training group experienced twelve simulated slips using a visual perturbation induced by tilting a virtual reality scene while walking on the treadmill and the control group completed normal walking during the training session. Kinematic, kinetic, and EMG data were collected during all the sessions. Results demonstrated the proactive adjustments such as increased trunk flexion at heel contact after training. Reactive adjustments included reduced time to peak activations of knee flexors, reduced knee coactivation, reduced time to trunk flexion, and reduced trunk angular velocity after training. In conclusion, the study findings indicate that the VR training was able to generate a perturbation in older adults that evoked recovery reactions and such motor skill can be transferred to the actual slip trials. PMID:25296401
Rieger, TR; Musante, CJ
2016-01-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
Hybrid Reality Lab Capabilities - Video 2
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2016-01-01
Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality, and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, used during teleoperation of remote assets (arms, rovers, robots, etc.) and other activities. One challenge with Virtual Reality is that if you are using it for real time-applications (like landing an airplane) then the information used to create the virtual scenes can be old (i.e. visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera, or see through window, and places digitally created information into the scene so that it matches with the video/glass information). Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g. camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, the individuals begin to interact with all the objects in the scene as if they were real-life objects. 
The ability to physically touch and interact with digitally created objects that have the same shape, size, location to their physical object counterpart in virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our Project is developing such capabilities for various types of environments. The video outlined with this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond just standard Virtual Reality or Augmented Reality.
Systems and Methods for Data Visualization Using Three-Dimensional Displays
NASA Technical Reports Server (NTRS)
Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)
2017-01-01
Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
Mattsson, Sofia; Sjöström, Hans-Erik; Englund, Claire
2016-06-25
Objective. To develop and implement a virtual tablet machine simulation to aid distance students' understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students' perceptions, the use of the tablet simulation contributed to their understanding of the compaction process.
Sjöström, Hans-Erik; Englund, Claire
2016-01-01
Objective. To develop and implement a virtual tablet machine simulation to aid distance students’ understanding of the processes involved in tablet production. Design. A tablet simulation was created enabling students to study the effects different parameters have on the properties of the tablet. Once results were generated, students interpreted and explained them on the basis of current theory. Assessment. The simulation was evaluated using written questionnaires and focus group interviews. Students appreciated the exercise and considered it to be motivational. Students commented that they found the simulation, together with the online seminar and the writing of the report, was beneficial for their learning process. Conclusion. According to students’ perceptions, the use of the tablet simulation contributed to their understanding of the compaction process. PMID:27402990
Optimal Regulation of Virtual Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea
This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.
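A stripped-down version of the primal-dual idea can be sketched with a quadratic DER cost and a single aggregate tracking constraint standing in for the linearized AC power-flow model: the primal step moves each DER setpoint down the Lagrangian gradient subject to its box limits, and the dual step raises the price when the aggregate undershoots the regulation request. All symbols, cost coefficients and step sizes below are illustrative assumptions, not the paper's formulation.

```python
# Toy online projected primal-dual step for DER regulation tracking.
import numpy as np

def primal_dual_step(p, lam, r, cost_grad, p_min, p_max, alpha=0.05):
    """One step on L(p, lam) = cost(p) + lam * (sum(p) - r)."""
    p_new = np.clip(p - alpha * (cost_grad(p) + lam), p_min, p_max)
    lam_new = lam + alpha * (np.sum(p_new) - r)
    return p_new, lam_new

# quadratic DER costs c_i(p_i) = 0.5 * a_i * p_i^2 with box limits [-5, 5]
a = np.array([1.0, 2.0, 4.0])
grad = lambda p: a * p

p, lam = np.zeros(3), 0.0
for _ in range(2000):                 # track a constant request r = 3.0
    p, lam = primal_dual_step(p, lam, 3.0, grad, -5.0, 5.0)
```

At the fixed point the cheaper DERs carry proportionally more of the request (p_i = -lam / a_i), which is the aggregate-dispatch behavior the virtual power plant relies on; in the time-varying setting the same step is simply re-run as r and the network state change.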
C-arm technique using distance driven method for nephrolithiasis and kidney stones detection
NASA Astrophysics Data System (ADS)
Malalla, Nuhad; Sun, Pengfei; Chen, Ying; Lipkin, Michael E.; Preminger, Glenn M.; Qin, Jun
2016-04-01
Distance-driven projection is a state-of-the-art method used for reconstruction in x-ray imaging techniques. C-arm tomography is an x-ray imaging technique that provides three-dimensional information about the object by moving the C-shaped gantry around the patient. With a limited view angle, the C-arm system was investigated to generate volumetric data of the object with low radiation dosage and examination time. This paper is a new simulation study of two reconstruction methods based on the distance-driven approach: the simultaneous algebraic reconstruction technique (SART) and maximum-likelihood expectation maximization (MLEM). The distance-driven method is efficient, with low computational cost and freedom from artifacts compared with other methods such as ray-driven and pixel-driven methods. Projection images of spherical objects were simulated with a virtual C-arm system with a total view angle of 40 degrees. Results show the ability of the limited-angle C-arm technique to generate three-dimensional images with distance-driven reconstruction.
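Of the two algorithms studied, MLEM has a particularly compact multiplicative form: x ← x · Aᵀ(y / Ax) / Aᵀ1. The sketch below applies it to a toy positive linear system standing in for a genuine distance-driven projector (which would model the geometric overlap between detector cells and voxels); the matrix, its size, and the noiseless data are illustrative assumptions.

```python
# Minimal MLEM reconstruction sketch with a toy system matrix in place
# of a distance-driven projector (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(2)

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM: x <- x * A^T(y / Ax) / (A^T 1)."""
    x = np.ones(A.shape[1])                    # positive initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        proj[proj == 0] = 1e-12                # guard divide-by-zero
        x *= (A.T @ (y / proj)) / sens         # multiplicative update
    return x

A = rng.random((40, 10))                       # toy projector: 40 rays, 10 voxels
x_true = rng.random(10) + 0.5
y = A @ x_true                                 # noiseless projections
x_rec = mlem(A, y)
```

The multiplicative update preserves non-negativity automatically, which is one reason MLEM is attractive for limited-angle geometries where algebraic methods need explicit constraints.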
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
1991-01-01
Natural environments have a content, i.e., the objects in them; a geometry, i.e., a pattern of rules for positioning and displacing the objects; and a dynamics, i.e., a system of rules describing the effects of forces acting on the objects. Human interaction with most common natural environments has been optimized by centuries of evolution. Virtual environments created through the human-computer interface similarly have a content, geometry, and dynamics, but the arbitrary character of the computer simulation creating them does not insure that human interaction with these virtual environments will be natural. The interaction, indeed, could be supernatural but it also could be impossible. An important determinant of the comprehensibility of a virtual environment is the correspondence between the environmental frames of reference and those associated with the control of environmental objects. The effects of rotation and displacement of control frames of reference with respect to corresponding environmental references differ depending upon whether perceptual judgement or manual tracking performance is measured. The perceptual effects of frame of reference displacement may be analyzed in terms of distortions in the process of virtualizing the synthetic environment space. The effects of frame of reference displacement and rotation have been studied by asking subjects to estimate exocentric direction in a virtual space.
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: we impose Gaussian noise (and other types of noise) on the original training samples to obtain possible variations of the original samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, such as CRC and Kernel CRC.
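The non-kernel core of this family of methods, representing a test face over the original plus noised virtual training samples and classifying by class-wise reconstruction residual, can be sketched as follows. The toy 3-D data, the ridge parameter, and the omission of the kernel mapping and the coefficient-matching objective are all simplifications for illustration.

```python
# Sketch of collaborative representation classification (CRC) with
# virtual training samples made by adding Gaussian noise (toy data).
import numpy as np

rng = np.random.default_rng(3)

def crc_classify(X, labels, test, lam=0.01):
    """Represent `test` over all training columns of X jointly, then pick
    the class whose columns reconstruct it with the smallest residual."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    coef = np.linalg.solve(G, X.T @ test)      # ridge / collaborative coding
    best, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        r = np.linalg.norm(test - X[:, mask] @ coef[mask])
        if r < best_r:
            best, best_r = c, r
    return best

# two toy classes; virtual samples are noised copies of the originals
X0 = rng.normal([1.0, 0.0, 0.0], 0.1, (5, 3)).T    # class 0, columns = samples
X1 = rng.normal([0.0, 1.0, 0.0], 0.1, (5, 3)).T    # class 1
X = np.hstack([X0, X0 + rng.normal(0, 0.05, X0.shape),
               X1, X1 + rng.normal(0, 0.05, X1.shape)])
labels = np.array([0] * 10 + [1] * 10)
```

Doubling each class with noised virtual copies enlarges the span of each class sub-dictionary, which is the mechanism by which virtual samples absorb illumination- and expression-like variation.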
Algorithms for Haptic Rendering of 3D Objects
NASA Technical Reports Server (NTRS)
Basdogan, Cagatay; Ho, Chih-Hao; Srinavasan, Mandayam
2003-01-01
Algorithms have been developed to provide haptic rendering of three-dimensional (3D) objects in virtual (that is, computationally simulated) environments. The goal of haptic rendering is to generate tactual displays of the shapes, hardnesses, surface textures, and frictional properties of 3D objects in real time. Haptic rendering is a major element of the emerging field of computer haptics, which invites comparison with computer graphics. We have already seen various applications of computer haptics in the areas of medicine (surgical simulation, telemedicine, haptic user interfaces for blind people, and rehabilitation of patients with neurological disorders), entertainment (3D painting, character animation, morphing, and sculpting), mechanical design (path planning and assembly sequencing), and scientific visualization (geophysical data analysis and molecular manipulation).
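The core of the simplest scheme in this family, penalty-based rendering, can be sketched as follows (a generic illustration, not the authors' specific algorithms; the stiffness value is an arbitrary assumption):

```python
import numpy as np

def sphere_contact_force(probe, center, radius, stiffness=500.0):
    """Penalty-based reaction force for a haptic probe touching a sphere.

    Returns the zero vector while the probe is outside the sphere;
    inside, the force pushes outward along the surface normal,
    proportional to the penetration depth (Hooke's law).
    """
    offset = np.asarray(probe, float) - np.asarray(center, float)
    dist = np.linalg.norm(offset)
    penetration = radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)
    normal = offset / dist
    return stiffness * penetration * normal

# Probe 1 mm inside a 50 mm-radius sphere centered at the origin:
f = sphere_contact_force([0.049, 0.0, 0.0], [0.0, 0.0, 0.0], 0.05)
```

Real-time haptic loops evaluate such a force law at around 1 kHz; texture and friction rendering add tangential terms on top of this normal force.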
Narracott, Andrew J; Manini, Simone; Bayley, Martin J; Lawford, Patricia V; McCormack, Keith; Zary, Nabil
2014-01-01
Background Virtual patients are increasingly common tools used in health care education to foster learning of clinical reasoning skills. One potential way to expand their functionality is to augment virtual patients’ interactivity by enriching them with computational models of physiological and pathological processes. Objective The primary goal of this paper was to propose a conceptual framework for the integration of computational models within virtual patients, with particular focus on (1) characteristics to be addressed while preparing the integration, (2) the extent of the integration, (3) strategies to achieve integration, and (4) methods for evaluating the feasibility of integration. An additional goal was to pilot a first investigation of how changing framework variables alters perceptions of integration. Methods The framework was constructed using an iterative process informed by Soft System Methodology. The Virtual Physiological Human (VPH) initiative has been used as a source of new computational models. The technical challenges associated with development of virtual patients enhanced by computational models are discussed from the perspectives of a number of different stakeholders. Concrete design and evaluation steps are discussed in the context of an exemplar virtual patient employing the results of the VPH ARCH project, as well as improvements for future iterations. Results The proposed framework consists of four main elements. The first element is a list of feasibility features characterizing the integration process from three perspectives: the computational modelling researcher, the health care educationalist, and the virtual patient system developer. 
The second element comprises three integration levels: basic, where a single set of simulation outcomes is generated for specific nodes in the activity graph; intermediate, involving pre-generation of simulation datasets over a range of input parameters; and advanced, including dynamic solution of the model. The third element is a description of four integration strategies, and the last element consists of evaluation profiles specifying the relevant feasibility features and acceptance thresholds for specific purposes. The group of experts who evaluated the virtual patient exemplar found higher levels of integration more interesting, but at the same time they were more concerned with the validity of the results. The observed differences were not statistically significant. Conclusions This paper outlines a framework for the integration of computational models into virtual patients. The opportunities and challenges of model exploitation are discussed from a number of user perspectives, considering different levels of model integration. The long-term aim for future research is to isolate the most crucial factors in the framework and to determine their influence on the integration outcome. PMID:24463466
Integrating Virtual Worlds with Tangible User Interfaces for Teaching Mathematics: A Pilot Study.
Guerrero, Graciela; Ayala, Andrés; Mateu, Juan; Casades, Laura; Alamán, Xavier
2016-10-25
This article presents a pilot study of the use of two new tangible interfaces and virtual worlds for teaching geometry in a secondary school. The first tangible device allows the user to control a virtual object in six degrees of freedom. The second tangible device is used to modify virtual objects, changing attributes such as position, size, rotation and color. A pilot study on using these devices was carried out at the "Florida Secundaria" high school. A virtual world was built where students used the tangible interfaces to manipulate geometrical figures in order to learn different geometrical concepts. The pilot experiment results suggest that the use of tangible interfaces and virtual worlds allowed a more meaningful learning (concepts learnt were more durable).
Information Retrieval in Virtual Universities
ERIC Educational Resources Information Center
Puustjärvi, Juha; Pöyry, Päivi
2006-01-01
Information retrieval in the context of virtual universities deals with the representation, organization, and access to learning objects. The representation and organization of learning objects should provide the learner with an easy access to the learning objects. In this article, we give an overview of the ONES system, and analyze the relevance…
3D geospatial visualizations: Animation and motion effects on spatial objects
NASA Astrophysics Data System (ADS)
Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos
2018-02-01
Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an impressive navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). Going one step beyond, WebGL frameworks (e.g. Cesium.js, three.js) allow animation and motion effects to be attributed to 3D models. However, major GIS-based functionalities in combination with all the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) or motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this, we developed, and made available to the research community, an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
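The motion-effect idea, moving a 3D model along a dynamically defined georeferenced path, reduces to interpolating a position along a polyline as time advances. A minimal sketch (coordinates, units and function names are illustrative assumptions, not part of the prototype):

```python
import numpy as np

def position_at(waypoints, speed, t):
    """Position of a model moving at constant speed along a polyline path.

    waypoints: (n, 3) array of georeferenced x/y/z coordinates.
    Returns the interpolated position at time t, clamped to the path end.
    """
    pts = np.asarray(waypoints, float)
    seg = np.diff(pts, axis=0)                    # segment vectors
    seg_len = np.linalg.norm(seg, axis=1)         # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    s = min(speed * t, cum[-1])                   # distance travelled so far
    i = np.searchsorted(cum, s, side='right') - 1 # segment containing s
    i = min(i, len(seg) - 1)
    frac = (s - cum[i]) / seg_len[i]
    return pts[i] + frac * seg[i]

path = [[0, 0, 0], [10, 0, 0], [10, 10, 0]]
p = position_at(path, speed=2.0, t=7.5)           # 15 units along the path
```

A render loop would call such a function each frame and update the model's transform; frameworks such as Cesium.js expose equivalent sampled-position interpolation natively.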
Gibo, Tricia L; Bastian, Amy J; Okamura, Allison M
2014-03-01
When grasping and manipulating objects, people are able to efficiently modulate their grip force according to the experienced load force. Effective grip force control involves providing enough grip force to prevent the object from slipping, while avoiding excessive force that causes damage and fatigue. During indirect object manipulation via teleoperation systems or in virtual environments, users often receive limited somatosensory feedback about the objects with which they interact. This study examines the effects of force feedback, accuracy demands, and training on grip force control during object interaction in a virtual environment. The task required subjects to grasp and move a virtual object while tracking a target. When force feedback was not provided, subjects failed to couple grip and load force, a capability fundamental to direct object interaction. Subjects also exerted larger grip force without force feedback and when the accuracy demands of the tracking task were high. In addition, the presence or absence of force feedback during training affected subsequent performance, even when the feedback condition was switched: subjects' grip force control remained reminiscent of the grip they employed during the initial training. These results motivate the use of force feedback during telemanipulation and highlight the effect of force feedback during training.
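The grip-load coupling described above is often modeled with a simple slip-limit rule; a minimal sketch (the friction coefficient and safety margin are illustrative assumptions, not values from the study):

```python
def required_grip(load_force, friction_coeff=0.5, safety_margin=1.0):
    """Minimum grip force (N) to keep a grasped object from slipping.

    With a two-finger grasp, friction at both contacts must support the
    load: grip >= |load| / (2 * mu). The safety margin (in newtons)
    models the extra force people apply above the slip limit.
    """
    slip_limit = abs(load_force) / (2.0 * friction_coeff)
    return slip_limit + safety_margin

# Holding a 0.5 kg object statically (load = m * g = 4.905 N):
g = required_grip(0.5 * 9.81)
```

Efficient grip control corresponds to tracking this slip limit with a small, roughly constant margin; the study's finding is that without force feedback the load term is effectively unknown, so users over-grip instead.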
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear-based imaging limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
Design of virtual three-dimensional instruments for sound control
NASA Astrophysics Data System (ADS)
Mulder, Axel Gezienus Elith
An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints, instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and an acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors and an SGI Onyx, and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. 
More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.
Roitberg, Ben Z; Kania, Patrick; Luciano, Cristian; Dharmavaram, Naga; Banerjee, Pat
2015-01-01
Manual skill is an important attribute for any surgeon. Current methods to evaluate sensory-motor skills in neurosurgical residency applicants are limited. We aim to develop an objective, multifaceted measure of sensory-motor skills using a virtual reality surgical simulator. A set of 3 tests of sensory-motor function was performed using a 3-dimensional surgical simulator with head and arm tracking, collocalization, and haptic feedback: (1) trajectory planning: virtual reality drilling of a pedicle, with entry point, target point, and trajectory scored, evaluating spatial memory and orientation; (2) motor planning (sequence, timing, and precision): hemostasis in a postresection cavity in the brain; and (3) haptic perception: touching virtual spheres to determine which is the softest of the group, with progressive difficulty. Results were analyzed individually and as a combined score across all the tasks. The study took place at the University of Chicago Hospital, a tertiary care academic center. A total of 95 consecutive applicants interviewed at a neurosurgery residency program over 2 years were offered anonymous participation in the study; in 2 cohorts, 36 participants in year 1 and 27 participants in year 2 (validation cohort) agreed and completed all the tasks. We also tested 10 first-year medical students and 4 first- and second-year neurosurgery residents. A cumulative score was generated from the 3 tests. The mean score was 14.47 (standard deviation = 4.37), the median score was 13.42, the best score was 8.41, and the worst score was 30.26. Separate analysis of applicants from each of the 2 years yielded nearly identical results. Residents tended to cluster on the better-performance side, and first-year students were not different from applicants. (1) Our cumulative score measures sensory-motor skills in an objective and reproducible way. (2) Better performance by residents hints at validity for neurosurgery. 
(3) We were able to demonstrate good psychometric qualities and generate a proposed sensory-motor quotient distribution in our tested population. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
The Role of Semantics in Next-Generation Online Virtual World-Based Retail Store
NASA Astrophysics Data System (ADS)
Sharma, Geetika; Anantaram, C.; Ghosh, Hiranmay
Online virtual environments are increasingly becoming popular for entrepreneurship. While interactions are primarily between avatars, some interactions could occur through intelligent chatbots. Such interactions require connecting to backend business applications to obtain information, carry out real-world transactions etc. In this paper, we focus on integrating business application systems with virtual worlds. We discuss the probable features of a next-generation online virtual world-based retail store and the technologies involved in realizing the features of such a store. In particular, we examine the role of semantics in integrating popular virtual worlds with business applications to provide natural language based interactions.
NASA Astrophysics Data System (ADS)
Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin
2006-02-01
A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment itself and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that supports both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated, and wireless devices can be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.
NASA Technical Reports Server (NTRS)
Leifer, Larry; Michalowski, Stefan; Vanderloos, Machiel
1991-01-01
The Stanford/VA Interactive Robotics Laboratory set out in 1978 to test the hypothesis that industrial robotics technology could be applied to serve the manipulation needs of severely impaired individuals. Five generations of hardware, three generations of system software, and over 125 experimental subjects later, we believe that genuine utility is achievable. The experience includes development of over 65 task applications using voiced command, joystick control, natural language command and 3D object designation technology. A brief foray into virtual environments, using flight simulator technology, was instructive: if reality and virtuality come at comparable prices, you cannot beat reality. A detailed review of assistive robot anatomy and the performance specifications needed to achieve cost-beneficial utility will be used to support discussion of the future of rehabilitation telerobotics. Poised on the threshold of commercial viability, but constrained by the high cost of technically adequate manipulators, this worthy application domain flounders temporarily. In the long run, it will be the user interface that governs utility.
Palpation simulator with stable haptic feedback.
Kim, Sang-Youn; Ryu, Jee-Hwan; Lee, WooJeong
2015-01-01
The main difficulty in constructing palpation simulators is computing and generating stable and realistic haptic feedback without vibration. When a user haptically interacts with highly non-homogeneous soft tissues through a palpation simulator, a sudden change of stiffness in the target tissues causes unstable interaction with the object. We propose a model consisting of a virtual adjustable damper and an energy-measuring element. The energy-measuring element gauges the energy stored in the palpation simulator, and the virtual adjustable damper dissipates that energy to achieve stable haptic interaction. To investigate the haptic behavior of the proposed method, impulse and continuous inputs are applied to the target tissues. If a haptic interface point meets the hardest portion of target tissues modeled with a conventional method, we observe unstable motion and feedback force. However, when the target tissues are modeled with the proposed method, the palpation simulator provides stable interaction without vibration. The proposed method overcomes a problem in conventional haptic palpation simulators, where unstable force or vibration can be generated if there is a large discrepancy in material properties between an element and its neighboring elements in the target tissues.
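The combination of an energy-measuring element and an adjustable damper resembles a time-domain passivity scheme; one step of such a loop might look like the following sketch (a generic illustration under simplified sign conventions, not the paper's exact formulation):

```python
def passivity_step(E, force, velocity, dt):
    """One step of an energy observer with an adjustable virtual damper.

    E accumulates the net energy delivered into the virtual environment
    (force * velocity * dt). If E goes negative, the virtual system is
    generating energy; the damper coefficient alpha is then set just
    large enough that the corrected force dissipates the excess,
    restoring E to zero and keeping the interaction passive.
    """
    E = E + force * velocity * dt
    if E < 0.0 and velocity != 0.0:
        alpha = -E / (velocity * velocity * dt)  # damping needed this step
        force = force + alpha * velocity         # corrected output force
        E = 0.0
    else:
        alpha = 0.0
    return E, force, alpha
```

Because alpha is recomputed every sample, the damper engages only when the stiffness discontinuity would otherwise inject energy, so soft regions remain unaffected.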
Learning Objects and Virtual Learning Environments Technical Evaluation Criteria
ERIC Educational Resources Information Center
Kurilovas, Eugenijus; Dagiene, Valentina
2009-01-01
The main scientific problems investigated in this article deal with technical evaluation of quality attributes of the main components of e-Learning systems (referred here as DLEs--Digital Libraries of Educational Resources and Services), i.e., Learning Objects (LOs) and Virtual Learning Environments (VLEs). The main research object of the work is…
Third-Graders Learn about Fractions Using Virtual Manipulatives: A Classroom Study
ERIC Educational Resources Information Center
Reimer, Kelly; Moyer, Patricia S.
2005-01-01
With recent advances in computer technology, it is no surprise that the manipulation of objects in mathematics classrooms now includes the manipulation of objects on the computer screen. These objects, referred to as "virtual manipulatives," are essentially replicas of physical manipulatives placed on the World Wide Web in the form of computer…
Fat ViP MRI: Virtual Phantom Magnetic Resonance Imaging of water-fat systems.
Salvati, Roberto; Hitti, Eric; Bellanger, Jean-Jacques; Saint-Jalmes, Hervé; Gambarota, Giulio
2016-06-01
Virtual Phantom Magnetic Resonance Imaging (ViP MRI) is a method to generate reference signals on MR images, using external radiofrequency (RF) signals. The aim of this study was to assess the feasibility of ViP MRI to generate complex-data images of phantoms mimicking water-fat systems. Various numerical phantoms with a given fat fraction, T2* and field map were designed. The k-space of numerical phantoms was converted into RF signals to generate virtual phantoms. MRI experiments were performed at 4.7T using a multi-gradient-echo sequence on virtual and physical phantoms. The data acquisition of virtual and physical phantoms was simultaneous. Decomposition of the water and fat signals was performed using a complex-based water-fat separation algorithm. Overall, a good agreement was observed between the fat fraction, T2* and phase map values of the virtual and numerical phantoms. In particular, fat fractions of 10.5±0.1 (vs 10% of the numerical phantom), 20.3±0.1 (vs 20%) and 30.4±0.1 (vs 30%) were obtained in virtual phantoms. The ViP MRI method allows for generating imaging phantoms that i) mimic water-fat systems and ii) can be analyzed with water-fat separation algorithms based on complex data. Copyright © 2016 Elsevier Inc. All rights reserved.
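The kind of numerical phantom described, a water-fat voxel with a given fat fraction, T2* and field offset, can be sketched with the standard single-peak signal model (parameter values here are illustrative assumptions, not those of the study, and the fat chemical-shift frequency scales with field strength):

```python
import numpy as np

def water_fat_signal(W, F, t, t2star=0.02, delta_f=-440.0, psi=0.0):
    """Complex multi-gradient-echo signal of a single water-fat voxel.

    W, F: water and fat magnitudes; t: echo times in seconds.
    delta_f: fat chemical-shift frequency in Hz (about -440 Hz at 3 T;
    larger in magnitude at the 4.7 T used in the study).
    psi: field-map frequency offset in Hz.
    """
    t = np.asarray(t, float)
    return (W + F * np.exp(2j * np.pi * delta_f * t)) \
        * np.exp(-t / t2star) * np.exp(2j * np.pi * psi * t)

# Six echoes, 1 ms apart, for a voxel with 20% fat fraction:
echoes = np.arange(1, 7) * 1.0e-3
s = water_fat_signal(W=0.8, F=0.2, t=echoes)
fat_fraction = 0.2 / (0.8 + 0.2)
```

A complex-based water-fat separation algorithm fits W, F, T2* and psi to such samples; the virtual phantom supplies the k-space of many such voxels as RF signals.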
Vision-Based Haptic Feedback for Remote Micromanipulation in-SEM Environment
NASA Astrophysics Data System (ADS)
Bolopion, Aude; Dahmen, Christian; Stolle, Christian; Haliyo, Sinan; Régnier, Stéphane; Fatikow, Sergej
2012-07-01
This article presents an intuitive environment for remote micromanipulation composed of both haptic feedback and virtual reconstruction of the scene. To enable nonexpert users to perform complex teleoperated micromanipulation tasks, it is of utmost importance to provide them with information about the 3-D relative positions of the objects and the tools. Haptic feedback is an intuitive way to transmit such information. Since position sensors are not available at this scale, visual feedback is used to derive information about the scene. In this work, three different techniques are implemented, evaluated, and compared to derive the object positions from scanning electron microscope images. The modified correlation matching with generated template algorithm is accurate and provides reliable detection of objects. To track the tool, a marker-based approach is chosen, since fast detection is required for stable haptic feedback. Information derived from these algorithms is used to propose an intuitive remote manipulation system that enables users situated in geographically distant sites to benefit from specific equipment, such as SEMs. Stability of the haptic feedback is ensured by the minimization of delays, the computational efficiency of the vision algorithms, and the proper tuning of the haptic coupling. Virtual guides are proposed to avoid any involuntary collisions between the tool and the objects. This approach is validated by a teleoperation, between Paris, France and Oldenburg, Germany, involving melamine microspheres with diameters of less than 2 μm.
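Plain normalized cross-correlation, the baseline that modified correlation matching builds on, can be illustrated as follows (a brute-force generic sketch with a hypothetical test pattern, not the paper's algorithm; real trackers use FFT-based correlation for speed):

```python
import numpy as np

def match_template(image, template):
    """Locate a template in an image by normalized cross-correlation.

    Exhaustive search over all placements; returns the (row, col) of
    the top-left corner of the best-matching patch.
    """
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = image[r:r + h, c:c + w]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            if denom == 0:
                continue                     # flat patch: undefined score
            score = float((p * t).sum() / denom)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# A bright cross-shaped "microsphere" on a dark background:
sphere = np.array([[0., 1., 0.],
                   [1., 1., 1.],
                   [0., 1., 0.]])
img = np.zeros((12, 12))
img[5:8, 6:9] = sphere
pos = match_template(img, sphere)
```

Mean subtraction and normalization make the score invariant to brightness and contrast changes, which matters for SEM images whose intensity drifts with imaging conditions.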
Research on 3D virtual campus scene modeling based on 3ds Max and VRML
NASA Astrophysics Data System (ADS)
Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue
2015-12-01
With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only express real-world objects naturally, realistically and vividly, but can also expand the real campus in its time and space dimensions, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land features and other objects. Dynamic interactive functions are then realized by programming the 3ds Max object models with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for the various real-time processing techniques used in the scene design process. The approach preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.
Using EMG to anticipate head motion for virtual-environment applications
NASA Technical Reports Server (NTRS)
Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion
2005-01-01
In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to the user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of remaining stable in space, which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced below about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, and thereby precede limb or head acceleration. We thus suggest utilizing the myoelectric signals of the neck muscles to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
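The mapping from myoelectric signals to time-advanced inertial outputs can be illustrated with a linear fit on synthetic data (the paper trained a neural network; the function name, channel count and 3-sample lead here are illustrative assumptions):

```python
import numpy as np

def fit_anticipator(emg, inertial, lead_steps):
    """Fit a linear map from current EMG samples to future inertial output.

    emg: (n, channels) myoelectric features; inertial: (n,) measured
    head acceleration. The target is the inertial signal shifted
    lead_steps samples into the future, so the fitted weights predict
    motion before it happens.
    """
    X = emg[:-lead_steps]
    y = inertial[lead_steps:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Synthetic data: the inertial output is a delayed copy of EMG channel 0,
# i.e. the EMG leads the motion by 3 samples.
rng = np.random.default_rng(0)
emg = rng.standard_normal((200, 2))
inertial = np.roll(emg[:, 0], 3)
w = fit_anticipator(emg, inertial, lead_steps=3)
```

The fit recovers weight 1 on the leading channel and 0 on the unrelated one; a neural network plays the same role when the EMG-to-motion relationship is nonlinear.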
NASA Astrophysics Data System (ADS)
Teng, W. L.; Rui, H.; Strub, R. F.; Vollmer, B.
2015-12-01
A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. 
Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.
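The reorganization at the heart of pre-generated data rods, turning an archive of time-step maps into contiguous per-location time series, can be sketched as follows (array shapes and the function name are illustrative assumptions, not the GES DISC implementation):

```python
import numpy as np

def build_data_rods(maps):
    """Reorganize time-step arrays ("maps") into per-pixel time series.

    maps: (t, rows, cols) stack, one 2-D grid per time step, as archived.
    Returns a (rows, cols, t) array in which the values for a fixed grid
    cell are contiguous -- the "data rod" for that location -- so a
    time-series request reads one slice instead of touching every file.
    """
    return np.ascontiguousarray(np.transpose(np.asarray(maps), (1, 2, 0)))

# Three hourly maps on a 2x2 grid:
maps = np.arange(12).reshape(3, 2, 2)
rods = build_data_rods(maps)
series = rods[0, 1]    # time series ("rod") at grid cell (0, 1)
```

The transpose is exactly the orthogonal re-access the abstract describes: the archived axis order is time-major, while hydrology users want location-major storage.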
The study of early human embryos using interactive 3-dimensional computer reconstructions.
Scarborough, J; Aiton, J F; McLachlan, J C; Smart, S D; Whiten, S C
1997-07-01
Tracings of serial histological sections from 4 human embryos at different Carnegie stages were used to create 3-dimensional (3D) computer models of the developing heart. The models were constructed using commercially available software developed for graphic design and the production of computer generated virtual reality environments. They are available as interactive objects which can be downloaded via the World Wide Web. This simple method of 3D reconstruction offers significant advantages for understanding important events in morphological sciences.
Kwan, T.J.T.; Snell, C.M.
1987-03-31
A microwave generator is provided for generating microwaves substantially from virtual cathode oscillation. Electrons are emitted from a cathode and accelerated to an anode which is spaced apart from the cathode. The anode has an annular slit therethrough effective to form the virtual cathode. The anode is at least one range thickness relative to electrons reflecting from the virtual cathode. A magnet is provided to produce an optimum magnetic field having a field strength effective to form an annular beam from the emitted electrons in substantial alignment with the annular anode slit. The magnetic field, however, does permit the reflected electrons to axially diverge from the annular beam. The reflected electrons are absorbed by the anode in returning to the real cathode, such that substantially no reflexing electrons occur. The resulting microwaves are produced with a single dominant mode and are substantially monochromatic relative to those of conventional virtual cathode microwave generators. 6 figs.
NASA Astrophysics Data System (ADS)
Knight, Claire; Munro, Malcolm
2001-07-01
Distributed component-based systems seem to be the immediate future for software development. The use of such techniques, object-oriented languages, and the combination with ever more powerful higher-level frameworks has led to the rapid creation and deployment of such systems to cater for the demand of internet- and service-driven business systems. This diversity of solutions, through both the components utilised and the physical/virtual locations of those components, can provide powerful answers to the new demand. The problem lies in the comprehension and maintenance of such systems, because they have inherent uncertainty. The components combined at any given time for a solution may differ, the messages generated, sent, and/or received may differ, and the physical/virtual locations cannot be guaranteed. Accounting for this uncertainty and building it into analysis and comprehension tools is important for both development and maintenance activities.
Inertial Motion-Tracking Technology for Virtual 3-D
NASA Technical Reports Server (NTRS)
2005-01-01
In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas; they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer and to give the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking is the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate simulations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as the design and construction of buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, and military uses. However, most technologies provide the 3D display in front of a screen that is parallel to the wall, which decreases the sense of immersion. To obtain a correct multi-view stereo ground image, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system, and virtual cameras can simulate this shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of each virtual camera is determined by the viewer's eye position in the real world. When the observer stands inside the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras: the near clip plane setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second. To validate the results, we used D3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models was constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes were compared with the real objects in the real world.
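The offset ("off-axis") perspective projection used by these virtual cameras can be sketched as an asymmetric view frustum. The matrix layout below follows the standard OpenGL convention; the numeric bounds are illustrative and not taken from the paper.

```python
import numpy as np

def offset_frustum(left, right, bottom, top, near, far):
    """OpenGL-style asymmetric (off-axis) perspective projection matrix.

    Making left/right (or bottom/top) asymmetric shifts the optical
    axis toward a shared focus plane, as a multi-view camera rig
    for a common screen requires.
    """
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

# A camera shifted 0.1 units to the right of the screen centre:
# the frustum widens on the left so the focus plane stays centred.
shift = 0.1
proj = offset_frustum(-0.5 - shift, 0.5 - shift, -0.3, 0.3,
                      near=1.0, far=100.0)
```

Shifting both horizontal bounds by the same amount as the camera translates keeps the shared focus plane centred in every view, which is what the offset cameras described above accomplish.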
Virtual learning object and environment: a concept analysis.
Salvador, Pétala Tuani Candido de Oliveira; Bezerril, Manacés Dos Santos; Mariz, Camila Maria Santos; Fernandes, Maria Isabel Domingues; Martins, José Carlos Amado; Santos, Viviane Euzébia Pereira
2017-01-01
To analyze the concepts of virtual learning object and virtual learning environment according to Rodgers' evolutionary perspective. Descriptive study with a mixed approach, based on the stages proposed by Rodgers in his concept analysis method. Data collection occurred in August 2015 with a search of dissertations and theses in the Bank of Theses of the Coordination for the Improvement of Higher Education Personnel. Quantitative data were analyzed using simple descriptive statistics, and the concepts through lexicographic analysis with support of the IRAMUTEQ software. The sample was made up of 161 studies. The concept of "virtual learning environment" was presented in 99 (61.5%) studies, whereas the concept of "virtual learning object" was presented in only 15 (9.3%) studies. A virtual learning environment includes several different types of virtual learning objects in a common pedagogical context.
Robotics virtual rail system and method
Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID
2011-07-05
A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path composed of one or more segments representing the virtual track for the robot. Start and end points are assigned to the desired path, and a velocity is associated with each segment of the desired path. A waypoint file is generated containing positions along the virtual track that represent the desired path, ordered from the start point to the end point and including the velocity of each segment. The waypoint file is sent to the robot for traversing along the virtual track.
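As a rough illustration of the waypoint-file idea, one can sample positions along each segment and attach that segment's velocity. The field names, spacing, and CSV format here are assumptions for the sketch, not the patent's actual file format.

```python
import csv
import io

# Hypothetical path: straight segments, each with its own velocity.
segments = [
    {"start": (0.0, 0.0), "end": (5.0, 0.0), "velocity": 0.5},
    {"start": (5.0, 0.0), "end": (5.0, 3.0), "velocity": 0.2},
]

def waypoints(seg, spacing=1.0):
    """Yield evenly spaced (x, y, velocity) samples along one segment."""
    (x0, y0), (x1, y1) = seg["start"], seg["end"]
    length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    n = max(1, int(length / spacing))
    for i in range(n + 1):
        t = i / n
        yield (x0 + t * (x1 - x0), y0 + t * (y1 - y0), seg["velocity"])

# Write the waypoint file from start point to end point.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["x", "y", "velocity"])
for seg in segments:
    for wp in waypoints(seg):
        writer.writerow(wp)
waypoint_file = buf.getvalue()
```

The robot-side consumer would then traverse the listed positions in order at the per-segment velocities.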
An Interactive Augmented Reality Implementation of Hijaiyah Alphabet for Children Education
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Akbar, F.; Syahputra, M. F.; Budiman, M. A.; Hizriadi, A.
2018-03-01
The Hijaiyah alphabet comprises the letters used in the Qur'an. An attractive and exciting learning process for the Hijaiyah alphabet is necessary for children. One alternative for creating such a learning process is to develop a mobile application using augmented reality technology. Augmented reality is a technology that combines two-dimensional or three-dimensional virtual objects with the real three-dimensional environment and projects them in real time. The application aims to foster children's interest in learning the Hijaiyah alphabet. It uses a smartphone and a marker as the medium. It was built using Unity and an augmented reality library, namely Vuforia, with Blender as the 3D object modeling software. The output of this research is a learning application for Hijaiyah letters using augmented reality. It is used as follows: first, place a marker that has been registered and printed; second, the smartphone camera tracks the marker. If the marker is invalid, the user repeats the tracking process. If the marker is valid and identified, the objects of the Hijaiyah alphabet are projected onto it in three-dimensional form. Lastly, the user can learn and understand the shape and pronunciation of the Hijaiyah alphabet by touching the virtual buttons on the marker.
The Virtual Environment for Rapid Prototyping of the Intelligent Environment.
Francillette, Yannick; Boucher, Eric; Bouzouane, Abdenour; Gaboury, Sébastien
2017-11-07
Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants' behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs.
Altering User Movement Behaviour in Virtual Environments.
Simeone, Adalberto L; Mavridou, Ifigeneia; Powell, Wendy
2017-04-01
In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first set focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects with those paired to tangible objects, for example, barring an area with walls or obstacles. We designed a study in which participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performance to determine whether their trajectories deviated significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the most impact on trajectories, with a mean deviation from the shortest route of 60 cm against the 37 cm of environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.
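The trajectory-deviation measure in this kind of analysis can be sketched as the difference between the length of the walked path and the length of the shortest route through the waypoints. The coordinates below are invented for illustration, not taken from the study.

```python
import math

def path_length(points):
    """Total length of a polyline given as (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Hypothetical data: straight route through three waypoints versus
# the trajectory a participant actually walked around a virtual obstacle.
shortest = [(0, 0), (2, 0), (2, 2)]
walked = [(0, 0), (1, -0.8), (2, 0), (2.8, 1), (2, 2)]

# Positive deviation means the participant detoured from the shortest route.
deviation = path_length(walked) - path_length(shortest)
```

Averaging this quantity over participants and environments gives the per-condition mean deviations (60 cm vs. 37 cm) reported above.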
The perception of spatial layout in real and virtual worlds.
Arthur, E J; Hancock, P A; Chrysler, S T
1997-01-01
As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VE) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of the spaces in which they are immersed, nor how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects they had previously experienced under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, and in a static monocular view of the real world. The first two allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subject variable, with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant gender-by-viewing-condition interaction in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from that resulting from interaction with real objects, at least within the constraints of the present procedure.
On consistent inter-view synthesis for autostereoscopic displays
NASA Astrophysics Data System (ADS)
Tran, Lam C.; Bal, Can; Pal, Christopher J.; Nguyen, Truong Q.
2012-03-01
In this paper we present a novel stereo view synthesis algorithm that is highly accurate with respect to inter-view consistency, thus enabling stereo content to be viewed on autostereoscopic displays. The algorithm finds identical occluded regions within each virtual view and aligns them to extract a surrounding background layer. The background layer for each occluded region is then used with an exemplar-based inpainting method to synthesize all virtual views simultaneously. Our algorithm requires the alignment and extraction of background layers for each occluded region; however, these two steps are done efficiently, with lower computational complexity than previous approaches using exemplar-based inpainting algorithms. Thus, it is more efficient than existing algorithms that synthesize one virtual view at a time. This paper also describes a simplified GPU-accelerated version of the approach and its implementation in CUDA. Our CUDA method has sublinear complexity in the number of views that need to be generated, which makes it especially useful for generating content for autostereoscopic displays that require many views to operate. An objective of our work is to allow the user to change depth and viewing perspective on the fly. Therefore, to further accelerate the CUDA variant of our approach, we present a modified version of our method that warps the background pixels from reference views to a middle view to recover background pixels. We then use an exemplar-based inpainting method to fill in the occluded regions. We use warping of the foreground from the reference images and of the background from the filled regions to synthesize new virtual views on the fly. Our experimental results indicate that the simplified CUDA implementation decreases running time by orders of magnitude with negligible loss in quality.
Simulating 3D deformation using connected polygons
NASA Astrophysics Data System (ADS)
Tarigan, J. T.; Jaya, I.; Hardi, S. M.; Zamzami, E. M.
2018-03-01
In modern 3D applications, interaction between the user and the virtual world is an important factor in increasing realism. This interaction can be visualized in many forms; one of them is object deformation. There are many ways to simulate object deformation in a virtual 3D world, each with a different level of realism and performance. Our objective is to present a new method to simulate object deformation by using graph-connected polygons. In this solution, each object contains multiple levels of polygons at different levels of volume. The proposed solution focuses on performance while maintaining an acceptable level of realism. In this paper, we present the design and implementation of our solution and show that it is usable in performance-sensitive 3D applications such as games and virtual reality.
Virtual Education: Guidelines for Using Games Technology
ERIC Educational Resources Information Center
Schofield, Damian
2014-01-01
Advanced three-dimensional virtual environment technology, similar to that used by the film and computer games industry, can allow educational developers to rapidly create realistic online virtual environments. This technology has been used to generate a range of interactive Virtual Reality (VR) learning environments across a spectrum of…
NASA Technical Reports Server (NTRS)
Trolinger, James D.; Lal, Ravindra B.; Rangel, Roger; Witherow, William; Rogers, Jan
2001-01-01
The IML-1 Spaceflight produced over 1000 holograms of a well-defined particle field in the low-g Spacelab environment, each containing as much as 1000 megabytes of information. This project took advantage of these data and the concept of holographic "virtual" spaceflight to advance the understanding of convection in the space shuttle environment, g-jitter effects on crystal growth, and complex transport phenomena in low Reynolds number flows. The first objective of the proposed work was to advance the understanding of microgravity effects on crystal growth. This objective was achieved through the use of existing holographic data recorded during the IML-1 Spaceflight. The second objective was to design a spaceflight experiment that exploits the "virtual space chamber" concept, in which holograms of space chambers can provide virtual access to space. This led to a flight definition project, now underway under a separate contract, known as SHIVA, Spaceflight Holography Investigation in a Virtual Apparatus.
Culbertson, Heather; Kuchenbecker, Katherine J
2017-01-01
Interacting with physical objects through a tool elicits tactile and kinesthetic sensations that comprise your haptic impression of the object. These cues, however, are largely missing from interactions with virtual objects, yielding an unrealistic user experience. This article evaluates the realism of virtual surfaces rendered using haptic models constructed from data recorded during interactions with real surfaces. The models include three components: surface friction, tapping transients, and texture vibrations. We render the virtual surfaces on a SensAble Phantom Omni haptic interface augmented with a Tactile Labs Haptuator for vibration output. We conducted a human-subject study to assess the realism of these virtual surfaces and the importance of the three model components. Following a perceptual discrepancy paradigm, subjects compared each of 15 real surfaces to a full rendering of the same surface plus versions missing each model component. The realism improvement achieved by including friction, tapping, or texture in the rendering was found to directly relate to the intensity of the surface's property in that domain (slipperiness, hardness, or roughness). A subsequent analysis of forces and vibrations measured during interactions with virtual surfaces indicated that the Omni's inherent mechanical properties corrupted the user's haptic experience, decreasing realism of the virtual surface.
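A toy sketch of a three-component surface model of the kind described above, summing friction, a tapping transient, and a texture vibration. All coefficients, frequencies, and functional forms here are invented for illustration and are not the recorded-data models of the paper.

```python
import math

def haptic_signal(t, normal_force, velocity):
    """Toy composition of three surface-model components:
    Coulomb-like friction opposing motion, a decaying tap
    transient at contact (t = 0), and a speed-dependent
    texture vibration. All parameters are illustrative.
    """
    mu = 0.3  # hypothetical friction coefficient
    friction = -mu * normal_force * math.copysign(1.0, velocity)
    # Exponentially decaying oscillation models the impact transient.
    tap = 2.0 * math.exp(-t / 0.01) * math.cos(2 * math.pi * 300 * t)
    # Texture vibration grows with scanning speed.
    texture = 0.1 * abs(velocity) * math.sin(2 * math.pi * 80 * t)
    return friction + tap + texture

sample = haptic_signal(t=0.005, normal_force=1.0, velocity=0.05)
```

Dropping any one term from the sum corresponds to the "missing component" renderings the study compared against the full model.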
ERIC Educational Resources Information Center
Trespalacios, Jesus
2010-01-01
This study investigated the effects of two generative learning activities on students' academic achievement of the part-whole meaning of rational numbers while using virtual manipulatives. Third-grade students were divided randomly in two groups to evaluate the effects of two generative learning activities: answering-questions and…
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
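The combination of bottom-up saliency with top-down behavioral context can be caricatured as a weighted score per candidate object; the object names, features, weights, and scores below are invented for illustration and do not reproduce the paper's actual model.

```python
# Candidate objects with a bottom-up saliency score and two
# top-down cues inferred from user behaviour (alignment of the
# object with the navigation heading, and recent dwell time).
objects = {
    "lamp":  {"saliency": 0.8, "heading_alignment": 0.1, "dwell": 0.0},
    "door":  {"saliency": 0.4, "heading_alignment": 0.9, "dwell": 0.7},
    "chair": {"saliency": 0.5, "heading_alignment": 0.3, "dwell": 0.2},
}

def attention_score(o, w_bottom_up=0.4, w_top_down=0.6):
    """Blend stimulus-driven saliency with goal-directed context."""
    top_down = 0.5 * o["heading_alignment"] + 0.5 * o["dwell"]
    return w_bottom_up * o["saliency"] + w_top_down * top_down

# The most plausibly attended object is the one with the highest score.
attended = max(objects, key=lambda name: attention_score(objects[name]))
```

Note how the top-down term lets a behaviorally relevant object (the door) win over a merely conspicuous one (the lamp), which is the effect credited above for the improved prediction accuracy.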
Rodriguez-Andres, David; Mendez-Lopez, Magdalena; Juan, M-Carmen; Perez-Hernandez, Elena
2018-01-01
The use of virtual reality-based tasks for studying memory has increased considerably. Most of the studies that have looked at child population factors that influence performance on such tasks have been focused on cognitive variables. However, little attention has been paid to the impact of non-cognitive skills. In the present paper, we tested 52 typically-developing children aged 5-12 years in a virtual object-location task. The task assessed their spatial short-term memory for the location of three objects in a virtual city. The virtual task environment was presented using a 3D application consisting of a 120″ stereoscopic screen and a gamepad interface. Measures of learning and displacement indicators in the virtual environment, 3D perception, satisfaction, and usability were obtained. We assessed the children's videogame experience, their visuospatial span, their ability to build blocks, and emotional and behavioral outcomes. The results indicate that learning improved with age. Significant effects on the speed of navigation were found favoring boys and those more experienced with videogames. Visuospatial skills correlated mainly with ability to recall object positions, but the correlation was weak. Longer paths were related with higher scores of withdrawal behavior, attention problems, and a lower visuospatial span. Aggressiveness and experience with the device used for interaction were related with faster navigation. However, the correlations indicated only weak associations among these variables.
ERIC Educational Resources Information Center
Prosser, Dominic; Eddisford, Susan
2004-01-01
This paper examines children's and adults' attitudes to virtual representations of museum objects, drawing on empirical research data gained from two web-based digital learning environments. The paper explores the characteristics of on-line learning activities that move children from a sense of wonder into meaningful engagement with objects and…
A class-hierarchical, object-oriented approach to virtual memory management
NASA Technical Reports Server (NTRS)
Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.
1989-01-01
The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.
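The memory-object idea above — abstract classes with encapsulated data and read/write methods, specialized down the hierarchy — can be sketched as follows. Class names and the paging stub are illustrative, and Choices itself is written in C++; this is only a structural analogy.

```python
from abc import ABC, abstractmethod

class MemoryObject(ABC):
    """Abstract storage unit: encapsulated data plus read/write
    methods, mirroring the memory-object classes described above
    (names are illustrative)."""

    @abstractmethod
    def read(self, offset, length): ...

    @abstractmethod
    def write(self, offset, data): ...

class PhysicalMemory(MemoryObject):
    """Concrete subclass at the bottom of the hierarchy."""
    def __init__(self, size):
        self._data = bytearray(size)

    def read(self, offset, length):
        return bytes(self._data[offset:offset + length])

    def write(self, offset, data):
        self._data[offset:offset + len(data)] = data

class PagedMemory(MemoryObject):
    """Specialization layered above a backing store: one level of the
    memory hierarchy, trading response time for capacity."""
    def __init__(self, backing, page_size=4096):
        self.backing, self.page_size = backing, page_size

    def read(self, offset, length):
        # A real implementation would fault pages in here.
        return self.backing.read(offset, length)

    def write(self, offset, data):
        self.backing.write(offset, data)

mem = PagedMemory(PhysicalMemory(8192))
mem.write(0, b"choices")
```

Subclassing lets each level (paging, secondary storage, remote storage) present the same read/write interface while implementing a different point in the response-time/capacity trade-off.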
New approaches to virtual environment surgery
NASA Technical Reports Server (NTRS)
Ross, M. D.; Twombly, A.; Lee, A. W.; Cheng, R.; Senger, S.
1999-01-01
This research focused on two main problems: 1) low cost, high fidelity stereoscopic imaging of complex tissues and organs; and 2) virtual cutting of tissue. A further objective was to develop these images and virtual tissue cutting methods for use in a telemedicine project that would connect remote sites using the Next Generation Internet. For goal one we used a CT scan of a human heart, a desktop PC with an OpenGL graphics accelerator card, and LCD stereoscopic glasses. Use of multiresolution meshes ranging from approximately 1,000,000 to 20,000 polygons speeded interactive rendering rates enormously while retaining the general topography of the dataset. For goal two, we used a CT scan of an infant skull with premature closure of the right coronal suture, a Silicon Graphics Onyx workstation, a Fakespace Immersive WorkBench, and CrystalEyes LCD glasses. The high fidelity mesh of the skull was reduced from one million to 50,000 polygons. The cut path was automatically calculated as the shortest distance along the mesh between a small number of hand-selected vertices. The region outlined by the cut path was then separated from the skull and translated/rotated to assume a new position. The results indicate that widespread high fidelity imaging in virtual environments is possible using ordinary PC capabilities if appropriate mesh reduction methods are employed. The software cutting tool is applicable to the heart and other organs for surgery planning, for training surgeons in a virtual environment, and for telemedicine purposes.
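The cut-path computation described above — the shortest distance along the mesh between hand-selected vertices — is essentially a shortest-path search over the mesh's edge graph. A minimal sketch using Dijkstra's algorithm on a toy mesh follows; the actual tool may compute geodesics differently.

```python
import heapq
import math

def shortest_mesh_path(vertices, edges, start, goal):
    """Dijkstra over mesh edges, weighted by Euclidean edge length."""
    adj = {v: [] for v in range(len(vertices))}
    for a, b in edges:
        w = math.dist(vertices[a], vertices[b])
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Tiny mesh: a unit square with one diagonal edge.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cut_path = shortest_mesh_path(verts, edges, 1, 3)
```

Chaining such paths between a handful of hand-selected vertices yields a closed cut outline that can then be separated from the rest of the mesh.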
Risks and Uncertainties in Virtual Worlds: An Educators' Perspective
ERIC Educational Resources Information Center
Farahmand, Fariborz; Yadav, Aman; Spafford, Eugene H.
2013-01-01
Virtual worlds present tremendous advantages to cyberlearning. For example, in virtual worlds users can socialize with others, build objects and share them, customize parts of the world and hold lectures, do experiments, or share data. However, virtual worlds pose a wide range of security, privacy, and safety concerns. This may lead educators to…
Education about Hallucinations Using an Internet Virtual Reality System: A Qualitative Survey
ERIC Educational Resources Information Center
Yellowlees, Peter M.; Cook, James N.
2006-01-01
Objective: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. Method: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients…
Next Generation Landsat Products Delivered Using Virtual Globes and OGC Standard Services
NASA Astrophysics Data System (ADS)
Neiers, M.; Dwyer, J.; Neiers, S.
2008-12-01
The Landsat Data Continuity Mission (LDCM) is the next in the series of Landsat satellite missions and is tasked with the objective of delivering data acquired by the Operational Land Imager (OLI). The OLI instrument will provide data continuity to over 30 years of global multispectral data collected by the Landsat series of satellites. The U.S. Geological Survey Earth Resources Observation and Science (USGS EROS) Center has responsibility for the development and operation of the LDCM ground system. One of the mission objectives of the LDCM is to distribute OLI data products electronically over the Internet to the general public on a nondiscriminatory basis and at no cost. To ensure the user community and general public can easily access LDCM data from multiple clients, the User Portal Element (UPE) of the LDCM ground system will use OGC standards and services such as Keyhole Markup Language (KML), Web Map Service (WMS), Web Coverage Service (WCS), and Geographic encoding of Really Simple Syndication (GeoRSS) feeds for both access to and delivery of LDCM products. The USGS has developed and tested the capabilities of several successful UPE prototypes for delivery of Landsat metadata, full resolution browse, and orthorectified (L1T) products from clients such as Google Earth, Google Maps, ESRI ArcGIS Explorer, and Microsoft's Virtual Earth. Prototyping efforts included the following services: using virtual globes to search the historical Landsat archive by dynamic generation of KML; notification of and access to new Landsat acquisitions and L1T downloads from GeoRSS feeds; Google indexing of KML files containing links to full resolution browse and data downloads; WMS delivery of reduced resolution browse, full resolution browse, and cloud mask overlays; and custom data downloads using WCS clients. These various prototypes will be demonstrated and LDCM service implementation plans will be discussed during this session.
NASA Astrophysics Data System (ADS)
Han, Young-Min; Choi, Seung-Bok
2008-12-01
This paper presents the control performance of an electrorheological (ER) fluid-based haptic master device connected to a virtual slave environment that can be used for minimally invasive surgery (MIS). An already developed haptic joint featuring controllable ER fluid and a spherical joint mechanism is adopted for the master system. Medical forceps and an angular position measuring device are devised and integrated with the joint to establish the MIS master system. In order to embody a human organ in virtual space, a volumetric deformable object is used. The virtual object is then mathematically formulated by a shape-retaining chain-linked (S-chain) model. After evaluating the reflection force, computation time and compatibility with real-time control, the haptic architecture for MIS is established by incorporating the virtual slave with the master device so that the reflection force for the object of the virtual slave and the desired position for the master operator are transferred to each other. In order to achieve the desired force trajectories, a sliding mode controller is formulated and then experimentally realized. Tracking control performances for various force trajectories are evaluated and presented in the time domain.
Studies of the field-of-view resolution tradeoff in virtual-reality systems
NASA Technical Reports Server (NTRS)
Piantanida, Thomas P.; Boman, Duane; Larimer, James; Gille, Jennifer; Reed, Charles
1992-01-01
Most virtual-reality systems use LCD-based displays that achieve a large field-of-view at the expense of resolution. A typical display will consist of approximately 86,000 pixels uniformly distributed over an 80-degree by 60-degree image. Thus, each pixel subtends about 13 minutes of arc at the retina; about the same as the resolvable features of the 20/200 line of a Snellen Eye Chart. The low resolution of LCD-based systems limits task performance in some applications. We have examined target-detection performance in a low-resolution virtual world. Our synthesized three-dimensional virtual worlds consisted of target objects that could be positioned at a fixed distance from the viewer, but at random azimuth and constrained elevation. A virtual world could be bounded by chromatic walls or by wire-frame, or it could be unbounded. Viewers scanned these worlds and indicated by appropriate gestures when they had detected the target object. By manipulating the viewer's field size and the chromatic and luminance contrast of annuli surrounding the field-of-view, we were able to assess the effect of field size on the detection of virtual objects in low-resolution synthetic worlds.
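The pixel-subtense figure quoted above can be sanity-checked with simple arithmetic, assuming the roughly 86,000 pixels are spread uniformly over the 80-by-60-degree image:

```python
import math

pixels = 86_000
area_sq_deg = 80 * 60                            # field of view in square degrees
deg_per_pixel = math.sqrt(area_sq_deg / pixels)  # side length of one square pixel
arcmin_per_pixel = deg_per_pixel * 60
print(round(arcmin_per_pixel, 1))                # ≈ 14 arcmin, near the quoted ~13'
```

The small gap between the ~14 arcmin computed here and the quoted ~13 arcmin presumably reflects rounding or a non-square pixel grid in the actual display.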
A virtual pointer to support the adoption of professional vision in laparoscopic training.
Feng, Yuanyuan; McGowan, Hannah; Semsar, Azin; Zahiri, Hamid R; George, Ivan M; Turner, Timothy; Park, Adrian; Kleinsmith, Andrea; Mentis, Helena M
2018-05-23
To assess a virtual pointer in supporting surgical trainees' development of professional vision in laparoscopic surgery. We developed a virtual pointing and telestration system utilizing the Microsoft Kinect movement sensor as an overlay for any imaging system. Training with the application was compared to a standard condition, i.e., verbal instruction with unmediated gestures, in a laparoscopic training environment. Seven trainees performed four simulated laparoscopic tasks guided by an experienced surgeon as the trainer. Trainee performance was subjectively assessed by the trainee and trainer, and objectively measured by number of errors, time to task completion, and economy of movement. No significant differences in errors or time to task completion were found between the virtual pointer and standard conditions. Economy of movement in the non-dominant hand was significantly improved when using the virtual pointer ([Formula: see text]). The trainers perceived a significant improvement in trainee performance in the virtual pointer condition ([Formula: see text]), while the trainees perceived no difference. The trainers' perception of economy of movement was similar between the two conditions in the initial three runs and became significantly improved in the virtual pointer condition in the fourth run ([Formula: see text]). Results show that the virtual pointer system improves the trainer's perception of the trainee's performance, and this is reflected in the objective performance measures in the third and fourth training runs. The benefit of a virtual pointing and telestration system may be perceived by the trainers early on in training, but it is not evident in objective trainee performance until further mastery has been attained. In addition, the improvement in economy of motion specifically shows that the virtual pointer improves the adoption of professional vision: an improved ability to see and use the laparoscopic video results in more direct instrument movement.
Virtual industrial water usage and wastewater generation in the Middle East/North African region
NASA Astrophysics Data System (ADS)
Sakhel, S. R.; Geissen, S.-U.; Vogelpohl, A.
2013-01-01
This study deals with the quantification of volumes of water usage, wastewater generation, virtual water export, and wastewater generation from export for eight export-relevant industries present in the Middle East/North Africa (MENA). It shows that about 3400 million m3 of water is used per annum, while around 793 million m3 of wastewater is generated from products that are meant for domestic consumption and export. The difference between the volumes of water usage and wastewater generation is due to water evaporation or underground injection (for oil-well pressure maintenance). The wastewater volume generated from production represents a population equivalent of 15.5 million in terms of wastewater quantity and 30.4 million in terms of BOD. About 409 million m3 of virtual water flows from MENA to the EU27 (resulting from the export of eight commodities), which is equivalent to 12.1% of the water usage of those industries; Libya is the largest virtual water exporter (about 87 million m3). Crude oil and refined petroleum products represent about 89% of the total virtual water flow, fertilizers around 10%, and the remaining industries about 1%. The EU27 poses the greatest indirect pressure on the Kuwaiti hydrological system, where virtual water export represents about 96% of the actual renewable water resources in this country. Kuwaiti crude oil water use in relation to domestic water withdrawal is about 89%, which is the highest among MENA countries. Pollution of water bodies, in terms of BOD, due to production is very relevant for crude oil, slaughterhouses, refineries, olive oil, and tanneries, while pollution due to export to the EU27 is most relevant for the crude oil industry and olive oil mills.
Massetti, Thais; Fávero, Francis Meire; Menezes, Lilian Del Ciello de; Alvarez, Mayra Priscila Boscolo; Crocetta, Tânia Brusque; Guarnieri, Regiani; Nunes, Fátima L S; Monteiro, Carlos Bandeira de Mello; Silva, Talita Dias da
2018-04-01
To evaluate whether people with Duchenne muscular dystrophy (DMD) practicing a task in a virtual environment could improve performance given a similar task in a real environment, as well as distinguishing whether there is transference between performing the practice in virtual environment and then a real environment and vice versa. Twenty-two people with DMD were evaluated and divided into two groups. The goal was to reach out and touch a red cube. Group A began with the real task and had to touch a real object, and Group B began with the virtual task and had to reach a virtual object using the Kinect system. ANOVA showed that all participants decreased the movement time from the first (M = 973 ms) to the last block of acquisition (M = 783 ms) in both virtual and real tasks and motor learning could be inferred by the short-term retention and transfer task (with increasing distance of the target). However, the evaluation of task performance demonstrated that the virtual task provided an inferior performance when compared to the real task in all phases of the study, and there was no effect for sequence. Both virtual and real tasks promoted improvement of performance in the acquisition phase, short-term retention, and transfer. However, there was no transference of learning between environments. In conclusion, it is recommended that the use of virtual environments for individuals with DMD needs to be considered carefully.
Transforming Clinical Imaging Data for Virtual Reality Learning Objects
ERIC Educational Resources Information Center
Trelease, Robert B.; Rosset, Antoine
2008-01-01
Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
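The occlusion-handling step above, redrawing the tracked object's pixels over the unprocessed augmented image, amounts to mask-based compositing. A minimal sketch with invented 2x3 "frames" follows; the real pipeline would operate on camera images and a tracked contour mask:

```python
def composite(augmented, original, mask):
    """Redraw pixels of the tracked (occluding) real object on top of
    the augmented frame: wherever mask is 1, take the original camera
    pixel so the real object correctly occludes the virtual overlay."""
    h, w = len(mask), len(mask[0])
    return [[original[y][x] if mask[y][x] else augmented[y][x]
             for x in range(w)] for y in range(h)]

# Toy 2x3 'frames': V = virtual overlay pixel, R = real-object pixel,
# B = background. The mask marks where the tracked object was found.
augmented = [["V", "V", "B"],
             ["V", "V", "B"]]
original  = [["R", "R", "B"],
             ["B", "B", "B"]]
mask      = [[1, 1, 0],
             [0, 0, 0]]
print(composite(augmented, original, mask))
# → [['R', 'R', 'B'], ['V', 'V', 'B']]
```

The paper's smoothing step would additionally feather the mask boundary; that refinement is omitted here.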
Hierarchical emotion calculation model for virtual human modelling - biomed 2010.
Zhao, Yue; Wright, David
2010-01-01
This paper introduces a new emotion generation method for virtual human modelling. The method includes a novel hierarchical emotion structure, a group of emotion calculation equations, and a simple heuristic decision-making mechanism, which enables virtual humans to perform emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can use the information in virtual memory together with the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. These emotion states are important internal references for virtual humans to adopt appropriate behaviours and also key cues for their decision making. A simple heuristics theory is introduced and integrated into the decision-making process in order to make virtual humans' decision making more like that of a real human. A data interface that connects the emotion calculation and the decision-making structure has also been designed and simulated to test the method in the Virtools environment.
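The abstract does not reproduce the paper's actual equations. As a hedged illustration of the general idea of a numerical emotion state driven by stimuli and decay (all constants and the update rule itself are invented, not the authors' model), one might write:

```python
def update_emotion(intensity, stimulus, decay=0.1, gain=0.5):
    """One step of a hypothetical emotion-intensity update: the current
    intensity decays toward neutral while internal/external stimuli
    push it up; the result is clamped to the range [0, 1]."""
    new = intensity * (1.0 - decay) + gain * stimulus
    return max(0.0, min(1.0, new))

# A single 'joy' node in a hierarchical emotion structure, driven by
# two stimulus events and then left to decay (illustrative only).
joy = 0.0
for stimulus in [1.0, 1.0, 0.0, 0.0]:
    joy = update_emotion(joy, stimulus)
print(round(joy, 2))
```

A virtual human's decision-making layer could then threshold such state values when selecting behaviours.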
The Intersection of Virtual Organizations and the Library: A Case Study
ERIC Educational Resources Information Center
Carlson, Jake; Yatcilla, Jane Kinkus
2010-01-01
The proliferation of virtual organizations is changing the nature and practice of research. These changes present a challenge to Libraries, as their traditional roles and services do not translate well to virtual organizations. However, virtual organizations also offer opportunities for librarians to participate in shaping the next generation of…
Virtual Virtuosos: A Case Study in Learning Music in Virtual Learning Environments in Spain
ERIC Educational Resources Information Center
Alberich-Artal, Enric; Sangra, Albert
2012-01-01
In recent years, the development of Information and Communication Technologies (ICT) has contributed to the generation of a number of interesting initiatives in the field of music education and training in virtual learning environments. However, music education initiatives employing virtual learning environments have replicated and perpetuated the…
National randomized controlled trial of virtual house calls for Parkinson disease
Beck, Christopher A.; Beran, Denise B.; Biglan, Kevin M.; Boyd, Cynthia M.; Schmidt, Peter N.; Simone, Richard; Willis, Allison W.; Galifianakis, Nicholas B.; Katz, Maya; Tanner, Caroline M.; Dodenhoff, Kristen; Aldred, Jason; Carter, Julie; Fraser, Andrew; Jimenez-Shahed, Joohi; Hunter, Christine; Spindler, Meredith; Reichwein, Suzanne; Mari, Zoltan; Dunlop, Becky; Morgan, John C.; McLane, Dedi; Hickey, Patrick; Gauger, Lisa; Richard, Irene Hegeman; Mejia, Nicte I.; Bwala, Grace; Nance, Martha; Shih, Ludy C.; Singer, Carlos; Vargas-Parra, Silvia; Zadikoff, Cindy; Okon, Natalia; Feigin, Andrew; Ayan, Jean; Vaughan, Christina; Pahwa, Rajesh; Dhall, Rohit; Hassan, Anhar; DeMello, Steven; Riggare, Sara S.; Wicks, Paul; Achey, Meredith A.; Elson, Molly J.; Goldenthal, Steven; Keenan, H. Tait; Korn, Ryan; Schwarz, Heidi; Sharma, Saloni; Stevenson, E. Anna; Zhu, William
2017-01-01
Objective: To determine whether providing remote neurologic care into the homes of people with Parkinson disease (PD) is feasible, beneficial, and valuable. Methods: In a 1-year randomized controlled trial, we compared usual care to usual care supplemented by 4 virtual visits via video conferencing from a remote specialist into patients' homes. Primary outcome measures were feasibility, as measured by the proportion who completed at least one virtual visit and the proportion of virtual visits completed on time; and efficacy, as measured by the change in the Parkinson's Disease Questionnaire–39, a quality of life scale. Secondary outcomes included quality of care, caregiver burden, and time and travel savings. Results: A total of 927 individuals indicated interest, 210 were enrolled, and 195 were randomized. Participants had recently seen a specialist (73%) and were largely college-educated (73%) and white (96%). Ninety-five (98% of the intervention group) completed at least one virtual visit, and 91% of 388 virtual visits were completed. Quality of life did not improve in those receiving virtual house calls (0.3 points worse on a 100-point scale; 95% confidence interval [CI] −2.0 to 2.7 points; p = 0.78) nor did quality of care or caregiver burden. Each virtual house call saved patients a median of 88 minutes (95% CI 70–120; p < 0.0001) and 38 miles per visit (95% CI 36–56; p < 0.0001). Conclusions: Providing remote neurologic care directly into the homes of people with PD was feasible and was neither more nor less efficacious than usual in-person care. Virtual house calls generated great interest and provided substantial convenience. ClinicalTrials.gov identifier: NCT02038959. Classification of evidence: This study provides Class III evidence that for patients with PD, virtual house calls from a neurologist are feasible and do not significantly change quality of life compared to in-person visits. 
The study is rated Class III because it was not possible to mask patients to visit type. PMID:28814455
A review of the use of simulation in dental education.
Perry, Suzanne; Bridges, Susan Margaret; Burrow, Michael Francis
2015-02-01
In line with the advances in technology and communication, medical simulations are being developed to support the acquisition of requisite psychomotor skills before real-life clinical applications. This review article aimed to give a general overview of simulation in a cognate field, clinical dental education. Simulations in dentistry are not a new phenomenon; however, recent developments in virtual-reality technology using computer-generated medical simulations of 3-dimensional images or environments are providing more optimal practice conditions to smooth the transition from the traditional model-based simulation laboratory to the clinic. Evidence of the positive aspects of virtual reality includes increased effectiveness in comparison with traditional simulation teaching techniques, more efficient learning, objective and reproducible feedback, unlimited training hours, and enhanced cost-effectiveness for teaching establishments. Reported negative aspects include initial setup costs, faculty training, and the lack of a variety of content and current educational simulation programs.
Quadrado, Virgínia Helena; Silva, Talita Dias da; Favero, Francis Meire; Tonks, James; Massetti, Thais; Monteiro, Carlos Bandeira de Mello
2017-11-10
To examine whether performance improvements in the virtual environment generalize to the natural environment, we studied 64 individuals, 32 of whom were individuals with DMD and 32 typically developing. The groups practiced two coincidence timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key on the computer. In the more abstract task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment using a webcam. For individuals with DMD, conducting a coincidence timing task in a virtual environment facilitated transfer to the real environment. However, we emphasize that a task practiced in a virtual environment should have a higher level of difficulty than a task practiced in a real environment. IMPLICATIONS FOR REHABILITATION Virtual environments can be used to promote improved performance in 'real-world' environments. Virtual environments offer the opportunity to create paradigms similar to 'real-life' tasks; however, task complexity and difficulty levels can be manipulated, graded, and enhanced to increase the likelihood of success in transfer of learning and performance. Individuals with DMD, in particular, showed immediate performance benefits after using virtual reality.
New virtual laboratories presenting advanced motion control concepts
NASA Astrophysics Data System (ADS)
Goubej, Martin; Krejčí, Alois; Reitinger, Jan
2015-11-01
The paper deals with the development of a software framework for the rapid generation of remote virtual laboratories. A client-server architecture is chosen in order to employ a real-time simulation core running on a dedicated server. An ordinary web browser is used as the final renderer to achieve a hardware-independent solution that can run on different target platforms including laptops, tablets and mobile phones. The provided toolchain allows automatic generation of the virtual laboratory source code from a configuration file created in the open-source Inkscape graphic editor. Three virtual laboratories presenting advanced motion control algorithms have been developed, showing the applicability of the proposed approach.
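The configuration-file idea can be illustrated in miniature: read element ids out of an Inkscape-produced SVG and emit a stub binding for each. The SVG snippet and the `sim.bind` target are invented for illustration; the real toolchain's output format is not described in the abstract:

```python
import xml.etree.ElementTree as ET

# A hypothetical hand-drawn panel: two widgets identified by SVG ids.
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect id="gauge_speed" width="40" height="10"/>
  <circle id="knob_gain" r="5"/>
</svg>"""

def generate_bindings(svg_text):
    """Collect every element id in the SVG and emit one stub
    binding line per widget for the simulation back end."""
    root = ET.fromstring(svg_text)
    ids = [el.get("id") for el in root.iter() if el.get("id")]
    return ["sim.bind('%s')" % i for i in ids]

print(generate_bindings(SVG))
# → ["sim.bind('gauge_speed')", "sim.bind('knob_gain')"]
```

Generating client code from the drawing in this way keeps the visual design step entirely inside the graphics editor.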
Virtual reality for intelligent and interactive operating, training, and visualization systems
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Schluse, Michael
2000-10-01
Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds, which the user not only can look at but can also interact with actively using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the Virtual Reality system and, by means of new and intelligent control software, projected onto the automation components, such as robots, which afterwards perform the necessary actions in reality to execute the user's task. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited as universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The Virtual Reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL and GETEX, the realization of a comprehensive development tool for the International Space Station and, last but not least, the realistic simulation of fire extinguishing, forest machines and excavators, which will be presented in the final paper in addition to the key ideas of this Virtual Reality system.
Global Village as Virtual Community (On Writing, Thinking, and Teacher Education).
ERIC Educational Resources Information Center
Polin, Linda
1993-01-01
Describes virtual communities known as Multi-User Simulated Environment (MUSE) or Multi-User Object Oriented environment (MOO), text-based computer "communities" whose inhabitants are a combination of the real people and constructed objects that people agree to treat as real. Describes their uses in the classroom. (SR)
Virtual Visits and Patient-Centered Care: Results of a Patient Survey and Observational Study
2017-01-01
Background Virtual visits are clinical interactions in health care that do not involve the patient and provider being in the same room at the same time. The use of virtual visits is growing rapidly in health care. Some health systems are integrating virtual visits into primary care as a complement to existing modes of care, in part reflecting a growing focus on patient-centered care. There is, however, limited empirical evidence about how patients view this new form of care and how it affects overall health system use. Objective Descriptive objectives were to assess users and providers of virtual visits, including the reasons patients give for use. The analytic objective was to assess empirically the influence of virtual visits on overall primary care use and costs, including whether virtual care is with a known or a new primary care physician. Methods The study took place in British Columbia, Canada, where virtual visits have been publicly funded since October 2012. A survey of patients who used virtual visits and an observational study of users and nonusers of virtual visits were conducted. Two comparison groups were used: (1) all other BC residents, and (2) a group matched 3:1 to the cohort. The first virtual visit was used as the intervention, and the main outcome measures were total primary care visits and costs. Results During 2013-2014, there were 7286 virtual visit encounters, involving 5441 patients and 144 physicians. Younger patients and physicians were more likely to use and provide virtual visits (P<.001), with no differences by sex. Older and sicker patients were more likely to see a known provider, whereas the lowest socioeconomic groups were the least likely (P<.001).
The survey of 399 virtual visit patients indicated that virtual visits were liked by patients, with 372 (93.2%) of respondents saying their virtual visit was of high quality and 364 (91.2%) reporting their virtual visit was “very” or “somewhat” helpful to resolve their health issue. Segmented regression analysis and the corresponding regression parameter estimates suggested virtual visits appear to have the potential to decrease primary care costs by approximately Can $4 per quarter (Can –$3.79, P=.12), but that benefit is most associated with seeing a known provider (Can –$8.68, P<.001). Conclusions Virtual visits may be one means of making the health system more patient-centered, but careful attention needs to be paid to how these services are integrated into existing health care delivery systems. PMID:28550006
Virtual Factory Framework for Supporting Production Planning and Control.
Kibira, Deogratias; Shao, Guodong
2017-01-01
Developing optimal production plans for smart manufacturing systems is challenging because shop floor events change dynamically. A virtual factory incorporating engineering tools, simulation, and optimization generates and communicates performance data to guide wise decision making for different control levels. This paper describes such a platform specifically for production planning. We also discuss verification and validation of the constituent models. A case study of a machine shop is used to demonstrate data generation for production planning in a virtual factory.
E-Learning Application of Tarsier with Virtual Reality using Android Platform
NASA Astrophysics Data System (ADS)
Oroh, H. N.; Munir, R.; Paseru, D.
2017-01-01
The Spectral Tarsier is a primitive primate found only in the province of North Sulawesi. To study this primate, an e-learning application with Augmented Reality technology has been used, in which a marker held in front of the computer's camera lets the user interact with a three-dimensional Tarsier object. However, that application only shows the tarsier object in three dimensions without its habitat, and it requires substantial resources because it runs on a Personal Computer. Virtual Reality can display the same three-dimensional objects while making the user feel immersed in a virtual world, and on the Android platform it requires fewer resources. We therefore applied Virtual Reality technology on the Android platform so that users can view and interact not only with the tarsier but also with its habitat. The results of this research indicate that users can learn about the Tarsier and its habitat well. Thus, the use of Virtual Reality technology in the e-learning application of tarsiers can help people to see, know, and learn about the Spectral Tarsier.
Rodriguez-Andres, David; Mendez-Lopez, Magdalena; Juan, M.-Carmen; Perez-Hernandez, Elena
2018-01-01
The use of virtual reality-based tasks for studying memory has increased considerably. Most of the studies that have looked at child population factors that influence performance on such tasks have been focused on cognitive variables. However, little attention has been paid to the impact of non-cognitive skills. In the present paper, we tested 52 typically-developing children aged 5–12 years in a virtual object-location task. The task assessed their spatial short-term memory for the location of three objects in a virtual city. The virtual task environment was presented using a 3D application consisting of a 120″ stereoscopic screen and a gamepad interface. Measures of learning and displacement indicators in the virtual environment, 3D perception, satisfaction, and usability were obtained. We assessed the children’s videogame experience, their visuospatial span, their ability to build blocks, and emotional and behavioral outcomes. The results indicate that learning improved with age. Significant effects on the speed of navigation were found favoring boys and those more experienced with videogames. Visuospatial skills correlated mainly with ability to recall object positions, but the correlation was weak. Longer paths were related with higher scores of withdrawal behavior, attention problems, and a lower visuospatial span. Aggressiveness and experience with the device used for interaction were related with faster navigation. However, the correlations indicated only weak associations among these variables. PMID:29674988
Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.
Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O
2015-10-01
Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of the inclusion and exclusion criteria in historical protocols. Both methods, discrete re-sampling and multivariate distributions, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
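The two covariate-generation strategies compared above can be sketched as follows. The covariate values, means, and covariance matrix are invented placeholders; a real analysis would use values fitted to the clinical data set:

```python
import random

def resample_patients(pool, n, rng):
    """Discrete re-sampling: draw virtual patients directly from an
    observed covariate pool (bound to existing combinations)."""
    return [rng.choice(pool) for _ in range(n)]

def mvn_patients(mean, cov, n, rng):
    """Multivariate-normal sampling for two correlated covariates
    (e.g. age and baseline FEV1) via a hand-rolled 2x2 Cholesky
    factorization; values can fall outside the observed pool."""
    a = cov[0][0] ** 0.5
    b = cov[0][1] / a
    c = (cov[1][1] - b * b) ** 0.5
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((mean[0] + a * z1, mean[1] + b * z1 + c * z2))
    return out

rng = random.Random(0)
pool = [(55, 1.9), (62, 1.4), (70, 1.1)]   # (age, FEV1 in L) - invented
virtual_a = resample_patients(pool, 5, rng)
virtual_b = mvn_patients((62, 1.5), [[64, -2.0], [-2.0, 0.09]], 5, rng)
print(virtual_a[0] in pool)   # resampled patients always come from the pool
```

The negative off-diagonal covariance term encodes the kind of correlation (here, older age with lower FEV1) that the multivariate method preserves while still producing unseen covariate combinations.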
Digital fabrication of multi-material biomedical objects.
Cheung, H H; Choi, S H
2009-12-01
This paper describes a multi-material virtual prototyping (MMVP) system for modelling and digital fabrication of discrete and functionally graded multi-material objects for biomedical applications. The MMVP system consists of a DMMVP module, an FGMVP module and a virtual reality (VR) simulation module. The DMMVP module is used to model discrete multi-material (DMM) objects, while the FGMVP module is for functionally graded multi-material (FGM) objects. The VR simulation module integrates these two modules to perform digital fabrication of multi-material objects, which can be subsequently visualized and analysed in a virtual environment to optimize MMLM processes for fabrication of product prototypes. Using the MMVP system, two biomedical objects, including a DMM human spine and an FGM intervertebral disc spacer are modelled and digitally fabricated for visualization and analysis in a VR environment. These studies show that the MMVP system is a practical tool for modelling, visualization, and subsequent fabrication of biomedical objects of discrete and functionally graded multi-materials for biomedical applications. The system may be adapted to control MMLM machines with appropriate hardware for physical fabrication of biomedical objects.
Intraoperative virtual brain counseling
NASA Astrophysics Data System (ADS)
Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando
1997-06-01
Our objective is to offer online real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further: it can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. To fulfill this objective, tracking techniques are employed intra-operatively. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on a position of interest, line segment of interest, and volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as HHVS, and algorithms such as spatial querying, normalizing, and warping, are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for optimization of treatment plans and online intelligent surgical guidance.
Creating photorealistic virtual model with polarization-based vision system
NASA Astrophysics Data System (ADS)
Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi
2005-08-01
Recently, 3D models have come into use in many fields, such as education, medical services, entertainment, art, and digital archiving, owing to advances in computational power, and the demand for photorealistic virtual models is increasing. In the computer vision field, a number of techniques have been developed for creating a virtual model by observing the real object. In this paper, we propose a method for creating a photorealistic virtual model using a laser range sensor and a polarization-based image capture system. We capture range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner, allowing us to build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. To separate the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used to synthesize object images with realistic shading effects under arbitrary illumination conditions.
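The polarization-based separation step can be illustrated with a minimal sketch. It assumes the common model in which the intensity observed through a linear polarizer varies as I(θ) = I_min + (I_max − I_min)·cos²(θ − φ): the diffuse component (unpolarized, halved by the filter) is roughly 2·I_min, and the specular component is the amplitude of the variation. This is a simplification of the paper's robust estimation, not its actual algorithm:

```python
def separate_reflection(samples):
    """Estimate (diffuse, specular) reflection components per pixel from
    intensities observed through a rotating linear polarizer.

    `samples` is a list of (theta_radians, intensity) pairs; with dense
    angular sampling, the minimum and maximum observed intensities
    approximate I_min and I_max of the cosine-squared model, so the
    angles themselves are not needed here.
    """
    intensities = [i for _, i in samples]
    i_min, i_max = min(intensities), max(intensities)
    diffuse = 2.0 * i_min          # unpolarized light is halved by the filter
    specular = i_max - i_min       # polarized part rides on top of I_min
    return diffuse, specular
```

In practice one would fit the cosine model rather than take raw extrema, which is what makes the published estimation robust to noise.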
Virtual Schools in the U.S. 2014: Politics, Performance, Policy, and Research Evidence
ERIC Educational Resources Information Center
Huerta, Luis; Rice, Jennifer King; Shafer, Sheryl Rankin; Barbour, Michael K.; Miron, Gary; Gulosino, Charisse; Horvitz, Brian
2014-01-01
This report is the second of a series of annual reports by the National Education Policy Center (NEPC) on virtual education in the U.S. The NEPC reports contribute to the existing evidence and discourse on virtual education by providing an objective analysis of the evolution and performance of full-time, publicly funded K-12 virtual schools. This…
Introducing and Evaluating the Behavior of Non-Verbal Features in the Virtual Learning
ERIC Educational Resources Information Center
Dharmawansa, Asanka D.; Fukumura, Yoshimi; Marasinghe, Ashu; Madhuwanthi, R. A. M.
2015-01-01
The objective of this research is to introduce the behavior of non-verbal features of e-Learners in the virtual learning environment to establish a fair representation of the real user by an avatar who represents the e-Learner in the virtual environment and to distinguish the deportment of the non-verbal features during the virtual learning…
ERIC Educational Resources Information Center
Hwang, Wu-Yuin; Su, Jia-Han; Huang, Yueh-Min; Dong, Jian-Jie
2009-01-01
In this paper, the development of an innovative Virtual Manipulatives and Whiteboard (VMW) system is described. The VMW system allowed users to manipulate virtual objects in 3D space and find clues to solve geometry problems. To assist with multi-representation transformation, translucent multimedia whiteboards were used to provide a virtual 3D…
Creating 3D models of historical buildings using geospatial data
NASA Astrophysics Data System (ADS)
Alionescu, Adrian; Bǎlǎ, Alina Corina; Brebu, Floarea Maria; Moscovici, Anca-Maria
2017-07-01
Recently, much interest has been shown in understanding real-world objects by acquiring their 3D images using laser scanning technology and panoramic images. A realistic impression of geometric 3D data can be generated by draping real colour textures simultaneously captured by a colour camera. In this context, a new concept of geospatial data acquisition, based on panoramic images, has rapidly revolutionized the method of determining the spatial position of objects. This article describes an approach that combines terrestrial laser scanning and panoramic images captured with Trimble V10 Imaging Rover technology to enhance the detail and realism of the geospatial data set, in order to obtain 3D urban plans and virtual reality applications.
“A Tree Must Be Bent While It Is Young”: Teaching Urological Surgical Techniques to Schoolchildren
Buntrock, Stefan
2012-01-01
Background Playing video games in childhood may help achieve advanced laparoscopic skills later in life. The virtual operating room will soon become a reality, as "doctor games 2.0" will doubtlessly begin to incorporate virtual laparoscopic techniques. Objectives To teach surgical skills to schoolchildren in order to attract them to urology as a professional choice later in life. Materials and Methods As part of EAU Urology Week 2010, 108 schoolchildren aged 15–19 attended a seminar with lectures and simulators (laparoscopy, TUR, cystoscopy, and suture sets) at the 62nd Congress of the German Society of Urology in Düsseldorf. A PubMed and Google Scholar search was also performed in order to review the beneficial effects of early virtual surgical training. MeSH terms used were "video games," "children," and "surgical skills." Searches were performed without restriction to a particular time period. Results In terms of publicity for urology, EAU Urology Week, and the German Society of Urology, the event was immensely successful. Regarding the literature search, four relevant publications were found involving children. An additional three articles evaluated the usefulness of video gaming in medical students and residents. Conclusions Making use of virtual reality to attract and educate a new generation of urologists is an important step in designing the future of urology. PMID:23573467
Physical environment virtualization for human activities recognition
NASA Astrophysics Data System (ADS)
Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2015-05-01
Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time-consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.
[Virtual otoscopy--technique, indications and initial experiences with multislice spiral CT].
Klingebiel, R; Bauknecht, H C; Lehmann, R; Rogalla, P; Werbs, M; Behrbohm, H; Kaschke, O
2000-11-01
We report the standardized postprocessing of high-resolution CT data acquired by incremental CT and multi-slice CT in patients with suspected middle ear disorders to generate three-dimensional endoluminal views known as virtual otoscopy. Subsequent to the definition of a postprocessing protocol, standardized endoluminal views of the middle ear were generated according to their otological relevance. The HRCT data sets of 26 ENT patients were transferred to a workstation and postprocessed into 52 virtual otoscopies. Generation of predefined endoluminal views from the HRCT data sets was possible in all patients. Virtual endoscopic views added meaningful information to the primary cross-sectional data in patients suffering from ossicular pathology, having contraindications for invasive tympanic endoscopy, or being assessed for surgery of the tympanic cavity. Multi-slice CT improved the visualization of subtle anatomic details such as the stapes suprastructure and reduced the scanning time. Virtual endoscopy allows for the non-invasive endoluminal visualization of various tympanic lesions. Use of the multi-slice CT technique reduces the scanning time and improves image quality in terms of detail resolution.
NASA Astrophysics Data System (ADS)
Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam
2018-03-01
We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud lies precisely at the coordinates of one of the layers, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain the CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
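The layer-based idea can be sketched with NumPy: snap each point to its nearest depth layer, then propagate each whole layer to the hologram plane with one FFT (angular-spectrum method) instead of summing a spherical wave per point. This is a toy illustration of the gridding approach; the grid size, wavelength, pixel pitch, and layer depths are assumed values, not the paper's parameters:

```python
import numpy as np

def layered_cgh(points, shape=(256, 256), wavelength=633e-9, pitch=8e-6,
                layer_depths=(0.10, 0.11, 0.12)):
    """Compute a complex hologram from points (x_idx, y_idx, z_metres):
    classify points into depth-layer grids, then FFT-propagate each layer."""
    depths = np.asarray(layer_depths)
    ny, nx = shape
    # 1. Gridding: accumulate each point into its nearest depth layer.
    grids = np.zeros((len(depths), ny, nx))
    for xi, yi, z in points:
        grids[np.argmin(np.abs(depths - z)), yi, xi] += 1.0
    # 2. Angular-spectrum propagation of each layer, summed at the hologram.
    fy = np.fft.fftfreq(ny, pitch)[:, None]
    fx = np.fft.fftfreq(nx, pitch)[None, :]
    arg = np.maximum(1.0 / wavelength**2 - fx**2 - fy**2, 0.0)
    hologram = np.zeros(shape, dtype=complex)
    for grid, d in zip(grids, depths):
        transfer = np.exp(2j * np.pi * d * np.sqrt(arg))  # propagate by d
        hologram += np.fft.ifft2(np.fft.fft2(grid) * transfer)
    return hologram
```

The speed-up relative to per-point spherical-wave summation comes from performing one FFT pair per layer rather than one diffraction evaluation per point.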
Miniaturized video-rate epi-third-harmonic-generation fiber-microscope.
Chia, Shih-Hsuan; Yu, Che-Hang; Lin, Chih-Han; Cheng, Nai-Chia; Liu, Tzu-Ming; Chan, Ming-Che; Chen, I-Hsiu; Sun, Chi-Kuang
2010-08-02
With a micro-electro-mechanical system (MEMS) mirror, we successfully developed a miniaturized epi-third-harmonic-generation (epi-THG) fiber-microscope with a video frame rate (31 Hz), designed for in vivo optical biopsy of human skin. With a large-mode-area (LMA) photonic crystal fiber (PCF) and a regular microscope objective, the nonlinear distortion of ultrafast pulse delivery could be greatly reduced while still achieving a 0.4 μm lateral resolution for epi-THG signals. In vivo real-time virtual biopsy of Asian skin at a video rate (31 Hz) and with sub-micron resolution was obtained. The results indicate that this miniaturized system is compact enough for minimally invasive hand-held clinical use.
Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments
NASA Astrophysics Data System (ADS)
Pretto, N.; Poiesi, F.
2017-11-01
We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
Production of the next-generation library virtual tour.
Duncan, J M; Roth, L K
2001-10-01
While many libraries offer overviews of their services through their Websites, only a small number of health sciences libraries provide Web-based virtual tours. These tours typically feature photographs of major service areas along with textual descriptions. This article describes the process for planning, producing, and implementing a next-generation virtual tour in which a variety of media elements are integrated: photographic images, 360-degree "virtual reality" views, textual descriptions, and contextual floor plans. Hardware and software tools used in the project are detailed, along with a production timeline and budget, tips for streamlining the process, and techniques for improving production. This paper is intended as a starting guide for other libraries considering an investment in such a project.
Applicability of three-dimensional imaging techniques in fetal medicine*
Werner Júnior, Heron; dos Santos, Jorge Lopes; Belmonte, Simone; Ribeiro, Gerson; Daltro, Pedro; Gasparetto, Emerson Leandro; Marchiori, Edson
2016-01-01
Objective To generate physical models of fetuses from images obtained with three-dimensional ultrasound (3D-US), magnetic resonance imaging (MRI), and, occasionally, computed tomography (CT), in order to guide additive manufacturing technology. Materials and Methods We used 3D-US images of 31 pregnant women, including 5 who were carrying twins. If abnormalities were detected by 3D-US, both MRI and in some cases CT scans were then immediately performed. The images were then exported to a workstation in DICOM format. A single observer performed slice-by-slice manual segmentation using a digital high resolution screen. Virtual 3D models were obtained from software that converts medical images into numerical models. Those models were then generated in physical form through the use of additive manufacturing techniques. Results Physical models based upon 3D-US, MRI, and CT images were successfully generated. The postnatal appearance of either the aborted fetus or the neonate closely resembled the physical models, particularly in cases of malformations. Conclusion The combined use of 3D-US, MRI, and CT could help improve our understanding of fetal anatomy. These three screening modalities can be used for educational purposes and as tools to enable parents to visualize their unborn baby. The images can be segmented and then applied, separately or jointly, in order to construct virtual and physical 3D models. PMID:27818540
The sense of body ownership relaxes temporal constraints for multisensory integration.
Maselli, Antonella; Kilteni, Konstantina; López-Moliner, Joan; Slater, Mel
2016-08-03
Experimental work on body ownership illusions has shown how simple multisensory manipulations can generate the illusory experience of an artificial limb being part of one's own body. This work highlighted how own-body perception relies on a plastic brain representation emerging from multisensory integration. The flexibility of this representation is reflected in the short-term modulations of physiological states and perceptual processing observed during these illusions. Here, we explore the impact of ownership illusions on the temporal dimension of multisensory integration. We show that, during the illusion, the temporal window for integrating touch on the physical body with touch seen on a virtual body representation increases with respect to integration with visual events seen close to, but separated from, the virtual body. We show that this effect is mediated by the ownership illusion. Crucially, the temporal window for visuotactile integration was positively correlated with participants' scores rating the illusory experience of owning the virtual body and touching the object seen in contact with it. Our results corroborate the recently proposed causal inference mechanism for illusory body ownership. As a novelty, they show that the ensuing illusory causal binding between stimuli from the real and fake body relaxes temporal constraints for the integration of bodily signals.
NASA Astrophysics Data System (ADS)
Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu
2018-03-01
During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which result in frequent structural changes and produce many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body, and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are input, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified by means of simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code, and experimental measurements.
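The Point-Kernel summation itself is compact enough to sketch. Here the source volume has already been sampled into point kernels `(x, y, z, activity)`; the attenuation coefficient `mu`, the flux-to-dose constant `k`, and the linear buildup factor B = 1 + μr (a crude stand-in for the Geometric-Progression fit used in the paper) are all illustrative assumptions:

```python
import math

def point_kernel_dose_rate(kernels, detector, mu=0.06, k=1.0):
    """Sum the attenuated, buildup-corrected contribution of each point
    kernel (x, y, z, activity) at a detector position (x, y, z)."""
    total = 0.0
    for x, y, z, activity in kernels:
        dx, dy, dz = x - detector[0], y - detector[1], z - detector[2]
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        buildup = 1.0 + mu * r  # placeholder for the Geometric-Progression fit
        # uncollided flux * buildup: A * B * exp(-mu r) / (4 pi r^2)
        total += k * activity * buildup * math.exp(-mu * r) / (4 * math.pi * r * r)
    return total
```

A real implementation would use energy-dependent attenuation data and the tabulated Geometric-Progression buildup coefficients rather than these constants.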
Virtual imaging in sports broadcasting: an overview
NASA Astrophysics Data System (ADS)
Tan, Yi
2003-04-01
Virtual imaging technology is being used to augment television broadcasts -- virtual objects are seamlessly inserted into the video stream to appear as real entities to TV audiences. Virtual advertisements, the main application of this technology, are providing opportunities to improve the commercial value of television programming while enhancing the contents and the entertainment aspect of these programs. State-of-the-art technologies, such as image recognition, motion tracking and chroma keying, are central to a virtual imaging system. This paper reviews the general framework, the key techniques, and the sports broadcasting applications of virtual imaging technology.
Gao, Changwei; Liu, Xiaoming; Chen, Hai
2017-08-22
This paper focuses on the power fluctuations of the virtual synchronous generator (VSG) during transient processes. An improved virtual synchronous generator (IVSG) control strategy based on feed-forward compensation is proposed. An adjustable parameter of the compensation section can be modified to reduce the order of the system, which effectively suppresses the power fluctuations of the VSG during transients. To verify the effectiveness of the proposed control strategy for distributed-energy-resource inverters, a simulation model was set up on the MATLAB/SIMULINK platform and a physical experiment platform was established. Simulation and experimental results demonstrate the effectiveness of the proposed IVSG control strategy.
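For context, the dynamics that the compensation acts on are those of the VSG swing equation, J·dω/dt = (P_ref − P_e)/ω₀ − D·(ω − ω₀). A minimal Euler-integration sketch of that baseline (generic VSG, not the paper's IVSG feed-forward scheme; inertia J, damping D and all numbers are hypothetical) is:

```python
import math

def vsg_swing_step(omega, delta, p_ref, p_e, dt=1e-3,
                   J=0.5, D=20.0, omega0=2 * math.pi * 50):
    """One Euler step of the VSG swing equation.

    The virtual inertia J and damping D shape the transient power response;
    the power-angle delta integrates the frequency deviation."""
    domega = ((p_ref - p_e) / omega0 - D * (omega - omega0)) / J
    omega = omega + domega * dt
    delta = delta + (omega - omega0) * dt
    return omega, delta
```

A step in P_ref excites an inertia-governed transient in ω (and hence in output power); the IVSG feed-forward compensation described in the abstract targets exactly these transient power swings.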
Statistical scaling of geometric characteristics in stochastically generated pore microstructures
Hyman, Jeffrey D.; Guadagnini, Alberto; Winter, C. Larrabee
2015-05-21
In this study, we analyze the statistical scaling of structural attributes of virtual porous microstructures that are stochastically generated by thresholding Gaussian random fields. Characterizing the extent to which randomly generated pore spaces can be considered representative of a particular rock sample depends on the metrics employed to compare the virtual sample against its physical counterpart. Typically, comparisons against features and/or patterns of geometric observables, e.g., porosity and specific surface area, flow-related macroscopic parameters, e.g., permeability, or autocorrelation functions are used to assess the representativeness of a virtual sample, and thereby the quality of the generation method. Here, we rely on manifestations of statistical scaling of geometric observables, recently observed in real millimeter-scale rock samples [13], as additional relevant metrics by which to characterize a virtual sample. We explore the statistical scaling of two geometric observables, namely porosity (Φ) and specific surface area (SSA), of porous microstructures generated using the method of Smolarkiewicz and Winter [42] and Hyman and Winter [22]. Our results suggest that the method can produce virtual pore space samples displaying the symptoms of statistical scaling observed in real rock samples. Order-q sample structure functions (statistical moments of absolute increments) of Φ and SSA scale as a power of the separation distance (lag) over a range of lags, and extended self-similarity (a linear relationship between log structure functions of successive orders) appears to be an intrinsic property of the generated media. The width of the range of lags where power-law scaling is observed and the Hurst coefficient associated with the variables we consider can be controlled by the generation parameters of the method.
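The generation step, thresholding a correlated Gaussian random field, can be sketched as follows. This is a minimal spectral-filtering variant, not the exact method of Smolarkiewicz and Winter; the grid size, correlation length, and threshold are illustrative:

```python
import numpy as np

def gaussian_field_pores(shape=(128, 128), corr=6.0, threshold=0.0, seed=1):
    """Generate a binary pore microstructure by thresholding a correlated
    Gaussian random field: white noise is smoothed with a Gaussian spectral
    filter (corr sets the correlation length in pixels), normalized, and
    thresholded. True cells are pore space."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    filt = np.exp(-2 * (np.pi * corr) ** 2 * (kx**2 + ky**2))
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))
    field /= field.std()
    return field > threshold

pores = gaussian_field_pores()
porosity = pores.mean()  # geometric observable Phi of the virtual sample
```

Raising the threshold lowers the porosity, and the correlation length controls the characteristic pore size; scaling analyses of Φ and SSA would then be run on increments of such fields.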
NASA Astrophysics Data System (ADS)
Pilone, D.; Gilman, J.; Baynes, K.; Shum, D.
2015-12-01
This talk introduces a new NASA Earth Observing System Data and Information System (EOSDIS) capability to automatically generate and maintain derived, Virtual Product information allowing DAACs and Data Providers to create tailored and more discoverable variations of their products. After this talk the audience will be aware of the new EOSDIS Virtual Product capability, applications of it, and how to take advantage of it. Much of the data made available in the EOSDIS are organized for generation and archival rather than for discovery and use. The EOSDIS Common Metadata Repository (CMR) is launching a new capability providing automated generation and maintenance of user-oriented Virtual Product information. DAACs can easily surface variations on established data products tailored to specific use cases and users, leveraging DAAC-exposed services such as custom ordering or access services like OPeNDAP for on-demand product generation and distribution. Virtual Data Products enjoy support for spatial and temporal information, keyword discovery, association with imagery, and are fully discoverable by tools such as NASA Earthdata Search, Worldview, and Reverb. Virtual Product generation has applicability across many use cases: - Describing derived products such as Surface Kinetic Temperature information (AST_08) from source products (ASTER L1A) - Providing streamlined access to data products (e.g. AIRS) containing many (>800) data variables covering an enormous variety of physical measurements - Attaching additional EOSDIS offerings such as Visual Metadata, external services, and documentation metadata - Publishing alternate formats for a product (e.g. netCDF for HDF products) with the actual conversion happening on request - Publishing granules to be modified by on-the-fly services, like GES-DISC's Data Quality Screening Service - Publishing "bundled" products where granules from one product correspond to granules from one or more other related products
Constraint, Intelligence, and Control Hierarchy in Virtual Environments. Chapter 1
NASA Technical Reports Server (NTRS)
Sheridan, Thomas B.
2007-01-01
This paper seeks to deal directly with the question of what makes virtual actors and objects that are experienced in virtual environments seem real. (The term virtual reality, while more common in public usage, is an oxymoron; therefore virtual environment is the preferred term in this paper.) Reality is a difficult topic, treated for centuries in those sub-fields of philosophy called ontology, "of or relating to being or existence", and epistemology, "the study of the method and grounds of knowledge, especially with reference to its limits and validity" (both from Webster's, 1965). Advances in recent decades in the technologies of computers, sensors and graphics software have permitted human users to feel present or experience immersion in computer-generated virtual environments. This has motivated a keen interest in probing this phenomenon of presence and immersion not only philosophically but also psychologically and physiologically, in terms of the parameters of the senses and sensory stimulation that correlate with the experience (Ellis, 1991). The pages of the journal Presence: Teleoperators and Virtual Environments have seen much discussion of what makes virtual environments seem real (see, e.g., Slater, 1999; Slater et al., 1994; Sheridan, 1992, 2000). Stephen Ellis, when organizing the meeting that motivated this paper, suggested to invited authors that "we may adopt as an organizing principle for the meeting that the genesis of apparently intelligent interaction arises from an upwelling of constraints determined by a hierarchy of lower levels of behavioral interaction." My first reaction was "huh?" and my second was "yeah, that seems to make sense." Accordingly, the paper seeks to explain, from the author's viewpoint, why Ellis's hypothesis makes sense. What is the connection of "presence" or "immersion" of an observer in a virtual environment to "constraints", and what types of constraints?
What of "intelligent interaction", and is it the intelligence of the observer or the intelligence of the environment (whatever the latter may mean) that is salient? And finally, what might be relevant about the "upwelling" of constraints as determined by a hierarchy of levels of interaction?
ChemScreener: A Distributed Computing Tool for Scaffold based Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Vyas, Renu
2015-01-01
In this work we present ChemScreener, a Java-based application to perform virtual library generation combined with virtual screening in a platform-independent distributed computing environment. ChemScreener comprises a scaffold identifier, a distinct scaffold extractor, an interactive virtual library generator as well as a virtual screening module for subsequently selecting putative bioactive molecules. The virtual libraries are annotated with chemophore-, pharmacophore- and toxicophore-based information for compound prioritization. The hits selected can then be further processed using QSAR, docking and other in silico approaches which can all be interfaced within the ChemScreener framework. As a sample application, in this work scaffold selectivity, diversity, connectivity and promiscuity towards six important therapeutic classes have been studied. In order to illustrate the computational power of the application, 55 scaffolds extracted from 161 anti-psychotic compounds were enumerated to produce a virtual library comprising 118 million compounds (17 GB) and annotated with chemophore, pharmacophore and toxicophore based features in a single step which would be non-trivial to perform with many standard software tools today on libraries of this size.
Reynolds, Christopher R; Muggleton, Stephen H; Sternberg, Michael J E
2015-01-01
The use of virtual screening has become increasingly central to the drug development pipeline, with ligand-based virtual screening used to screen databases of compounds to predict their bioactivity against a target. These databases can only represent a small fraction of chemical space, and this paper describes a method of exploring synthetic space by applying virtual reactions to promising compounds within a database, and generating focussed libraries of predicted derivatives. A ligand-based virtual screening tool Investigational Novel Drug Discovery by Example (INDDEx) is used as the basis for a system of virtual reactions. The use of virtual reactions is estimated to open up a potential space of 1.21×10^12 potential molecules. A de novo design algorithm known as Partial Logical-Rule Reactant Selection (PLoRRS) is introduced and incorporated into the INDDEx methodology. PLoRRS uses logical rules from the INDDEx model to select reactants for the de novo generation of potentially active products. The PLoRRS method is found to increase significantly the likelihood of retrieving molecules similar to known actives with a p-value of 0.016. Case studies demonstrate that the virtual reactions produce molecules highly similar to known actives, including known blockbuster drugs. PMID:26583052
Exorcising the Ghost in the Machine: Synthetic Spectral Data Cubes for Assessing Big Data Algorithms
NASA Astrophysics Data System (ADS)
Araya, M.; Solar, M.; Mardones, D.; Hochfärber, T.
2015-09-01
The size and quantity of the data that is being generated by large astronomical projects like ALMA, requires a paradigm change in astronomical data analysis. Complex data, such as highly sensitive spectroscopic data in the form of large data cubes, are not only difficult to manage, transfer and visualize, but they make traditional data analysis techniques unfeasible. Consequently, the attention has been placed on machine learning and artificial intelligence techniques, to develop approximate and adaptive methods for astronomical data analysis within a reasonable computational time. Unfortunately, these techniques are usually sub optimal, stochastic and strongly dependent of the parameters, which could easily turn into “a ghost in the machine” for astronomers and practitioners. Therefore, a proper assessment of these methods is not only desirable but mandatory for trusting them in large-scale usage. The problem is that positively verifiable results are scarce in astronomy, and moreover, science using bleeding-edge instrumentation naturally lacks of reference values. We propose an Astronomical SYnthetic Data Observations (ASYDO), a virtual service that generates synthetic spectroscopic data in the form of data cubes. The objective of the tool is not to produce accurate astrophysical simulations, but to generate a large number of labelled synthetic data, to assess advanced computing algorithms for astronomy and to develop novel Big Data algorithms. The synthetic data is generated using a set of spectral lines, template functions for spatial and spectral distributions, and simple models that produce reasonable synthetic observations. Emission lines are obtained automatically using IVOA's SLAP protocol (or from a relational database) and their spectral profiles correspond to distributions in the exponential family. The spatial distributions correspond to simple functions (e.g., 2D Gaussian), or to scalable template objects. 
The intensity, broadening and radial velocity of each line is given by very simple and naive physical models, yet ASYDO's generic implementation supports new user-made models, which potentially allows adding more realistic simulations. The resulting data cube is saved as a FITS file, also including all the tables and images used for generating the cube. We expect to implement ASYDO as a virtual observatory service in the near future.
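The cube-generation recipe described in this abstract (a spatial template multiplied by a spectrum built from parametric line profiles) can be illustrated with a toy example; the cube shape, line parameters and Gaussian templates below are illustrative placeholders, not ASYDO's actual models:

```python
import numpy as np

def make_cube(shape=(32, 32, 100), lines=((50, 5.0, 1.0),), sigma_xy=4.0):
    """Toy synthetic data cube: a 2D Gaussian spatial source whose spectrum
    is a sum of Gaussian emission lines given as (channel, width, peak)."""
    ny, nx, nchan = shape
    y, x = np.mgrid[0:ny, 0:nx]
    # spatial template: 2D Gaussian centered in the field
    spatial = np.exp(-((x - nx / 2) ** 2 + (y - ny / 2) ** 2) / (2 * sigma_xy ** 2))
    chan = np.arange(nchan)
    spectrum = np.zeros(nchan)
    for center, width, peak in lines:
        spectrum += peak * np.exp(-((chan - center) ** 2) / (2 * width ** 2))
    # separable product: every spatial pixel carries the same line spectrum
    return spatial[:, :, None] * spectrum[None, None, :]

cube = make_cube()
```

A real generator would add noise, beam convolution and per-line physical models; the separable product above is only the template backbone.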
Virtual Mobility in Higher Education. The UNED Campus Net Program
ERIC Educational Resources Information Center
Aguado, Teresa; Monge, Fernando; Del Olmo, Alicia
2014-01-01
We present the UNED Virtual Mobility Campus Net Program, implemented since 2012 in collaboration with European and Latin American universities. The program's objectives, participating institutions, procedures, and evaluation are presented. Virtual mobility is understood as a meaningful strategy for intercultural learning by studying an undergraduate or…
Ethmoidectomy combined with superior meatus enlargement increases olfactory airflow
Kondo, Kenji; Nomura, Tsutomu; Yamasoba, Tatsuya
2017-01-01
Objectives The relationship between a particular surgical technique in endoscopic sinus surgery (ESS) and airflow changes in the post‐operative olfactory region has not been assessed. The present study aimed to compare olfactory airflow after ESS between conventional ethmoidectomy and ethmoidectomy with superior meatus enlargement, using virtual ESS and computational fluid dynamics (CFD) analysis. Study Design Prospective computational study. Materials and Methods Nasal computed tomography images of four adult subjects were used to generate models of the nasal airway. The original preoperative model was digitally edited as virtual ESS by performing uncinectomy, ethmoidectomy, antrostomy, and frontal sinusotomy. The following two post‐operative models were prepared: conventional ethmoidectomy with normal superior meatus (ESS model) and ethmoidectomy with superior meatus enlargement (ESS‐SM model). The calculated three‐dimensional nasal geometries were confirmed using virtual endoscopy to ensure that they corresponded to the post‐operative anatomy observed in the clinical setting. Steady‐state, laminar, inspiratory airflow was simulated, and the velocity, streamline, and mass flow rate in the olfactory region were compared among the preoperative and two postoperative models. Results The mean velocity in the olfactory region, number of streamlines bound to the olfactory region, and mass flow rate were higher in the ESS‐SM model than in the other models. Conclusion We successfully used an innovative approach involving virtual ESS, virtual endoscopy, and CFD to assess postoperative outcomes after ESS. It is hypothesized that the increased airflow to the olfactory fossa achieved with ESS‐SM may lead to improved olfactory function; however, further studies are required. Level of Evidence NA. PMID:28894833
NASA Astrophysics Data System (ADS)
Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco
2013-05-01
Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing the large-size artworks. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for its application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT709. A prerequisite for this is a 3D model which can be created within this computer-aided virtual environment. The onsite measured surface luminance, chromaticity and spectral data were used as input to established real-time indirect illumination and physically based algorithms to produce the best approximation for RGB to be used as an input to generate the image of the objects. Conversion of RGB to and from spectra has been a major undertaking in order to match the infinite number of spectra that create the same colors defined by RGB in the program. The ability to simulate light intensity, candle power and spectral power distributions provides an opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the impact of visible light on human visual performance under a precisely controlled representation of the light spectrum that can be experienced in 3D format in a virtual environment as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.
Predictive encoding of moving target trajectory by neurons in the parabigeminal nucleus
Ma, Rui; Cui, He; Lee, Sang-Hun; Anastasio, Thomas J.
2013-01-01
Intercepting momentarily invisible moving objects requires internally generated estimations of target trajectory. We demonstrate here that the parabigeminal nucleus (PBN) encodes such estimations, combining sensory representations of target location, extrapolated positions of briefly obscured targets, and eye position information. Cui and Malpeli (Cui H, Malpeli JG. J Neurophysiol 89: 3128–3142, 2003) reported that PBN activity for continuously visible tracked targets is determined by retinotopic target position. Here we show that when cats tracked moving, blinking targets the relationship between activity and target position was similar for ON and OFF phases (400 ms for each phase). The dynamic range of activity evoked by virtual targets was 94% of that of real targets for the first 200 ms after target offset and 64% for the next 200 ms. Activity peaked at about the same best target position for both real and virtual targets. PBN encoding of target position takes into account changes in eye position resulting from saccades, even without visual feedback. Since PBN response fields are retinotopically organized, our results suggest that activity foci associated with real and virtual targets at a given target position lie in the same physical location in the PBN, i.e., a retinotopic as well as a rate encoding of virtual-target position. We also confirm that PBN activity is specific to the intended target of a saccade and is predictive of which target will be chosen if two are offered. A Bayesian predictor-corrector model is presented that conceptually explains the differences in the dynamic ranges of PBN neuronal activity evoked during tracking of real and virtual targets. PMID:23365185
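A predictor-corrector of the general kind invoked in the abstract above can be caricatured as a coast-and-correct estimator: predict target position from an internal velocity estimate, and apply a correction only while the target is visible. The gains, time step and velocity-update rule below are ad hoc illustrations, not parameters fit to the PBN data:

```python
def track(positions, visible, dt=0.02, gain=0.5):
    """Toy predictor-corrector: during ON phases the estimate is corrected
    toward the measured target position; during OFF (blink) phases it
    coasts on the internal velocity estimate, extrapolating the trajectory."""
    est, vel = positions[0], 0.0
    out = []
    for z, vis in zip(positions, visible):
        pred = est + vel * dt              # predict from internal model
        if vis:                            # correct only when target visible
            innov = z - pred
            pred += gain * innov
            vel += (gain / dt) * innov * 0.1   # ad hoc velocity adaptation
        est = pred
        out.append(est)
    return out
```

With the target continuously visible and stationary the estimate locks on; during OFF phases the same code simply extrapolates, mirroring the "virtual target" condition.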
Do we perform surgical programming well? How can we improve it?
Albareda, J; Clavel, D; Mahulea, C; Blanco, N; Ezquerra, L; Gómez, J; Silva, J M
The objective is to establish the duration of our interventions, the intermediate times and the surgical performance, in order to create a virtual waiting list on which a mathematical programme can schedule sessions with maximum performance. Retrospective review of 49 surgical sessions, obtaining the delay in start time, intermediate time and surgical performance. Retrospective review of 4,045 interventions performed in the last 3 years, to obtain the average duration of each type of surgery. Creation of a virtual waiting list of 700 patients in order to perform virtual scheduling through the MIQCP-P until achieving optimal performance. Our surgical performance with manual scheduling was 75.9%, with 22.4% of sessions ending later than 3pm. The performance on days without suspensions was 78.4%. The delay at start time was 9.7min. The optimum performance was 77.5%, with an 80.6% confidence of finishing before 3pm. The waiting list was scheduled in 254 sessions. Our manual surgical performance without suspensions (78.4%) was higher than the optimum (77.5%), at the cost of sessions ending later than 3pm and of suspensions. The possibilities for improvement are to achieve punctuality at the start time and to adjust the schedule to the ideal performance. Virtual scheduling has allowed us to obtain our ideal performance and to establish the number of operating rooms necessary to clear the waiting list created. The data obtained from virtual mathematical programming are reliable enough to implement this model with guarantees. Copyright © 2017 SECOT. Publicado por Elsevier España, S.L.U. All rights reserved.
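As a toy stand-in for the mixed-integer programming used in the study above, a first-fit-decreasing heuristic shows the basic mechanics: pack interventions of known average duration into fixed-length sessions and compute the resulting performance (occupancy). The session length and durations below are illustrative, not the paper's data:

```python
def schedule(durations, session_min=420):
    """First-fit-decreasing packing of intervention durations (minutes)
    into operating sessions of fixed length. Returns the sessions and the
    overall performance = total operative time / total session time."""
    sessions = []
    for d in sorted(durations, reverse=True):
        for s in sessions:                      # reuse first session that fits
            if sum(s) + d <= session_min:
                s.append(d)
                break
        else:                                   # otherwise open a new session
            sessions.append([d])
    performance = sum(durations) / (len(sessions) * session_min)
    return sessions, performance

sessions, perf = schedule([200, 200, 100, 100, 60])
```

An exact MIQCP solver can additionally enforce start-time punctuality and confidence of finishing on time, which a greedy heuristic cannot guarantee.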
Virtual Ed. Biz Seeks Mainstream
ERIC Educational Resources Information Center
Gustke, Constance
2010-01-01
The for-profit e-learning company K12 Inc. grew 40 percent last year, generating $385 million in revenue by providing virtual courses to 70,000 students across the country. Connections Academy, another such provider, generated about $120 million in revenue serving up online courses to some 20,000 students. And last month, the education technology…
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
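One minimal way to express network-data generation is as a proximity graph over the estimated capture positions of the panoramas. Here the positions are assumed to come from an image-based estimation step (e.g., structure-from-motion) rather than GNSS, and the linking threshold is a placeholder:

```python
import math

def build_network(scenes, max_link=5.0):
    """Link panoramic capture points into a VR scene network by proximity.
    scenes: list of (name, (x, y)) pairs with positions in meters."""
    edges = []
    for i, (ni, pi) in enumerate(scenes):
        for nj, pj in scenes[i + 1:]:
            if math.dist(pi, pj) <= max_link:   # connect nearby scenes
                edges.append((ni, nj))
    return edges

edges = build_network([("corridor", (0, 0)), ("hall", (3, 0)), ("room", (10, 0))])
```

A practical pipeline would also prune links blocked by walls (e.g., via visibility checks in the panoramas) rather than using distance alone.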
Wang, Yu; Helminen, Emily; Jiang, Jingfeng
2015-09-01
Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. 
Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that were normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms and other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE.
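In QUE, the displayed strain image is essentially the axial derivative of the tracked axial displacement field. A minimal sketch of that final step (not the platform's actual speckle-tracking pipeline), assuming a displacement map whose rows are indexed by depth:

```python
import numpy as np

def axial_strain(displacement, dz=1.0):
    """Axial strain image from an axial displacement map (rows = depth,
    columns = lateral position), computed as the depth-wise gradient."""
    return np.gradient(displacement, dz, axis=0)

# uniform 1% compression: displacement grows linearly with depth
disp = np.outer(np.arange(10.0), np.ones(4)) * 0.01
strain = axial_strain(disp)
```

In the simulation platform, the displacement itself comes from the FE solver (FEBio) and from tracking the pre/post-deformation RF echo data produced by Field II.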
A kickball game for ankle rehabilitation by JAVA, JNI, and VRML
NASA Astrophysics Data System (ADS)
Choi, Hyungjeen; Ryu, Jeha; Lee, Chansu
2004-03-01
This paper presents the development of a virtual environment that can be applied to the ankle rehabilitation procedure. We developed a virtual football stadium to engage the patient, in which a two-degree-of-freedom (DOF) plate-shaped object is oriented to kick a ball falling from the sky, in accordance with the data from the ankle's dorsiflexion/plantarflexion and inversion/eversion motion on the moving platform of the K-Platform. This Kickball Game is implemented in the Virtual Reality Modeling Language (VRML). To control the virtual objects, data from the K-Platform are transmitted through a communication module implemented in C++. Java, the Java Native Interface (JNI) and a VRML plug-in are combined so as to interface the communication module with the VRML virtual environment. This game may be applied to the Active Range of Motion (AROM) exercise, one of the ankle rehabilitation procedures.
Intercepting real and simulated falling objects: what is the difference?
Baurès, Robin; Benguigui, Nicolas; Amorim, Michel-Ange; Hecht, Heiko
2009-10-30
The use of virtual reality is nowadays common in many studies in the field of human perception and movement control, particularly in interceptive actions. However, the ecological validity of the simulation is often taken for granted without having been formally established. If participants were to perceive the real situation and its virtual equivalent in a different fashion, the generalization of the results obtained in virtual reality to real life would be highly questionable. We tested the ecological validity of virtual reality in this context by comparing the timing of interceptive actions based upon actually falling objects and their simulated counterparts. The results show very limited differences as a function of whether participants were confronted with a real ball or a simulation thereof. And when present, such differences were limited to the first trial only. This result validates the use of virtual reality when studying interceptive actions of accelerated stimuli.
A Model for the Design of Puzzle-Based Games Including Virtual and Physical Objects
ERIC Educational Resources Information Center
Melero, Javier; Hernandez-Leo, Davinia
2014-01-01
Multiple evidences in the Technology-Enhanced Learning domain indicate that Game-Based Learning can lead to positive effects in students' performance and motivation. Educational games can be completely virtual or can combine the use of physical objects or spaces in the real world. However, the potential effectiveness of these approaches…
ERIC Educational Resources Information Center
Rau, Martina A.
2017-01-01
STEM instruction often uses visual representations. To benefit from these, students need to understand how representations show domain-relevant concepts. Yet, this is difficult for students. Prior research shows that physical representations (objects that students manipulate by hand) and virtual representations (objects on a computer screen that…
Hiding and Searching Strategies of Adult Humans in a Virtual and a Real-Space Room
ERIC Educational Resources Information Center
Talbot, Katherine J.; Legge, Eric L. G.; Bulitko, Vadim; Spetch, Marcia L.
2009-01-01
Adults searched for or cached three objects in nine hiding locations in a virtual room or a real-space room. In both rooms, the locations selected by participants differed systematically between searching and hiding. Specifically, participants moved farther from origin and dispersed their choices more when hiding objects than when searching for…
Nawrotek, Joanna; Deschenes, Emilie; Giguere, Tia; Serafin, Julie; Bilodeau, Martin; Sveistrup, Heidi
2016-01-01
Background Virtual reality active video games are increasingly popular physical therapy interventions for children with cerebral palsy. However, physical therapists require educational resources to support decision making about game selection to match individual patient goals. Quantifying the movements elicited during virtual reality active video game play can inform individualized game selection in pediatric rehabilitation. Objective The objectives of this study were to develop and evaluate the feasibility and reliability of the Movement Rating Instrument for Virtual Reality Game Play (MRI-VRGP). Methods Item generation occurred through an iterative process of literature review and sample videotape viewing. The MRI-VRGP includes 25 items quantifying upper extremity, lower extremity, and total body movements. A total of 176 videotaped 90-second game play sessions involving 7 typically developing children and 4 children with cerebral palsy were rated by 3 raters trained in MRI-VRGP use. Children played 8 games on 2 virtual reality and active video game systems. Intraclass correlation coefficients (ICCs) determined intrarater and interrater reliability. Results Excellent intrarater reliability was evidenced by ICCs of >0.75 for 17 of the 25 items across the 3 raters. Interrater reliability estimates were less precise. Excellent interrater reliability was achieved for far-reach upper extremity movements (ICC=0.92 for right and ICC=0.90 for left) and for the squat (ICC=0.80) and jump (ICC=0.99) items, with 9 items achieving ICCs of >0.70, 12 items achieving ICCs of between 0.40 and 0.70, and 4 items achieving poor reliability (close-reach upper extremity: ICC=0.14 for right and ICC=0.07 for left; single-leg stance: ICC=0.55 for right and ICC=0.27 for left). Conclusions Poor video quality, differing item interpretations between raters, and difficulty quantifying the high-speed movements involved in game play affected reliability. 
With item definition clarification and further psychometric property evaluation, the MRI-VRGP could inform the content of educational resources for therapists by ranking games according to frequency and type of elicited body movements. PMID:27251029
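The interrater statistics reported above are intraclass correlation coefficients. A minimal implementation of ICC(2,1) (two-way random effects, absolute agreement, single rater, per Shrout and Fleiss) shows how such values are computed from a targets-by-raters matrix; the ratings in the example are made up:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: n_targets x k_raters array of scores."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    mean_t = r.mean(axis=1)                  # per-target means
    mean_r = r.mean(axis=0)                  # per-rater means
    grand = r.mean()
    ssr = k * ((mean_t - grand) ** 2).sum()  # between-targets sum of squares
    ssc = n * ((mean_r - grand) ** 2).sum()  # between-raters sum of squares
    sse = ((r - mean_t[:, None] - mean_r[None, :] + grand) ** 2).sum()
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For example, two raters in perfect agreement over three children give ICC = 1.0, while a constant offset of one point between raters lowers ICC(2,1) because absolute agreement penalizes systematic rater bias.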
Simulators and virtual reality in surgical education.
Chou, Betty; Handa, Victoria L
2006-06-01
This article explores the pros and cons of virtual reality simulators, their ability to train and assess surgical skills, and their potential future applications. Computer-based virtual reality simulators and more conventional box trainers are compared and contrasted. The virtual reality simulator provides objective assessment of surgical skills and immediate feedback to further enhance training. With this ability to provide standardized, unbiased assessment of surgical skills, the virtual reality trainer has the potential to be a tool for selecting, instructing, certifying, and recertifying gynecologists.
Friedman, Jason; Latash, Mark L.; Zatsiorsky, Vladimir M.
2009-01-01
We examined how the digit forces adjust when a load force acting on a hand-held object continuously varies. The subjects were required to hold the handle still while a linearly increasing and then decreasing force was applied to the handle. The handle was constrained, such that it could only move up and down, and rotate about a horizontal axis. In addition the moment arm of the thumb tangential force was 1.5 times the moment arm of the virtual finger (VF, an imagined finger with the mechanical action equal to that of the four fingers) force. Unlike the situation when there are equal moment arms, the experimental setup forced the subjects to choose between (a) sharing equally the increase in load force between the thumb and virtual finger but generating a moment of tangential force, which had to be compensated by negatively covarying the moment due to normal forces, or (b) sharing unequally the load force increase between the thumb and VF but preventing generation of a moment of tangential forces. We found that different subjects tended to use one of these two strategies. These findings suggest that the selection by the CNS of prehension synergies at the VF-thumb level with respect to the moment of force are non-obligatory and reflect individual subject preferences. This unequal sharing of the load by the tangential forces, in contrast to the previously observed equal sharing, suggests that the invariant feature of prehension may be a correlated increase in tangential forces rather than an equal increase. PMID:19554319
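The trade-off described above can be made concrete with the static moment balance. When the thumb's tangential moment arm is 1.5 times the virtual finger's, equal load sharing leaves a residual moment of tangential forces, while zero-moment sharing requires unequal forces; the load and arm values below are illustrative:

```python
def strategies(load, d_vf=1.0, d_thumb=1.5):
    """Two ways to share a load between thumb and virtual-finger (VF)
    tangential forces when the moment arms differ (thumb arm 1.5x VF arm)."""
    # (a) equal sharing -> residual moment of tangential forces
    f_eq = load / 2
    m_eq = f_eq * d_thumb - f_eq * d_vf
    # (b) zero-moment sharing -> unequal forces (thumb carries less load)
    f_th = load * d_vf / (d_thumb + d_vf)
    f_vf = load * d_thumb / (d_thumb + d_vf)
    return m_eq, (f_th, f_vf)

m_eq, (f_th, f_vf) = strategies(10.0)
```

Strategy (a) requires the moment of normal forces to covary negatively with this residual moment; strategy (b) avoids the moment entirely at the cost of unequal sharing, which is exactly the choice the subjects faced.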
NASA Astrophysics Data System (ADS)
Schaverien, Lynette
2003-12-01
This paper reports the use of a research-based, web-delivered, technology-and-science education context (the Generative Virtual Classroom) in which student-teachers can develop their ability to recognize, describe, analyse and theorize learning. Addressing well-recognized concerns about narrowly conceived, anachronistic and ineffective technology-and-science education, this e-learning environment aims to use advanced technologies for learning, to bring about larger scale improvement in classroom practice than has so far been effected by direct intervention through teacher education. Student-teachers' short, intensive engagement with the Generative Virtual Classroom during their practice teaching is examined. Findings affirm the worth of this research-based e-learning system for teacher education and the power of a biologically based, generative theory to make sense of the learning that occurred.
Selecting a Virtual World Platform for Learning
ERIC Educational Resources Information Center
Robbins, Russell W.; Butler, Brian S.
2009-01-01
Like any infrastructure technology, Virtual World (VW) platforms provide affordances that facilitate some activities and hinder others. Although it is theoretically possible for a VW platform to support all types of activities, designers make choices that lead technologies to be more or less suited for different learning objectives. Virtual World…
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-01-01
This paper presents the control performance of a new type of four-degrees-of-freedom (4-DOF) haptic master that can be used for robot-assisted minimally invasive surgery (RMIS). By adopting a controllable electrorheological (ER) fluid, the proposed master provides haptic feedback as well as remote manipulation. In order to verify the efficacy of the proposed master and method, an experiment is conducted with deformable objects representing human organs. Since the use of real human organs is difficult due to high cost and ethical concerns, an excellent alternative, a virtual reality environment, is used in this work. In order to embody a human organ in the virtual space, the experiment adopts a volumetric deformable object represented by a shape-retaining chain linked (S-chain) model, which has salient properties such as fast and realistic deformation of elastic objects. In the haptic architecture for RMIS, the desired torque/force and desired position originating from the object of the virtual slave and the operator of the haptic master are transferred to each other. In order to achieve the desired torque/force trajectories, a sliding mode controller (SMC), which is known to be robust to uncertainties, is designed and empirically implemented. Tracking control performance for various torque/force trajectories from the virtual slave is evaluated and presented in the time domain.
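A sliding mode controller of the general kind mentioned above drives a sliding variable s = e_dot + lambda * e to zero with a switching control term, usually saturated inside a boundary layer to limit chattering. This is a generic textbook sketch with placeholder gains, not the model-based controller designed in the paper:

```python
def smc_step(x, x_dot, x_des, x_des_dot, lam=5.0, k=2.0, eps=0.1):
    """One step of a basic sliding-mode control law u = -k * sat(s / eps),
    with sliding surface s = e_dot + lam * e (e = tracking error)."""
    e = x - x_des
    e_dot = x_dot - x_des_dot
    s = e_dot + lam * e
    sat = max(-1.0, min(1.0, s / eps))   # boundary layer instead of sign(s)
    return -k * sat
```

On the sliding surface (s = 0) the error decays exponentially with rate lambda regardless of bounded model uncertainty, which is the robustness property the abstract refers to.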
Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P
2004-01-01
Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high-performance computing and next-generation Internet2, embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial-intelligence rules-based virtual simulations. The virtual reality patient is programmed to change dynamically over time and respond to the manipulations by the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multicasting connectivity through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and management of a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency. 
This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment independent of distance for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer, improved future performance and should entail training participants to competence in using these tools.
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
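The Match-Mediated Difference idea above (compare real camera data with renders from matched virtual cameras and notify the planner of significant divergence) can be caricatured as a per-pixel threshold test; the threshold and fraction below are arbitrary placeholders, and the real component operates on richer visual features than raw grayscale:

```python
import numpy as np

def diverged(real, virtual, thresh=30.0, frac=0.05):
    """Report True when more than `frac` of pixels differ by more than
    `thresh` between a real camera frame and the matched virtual render
    (both grayscale arrays of equal shape)."""
    diff = np.abs(real.astype(float) - virtual.astype(float))
    return bool((diff > thresh).mean() > frac)
```

When the test fires (e.g., a new object appears), the architecture re-renders the affected region into the physics world before resuming faster-than-real-time planning.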
Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun
2017-01-01
The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096
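The geospatial registration step above (combining a GPS fix and magnetometer heading to place a geo-located virtual object in the camera frame) can be sketched with a small-area equirectangular approximation. The field of view and screen width are hypothetical, and a real system also uses the IMU for pitch and roll; this handles only the horizontal placement:

```python
import math

def screen_x(lat, lon, heading_deg, obj_lat, obj_lon, hfov_deg=60.0, width=1080):
    """Horizontal pixel position of a geo-registered virtual object, or None
    if it lies outside the camera's horizontal field of view."""
    # local east/north offsets (radians, small-area approximation)
    dx = math.radians(obj_lon - lon) * math.cos(math.radians(lat))
    dy = math.radians(obj_lat - lat)
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    rel = (bearing - heading_deg + 180) % 360 - 180   # signed offset from view axis
    if abs(rel) > hfov_deg / 2:
        return None                                   # outside the frustum
    return width / 2 + rel / (hfov_deg / 2) * (width / 2)
```

The paper's method refines this sensor-only placement with vision-based object detection, which compensates for GPS and magnetometer error.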
Shoemaker, Michael J; Platko, Christina M; Cleghorn, Susan M; Booth, Andrew
2014-07-01
The purpose of this retrospective qualitative case report is to describe how a case-based, virtual patient interprofessional education (IPE) simulation activity was utilized to achieve physician assistant (PA), physical therapy (PT) and occupational therapy (OT) student IPE learning outcomes. Following completion of a virtual patient case, 30 PA, 46 PT and 24 OT students were required to develop a comprehensive, written treatment plan and respond to reflective questions. A qualitative analysis of the submitted written assignment was used to determine whether IPE learning objectives were met. Student responses revealed three themes that supported the learning objectives of the IPE experience: benefits of collaborative care, role clarification and relevance of the IPE experience for future practice. A case-based, IPE simulation activity for physician assistant and rehabilitation students using a computerized virtual patient software program effectively facilitated achievement of the IPE learning objectives, including development of greater student awareness of other professions and ways in which collaborative patient care can be provided.
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
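The direct virtual sensor described above is a piecewise-affine map identified from measured data. As a minimal illustration of the idea (a 1-D sketch under assumed breakpoints and coefficients, not the paper's n-dimensional simplicial construction), evaluation reduces to locating the active interval and applying its affine law:

```python
import bisect

def pwa_estimate(breakpoints, params, x):
    """Evaluate a 1-D piecewise-affine virtual sensor.

    `breakpoints` are the interval edges in increasing order;
    `params[i] = (a_i, b_i)` gives the affine law a_i*x + b_i on the
    i-th interval. Simplicial PWA functions generalize this to n-D.
    """
    i = bisect.bisect_right(breakpoints, x) - 1
    i = max(0, min(i, len(params) - 1))  # clamp outside the covered range
    a, b = params[i]
    return a * x + b
```

In the paper's setting the coefficients would be identified directly from measured input/output data rather than chosen by hand.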
Dynamic Virtual Credit Card Numbers
NASA Astrophysics Data System (ADS)
Molloy, Ian; Li, Jiangtao; Li, Ninghui
Theft of stored credit card information is an increasing threat to e-commerce. We propose a dynamic virtual credit card number scheme that reduces the damage caused by stolen credit card numbers. A user can use an existing credit card account to generate multiple virtual credit card numbers that are either usable for a single transaction or tied to a particular merchant. We call the scheme dynamic because the virtual credit card numbers can be generated without online contact with the credit card issuers. These numbers can be processed without changing any of the infrastructure currently in place; the only changes will be at the end points, namely, the card users and the card issuers. We analyze the security requirements for dynamic virtual credit card numbers, discuss the design space, propose a scheme using HMAC, and prove its security under the assumption that the underlying function is a pseudorandom function (PRF).
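The HMAC-based generation can be sketched as follows. The digit-derivation and Luhn steps here are illustrative assumptions, not the authors' exact construction; the point is that card user and issuer can both recompute the number offline from a shared secret and a per-use context (a counter or merchant ID):

```python
import hmac
import hashlib

def luhn_check_digit(digits: str) -> str:
    # Standard Luhn: double every second digit from the right of the
    # payload, sum, and choose the digit that pads the total to a
    # multiple of 10.
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 0:  # these positions get doubled once the check digit is appended
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return str((10 - total % 10) % 10)

def virtual_card_number(secret: bytes, account: str, context: str) -> str:
    # Derive 15 digits from an HMAC over the account and the per-use
    # context, then append a Luhn check digit so the result passes the
    # same format validation as a real card number.
    mac = hmac.new(secret, f"{account}|{context}".encode(), hashlib.sha256).hexdigest()
    body = "".join(str(int(c, 16) % 10) for c in mac)[:15]
    return body + luhn_check_digit(body)
```

Because HMAC is deterministic, the issuer can regenerate and verify the number without any online exchange at generation time, which is the property the scheme calls "dynamic".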
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2017-10-01
The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of the objects of a virtual data center, including: a level-distribution model of the software-defined infrastructure of a virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for the containerization of virtual network functions in a virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
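The Karmarkar-Karp method the authors generalize can be illustrated in its classic two-way number-partitioning form: repeatedly replace the two largest weights with their difference, so the final residual is the achieved imbalance between the two sets (a sketch of the base heuristic, not the paper's VNF-placement variant):

```python
import heapq

def karmarkar_karp(weights):
    """Largest-differencing heuristic for two-way number partitioning.

    Repeatedly commits the two largest remaining weights to opposite
    sides and pushes back their difference; the last remaining value is
    the imbalance of the resulting partition.
    """
    heap = [-w for w in weights]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0
```

For {8, 7, 6, 5, 4} the heuristic reaches an imbalance of 2, while the optimum is 0 (8+7 vs 6+5+4), which is why such differencing methods are heuristics rather than exact algorithms.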
Working Group Reports and Presentations: Virtual Worlds and Virtual Exploration
NASA Technical Reports Server (NTRS)
LAmoreaux, Claudia
2006-01-01
Scientists and engineers are continually developing innovative methods to capitalize on recent developments in computational power. Virtual worlds and virtual exploration present a new toolset for project design, implementation, and resolution. Replication of the physical world in the virtual domain provides stimulating displays to augment current data analysis techniques and to encourage public participation. In addition, the virtual domain provides stakeholders with a low cost, low risk design and test environment. The following document defines a virtual world and virtual exploration, categorizes the chief motivations for virtual exploration, elaborates upon specific objectives, identifies roadblocks and enablers for realizing the benefits, and highlights the more immediate areas of implementation (i.e. the action items). While the document attempts a comprehensive evaluation of virtual worlds and virtual exploration, the innovative nature of the opportunities presented precludes completeness. The authors strongly encourage readers to derive additional means of utilizing the virtual exploration toolset.
NASA Astrophysics Data System (ADS)
Lino, A. C. L.; Dal Fabbro, I. M.
2008-04-01
The conception of a three-dimensional digital model of solid figures and plant organs started from the topographic survey of virtual surfaces [1], followed by the topographic survey of solid figures [2], fruit surface survey [3], and finally the generation of a 3D digital model [4] as presented by [1]. In this research work, i.e. step [4], the tested objects included cylinders, cubes, spheres and fruits. A Ronchi grid named G1 was generated on a PC, from which other grids, referred to as G2, G3 and G4, were set out of phase by 1/4, 1/2 and 3/4 of a period from G1. Grid G1 was then projected onto the sample surface; the projected grid was named Gd. The difference between Gd and G1, followed by filtration, generated the moiré fringes M1, and likewise the fringes M2, M3 and M4 were obtained from Gd. The fringes are out of phase with each other by 1/4 of a period and were processed with the Rising Sun Moiré software to produce the packed (wrapped) phase and, further on, the unwrapped fringes. The tested object was placed on a goniometer and rotated to survey the topography of four surfaces. These four surveyed surfaces were assembled by means of SCILAB software, obtaining a three-column matrix corresponding to the object coordinates, with elevation values and coordinates corrected as well. The work includes conclusions on the reliability of the proposed method as well as on the simplicity and low cost of the setup.
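Four fringe sets shifted by 1/4 period each admit the standard four-step (four-bucket) phase-shifting formula. A minimal sketch of that step, assuming the conventional arctangent combination (the Rising Sun Moiré software may implement the packing differently):

```python
import math

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shifting: recover the wrapped phase from four
    intensity samples of the same point taken 90 degrees apart."""
    return math.atan2(i4 - i2, i1 - i3)

def fringe_phase_map(m1, m2, m3, m4):
    """Apply the four-step formula pixel by pixel to four fringe images
    (given as lists of rows), yielding the wrapped (packed) phase map."""
    return [[wrapped_phase(a, b, c, d) for a, b, c, d in zip(r1, r2, r3, r4)]
            for r1, r2, r3, r4 in zip(m1, m2, m3, m4)]
```

Unwrapping the resulting phase map (removing the 2*pi jumps) then gives the elevation data assembled into the three-column coordinate matrix.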
Production of the next-generation library virtual tour
Duncan, James M.; Roth, Linda K.
2001-01-01
While many libraries offer overviews of their services through their Websites, only a small number of health sciences libraries provide Web-based virtual tours. These tours typically feature photographs of major service areas along with textual descriptions. This article describes the process for planning, producing, and implementing a next-generation virtual tour in which a variety of media elements are integrated: photographic images, 360-degree “virtual reality” views, textual descriptions, and contextual floor plans. Hardware and software tools used in the project are detailed, along with a production timeline and budget, tips for streamlining the process, and techniques for improving production. This paper is intended as a starting guide for other libraries considering an investment in such a project. PMID:11837254
A Vision for Future Virtual Training
2006-06-15
Future Virtual Training. In Virtual Media for Military Applications (pp. KN2-1 – KN2-12). Meeting Proceedings RTO-MP-HFM-136, Keynote 2. Neuilly-sur...Spin Out. By 2017, the FCS program will meet Full Operation Capability (FOC). The force structure of the Army at this time will include two BCTs...training environment, allowing them to meet preparatory training proficiency objectives virtually while minimizing the use of costly live ammunition. In
Transfer of motor learning from virtual to natural environments in individuals with cerebral palsy.
de Mello Monteiro, Carlos Bandeira; Massetti, Thais; da Silva, Talita Dias; van der Kamp, John; de Abreu, Luiz Carlos; Leone, Claudio; Savelsbergh, Geert J P
2014-10-01
With the growing accessibility of computer-assisted technology, rehabilitation programs for individuals with cerebral palsy (CP) increasingly use virtual reality environments to enhance motor practice. Thus, it is important to examine whether performance improvements in the virtual environment generalize to the natural environment. To examine this issue, we had 64 individuals, 32 of whom were individuals with CP and 32 typically developing individuals, practice two coincidence-timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key. In the more abstract, less tangible task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment. The results showed that individuals with CP timed less accurately than typically developing individuals, especially for the more abstract task in the virtual environment. The individuals with CP, like their typically developing peers, did improve coincidence timing with practice on both tasks. Importantly, however, these improvements were specific to the practice environment; there was no transfer of learning. It is concluded that the implementation of virtual environments for motor rehabilitation in individuals with CP should not be taken for granted but needs to be considered carefully.
Gulliver, Amelia; Chan, Jade KY; Bennett, Kylie; Griffiths, Kathleen M
2015-01-01
Background Help seeking for mental health problems among university students is low, and Internet-based interventions such as virtual clinics have the potential to provide private, streamlined, and high quality care to this vulnerable group. Objective The objective of this study was to conduct focus groups with university students to obtain input on potential functions and features of a university-specific virtual clinic for mental health. Methods Participants were 19 undergraduate students from an Australian university between 19 and 24 years of age. Focus group discussion was structured by questions that addressed the following topics: (1) the utility and acceptability of a virtual mental health clinic for students, and (2) potential features of a virtual mental health clinic. Results Participants viewed the concept of a virtual clinic for university students favorably, despite expressing concerns about privacy of personal information. Participants expressed a desire to connect with professionals through the virtual clinic, for the clinic to provide information tailored to issues faced by students, and for the clinic to enable peer-to-peer interaction. Conclusions Overall, results of the study suggest the potential for virtual clinics to play a positive role in providing students with access to mental health support. PMID:26543908
ERIC Educational Resources Information Center
Schaverien, Lynette
This paper describes a research-based, Web-delivered context, the Generative Virtual Classroom (GVC), in which student teachers can develop their ability to recognize, describe, analyze, and theorize learning, and it reports findings of three investigations into its use. The learning environment aims to exploit the possibilities of advanced…
Combining 3D structure of real video and synthetic objects
NASA Astrophysics Data System (ADS)
Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon
1998-04-01
This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to use the proposed technology in fields such as advanced animation, virtual reality and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes to augment the realism that computer graphics alone lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the use of computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with a 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map; graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive the 3D structure from test image sequences; due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or a Bezier surface, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map; because the 3D structure of the height map is already known, this step is easily accomplished. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
Learning to Drive a Wheelchair in Virtual Reality
ERIC Educational Resources Information Center
Inman, Dean P.; Loge, Ken; Cram, Aaron; Peterson, Missy
2011-01-01
This research project studied the effect that a technology-based training program, WheelchairNet, could contribute to the education of children with physical disabilities by providing a chance to practice driving virtual motorized wheelchairs safely within a computer-generated world. Programmers created three virtual worlds for training. Scenarios…
Learning Together and Working Apart: Routines for Organizational Learning in Virtual Teams
ERIC Educational Resources Information Center
Dixon, Nancy
2017-01-01
Purpose: Research suggests that teaming routines facilitate learning in teams. This paper identifies and details how specific teaming routines, implemented in a virtual team, support its continual learning. The study's focus was to generate authentic and descriptive accounts of the interviewees' experiences with virtual teaming routines.…
Educating Avatars: On Virtual Worlds and Pedagogical Intent
ERIC Educational Resources Information Center
Wang, Tsung Juang
2011-01-01
Virtual world technology is now being incorporated into various higher education programs, often with enthusiastic claims about the improvement of students' abilities to experience learning problems and tasks in computer-mediated virtual reality through the use of computer-generated personal agents or avatars. The interactivity of the avatars with…
Learning to explore the structure of kinematic objects in a virtual environment
Buckmann, Marcus; Gaschler, Robert; Höfer, Sebastian; Loeben, Dennis; Frensch, Peter A.; Brock, Oliver
2015-01-01
The current study tested the quantity and quality of human exploration learning in a virtual environment. Given the everyday experience of humans with physical object exploration, we document substantial practice gains in the time, force, and number of actions needed to classify the structure of virtual chains, marking the joints as revolute, prismatic, or rigid. In line with current work on skill acquisition, participants could generalize the new and efficient psychomotor patterns of object exploration to novel objects. On the one hand, practice gains in exploration performance could be captured by a negative exponential practice function. On the other hand, they could be linked to strategies and strategy change. After quantifying how much was learned in object exploration and identifying the time course of practice-related gains in exploration efficiency (speed), we identified what was learned. First, we identified strategy components that were associated with efficient (fast) exploration performance: sequential processing, simultaneous use of both hands, low use of pulling rather than pushing, and low use of force. Only the latter was beneficial irrespective of the characteristics of the other strategy components. Second, we therefore characterized efficient exploration behavior by strategies that simultaneously take into account the abovementioned strategy components. We observed that participants maintained a high level of flexibility, sampling from a pool of exploration strategies trading the level of psycho-motoric challenges with exploration speed. We discuss the findings pursuing the aim of advancing intelligent object exploration by combining analytic (object exploration in humans) and synthetic work (object exploration in robots) in the same virtual environment. PMID:25904878
Software for Managing Parametric Studies
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; McCann, Karen M.; DeVivo, Adrian
2003-01-01
The Information Power Grid Virtual Laboratory (ILab) is a Practical Extraction and Reporting Language (PERL) graphical-user-interface computer program that generates shell scripts to facilitate parametric studies performed on the Grid. (The Grid denotes a worldwide network of supercomputers used for scientific and engineering computations involving data sets too large to fit on desktop computers.) Heretofore, parametric studies on the Grid have been impeded by the need to create control-language scripts and edit input data files, painstaking tasks that are necessary for managing multiple jobs on multiple computers. ILab reflects an object-oriented approach to automation of these tasks: all data and operations are organized into packages in order to accelerate development and debugging. A container or document object in ILab, called an experiment, contains all the information (data and file paths) necessary to define a complex series of repeated, sequenced, and/or branching processes. For convenience and to enable reuse, this object is serialized to and from disk storage. At run time, the current ILab experiment is used to generate required input files and shell scripts, create directories, copy data files, and then both initiate and monitor the execution of all computational processes.
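The kind of script generation that an experiment object drives can be sketched as follows. This is a hypothetical minimal analogue (function name, directory layout and command template are assumptions, and ILab itself is written in PERL): expand a parameter grid into one run directory and shell script per parameter combination.

```python
import itertools
import pathlib

def generate_study_scripts(command, grid, out_dir="study"):
    """Expand a parametric study into per-run shell scripts.

    `command` is a format string over parameter names; `grid` maps each
    parameter name to the list of values it should take. One directory
    and one run.sh is created per point of the Cartesian product.
    """
    names = sorted(grid)
    scripts = []
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        tag = "_".join(f"{n}{v}" for n, v in params.items())
        run_dir = pathlib.Path(out_dir) / tag
        run_dir.mkdir(parents=True, exist_ok=True)
        script = run_dir / "run.sh"
        script.write_text("#!/bin/sh\n" + command.format(**params) + "\n")
        scripts.append(script)
    return scripts
```

A real system would additionally stage input files, submit the scripts to the job scheduler, and monitor their execution, as the abstract describes.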
NASA Astrophysics Data System (ADS)
Kamstra, Rhiannon L.; Dadgar, Saedeh; Wigg, John; Chowdhury, Morshed A.; Phenix, Christopher P.; Floriano, Wely B.
2014-11-01
Our group has recently demonstrated that virtual screening is a useful technique for the identification of target-specific molecular probes. In this paper, we discuss some of our proof-of-concept results involving two biologically relevant target proteins, and report the development of a computational script to generate large databases of fluorescence-labelled compounds for computer-assisted molecular design. The virtual screening of a small library of 1,153 fluorescently-labelled compounds against two targets, and the experimental testing of selected hits reveal that this approach is efficient at identifying molecular probes, and that the screening of a labelled library is preferred over the screening of base compounds followed by conjugation of confirmed hits. The automated script for library generation explores the known reactivity of commercially available dyes, such as NHS-esters, to create large virtual databases of fluorescence-tagged small molecules that can be easily synthesized in a laboratory. A database of 14,862 compounds, each tagged with the ATTO680 fluorophore was generated with the automated script reported here. This library is available for downloading and it is suitable for virtual ligand screening aiming at the identification of target-specific fluorescent molecular probes.
ERIC Educational Resources Information Center
Trelease, Robert B.; Nieder, Gary L.
2013-01-01
Web deployable anatomical simulations or "virtual reality learning objects" can easily be produced with QuickTime VR software, but their use for online and mobile learning is being limited by the declining support for web browser plug-ins for personal computers and unavailability on popular mobile devices like Apple iPad and Android…
Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance
NASA Astrophysics Data System (ADS)
Zhan, Yihong; Bai, Yu; Liu, Ziheng
As technology has improved and collaborative software has been developed, virtual teams with geographically dispersed members spread across diverse physical locations have become increasingly prominent. Supported by advancing communication technologies, virtual teams are able to largely transcend time and space. Virtual teams have changed the corporate landscape; they are more complex and dynamic than traditional teams, since their members are spread across diverse geographical locations and play different roles within the team. Therefore, how to realize good governance of a virtual team and achieve good virtual team performance is becoming a critical challenge, as good governance is essential for a high-performance virtual team. This paper explores the performance and governance mechanisms of virtual teams and establishes a model to explain the relationship between performance and governance mechanisms in virtual teams. Focusing on managing virtual teams, it aims to find strategies that help business organizations improve the performance of their virtual teams and achieve the objectives of good virtual team management.
Declarative Knowledge Acquisition in Immersive Virtual Learning Environments
ERIC Educational Resources Information Center
Webster, Rustin
2016-01-01
The author investigated the interaction effect of immersive virtual reality (VR) in the classroom. The objective of the project was to develop and provide a low-cost, scalable, and portable VR system containing purposely designed and developed immersive virtual learning environments for the US Army. The purpose of the mixed design experiment was…
A Study of Students' Attitude Towards Virtual Education in Pakistan
ERIC Educational Resources Information Center
Hussain, Irshad
2007-01-01
Virtual education paradigm has been developing as a form of distance education to provide education across the boundaries of a nation and/or country. It imparts education through information and communication technologies. In Pakistan the Virtual University of Pakistan imparts it. The main objective of the study was to evaluate the students'…
The Development of a Virtual Marine Museum for Educational Applications
ERIC Educational Resources Information Center
Tarng, Wermhuar; Change, Mei-Yu; Ou, Kuo-Liang; Chang, Ya-Wen; Liou, Hsin-Hun
2009-01-01
The objective of this article is to investigate the computer animation and virtual reality technologies for developing a virtual marine museum. The museum consists of three exhibition areas. The first area displays fishes in freshwater, including creeks, rivers, and dams in Taiwan. The second area exhibits marine ecology and creatures of different…
Active Learning through the Use of Virtual Environments
ERIC Educational Resources Information Center
Mayrose, James
2012-01-01
Immersive Virtual Reality (VR) has seen explosive growth over the last decade. Immersive VR attempts to give users the sensation of being fully immersed in a synthetic environment by providing them with 3D hardware, and allowing them to interact with objects in virtual worlds. The technology is extremely effective for learning and exploration, and…
Taking the Plunge: Districts Leap into Virtualization
ERIC Educational Resources Information Center
Demski, Jennifer
2010-01-01
Moving from a traditional desktop computing environment to a virtualized solution is a daunting task. In this article, the author presents case histories of three districts that have made the conversion to virtual computing to learn about their experiences: What prompted them to make the move, and what were their objectives? Which obstacles prove…
NASA Astrophysics Data System (ADS)
Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.
2016-10-01
This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real-time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they do promise a much more time-efficient method of developing imagery of different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.
An experiment on fear of public speaking in virtual reality.
Pertaub, D P; Slater, M; Barker, C
2001-01-01
Can virtual reality exposure therapy be used to treat people with social phobia? To answer this question it is vital to know if people will respond to virtual humans (avatars) in a virtual social setting in the same way they would to real humans. If someone is extremely anxious with real people, will they also be anxious when faced with simulated people, despite knowing that the avatars are computer generated? In [17] we described a small pilot study that placed 10 people before a virtual audience. The purpose was to assess the extent to which social anxiety, specifically fear of public speaking, was induced by the virtual audience, and the extent of influence of the degree of immersion (head-mounted display or desktop monitor). The current paper describes a follow-up study conducted with 40 subjects, and the results clearly show that not only is social anxiety induced by the audience, but the degree of anxiety experienced is directly related to the type of virtual audience feedback the speaker receives. In particular, a hostile negative audience scenario was found to generate strong affect in speakers, regardless of whether or not they normally suffered from fear of public speaking.
Sounds of silence: How to animate virtual worlds with sound
NASA Technical Reports Server (NTRS)
Astheimer, Peter
1993-01-01
Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.
Kinematic evaluation of virtual walking trajectories.
Cirio, Gabriel; Olivier, Anne-Hélène; Marchal, Maud; Pettré, Julien
2013-04-01
Virtual walking, a fundamental task in Virtual Reality (VR), is greatly influenced by the locomotion interface being used, by the specificities of input and output devices, and by the way the virtual environment is represented. No matter how virtual walking is controlled, the generation of realistic virtual trajectories is absolutely required for some applications, especially those dedicated to the study of walking behaviors in VR, navigation through virtual places for architecture, rehabilitation and training. Previous studies focused on evaluating the realism of locomotion trajectories have mostly considered the result of the locomotion task (efficiency, accuracy) and its subjective perception (presence, cybersickness). Few have focused on the locomotion trajectory itself, and then only in situations of geometrically constrained tasks. In this paper, we study the realism of unconstrained trajectories produced during virtual walking by addressing the following question: did the user reach his destination by virtually walking along a trajectory he would have followed in similar real conditions? To this end, we propose a comprehensive evaluation framework consisting of a set of trajectographical criteria and a locomotion model to generate reference trajectories. We consider a simple locomotion task where users walk between two oriented points in space. The travel path is analyzed both geometrically and temporally in comparison to simulated reference trajectories. In addition, we demonstrate the framework through a user study which considered an initial set of common and frequent virtual walking conditions, namely different input devices, output display devices, control laws, and visualization modalities. The study provides insight into the relative contributions of each condition to the overall realism of the resulting virtual trajectories.
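One plausible geometric criterion of the kind such a framework could include (an illustrative assumption, not the authors' exact criteria set) is the mean deviation of the walked path from the model-generated reference trajectory:

```python
import math

def mean_trajectory_deviation(path, reference):
    """Mean distance from each sample of `path` to its nearest point on
    `reference`; both are lists of (x, y) position samples. A value near
    zero means the walked trajectory stays close to the reference."""
    def nearest(p):
        return min(math.dist(p, q) for q in reference)
    return sum(nearest(p) for p in path) / len(path)
```

A full evaluation would pair geometric criteria like this with temporal ones (e.g. speed profiles along the path), as the abstract notes that travel paths are analyzed both geometrically and temporally.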
Virtual healthcare delivery: defined, modeled, and predictive barriers to implementation identified.
Harrop, V M
2001-01-01
Provider organizations lack: 1. a definition of "virtual" healthcare delivery relative to the products, services, and processes offered by dot.coms, web-compact disk healthcare content providers, telemedicine, and telecommunications companies, and 2. a model for integrating real and virtual healthcare delivery. This paper defines virtual healthcare delivery as asynchronous, outsourced, and anonymous, then proposes a 2x2 Real-Virtual Healthcare Delivery model focused on real and virtual patients and real and virtual provider organizations. Using this model, provider organizations can systematically deconstruct healthcare delivery in the real world and reconstruct appropriate pieces in the virtual world. Observed barriers to virtual healthcare delivery are: resistance to telecommunication integrated delivery networks and outsourcing; confusion over virtual infrastructure requirements for telemedicine and full-service web portals, and the impact of integrated delivery networks and outsourcing on extant cultural norms and revenue generating practices. To remain competitive provider organizations must integrate real and virtual healthcare delivery.
ERIC Educational Resources Information Center
O'Connor, Eileen
2013-01-01
With the advent of web 2.0 and virtual technologies and new understandings about learning within a global, networked environment, online course design has moved beyond the constraints of text readings, papers, and discussion boards. This next generation of online courses needs to dynamically and actively integrate the wide-ranging distribution of…
Social Virtual Worlds for Technology-Enhanced Learning on an Augmented Learning Platform
ERIC Educational Resources Information Center
Jin, Li; Wen, Zhigang; Gough, Norman
2010-01-01
Virtual worlds have been linked with e-learning applications to create virtual learning environments (VLEs) for the past decade. However, while they can support many educational activities that extend both traditional on-campus teaching and distance learning, they are used primarily for learning content generated and managed by instructors. With…
An atlas of objectively analyzed atmospheric cross sections, 1973-1980
NASA Technical Reports Server (NTRS)
Goodman, J.; Gaines, S. E.; Hipskind, R. S.
1985-01-01
Atmospheric variability over time scales greater than one month is conceptually simplified and readily recognized from vertical cross-sections of zonal-monthly mean data. The reduction to two dimensions, latitude and height, explicitly eliminates all zonal waves but implicitly retains their effects on the thermal-pressure fields and the dynamically related zonal wind fields. This atlas contains 96 examples, spanning all latitudes in both the northern and southern hemispheres and two decades in pressure, from 1000 to 10 mb. Four analyses, representing each month from January 1973 through December 1980, depict the potential virtual temperature, the observed zonal wind velocity, the virtual temperature and the geostrophic zonal wind velocity. Each variable is contoured at a close interval to facilitate visual estimates of stability and vorticity via their gradients. The analyses are generated and contoured by objective computer methods from just one data source: in situ measurements from the conventional rawin-radiosonde system. Although the analyses are independently made at constant pressure levels (the mandatory levels), the cross-sections are drawn with geopotential height as the ordinate. With this ordinate one can observe the seasonal expansion and contraction of the earth's atmosphere, especially that of the polar stratosphere. Also, the quasi-biennial cycle can be identified and studied directly from successive cross-sections.
NASA Astrophysics Data System (ADS)
van Aardt, J. A.; van Leeuwen, M.; Kelbe, D.; Kampe, T.; Krause, K.
2015-12-01
Remote sensing is widely accepted as a useful technology for characterizing the Earth's surface in an objective, reproducible, and economically feasible manner. To date, the calibration and validation of remote sensing data sets and biophysical parameter estimates remain challenging due to the requirements to sample large areas for ground-truth data collection, and restrictions to sample these data within narrow temporal windows centered around flight campaigns or satellite overpasses. The computer graphics community has taken significant steps to ameliorate some of these challenges by providing an ability to generate synthetic images based on geometrically and optically realistic representations of complex targets and imaging instruments. These synthetic data can be used for conceptual and diagnostic tests of instrumentation prior to sensor deployment or to examine linkages between biophysical characteristics of the Earth's surface and at-sensor radiance. In the last two decades, the use of image generation techniques for remote sensing of the vegetated environment has evolved from the simulation of simple homogeneous, hypothetical vegetation canopies to advanced scenes and renderings with a high degree of photo-realism. Reported virtual scenes comprise up to 100M surface facets; however, due to the tight coupling between hardware and software development, the full potential of image generation techniques for forestry applications has yet to be fully explored. In this presentation, we examine the potential computer graphics techniques have for the analysis of forest structure-function relationships and demonstrate techniques that provide for the modeling of extremely high-faceted virtual forest canopies, comprising billions of scene elements.
We demonstrate the use of ray tracing simulations for the analysis of gap size distributions and characterization of foliage clumping within spatial footprints that allow for a tight matching between characteristics derived from these virtual scenes and typical pixel resolutions of remote sensing imagery.
Photorealistic virtual anatomy based on Chinese Visible Human data.
Heng, P A; Zhang, S X; Xie, Y M; Wong, T T; Chui, Y P; Cheng, C Y
2006-04-01
Virtual reality based learning of human anatomy is feasible when a database of 3D organ models is available for the learner to explore, visualize, and dissect in virtual space interactively. In this article, we present our latest work on photorealistic virtual anatomy applications based on the Chinese Visible Human (CVH) data. We have focused on the development of state-of-the-art virtual environments that feature interactive photo-realistic visualization and dissection of virtual anatomical models constructed from ultra-high resolution CVH datasets. We also outline our latest progress in applying these highly accurate virtual and functional organ models to generate realistic look and feel to advanced surgical simulators. (c) 2006 Wiley-Liss, Inc.
A computational model of the cognitive impact of decorative elements on the perception of suspense
NASA Astrophysics Data System (ADS)
Delatorre, Pablo; León, Carlos; Gervás, Pablo; Palomo-Duarte, Manuel
2017-10-01
Suspense is a key narrative issue in terms of emotional gratification, influencing the way in which the audience experiences a story. Virtually all narrative media use suspense as a strategy for reader engagement, regardless of the technology or genre. Because suspense is such an important narrative component, computational creativity has tackled it in a number of automatic storytelling systems. These systems are mainly based on narrative theories and, in general, lack a cognitive approach involving the study of empathy or of the emotional effect of the environment. With this idea in mind, this paper reports on a computational model of the influence of decorative elements on suspense. It has been developed as part of a more general proposal for plot generation based on cognitive aspects. In order to test and parameterise the model, an evaluation based on textual stories and an evaluation based on a 3D virtual environment were run. In both cases, results suggest a direct influence of the emotional perception of decorative objects on the suspense of a scene.
ERIC Educational Resources Information Center
Yurt, Eyup; Sunbul, Ali Murat
2012-01-01
In this study, the effect of modeling based activities using virtual environments and concrete objects on spatial thinking and mental rotation skills was investigated. The study was designed as a pretest-posttest model with a control group, which is one of the experimental research models. The study was carried out on sixth grade students…
ERIC Educational Resources Information Center
Wrzesien, Maja; Raya, Mariano Alcaniz
2010-01-01
The objective of this study is to present and to evaluate the E-Junior application: a serious virtual world (SVW) for teaching children natural science and ecology. E-Junior was designed according to pedagogical theories and curricular objectives to help children learn about the Mediterranean Sea and its environmental issues while playing. In this…
The Virtual Tablet: Virtual Reality as a Control System
NASA Technical Reports Server (NTRS)
Chronister, Andrew
2016-01-01
In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating two issues discovered in the development of previous efforts: the difficulty of visually determining the interface location and the lack of tactile feedback. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment.
My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
The virtual environment display system
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael W.
1991-01-01
Virtual environment technology is a display and control technology that can surround a person in an interactive computer generated or computer mediated virtual environment. It has evolved at NASA-Ames since 1984 to serve NASA's missions and goals. The exciting potential of this technology, sometimes called Virtual Reality, Artificial Reality, or Cyberspace, has been recognized recently by the popular media, industry, academia, and government organizations. Much research and development will be necessary to bring it to fruition.
NASA Technical Reports Server (NTRS)
Teng, William; Rui, Hualan; Strub, Richard; Vollmer, Bruce
2015-01-01
A Digital Divide has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or maps) and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported data rods project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives and constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly (virtual) data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO), and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (9,000 time steps) in 90 seconds.
Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.
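The reorganization the abstract describes, turning a time-ordered stack of maps into point-indexed time series, can be illustrated with a toy numpy sketch (my illustration; the names and array shapes are invented and not the GES DISC implementation):

```python
import numpy as np

# Toy archive: one 2-D grid ("map") per time step, as the data are stored.
n_t, n_y, n_x = 8, 4, 5
archive = [np.arange(n_y * n_x, dtype=float).reshape(n_y, n_x) + t
           for t in range(n_t)]

def extract_rod(archive, iy, ix):
    """Orthogonal access: building one point's time series means touching
    every time-step array in the archive."""
    return np.array([step[iy, ix] for step in archive])

def pregenerate_rods(archive):
    """Reorganize once so that each point's time series (a 'data rod') is a
    contiguous slice -- the pre-generated-rods idea."""
    cube = np.stack(archive)        # shape (t, y, x)
    return cube.transpose(1, 2, 0)  # shape (y, x, t)

rods = pregenerate_rods(archive)
```

After the one-time reorganization, `rods[iy, ix]` yields the same series as scanning the whole archive, but as a single contiguous read, which is the essence of trading up-front reorganization cost for fast time-series access.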
Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing
NASA Astrophysics Data System (ADS)
Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.
2006-05-01
Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are also tabled to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
Applied virtual reality at the Research Triangle Institute
NASA Technical Reports Server (NTRS)
Montoya, R. Jorge
1994-01-01
Virtual Reality (VR) is a way for humans to use computers in visualizing, manipulating and interacting with large geometric data bases. This paper describes a VR infrastructure and its application to marketing, modeling, architectural walk through, and training problems. VR integration techniques used in these applications are based on a uniform approach which promotes portability and reusability of developed modules. For each problem, a 3D object data base is created using data captured by hand or electronically. The object's realism is enhanced through either procedural or photo textures. The virtual environment is created and populated with the data base using software tools which also support interactions with and immersivity in the environment. These capabilities are augmented by other sensory channels such as voice recognition, 3D sound, and tracking. Four applications are presented: a virtual furniture showroom, virtual reality models of the North Carolina Global TransPark, a walk through the Dresden Fraunenkirche, and the maintenance training simulator for the National Guard.
Hologram-reconstruction signal enhancement
NASA Technical Reports Server (NTRS)
Mezrich, R. S.
1977-01-01
Principle of heterodyne detection is used to combine object beam and reconstructed virtual image beam. All light valves in page composer are opened, and virtual-image beam is allowed to interfere with light from valves.
NASA Astrophysics Data System (ADS)
Basso Moro, Sara; Carrieri, Marika; Avola, Danilo; Brigadoi, Sabrina; Lancia, Stefania; Petracca, Andrea; Spezialetti, Matteo; Ferrari, Marco; Placidi, Giuseppe; Quaresima, Valentina
2016-06-01
Objective. In the last few years, interest in applying virtual reality systems for neurorehabilitation has been increasing. Their compatibility with neuroimaging techniques, such as functional near-infrared spectroscopy (fNIRS), allows for the investigation of brain reorganization with multimodal stimulation and real-time control of the changes occurring in brain activity. The present study was aimed at testing a novel semi-immersive visuo-motor task (VMT), which is suitable for adoption in the field of neurorehabilitation of upper limb motor function. Approach. A virtual environment was simulated through a three-dimensional hand-sensing device (the LEAP Motion Controller), and the concomitant VMT-related prefrontal cortex (PFC) response was monitored non-invasively by fNIRS. For the VMT, performed at three different levels of difficulty, it was hypothesized that the PFC would be activated, with greater activation expected in the ventrolateral PFC (VLPFC) given its involvement in motor action planning and in the allocation of attentional resources to generate goals from current contexts. Twenty-one subjects were asked to move their right hand/forearm with the purpose of guiding a virtual sphere over a virtual path. A twenty-channel fNIRS system was employed for measuring changes in PFC oxygenated-deoxygenated hemoglobin (O2Hb/HHb, respectively). Main results. A VLPFC O2Hb increase and a concomitant HHb decrease were observed during the VMT performance, without any difference in relation to the task difficulty. Significance. The present study has revealed a particular involvement of the VLPFC in the execution of the novel proposed semi-immersive VMT adoptable in the neurorehabilitation field.
The building blocks of the full body ownership illusion
Maselli, Antonella; Slater, Mel
2013-01-01
Previous work has reported that it is not difficult to give people the illusion of ownership over an artificial body, providing a powerful tool for the investigation of the neural and cognitive mechanisms underlying body perception and self-consciousness. We present an experimental study that uses immersive virtual reality (IVR) focused on identifying the perceptual building blocks of this illusion. We systematically manipulated visuotactile and visual sensorimotor contingencies, visual perspective, and the appearance of the virtual body in order to assess their relative role and mutual interaction. Consistent results from subjective reports and physiological measures showed that a first-person perspective over a fake humanoid body is essential for eliciting a body ownership illusion. We found that the illusion of ownership can be generated when the virtual body has a realistic skin tone and spatially substitutes the real body seen from a first-person perspective. In this case there is no need for an additional contribution of congruent visuotactile or sensorimotor cues. Additionally, we found that the processing of incongruent perceptual cues can be modulated by the level of the illusion: when the illusion is strong, incongruent cues are not experienced as incorrect. Participants exposed to asynchronous visuotactile stimulation can experience the ownership illusion and perceive touch as originating from an object seen to contact the virtual body. Analogously, when the level of realism of the virtual body is not high enough and/or when there is no spatial overlap between the two bodies, then the contribution of congruent multisensory and/or sensorimotor cues is required for evoking the illusion. On the basis of these results and inspired by findings from neurophysiological recordings in the monkey, we propose a model that accounts for many of the results reported in the literature. PMID:23519597
Using Virtual Worlds in Education: Second Life[R] as an Educational Tool
ERIC Educational Resources Information Center
Baker, Suzanne C.; Wentz, Ryan K.; Woods, Madison M.
2009-01-01
The online virtual world Second Life (www.secondlife.com) has multiple potential uses in teaching. In Second Life (SL), users create avatars that represent them in the virtual world. Within SL, avatars can interact with each other and with objects and environments. SL offers tremendous creative potential in that users can create content within the…
The Development of a Virtual Dinosaur Museum
ERIC Educational Resources Information Center
Tarng, Wernhuar; Liou, Hsin-Hun
2007-01-01
The objective of this article is to study the network and virtual reality technologies for developing a virtual dinosaur museum, which provides a Web-learning environment for students of all ages and the general public to know more about dinosaurs. We first investigate the method for building the 3D dynamic models of dinosaurs, and then describe…
Virtual viewpoint generation for three-dimensional display based on the compressive light field
NASA Astrophysics Data System (ADS)
Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan
2016-10-01
Virtual viewpoint generation is one of the key technologies of three-dimensional (3D) display; it renders new perspective images of a scene from the existing viewpoints. The three-dimensional scene information can be effectively recovered at different viewing angles to allow users to switch between different views. However, in the process of matching multiple viewpoints, when N free viewpoints are received, every pair of viewpoints must be matched, namely C(N,2) = N(N-1)/2 matchings, and errors can occur when matching across different baselines. To address the great complexity of the traditional virtual viewpoint generation process, a novel and rapid virtual viewpoint generation algorithm is presented in this paper that uses actual light field information rather than geometric information. Moreover, to preserve the physical meaning of the data, we mainly use nonnegative tensor factorization (NTF). A tensor representation is introduced for virtual multilayer displays. The light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. The tensor representation allows for optimal decomposition of a light field into time-multiplexed, light-attenuating layers using NTF. Finally, compressive light field synthesis of the multilayer display information is used to obtain virtual viewpoints by repeated multiplication. Experimental results show not only that the original light field is restored with high image quality (PSNR of 25.6 dB), but also that the deficiency of traditional matching is overcome and any viewpoint can be obtained from the N free viewpoints.
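The layer decomposition above relies on nonnegative factorization; its matrix (second-order) special case with the classic Lee-Seung multiplicative updates conveys the core idea. A minimal sketch under that simplification (my illustration, not the authors' NTF code; the toy "light field" matrix is invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(L, rank, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates: factor L ~= W @ H with W, H >= 0.
    Multiplicative updates preserve nonnegativity, which is what gives the
    factors a physical meaning as light-attenuating layer patterns."""
    n, m = L.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ L) / (W.T @ W @ H + eps)
        W *= (L @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy nonnegative "light field" matrix of exact rank 2.
L = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(L, rank=2)
err = np.linalg.norm(L - W @ H) / np.linalg.norm(L)
```

Because the toy matrix is exactly low-rank and nonnegative, the relative reconstruction error drops to a small value while both factors stay nonnegative; the paper's NTF generalizes this from matrices to higher-order tensors with time-multiplexed frames.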
Virtual Images: Going Through the Looking Glass
NASA Astrophysics Data System (ADS)
Mota, Ana Rita; dos Santos, João Lopes
2017-01-01
Virtual images are often introduced through a "geometric" perspective, with little conceptual or qualitative illustrations, hindering a deeper understanding of this physical concept. In this paper, we present two rather simple observations that force a critical reflection on the optical nature of a virtual image. This approach is supported by the reflect-view, a useful device in geometrical optics classes because it allows a visual confrontation between virtual images and real objects that seemingly occupy the same region of space.
Ray Tracing with Virtual Objects.
ERIC Educational Resources Information Center
Leinoff, Stuart
1991-01-01
Introduces the method of ray tracing to analyze the refraction or reflection of real or virtual images from multiple optical devices. Discusses ray-tracing techniques for locating images using convex and concave lenses or mirrors. (MDH)
Wang, Yu; Helminen, Emily; Jiang, Jingfeng
2015-01-01
Purpose: Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. Methods: The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. 
Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Results: Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that were normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. Conclusions: The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms and other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE. PMID:26328994
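At the core of QUE is estimating local tissue displacement between pre- and post-compression echo signals and taking strain as its axial gradient. A 1-D windowed cross-correlation toy (my illustration, far simpler than the Field II/FEBio pipeline; the signal and parameters are invented) shows the principle:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_displacement(pre, post, win=64, step=32, search=30):
    """For each window of the pre-compression signal, find the integer lag
    into the post-compression signal with the highest normalized
    correlation -- the classic time-delay estimator behind QUE."""
    starts, lags = [], []
    for start in range(0, len(pre) - win - search, step):
        ref = pre[start:start + win]
        scores = [np.dot(ref, post[start + lag:start + lag + win])
                  / (np.linalg.norm(post[start + lag:start + lag + win]) + 1e-12)
                  for lag in range(search)]
        starts.append(start)
        lags.append(int(np.argmax(scores)))
    return np.array(starts), np.array(lags)

# Synthetic RF-like echo signal; a uniform 1% compression maps the echo
# at depth d in the pre signal to depth d/(1 - 0.01) in the post signal.
n = 2048
pre = rng.standard_normal(n)
depth = np.arange(n, dtype=float)
post = np.interp(depth * (1.0 - 0.01), depth, pre)

starts, lags = estimate_displacement(pre, post)
strain = np.polyfit(starts, lags, 1)[0]  # slope of displacement vs depth
```

The fitted slope of displacement versus depth recovers the applied ~1% strain; in the virtual QUE platform the same estimation is driven by Field II echo data and FEBio-computed deformations instead of this synthetic stretch.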
Promotion of Self-directed Learning Using Virtual Patient Cases
Schonder, Kristine; McGee, James
2013-01-01
Objective. To assess the effectiveness of virtual patient cases to promote self-directed learning (SDL) in a required advanced therapeutics course. Design. Virtual patient software based on a branched-narrative decision-making model was used to create complex patient case simulations to replace lecture-based instruction. Within each simulation, students used SDL principles to learn course objectives, apply their knowledge through clinical recommendations, and assess their progress through patient outcomes and faculty feedback linked to their individual decisions. Group discussions followed each virtual patient case to provide further interpretation, clarification, and clinical perspective. Assessments. Students found the simulated patient cases to be organized (90%), enjoyable (82%), intellectually challenging (97%), and valuable to their understanding of course content (91%). Students further indicated that completion of the virtual patient cases prior to class permitted better use of class time (78%) and promoted SDL (84%). When assessment questions regarding material on postoperative nausea and vomiting were compared, no difference in scores was found between the students who attended the lecture on the material in 2011 (control group) and those who completed the virtual patient case on the material in 2012 (intervention group). Conclusion. Completion of virtual patient cases, designed to replace lectures and promote SDL, was overwhelmingly supported by students and proved to be as effective as traditional teaching methods. PMID:24052654
NASA Astrophysics Data System (ADS)
Herbuś, K.; Ociepka, P.
2016-08-01
The development of methods of computer-aided design and engineering allows conducting virtual tests, among others concerning motion simulation of technical means. The paper presents a method of integrating an object in the form of a virtual model of a Stewart platform with an avatar of a vehicle moving in a virtual environment. The problem area includes issues related to the fidelity with which the work of the analyzed technical means is mapped. The main object of investigation is a 3D model of a Stewart platform, which is a subsystem of a simulator designed for driving instruction for disabled persons. The analyzed model of the platform, prepared for motion simulation, was created in the “Motion Simulation” module of the CAD/CAE-class system Siemens PLM NX, whereas the virtual environment, in which the avatar of the passenger car moves, was elaborated in the VR-class system EON Studio. The element integrating both of the mentioned software environments is a developed application that reads information from the virtual reality (VR) concerning the current position of the car avatar. Then, based on the accepted algorithm, it sends control signals to the respective joints of the model of the Stewart platform (CAD).
Is Carbon Capture and Storage Really Needed?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsouris, Costas; Williams, Kent Alan; Aaron, D
2010-01-01
Two of the greatest contemporary global challenges are anthropogenic greenhouse gas emissions and energy sustainability. A popular proposed solution to the former problem is carbon capture and storage (CCS). Unfortunately, CCS has little benefit for energy sustainability and introduces significant long-term costs and risks. Thus, we propose the adoption of 'virtual CCS' by directing the resources that would have been spent on CCS to alternative energy technologies. (The term 'virtual' is used here because the concept described in this work satisfies the Merriam-Webster Dictionary definition of virtual: 'being such in essence or effect though not formally recognized or admitted.') In this example, we consider wind and nuclear power and use the funds that would have been required by CCS to invest in installation and operation of these technologies. Many other options exist in addition to wind and nuclear power including solar, biomass, geothermal, and others. These additional energy technologies can be considered in future studies. While CCS involves spending resources to concentrate CO2 in sinks, such as underground reservoirs, low-carbon alternative energy produces power, which will displace fossil fuel use while simultaneously generating revenues. Thus, these alternative energy technologies achieve the same objective as that of CCS, namely, the avoidance of atmospheric CO2 emissions.
Samothrakis, S; Arvanitis, T N; Plataniotis, A; McNeill, M D; Lister, P F
1997-11-01
Virtual Reality Modelling Language (VRML) is the start of a new era for medicine and the World Wide Web (WWW). Scientists can use VRML across the Internet to explore new three-dimensional (3D) worlds, share concepts and collaborate together in a virtual environment. VRML enables the generation of virtual environments through the use of geometric, spatial and colour data structures to represent 3D objects and scenes. In medicine, researchers often want to interact with scientific data, which in several instances may also be dynamic (e.g. MRI data). This data is often very large and is difficult to visualise. A 3D graphical representation can make the information contained in such large data sets more understandable and easier to interpret. Fast networks and satellites can reliably transfer large data sets from computer to computer. This has led to the adoption of remote tele-working in many applications, including medical applications. Radiology experts, for example, can view and inspect in near real-time a 3D data set acquired from a patient who is in another part of the world. Such technology is destined to improve the quality of life for many people. This paper introduces VRML (including some technical details) and discusses the advantages of VRML in application development.
Virtual Reality Website of Indonesia National Monument and Its Environment
NASA Astrophysics Data System (ADS)
Wardijono, B. A.; Hendajani, F.; Sudiro, S. A.
2017-02-01
The National Monument (Monumen Nasional) is an Indonesian national monument located in Jakarta. The monument is a symbol of Jakarta and a source of pride for the people of Jakarta and of Indonesia. It also houses a museum on the history of the country. To provide information to the general public, in this research we created and developed 3D graphics models of the National Monument and its surrounding environment. Virtual reality technology was used to display the visualization of the National Monument and its surroundings in 3D graphical form. The latest programming technology makes it possible to display 3D objects in an internet browser. This research used Unity3D and WebGL to build virtual reality models that can be implemented and shown on a website. The result of this research is a three-dimensional website of the National Monument and the objects in its surrounding environment that can be displayed through a web browser. The virtual reality scene of the whole set of objects was divided into a number of scenes so that it can be displayed as a real-time visualization.
NASA Astrophysics Data System (ADS)
Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella
2015-09-01
Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition of the reconstructed 3D human representation compared to animated computer avatars.
Virtual reality and planetary exploration
NASA Technical Reports Server (NTRS)
Mcgreevy, Michael W.
1992-01-01
Exploring planetary environments is central to NASA's missions and goals. A new computing technology called Virtual Reality has much to offer in support of planetary exploration. This technology augments and extends human presence within computer-generated and remote spatial environments. Historically, NASA has been a leader in many of the fundamental concepts and technologies that comprise Virtual Reality. Indeed, Ames Research Center has a central role in the development of this rapidly emerging approach to using computers. This ground breaking work has inspired researchers in academia, industry, and the military. Further, NASA's leadership in this technology has spun off new businesses, has caught the attention of the international business community, and has generated several years of positive international media coverage. In the future, Virtual Reality technology will enable greatly improved human-machine interactions for more productive planetary surface exploration. Perhaps more importantly, Virtual Reality technology will democratize the experience of planetary exploration and thereby broaden understanding of, and support for, this historic enterprise.
Dynamic Test Generation for Large Binary Programs
2009-11-12
the fuzzing@whitestar.linuxbox.org mailing list, including Jared DeMott, Disco Jonny, and Ari Takanen, for discussions on fuzzing tradeoffs. Martin...as is the case for large applications where exercising all execution paths is virtually hopeless anyway. This point will be further discussed in...consumes trace files generated by iDNA and virtually re-executes the recorded runs. TruScan offers several features that substantially simplify symbolic
Virtual reality and consciousness inference in dreaming
Hobson, J. Allan; Hong, Charles C.-H.; Friston, Karl J.
2014-01-01
This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that – through experience-dependent plasticity – becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep – and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain’s generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis – evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research. PMID:25346710
Future Game Developers within a Virtual World: Learner Archetypes and Team Leader Attributes
ERIC Educational Resources Information Center
Franetovic, Marija
2016-01-01
This case study research sought to understand a subset of the next generation in reference to virtual world learning within a game development course. The students completed an ill-structured team project which was facilitated using authentic learning strategies within a virtual world over a period of seven weeks. Research findings emerged from…
ERIC Educational Resources Information Center
Yuzer, T. Volkan
2007-01-01
Internet usage has been increasing worldwide. This situation highlights that the number of potential distance learners in the Internet society has been increasing. In addition, terms and concepts of Internet environments, such as virtual reality, are becoming widespread in this society. It is also possible to explain the…
ERIC Educational Resources Information Center
Taylor, Michael J.; Taylor, Dave; Vlaev, Ivo; Elkin, Sarah
2017-01-01
Recent advances in communication technologies enable potential provision of remote education for patients using computer-generated environments known as virtual worlds. Previous research has revealed highly variable levels of patient receptiveness to using information technologies for healthcare-related purposes. This preliminary study involved…
Virtual Worlds vs Books and Videos in History Education
ERIC Educational Resources Information Center
Ijaz, Kiran; Bogdanovych, Anton; Trescak, Tomas
2017-01-01
In this paper, we investigate an application of virtual reality and artificial intelligence (AI) as a technological combination that has a potential to improve the learning experience and engage with the modern generation of students. To address this need, we have created a virtual reality replica of one of humanity's first cities, the city of…
Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael
2011-01-01
This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation.
A genetic algorithm for a bi-objective mathematical model for dynamic virtual cell formation problem
NASA Astrophysics Data System (ADS)
Moradgholi, Mostafa; Paydar, Mohammad Mahdi; Mahdavi, Iraj; Jouzdani, Javid
2016-09-01
Nowadays, with the increasing pressure of the competitive business environment and the demand for diverse products, manufacturers are forced to seek solutions that reduce production costs and raise product quality. The cellular manufacturing system (CMS), as a means to this end, has been a point of attraction to both researchers and practitioners. Limitations of the cell formation problem (CFP), one of the important topics in CMS, have led to the introduction of the virtual CMS (VCMS). This research addresses a bi-objective dynamic virtual cell formation problem (DVCFP) with the objective of finding the optimal formation of cells, considering the material handling costs, fixed machine installation costs and variable production costs of machines and workforce. Furthermore, we consider different skills on different machines in workforce assignment over a multi-period planning horizon. The bi-objective model is transformed into a single-objective fuzzy goal programming model and, to show its performance, numerical examples are solved using the LINGO software. In addition, a genetic algorithm (GA) is customized to tackle large-scale instances of the problem and to show the performance of the solution method.
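The transformation of the bi-objective model into a single fuzzy goal programming objective can be illustrated with a max-min aggregation: each objective is mapped to a satisfaction degree between its ideal and anti-ideal values, and the worst degree is maximized. This is a common textbook construction, not necessarily the exact formulation used in the paper, and all names below are illustrative.

```python
def fuzzy_goal_value(objs, ideals, anti_ideals):
    """Max-min fuzzy aggregation for minimization objectives.

    Each objective value f_k is mapped to a linear membership (satisfaction)
    degree mu_k in [0, 1]: 1 at its ideal (best) value, 0 at its anti-ideal
    (worst) value. The scalar fitness of a solution is the smallest degree,
    so a GA maximizing this value balances both objectives.
    """
    mus = []
    for f, lo, hi in zip(objs, ideals, anti_ideals):
        if f <= lo:
            mus.append(1.0)
        elif f >= hi:
            mus.append(0.0)
        else:
            mus.append((hi - f) / (hi - lo))
    return min(mus)
```

A GA would use this value directly as the fitness of a candidate cell formation, turning the bi-objective search into a single-objective one.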
An augmented reality tool for learning spatial anatomy on mobile devices.
Jain, Nishant; Youngblood, Patricia; Hasel, Matthew; Srivastava, Sakti
2017-09-01
Augmented Reality (AR) offers a novel method of blending virtual and real anatomy for intuitive spatial learning. Our first aim in the study was to create a prototype AR tool for mobile devices. Our second aim was to complete a technical evaluation of our prototype AR tool, focused on measuring the system's ability to accurately render digital content in the real world. We imported Computed Tomography (CT)-derived virtual surface models into a 3D Unity engine environment and implemented an AR algorithm to display these on mobile devices. We investigated the accuracy of the virtual renderings by comparing a physical cube with an identical virtual cube for dimensional accuracy. Our comparative study confirms that our AR tool renders 3D virtual objects with a high level of accuracy, as evidenced by the degree of similarity between measurements of the dimensions of a virtual object (a cube) and the corresponding physical object. We developed an inexpensive and user-friendly prototype AR tool for mobile devices that creates highly accurate renderings. This prototype demonstrates an intuitive, portable, and integrated interface for spatial interaction with virtual anatomical specimens. Integrating this AR tool with a library of CT-derived surface models provides a platform for spatial learning in the anatomy curriculum. The segmentation methodology implemented to optimize human CT data for mobile viewing can be extended to include anatomical variations and pathologies. The ability of this inexpensive educational platform to deliver a library of interactive, 3D models to students worldwide demonstrates its utility as a supplemental teaching tool that could greatly benefit anatomical instruction. Clin. Anat. 30:736-741, 2017. © 2017 Wiley Periodicals, Inc.
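The cube-based dimensional-accuracy check described above reduces to a per-axis percent error between the measured virtual and physical dimensions. A trivial sketch, with purely illustrative values rather than the study's measurements:

```python
def dimensional_error(virtual_mm, physical_mm):
    """Per-axis percent error between measured dimensions (mm) of a
    rendered virtual object and its physical counterpart."""
    return [abs(v - p) / p * 100.0 for v, p in zip(virtual_mm, physical_mm)]

# Hypothetical measurements of a nominal 50 mm cube (not the paper's data):
errors = dimensional_error([50.5, 49.8, 50.0], [50.0, 50.0, 50.0])
```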
Virtual Inertia: Current Trends and Future Directions
Tamrakar, Ujjwol; Shrestha, Dipesh; Maharjan, Manisha; ...
2017-06-26
The modern power system is progressing from a synchronous machine-based system towards an inverter-dominated system, with large-scale penetration of renewable energy sources (RESs) like wind and photovoltaics. RES units today represent a major share of generation, and the traditional approach of integrating them as grid-following units can lead to frequency instability. Many researchers have pointed towards using inverters with virtual inertia control algorithms so that they appear as synchronous generators to the grid, maintaining and enhancing system stability. Our paper presents a literature review of the current state of the art of virtual inertia implementation techniques, and explores potential research directions and challenges. The major virtual inertia topologies are compared and classified. Through literature review and simulations of some selected topologies, it is shown that a similar inertial response can be achieved by relating the parameters of these topologies through time constants and inertia constants, although the exact frequency dynamics may vary slightly. The suitability of a topology depends on the system control architecture and the desired level of detail in the replication of the dynamics of synchronous generators. We present a discussion of the challenges and research directions which points out several research needs, especially for systems-level integration of virtual inertia systems.
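The inertial response such control algorithms emulate follows the swing equation: the inverter injects power proportional to the rate of change of frequency (the inertia term, parameterized by an inertia constant H) plus a damping term on the frequency deviation. The sketch below is a generic derivative-plus-damping form; the parameter names and default values are illustrative assumptions, not taken from any specific topology in the review.

```python
def virtual_inertia_power(df_dt, delta_f, H=5.0, D=20.0, S_n=1e6, f_n=50.0):
    """Swing-equation-style active-power command for a grid-connected inverter.

    df_dt   : rate of change of grid frequency (Hz/s)
    delta_f : frequency deviation from nominal (Hz)
    H       : virtual inertia constant (s)
    D       : damping coefficient (per-unit)
    S_n     : inverter rated power (VA); f_n : nominal frequency (Hz)
    Returns the power command in W (positive = inject into the grid).
    """
    p_inertia = -2.0 * H * S_n * df_dt / f_n   # opposes frequency change
    p_damping = -D * S_n * delta_f / f_n       # opposes frequency deviation
    return p_inertia + p_damping
```

For a falling frequency (negative df/dt and deviation), the command is positive, i.e. the inverter injects power just as a synchronous machine would release kinetic energy.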
Locally linear regression for pose-invariant face recognition.
Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen
2007-07-01
The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One possible solution is generating a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.
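The patch-wise regression at the core of LLR can be sketched as follows. For simplicity the patches here are non-overlapping slices of a vectorized image, whereas the paper samples densely overlapped patches and combines them; all function names are illustrative.

```python
import numpy as np

def train_patch_regressors(X_nonfrontal, X_frontal, patch_slices):
    """For each local patch, fit a linear map W from nonfrontal patch
    vectors to their frontal counterparts by least squares.

    X_nonfrontal, X_frontal : (n_samples, dim) arrays of paired training
    images, vectorized; patch_slices defines the local patches.
    """
    maps = []
    for sl in patch_slices:
        A = X_nonfrontal[:, sl]            # nonfrontal patches, stacked
        B = X_frontal[:, sl]               # corresponding frontal patches
        W, *_ = np.linalg.lstsq(A, B, rcond=None)
        maps.append(W)
    return maps

def synthesize_frontal(x, patch_slices, maps):
    """Predict each frontal patch from the probe's nonfrontal patch and
    assemble the patches into a full virtual frontal view."""
    y = np.zeros_like(x, dtype=float)
    for sl, W in zip(patch_slices, maps):
        y[sl] = x[sl] @ W
    return y
```

With overlapped patches, the assembled view would average the overlapping predictions, which is what smooths the "coarse alignment" errors the paper targets.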
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Anez, Francisco
This paper presents two development projects (STARMATE and VIRMAN) focused on supporting training on maintenance. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. Its objective is to create a computer tool for the elaboration of maintenance training courses and for training delivery based on 3D virtual reality models of complex components. The training delivery includes 3D recorded displays of maintenance procedures with all the complementary information needed to understand the intervention. Users are requested to perform the maintenance intervention, trying to follow the procedure. Users can be evaluated on the level of knowledge achieved, and instructors can check the evaluation records left during the training sessions. VIRMAN is simple software supported by a regular computer and can be used in an Internet framework. STARMATE is a forward step in the area of virtual reality. STARMATE is a European Commission project in the 'Information Societies Technologies' framework. A consortium of five companies and one research institute shares their expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for achieving assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area in Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene, generated by a computer, augmenting the reality with additional information. The user interface consists of see-through goggles, headphones, a microphone and an optical tracking system. All these devices are integrated in a helmet connected to two regular computers.
The user has his hands free for performing the maintenance intervention and can navigate in the virtual world thanks to a voice recognition system and a virtual pointing device. The maintenance work is guided with audio instructions, and 2D and 3D information is displayed directly in the user's goggles. A position-tracking system allows 3D virtual models to be displayed in the positions of their real counterparts independently of the user's location. The user can create his own virtual environment, placing the required information wherever he wants. The STARMATE system is applicable to a large variety of real work situations. (author)
Freeform object design and simultaneous manufacturing
NASA Astrophysics Data System (ADS)
Zhang, Wei; Zhang, Weihan; Lin, Heng; Leu, Ming C.
2003-04-01
Today's product design, especially consumer product design, focuses more and more on individuation, originality, and time to market. One way to meet these challenges is to use interactive and creative product design methods together with rapid prototyping/rapid tooling. This paper presents a novel Freeform Object Design and Simultaneous Manufacturing (FODSM) method that combines natural interaction in the design phase with simultaneous manufacturing in the prototyping phase. The naturally interactive three-dimensional design environment is achieved by adopting virtual reality technology. The geometry of the designed object is defined through a process of "virtual sculpting", during which the designer can touch and visualize the designed object and hear the virtual manufacturing environment noise. During the design process, the computer records the sculpting trajectories and automatically translates them into NC codes so as to simultaneously machine the designed part. The paper introduces the principle, implementation process, and key techniques of the new method, and compares it with other popular rapid prototyping methods.
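Translating recorded sculpting trajectories into NC code amounts to emitting a sequence of linear interpolation moves. A minimal illustrative sketch follows; the paper does not describe its post-processor at this level of detail, so the G-code dialect and feed rate here are assumptions.

```python
def trajectory_to_gcode(points, feed=300.0):
    """Translate a recorded sculpting trajectory (a list of (x, y, z)
    tool positions in mm) into simple linear-move G-code: a rapid move
    to the start, then feed-rate moves through the remaining points."""
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    x0, y0, z0 = points[0]
    lines.append(f"G0 X{x0:.3f} Y{y0:.3f} Z{z0:.3f} ; rapid to start")
    for x, y, z in points[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} F{feed:.0f}")
    return lines
```

In the simultaneous-manufacturing setting, each newly recorded trajectory segment would be converted and streamed to the machine while sculpting continues.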
The CAVE (TM) automatic virtual environment: Characteristics and applications
NASA Technical Reports Server (NTRS)
Kenyon, Robert V.
1995-01-01
Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected on to three walls and the floor. The CAVE is a multi-person, room sized, high resolution, 3D video and audio environment. Graphics are rear projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. 
The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.
Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-01-01
Ultrasound-based elastography including strain elastography (SE), acoustic radiation force impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. "ground truth") in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity (one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data) were used to demonstrate the capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported.
The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330
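Shear wave speed estimates like those validated above are commonly obtained from wave arrival times at increasing lateral positions. The sketch below is a generic time-of-flight regression, not the platform's actual estimator; the input values in the example are illustrative.

```python
import numpy as np

def estimate_sws(lateral_mm, arrival_ms):
    """Estimate shear wave speed (m/s) by linear regression of lateral
    distance against wave arrival time: the slope of distance vs. time
    is the propagation speed."""
    x = np.asarray(lateral_mm, dtype=float) * 1e-3   # mm -> m
    t = np.asarray(arrival_ms, dtype=float) * 1e-3   # ms -> s
    slope, _ = np.polyfit(t, x, 1)                   # fit x = slope*t + b
    return slope
```

Comparing such estimates against the known ("ground truth") moduli of the numerical phantoms is exactly the kind of validation the platform is meant to support.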
Building an open-source simulation platform of acoustic radiation force-based breast elastography
NASA Astrophysics Data System (ADS)
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-03-01
Ultrasound-based elastography including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity—one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. 
In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.
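As a rough illustration of the kind of SWS validation described above, the time-of-flight estimator below recovers a shear wave speed from synthetic displacement traces by fitting the arrival time of the peak displacement at each lateral position. It is a minimal sketch, not the Field II-based pipeline of the study; all names and numbers are illustrative.

```python
import numpy as np

def estimate_sws(displacements, lateral_positions_mm, dt_ms):
    """Time-of-flight shear wave speed estimate (m/s): find the
    arrival time of the peak displacement at each lateral position
    and fit position (mm) against time (ms) with a line."""
    arrival_times_ms = np.argmax(displacements, axis=1) * dt_ms
    slope, _ = np.polyfit(arrival_times_ms, lateral_positions_mm, 1)
    return slope  # mm/ms is numerically equal to m/s

# synthetic wavefront travelling at 2 m/s (illustrative numbers)
positions = np.arange(0.0, 10.0, 0.5)        # lateral positions, mm
dt = 0.1                                     # temporal sampling, ms
t = np.arange(100) * dt                      # time axis, ms
true_sws = 2.0                               # mm/ms, i.e. m/s
disp = np.exp(-((t[None, :] - positions[:, None] / true_sws) / 0.3) ** 2)
sws = estimate_sws(disp, positions, dt)      # close to 2.0
```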
Network Virtualization - Opportunities and Challenges for Operators
NASA Astrophysics Data System (ADS)
Carapinha, Jorge; Feil, Peter; Weissmann, Paul; Thorsteinsson, Saemundur E.; Etemoğlu, Çağrı; Ingþórsson, Ólafur; Çiftçi, Selami; Melo, Márcio
In the last few years, the concept of network virtualization has gained a lot of attention both from industry and research projects. This paper evaluates the potential of network virtualization from an operator's perspective, with the short-term goal of optimizing service delivery and rollout, and on a longer term as an enabler of technology integration and migration. Based on possible scenarios for implementing and using network virtualization, new business roles and models are examined. Open issues and topics for further evaluation are identified. In summary, the objective is to identify the challenges but also new opportunities for telecom operators raised by network virtualization.
Ohio | Midmarket Solar Policies in the United States | Solar Research |
Meter aggregation: Virtual net metering is allowed for state, municipal, and agricultural customers under certain conditions. Net excess generation from virtual net
Gatica-Rojas, Valeska; Méndez-Rebolledo, Guillermo
2014-04-15
Two key characteristics of all virtual reality applications are interaction and immersion. Systemic interaction is achieved through a variety of multisensory channels (hearing, sight, touch, and smell), permitting the user to interact with the virtual world in real time. Immersion is the degree to which a person can feel wrapped in the virtual world through a defined interface. Virtual reality interface devices such as the Nintendo® Wii and its peripheral nunchuks-balance board, head mounted displays and joystick allow interaction and immersion in unreal environments created from computer software. Virtual environments are highly interactive, generating great activation of visual, vestibular and proprioceptive systems during the execution of a video game. In addition, they are entertaining and safe for the user. Recently, incorporating therapeutic purposes in virtual reality interface devices has allowed them to be used for the rehabilitation of neurological patients, e.g., balance training in older adults and dynamic stability in healthy participants. The improvements observed in neurological diseases (chronic stroke and cerebral palsy) have been shown by changes in the reorganization of neural networks in patients' brain, along with better hand function and other skills, contributing to their quality of life. The data generated by such studies could substantially contribute to physical rehabilitation strategies.
ERIC Educational Resources Information Center
Chihak, Benjamin J.; Plumert, Jodie M.; Ziemer, Christine J.; Babu, Sabarish; Grechkin, Timofey; Cremer, James F.; Kearney, Joseph K.
2010-01-01
Two experiments examined how 10- and 12-year-old children and adults intercept moving gaps while bicycling in an immersive virtual environment. Participants rode an actual bicycle along a virtual roadway. At 12 test intersections, participants attempted to pass through a gap between 2 moving, car-sized blocks without stopping. The blocks were…
ERIC Educational Resources Information Center
Lahav, Orly; Schloerb, David W.; Srinivasan, Mandayam A.
2015-01-01
Introduction: The BlindAid, a virtual system developed for orientation and mobility (O&M) training of people who are blind or have low vision, allows interaction with different virtual components (structures and objects) via auditory and haptic feedback. This research examined if and how the BlindAid that was integrated within an O&M…
Mangold, Stefanie; Gatidis, Sergios; Luz, Oliver; König, Benjamin; Schabel, Christoph; Bongers, Malte N; Flohr, Thomas G; Claussen, Claus D; Thomas, Christoph
2014-12-01
The objective of this study was to retrospectively determine the potential of virtual monoenergetic (ME) reconstructions for a reduction of metal artifacts using a new-generation single-source computed tomographic (CT) scanner. The ethics committee of our institution approved this retrospective study with a waiver of the need for informed consent. A total of 50 consecutive patients (29 men and 21 women; mean [SD] age, 51.3 [16.7] years) with metal implants after osteosynthetic fracture treatment who had been examined using a single-source CT scanner (SOMATOM Definition Edge; Siemens Healthcare, Forchheim, Germany; consecutive dual-energy mode with 140 kV/80 kV) were selected. Using commercially available postprocessing software (syngo Dual Energy; Siemens AG), virtual ME data sets with extrapolated energy of 130 keV were generated (medium smooth convolution kernel D30) and compared with standard polyenergetic images reconstructed with a B30 (medium smooth) and a B70 (sharp) kernel. For quantification of the beam hardening artifacts, CT values were measured on circular lines surrounding bone and the osteosynthetic device, and frequency analyses of these values were performed using discrete Fourier transform. A high contribution of low frequencies to the spectrum indicates a high level of metal artifacts. The measurements in all data sets were compared using the Wilcoxon signed rank test. The virtual ME images with extrapolated energy of 130 keV showed a significantly lower contribution of low frequencies after the Fourier transform compared with either polyenergetic data set reconstructed with the B30 and B70 kernels (P < 0.001). Sequential single-source dual-energy CT allows an efficient reduction of metal artifacts using high-energy ME extrapolation after osteosynthetic fracture treatment.
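The frequency-domain artifact metric used in the study can be sketched as follows: CT values sampled on a circular line around the implant are Fourier transformed, and the share of spectral power in the lowest (non-DC) frequency bins serves as the artifact score. The code below is a minimal illustration with synthetic data, not the authors' implementation; the streak amplitude and bin count are arbitrary.

```python
import numpy as np

def low_freq_fraction(ct_values, n_low=5):
    """Share of (non-DC) spectral power in the lowest n_low
    frequency bins of CT values sampled on a circular line.
    A larger share indicates stronger beam-hardening artifacts."""
    spec = np.abs(np.fft.rfft(ct_values - np.mean(ct_values))) ** 2
    return spec[1:1 + n_low].sum() / spec[1:].sum()

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
rng = np.random.default_rng(0)
clean = 40.0 + rng.normal(0, 5, theta.size)      # homogeneous region, HU
streaked = clean + 80 * np.cos(2 * theta)        # slow angular variation
assert low_freq_fraction(streaked) > low_freq_fraction(clean)
```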
Vroom: designing an augmented environment for remote collaboration in digital cinema production
NASA Astrophysics Data System (ADS)
Margolis, Todd; Cornish, Tracy
2013-03-01
As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a high-speed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. 
This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.
Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study
Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona
2015-01-01
Background There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. Objective The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of “presence” (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Methods Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. Results A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. 
The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real: 12.6%), and meat and fish (virtual: 16.5%; real: 16.8%). Significant differences in proportional expenditures were observed for 6 food groups, with largest differences (virtual – real) for dairy (in expenditure 6.5%, P<.001; in items 2.2%, P=.04) and fresh fruit and vegetables (in expenditure: –3.1%, P=.04; in items: 5.9%, P=.002). There was no trend of overspending in the Virtual Supermarket and participants experienced a medium-to-high presence (88%, 73/83 scored medium; 8%, 7/83 scored high). Conclusions Shopping patterns in the Virtual Supermarket were comparable to those in real life. Overall, the Virtual Supermarket is a valid tool to measure food purchasing behavior. Nevertheless, it is important to improve the functionality of some food categories, in particular fruit and vegetables and dairy. The results of this validation will assist in making further improvements to the software and with optimization of the internal and external validity of this innovative methodology. PMID:25921185
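The proportional-expenditure comparison above reduces to normalizing each food group's spend by the basket total; a minimal sketch (with made-up figures, not the study's data):

```python
def proportional_expenditure(basket):
    """Share (%) of total spend per food group."""
    total = sum(basket.values())
    return {group: 100 * spend / total for group, spend in basket.items()}

# illustrative NZ$ amounts, not the study's data
virtual_basket = {"fresh fruit and vegetables": 28.6, "bread and bakery": 20.0,
                  "dairy": 38.2, "meat and fish": 33.0}
shares = proportional_expenditure(virtual_basket)  # shares sum to 100%
```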
Minovski, Nikola; Perdih, Andrej; Solmajer, Tom
2012-05-01
The virtual combinatorial chemistry approach as a methodology for generating chemical libraries of structurally-similar analogs in a virtual environment was employed for building a general mixed virtual combinatorial library with a total of 53,871 6-FQ structural analogs, introducing the real synthetic pathways of three well known 6-FQ inhibitors. The druggability properties of the generated combinatorial 6-FQs were assessed using an in-house developed drug-likeness filter integrating the Lipinski/Veber rule-sets. The compounds recognized as drug-like were used as an external set for prediction of the biological activity values using a neural network (NN) model based on an experimentally-determined set of active 6-FQs. Furthermore, a subset of compounds was extracted from the pool of drug-like 6-FQs, with predicted biological activity, and subsequently used in a virtual screening (VS) campaign combining pharmacophore modeling and molecular docking studies. This complex scheme, a powerful combination of chemometric and molecular modeling approaches, provided novel QSAR guidelines that could aid in the further lead development of 6-FQ agents.
NASA Astrophysics Data System (ADS)
Davias, M. E.; Gilbride, J. L.
2011-12-01
Aerial photographs of Carolina bays taken in the 1930s sparked the initial research into their geomorphology. Satellite imagery available today through the Google Earth Virtual Globe facility expands the regions available for interrogation, but reveals only part of their unique planforms. Digital Elevation Maps (DEMs), using Light Detection And Ranging (LiDAR) remote sensing data, accentuate the visual presentation of these aligned ovoid shallow basins by emphasizing their robust circumferential rims. To support a geospatial survey of Carolina bay landforms in the continental USA, 400,000 km2 of hsv-shaded DEMs were created as KML-JPEG tile sets. A majority of these DEMs were generated with LiDAR-derived data. We demonstrate the tile generation process and their integration into Google Earth, where the DEMs augment available photographic imagery for the visualization of bay planforms. While the generic Carolina bay planform is considered oval, we document subtle regional variations. Using a small set of empirically derived planform shapes, we created corresponding Google Earth overlay templates. We demonstrate the analysis of an individual Carolina bay by placing an appropriate overlay onto the virtual globe, then orienting, sizing and rotating it by edit handles such that it satisfactorily represents the bay's rim. The resulting overlay data element is extracted from Google Earth's object directory and programmatically processed to generate metrics such as geographic location, elevation, major and minor axes and inferred orientation. Utilizing a virtual globe facility for data capture may result in higher quality data compared to methods that reference flat maps, where the geospatial shape and orientation of the bays could be skewed and distorted in the orthographic projection process. Using the methodology described, we have measured over 25k distinct Carolina bays. 
We discuss the Google Fusion geospatial data repository facility, through which these data have been assembled and made web-accessible to other researchers. Preliminary findings from the survey are discussed, such as how bay surface area, eccentricity and orientation vary across ~800 1/4° × 1/4° grid elements. Future work includes measuring 25k additional bays, as well as interrogation of the orientation data to identify any possible systematic geospatial relationships.
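The overlay-derived metrics above (major and minor axes, orientation, eccentricity) follow from standard ellipse geometry. The sketch below is illustrative only; the function name, inputs and figures are assumptions, not the survey's actual processing code.

```python
import math

def bay_metrics(major_axis_m, minor_axis_m, p1, p2):
    """Simple planform metrics for an oval bay overlay:
    eccentricity, area (m2) and the bearing of the major axis
    (degrees clockwise from north) given its two endpoints
    (x east, y north, in metres)."""
    a, b = major_axis_m / 2, minor_axis_m / 2
    ecc = math.sqrt(1 - (b / a) ** 2)
    area = math.pi * a * b
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 180  # an axis has no direction
    return ecc, area, bearing

# a 1000 m x 600 m bay whose major axis points 30 degrees east of north
ecc, area, bearing = bay_metrics(1000.0, 600.0, (0.0, 0.0), (500.0, 866.0))
```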
Benbouriche, M; Renaud, P; Pelletier, J-F; De Loor, P
2016-12-01
Forensic psychiatry is the field whose expertise is the assessment and treatment of offending behaviours, in particular when offenses are related to mental illness. An underlying question for all etiological models concerns the manner in which an individual's behaviours are organized. Specifically, it becomes crucial to understand how certain individuals come to display maladaptive behaviours in a given environment, especially when considering issues such as offenders' responsibility and their ability to change their behaviours. Thanks to its ability to generate specific environments with a high degree of experimental control over the simulations, virtual reality is gaining recognition in forensic psychiatry. Virtual reality has generated promising research data and may turn out to be a remarkable clinical tool in the near future. While research has increased, a conceptual work about its theoretical underpinnings is still lacking. However, no important benefit should be expected from the introduction of a new tool, however innovative virtual reality may be, without an explicit and heuristic theoretical framework capable of clarifying its benefits in forensic psychiatry. Our paper introduces the self-regulation perspective as the most suitable theoretical framework for virtual reality in forensic psychiatry. It will be argued that virtual reality does not merely increase ecological validity; it also gives access to an improved understanding of violent offending behaviours by probing the underlying mechanisms involved in the self-regulation of behaviours in a dynamic environment. Illustrations are given as well as a discussion regarding perspectives in the use of virtual reality in forensic psychiatry. Copyright © 2015 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.
Research and realization of signal simulation on virtual instrument
NASA Astrophysics Data System (ADS)
Zhao, Qi; He, Wenting; Guan, Xiumei
2010-02-01
In engineering projects, an arbitrary waveform generator controlled through a software interface is needed for simulation and test. This article discusses a program using the SCPI (Standard Commands for Programmable Instruments) protocol and the VISA (Virtual Instrument System Architecture) library to control the Agilent signal generator (Agilent N5182A) via instrument communication over the LAN interface. The program can generate several signal types, such as CW (continuous wave), AM (amplitude modulation), FM (frequency modulation), ΦM (phase modulation) and sweep. As a result, the program has good operability and portability.
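A sketch of the command-building side of such a program is given below. The SCPI mnemonics are generic examples and should be checked against the N5182A programming guide; the pyvisa session shown in comments is one common way to deliver them over LAN, not necessarily the library the authors used.

```python
def cw_setup_commands(freq_hz, power_dbm):
    """Build a generic SCPI command sequence for a CW output.
    Mnemonics are illustrative; consult the instrument's
    programming guide for the exact forms."""
    return [
        "*RST",                          # reset to a known state
        f":FREQuency {freq_hz:.0f}",     # carrier frequency, Hz
        f":POWer {power_dbm:g}dBm",      # output level
        ":OUTPut:STATe ON",              # enable RF output
    ]

cmds = cw_setup_commands(1.0e9, -10)
# With a VISA library such as pyvisa, the commands could be sent over LAN:
#   import pyvisa
#   inst = pyvisa.ResourceManager().open_resource(
#       "TCPIP0::192.168.0.10::inst0::INSTR")   # address is illustrative
#   for c in cmds:
#       inst.write(c)
```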
Three-dimensional (3D) printing and its applications for aortic diseases.
Hangge, Patrick; Pershad, Yash; Witting, Avery A; Albadawi, Hassan; Oklu, Rahmi
2018-04-01
Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases.
Reduced order modeling of head related transfer functions for virtual acoustic displays
NASA Astrophysics Data System (ADS)
Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley
2003-04-01
The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds that were emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
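Kung's technique can be sketched in a few lines: stack the impulse-response samples into a Hankel matrix, truncate its SVD at the desired order and read off a state-space realization. The example below uses a synthetic rank-1 impulse response, not HRIR data; it is a minimal single-input single-output sketch.

```python
import numpy as np

def kung_realization(h, order):
    """Kung's SVD-based model reduction (sketch): build a Hankel
    matrix of impulse-response samples h[1..N], truncate its SVD
    to the given order and form a discrete state space (A, B, C)."""
    n = (len(h) - 1) // 2
    H0 = np.array([[h[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    sq = np.sqrt(s)
    A = (U / sq).T @ H1 @ (Vt.T / sq)    # reduced state matrix
    B = (sq[:, None] * Vt)[:, :1]        # first column of S^1/2 V^T
    C = (U * sq)[:1, :]                  # first row of U S^1/2
    return A, B, C

# synthetic impulse response h[k] = 0.5**k, exactly order 1
h = [0.5 ** k for k in range(1, 10)]
A, B, C = kung_realization(h, order=1)
x, recon = B, []
for _ in range(len(h)):                  # replay the reduced model
    recon.append((C @ x).item())
    x = A @ x
```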
Virtual gonio-spectrophotometer for validation of BRDF designs
NASA Astrophysics Data System (ADS)
Mihálik, Andrej; Ďurikovič, Roman
2011-10-01
Measurement of the appearance of an object consists of a group of measurements to characterize the color and surface finish of the object. This group of measurements involves the spectral energy distribution of propagated light measured in terms of reflectance and transmittance, and the spatial energy distribution of that light measured in terms of the bidirectional reflectance distribution function (BRDF). In this article we present the virtual gonio-spectrophotometer, a device that measures flux (power) as a function of illumination and observation. Virtual gonio-spectrophotometer measurements allow the determination of the scattering profile of specimens that can be used to verify the physical characteristics of the computer model used to simulate the scattering profile. Among the characteristics that we verify is the energy conservation of the computer model. A virtual gonio-spectrophotometer is utilized to find the correspondence between industrial measurements obtained from gloss meters and the parameters of a computer reflectance model.
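The energy-conservation check mentioned above amounts to verifying that the directional-hemispherical reflectance, the cosine-weighted integral of the BRDF over the outgoing hemisphere, never exceeds one. A minimal Monte Carlo version for an ideal Lambertian BRDF (an assumption for illustration, not the article's reflectance model):

```python
import numpy as np

def hemispherical_reflectance(brdf, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the integral of f_r * cos(theta)
    over the outgoing hemisphere; for energy conservation the
    result must not exceed 1 for any illumination direction."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    cos_t = u1                       # cos(theta) uniform in [0, 1)
    phi = 2 * np.pi * u2
    # uniform hemisphere sampling has pdf 1 / (2*pi)
    vals = brdf(cos_t, phi) * cos_t * 2 * np.pi
    return vals.mean()

rho = 0.7
lambertian = lambda cos_t, phi: rho / np.pi   # ideal diffuse BRDF
R = hemispherical_reflectance(lambertian)     # analytic value is rho
```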
Possibilities and Determinants of Using Low-Cost Devices in Virtual Education Applications
ERIC Educational Resources Information Center
Bun, Pawel Kazimierz; Wichniarek, Radoslaw; Górski, Filip; Grajewski, Damian; Zawadzki, Przemyslaw; Hamrol, Adam
2017-01-01
Virtual reality (VR) may be used as an innovative educational tool. However, in order to fully exploit its potential, it is essential to achieve the effect of immersion. To more completely submerge the user in a virtual environment, it is necessary to ensure that the user's actions are directly translated into the image generated by the…
ERIC Educational Resources Information Center
Jaen, Maria Moreno
2009-01-01
This paper presents survey data from English Philology students (University of Granada) on a virtual course entitled ADELEX--Assessing and Developing Lexis--which was carried out in 2007-08 to enhance vocabulary acquisition. In the first part of this paper, we briefly offer a description of this second generation virtual course to enhance lexical…
Advanced Collaborative Environments Supporting Systems Integration and Design
2003-03-01
These environments allow multiple individuals to concurrently view a virtual system or product model while simultaneously maintaining natural, human communication. These virtual systems operate within a computer-generated environment. As a result, TARDEC researchers and system developers are using this advanced high-end visualization technology to develop future
Tieri, Gaetano; Gioia, Annamaria; Scandola, Michele; Pavone, Enea F; Aglioti, Salvatore M
2017-05-01
To explore the link between Sense of Embodiment (SoE) over a virtual hand and physiological regulation of skin temperature, 24 healthy participants were immersed in virtual reality through a Head Mounted Display and had their real limb temperature recorded by means of a high-sensitivity infrared camera. Participants observed a virtual right upper limb (appearing either normally, or with the hand detached from the forearm) or limb-shaped non-corporeal control objects (continuous or discontinuous wooden blocks) from a first-person perspective. Subjective ratings of SoE were collected in each observation condition, as well as temperatures of the right and left hand, wrist and forearm. The observation of these complex body and body-related virtual scenes resulted in increased real hand temperature when compared to a baseline condition in which a 3D virtual ball was presented. Crucially, observation of non-natural appearances of the virtual limb (discontinuous limb) and limb-shaped non-corporeal objects elicited a high increase in real hand temperature and low SoE. In contrast, observation of the full virtual limb caused high SoE and low temperature changes in the real hand with respect to the other conditions. Interestingly, the temperature difference across the different conditions occurred according to a topographic rule that included both hands. Our study sheds new light on the role of an external hand's visual appearance and suggests a tight link between higher-order bodily self-representations and topographic regulation of skin temperature. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2018-01-01
We have developed a neural network model of virtual network flow identification based on the statistical properties of flows circulating in the network of the data center and characteristics that describe the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes to identify virtual network functions. We have established an algorithm for optimizing the placement of virtual data functions using the data obtained in our research. Our approach uses a hybrid method of virtualization using virtual machines and containers, which reduces the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which enables it to scale to any number of network function copies.
Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M
2013-11-01
Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. 
A commercial deformation algorithm was applied to the virtual phantoms to show how they may be used to generate estimates of DIR uncertainty. The NCC showed that the simulated phantom images had greater similarity to the actual EOT images than the images from which they were derived, supporting the clinical relevance of the synthetic deformation maps. Calculation of the Jacobian of the "ground-truth" DVFs resulted in only positive values. As an example, mean error statistics are presented for all phantoms for the brainstem, cord, mandible, left parotid, and right parotid. It is essential that DIR algorithms be evaluated using a range of possible clinical scenarios for each treatment site. This work introduces a library of virtual phantoms intended to resemble real cases for interfraction head and neck DIR that may be used to estimate and compare the uncertainty of any DIR algorithm.
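The Jacobian-determinant test for a "ground-truth" DVF can be sketched with NumPy: form the Jacobian of the mapping phi(x) = x + u(x) at every voxel and confirm the determinant stays positive (no folding). The DVF below is synthetic; the array layout and voxel spacing are assumptions, not the study's data format.

```python
import numpy as np

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Determinant of the Jacobian of phi(x) = x + u(x) for a 3-D
    displacement field u of shape (3, nz, ny, nx). Values <= 0
    flag folding, i.e. a physically implausible deformation."""
    grads = np.empty((3, 3) + dvf.shape[1:])
    for i in range(3):
        for j in range(3):                       # d u_i / d axis_j
            grads[i, j] = np.gradient(dvf[i], spacing[j], axis=j)
        grads[i, i] += 1.0                       # identity from phi = x + u
    # 3x3 determinant at every voxel
    return np.linalg.det(np.moveaxis(grads, (0, 1), (-2, -1)))

# smooth synthetic displacement field (assumed layout, not real data)
z, y, x = np.meshgrid(*[np.linspace(0, np.pi, 20)] * 3, indexing="ij")
u = np.stack([0.1 * np.sin(z), 0.1 * np.sin(y), 0.1 * np.sin(x)])
detj = jacobian_determinant(u)
assert (detj > 0).all()                          # no folding anywhere
```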
Monitoring and analysis of data in cyberspace
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M. (Inventor); Angelino, Robert (Inventor)
2001-01-01
Information from monitored systems is displayed in three dimensional cyberspace representations defining a virtual universe having three dimensions. Fixed and dynamic data parameter outputs from the monitored systems are visually represented as graphic objects that are positioned in the virtual universe based on relationships to the system and to the data parameter categories. Attributes and values of the data parameters are indicated by manipulating properties of the graphic object such as position, color, shape, and motion.
Foreman, Nigel; Sandamas, George; Newson, David
2004-08-01
Four groups of undergraduates (half of each gender) experienced a movement along a corridor containing three distinctive objects, in a virtual environment (VE) with wide-screen projection. One group simulated walking along the virtual corridor using a proprietary step-exercise device. A second group moved along the corridor in conventional flying mode, depressing a keyboard key to initiate continuous forward motion. Two further groups observed the walking and flying participants, by viewing their progress on the screen. Participants then had to walk along a real equivalent but empty corridor, and indicate the positions of the three objects. All groups underestimated distances in the real corridor, the greatest underestimates occurring for the middle distance object. Males' underestimations were significantly lower than females' at all distances. However, there was no difference between the active participants and passive observers, nor between walking and flying conditions.
Virtual water balance estimation in Tunisia
NASA Astrophysics Data System (ADS)
Stambouli, Talel; Benalaya, Abdallah; Ghezal, Lamia; Ali, Chebil; Hammami, Rifka; Souissi, Asma
2015-04-01
Water in Tunisia is limited and unevenly distributed among the regions, especially in arid zones. The annual rainfall average varies from less than 100 mm in the extreme South to over 1500 mm in the extreme North of the country. Currently, the conventional potential of the country's water resources is estimated at about 4.84 billion m³/year, of which 2.7 billion m³/year is surface water and 2.14 billion m³/year is groundwater, characterizing a structural shortage of water security in Tunisia (under 500 m³/inhabitant/year), with over 80% of these volumes mobilized for agriculture. The virtual water concept, defined by Allan (1997) as the amount of water needed to generate a product of either natural or artificial origin, establishes a similarity between product marketing and water trade. Given the influence of water in food production, virtual water studies generally focus on food products. At a global scale, the influence of these product markets on water management has so far been appreciated only by analyzing water-scarce countries; the level of detail should be increased, as most studies consider a country as a single geographical point, leading to considerable inaccuracies. The main objective of this work is the estimation of the virtual water balance of strategic crops in Tunisia (both irrigated and rainfed) to determine their influence on water resources management and to establish patterns for improving it. The virtual water balance was computed based on farmer surveys, crop and meteorological data, irrigation management and regional statistics. Results show that most farmers waste irrigation water, especially on vegetable crops and fruit trees. Thus, good control of the cropping package may result in lower quantities of water used by crops while ensuring good production with suitable economic profitability. 
Integrating the virtual water concept into the choice of production systems and into policies affecting water use would thus help conserve this scarce resource, support farmers in their production activities, and maintain the sustainability of farms. Keywords: Virtual water, water balance, irrigation, Tunisia
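Allan's definition above lends itself to simple arithmetic; a minimal sketch, with illustrative figures rather than the study's Tunisian survey data:

```python
# Illustrative figures only, not Tunisian survey data.

def virtual_water_content(water_use_m3_per_ha, yield_t_per_ha):
    """Virtual water content: m^3 of water embedded per tonne of product."""
    return water_use_m3_per_ha / yield_t_per_ha

def net_virtual_water_export(exports_t, imports_t, vwc_m3_per_t):
    """Positive values mean water leaving the region embedded in trade."""
    return (exports_t - imports_t) * vwc_m3_per_t

vwc = virtual_water_content(6000.0, 40.0)            # e.g. an irrigated vegetable crop
net = net_virtual_water_export(1000.0, 200.0, vwc)   # m^3 exported virtually
```

A regional balance is then just this calculation summed over crops and trade flows.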
Cooper, Natalia; Milella, Ferdinando; Pinto, Carlo; Cant, Iain; White, Mark; Meyer, Georg
2018-01-01
Objective and subjective measures of performance in virtual reality environments increase as more sensory cues are delivered and as simulation fidelity increases. Some cues (colour or sound) are easier to present than others (object weight, vestibular cues) so that substitute cues can be used to enhance informational content in a simulation at the expense of simulation fidelity. This study evaluates how substituting cues in one modality by alternative cues in another modality affects subjective and objective performance measures in a highly immersive virtual reality environment. Participants performed a wheel change in a virtual reality (VR) environment. Auditory, haptic and visual cues, signalling critical events in the simulation, were manipulated in a factorial design. Subjective ratings were recorded via questionnaires. The time taken to complete the task was used as an objective performance measure. The results show that participants performed best and felt an increased sense of immersion and involvement, collectively referred to as ‘presence’, when substitute multimodal sensory feedback was provided. Significant main effects of audio and tactile cues on task performance and on participants' subjective ratings were found. A significant negative relationship was found between the objective (overall completion times) and subjective (ratings of presence) performance measures. We conclude that increasing informational content, even if it disrupts fidelity, enhances performance and user’s overall experience. On this basis we advocate the use of substitute cues in VR environments as an efficient method to enhance performance and user experience. PMID:29390023
ERIC Educational Resources Information Center
Anyanwu, Godson Emeka; Agu, Augustine Uchechukwu; Anyaehie, Ugochukwu Bond
2012-01-01
The impact on students, and their perceptions, of the use of a simple, low-technology-driven version of a virtual microscope in teaching and assessments in cellular physiology and histology were studied. Its impact on the time and resources of the faculty was also assessed. Simple virtual slides and conventional microscopes were used to conduct the same…
Virtual Reality: An Emerging Tool to Treat Pain
2010-04-01
burn patients, physical therapy stretching of the newly healing skin helps to counteract the healing skin’s natural contraction as it scars...room, and substitute more calming music and sound effects. The patient interacts with the virtual world, throwing snowballs at objects in the virtual...care (Hoffman, Patterson et al, 2008) and physical therapy (Hoffman, Patterson, Carrougher, 2000; Hoffman, Patterson, Carrougher, Sharar, 2001; Sharar
Choi, Jung-Seok; Park, Sumi; Lee, Jun-Young; Jung, Hee-Yeon; Lee, Hae-Woo; Jin, Chong-Hyeon
2011-01-01
Objective Smoking related cues may elicit smoking urges and psychophysiological responses in subjects with nicotine dependence. This study aimed to investigate the effect of repeated virtual cue exposure therapy using the surround-screen based projection wall system on the psychophysiological responses in nicotine dependence. Methods The authors developed 3-dimensional neutral and smoking-related environments using virtual reality (VR) technology. Smoking-related environment was a virtual bar, which comprised both object-related and social situation cues. Ten subjects with nicotine dependence participated in 4-week (one session per week) virtual cue exposure therapy. Psychophysiological responses [electromyography (EMG), skin conductance (SC), and heart rate] and subjective nicotine craving were acquired during each session. Results VR nicotine cue elicited greater psychophysiological responses and subjective craving for smoking than did neutral cue, and exposure to social situation cues showed greater psychophysiological responses in SC and EMG than did object-related cues. This responsiveness decreased during the course of repeated therapy. Conclusion The present study found that both psychophysiological responses and subjective nicotine craving were greater to nicotine cue exposure via projection wall VR system than to neutral cues and that enhanced cue reactivity decreased gradually over the course of repeated exposure therapy. These results suggest that VR cue exposure therapy combined with psychophysiological response monitoring may be an alternative treatment modality for smoking cessation, although the current findings are preliminary. PMID:21852993
NASA Astrophysics Data System (ADS)
McGee, B. W.
2006-12-01
Recent studies reveal a general mistrust of science as well as a distorted perception of the scientific method by the public at large. Concurrently, the number of science undergraduate and graduate students is in decline. By taking advantage of emergent technologies not only for direct public outreach but also to enhance public accessibility to the science process, it may be possible both to begin a reversal of popular scientific misconceptions and to engage a new generation of scientists. The Second Life platform is a 3-D virtual world produced and operated by Linden Research, Inc., a privately owned company instituted to develop new forms of immersive entertainment. Free and downloadable to the public, Second Life offers an embedded physics engine and streaming audio and video capability, and unlike other "multiplayer" software, the objects and inhabitants of Second Life are entirely designed and created by its users, providing an open-ended experience without the structure of a traditional video game. Already, educational institutions, virtual museums, and real-world businesses are utilizing Second Life for teleconferencing, pre-visualization, and distance education, as well as to conduct traditional business. However, the untapped potential of Second Life lies in its versatility, where the limitations of traditional scientific meeting venues do not exist and attendees need not be restricted by prohibitive travel costs. It will be shown that the Second Life system enables scientific authors and presenters at a "virtual conference" to display figures and images at full resolution, employ audio-visual content typically not available to conference organizers, and perform demonstrations or premiere three-dimensional renderings of objects, processes, or information. 
An enhanced presentation like those possible with Second Life would be more engaging to non-scientists, and such an event would be accessible to the general users of Second Life, who could have an unprecedented opportunity to witness an example of scientific collaboration typically reserved for members of a particular field or focus group. With a minimal investment in advertising or promotion, both in real and virtual space, the possibility exists for scientific information and interaction to reach a far broader audience through Second Life than with any other currently available means of comparable cost.
Joint object and action recognition via fusion of partially observable surveillance imagery data
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Chan, Alex L.
2017-05-01
Partially observable group activities (POGA) occurring in confined spaces are epitomized by the limited observability of the objects and actions involved. In many POGA scenarios, different objects are used by human operators in the conduct of various operations. In this paper, we describe the ontology of such POGA in the context of In-Vehicle Group Activity (IVGA) recognition. Initially, we describe the virtue of ontology modeling in the context of IVGA and show how such an ontology and a priori knowledge about the classes of in-vehicle activities can be fused to infer human actions, which consequently leads to understanding of human activity inside the confined space of a vehicle. In this paper, we treat the "action-object" problem as a duality problem. We postulate a correlation between observed human actions and the object that is being utilized within those actions; conversely, if an object being handled is recognized, we may expect a number of actions that are likely to be performed on that object. In this study, we use partially observable human postural sequences to recognize actions. Inspired by the learning capability of convolutional neural networks (CNNs), we present an architecture design using a new CNN model to learn "action-object" perception from surveillance videos. We apply a sequential Deep Hidden Markov Model (DHMM) as a post-processor to the CNN to decode realized observations into recognized actions and activities. To generate the imagery data set needed for training and testing these new methods, we use the IRIS virtual simulation software to generate high-fidelity, dynamic animated scenarios that depict in-vehicle group activities under different operational contexts. The results of our comparative investigation are discussed and presented in detail.
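The HMM post-processing step described above can be sketched as a standard Viterbi decode over per-frame action scores; the two-state transition model and the scores below are invented for illustration, not taken from the paper:

```python
import numpy as np

def viterbi(log_emis, log_trans, log_prior):
    """Most likely state sequence given per-frame log scores."""
    T, S = log_emis.shape
    dp = log_prior + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + log_trans      # cand[i, j]: score of moving i -> j
        back[t] = np.argmax(cand, axis=0)
        dp = cand[back[t], np.arange(S)] + log_emis[t]
    path = [int(np.argmax(dp))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Noisy per-frame scores for two states ("idle", "handle object"); the sticky
# transition matrix smooths the decoded sequence.
log_emis = np.log([[0.9, 0.1], [0.6, 0.4], [0.4, 0.6], [0.2, 0.8], [0.1, 0.9]])
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
path = viterbi(log_emis, log_trans, np.log([0.5, 0.5]))
```

In the paper's pipeline the emission scores would come from the CNN rather than being hand-set.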
Testing the continuum of delusional beliefs: an experimental study using virtual reality.
Freeman, Daniel; Pugh, Katherine; Vorontsova, Natasha; Antley, Angus; Slater, Mel
2010-02-01
A key problem in studying a hypothesized spectrum of severity of delusional ideation is determining that ideas are unfounded. The first objective was to use virtual reality to validate groups of individuals with low, moderate, and high levels of unfounded persecutory ideation. The second objective was to investigate, drawing upon a cognitive model of persecutory delusions, whether clinical and nonclinical paranoia are associated with similar causal factors. Three groups (low paranoia, high nonclinical paranoia, persecutory delusions) of 30 participants were recruited. Levels of paranoia were tested using virtual reality. The groups were compared on assessments of anxiety, worry, interpersonal sensitivity, depression, anomalous perceptual experiences, reasoning, and history of traumatic events. Virtual reality was found to cause no side effects. Persecutory ideation in virtual reality significantly differed across the groups. For the clear majority of the theoretical factors there were dose-response relationships with levels of paranoia. This is consistent with the idea of a spectrum of paranoia in the general population. Persecutory ideation is clearly present outside of clinical groups and there is consistency across the paranoia spectrum in associations with important theoretical variables.
Virtual reality for mobility devices: training applications and clinical results: a review.
Erren-Wolters, Catelijne Victorien; van Dijk, Henk; de Kort, Alexander C; Ijzerman, Maarten J; Jannink, Michiel J
2007-06-01
Virtual reality technology is an emerging technology that can possibly address the problems encountered in training (elderly) people to handle a mobility device. The objective of this review was to study different virtual reality training applications as well as their clinical implications for patients with mobility problems. Computerized literature searches were performed using the MEDLINE, Cochrane, CIRRIE and REHABDATA databases. This resulted in eight peer-reviewed journal articles. The included studies could be divided into three categories on the basis of their study objective: five studies were related to training driving skills, two to physical exercise training and one to leisure activity. This review suggests that virtual reality is a potentially useful means to improve the use of a mobility device: for training driving skills, for maintaining physical condition, and as a leisure-time activity. Although this field of research appears to be in its early stages, the included studies pointed to a promising transfer of training in a virtual environment to the real-life use of mobility devices.
Virtual C Machine and Integrated Development Environment for ATMS Controllers.
DOT National Transportation Integrated Search
2000-04-01
The overall objective of this project is to develop a prototype virtual machine that fits on current Advanced Traffic Management Systems (ATMS) controllers and provides functionality for complex traffic operations.;Prepared in cooperation with Utah S...
Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu
2015-01-01
The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It also provides bioinformatics functionalities including sequence alignment, active site pose prediction and protein-ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem-solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users, using container-based virtualization (OpenVZ).
Ma, Hui-Ing; Hwang, Wen-Juh; Fang, Jing-Jing; Kuo, Jui-Kun; Wang, Ching-Yi; Leong, Iat-Fai; Wang, Tsui-Ying
2011-10-01
To investigate whether practising reaching for virtual moving targets would improve motor performance in people with Parkinson's disease. Randomized pretest-posttest control group design. A virtual reality laboratory in a university setting. Thirty-three adults with Parkinson's disease. The virtual reality training required 60 trials of reaching for fast-moving virtual balls with the dominant hand. The control group had 60 practice trials turning pegs with their non-dominant hand. Pretest and posttest required reaching with the dominant hand to grasp real stationary balls and balls moving at different speeds down a ramp. Success rates and kinematic data (movement time, peak velocity and percentage of movement time for acceleration phase) from pretest and posttest were recorded to determine the immediate transfer effects. Compared with the control group, the virtual reality training group became faster (F = 9.08, P = 0.005) and more forceful (F = 9.36, P = 0.005) when reaching for real stationary balls. However, there was no significant difference in success rate or movement kinematics between the two groups when reaching for real moving balls. A short virtual reality training programme improved the movement speed of discrete aiming tasks when participants reached for real stationary objects. However, the transfer effect was minimal when reaching for real moving objects.
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F.
2015-08-01
This paper describes a procedure for the generation of a detailed HBIM which is then turned into a model for mobile apps based on augmented and virtual reality. Starting from laser point clouds, photogrammetric data and additional information, a geometric reconstruction with a high level of detail can be carried out by considering the basic requirements of BIM projects (parametric modelling, object relations, attributes). The work aims at demonstrating that a complex HBIM can be managed on portable devices to extract useful information, not only for expert operators but also for a wider user community interested in cultural tourism.
3D Graphics Through the Internet: A "Shoot-Out"
NASA Technical Reports Server (NTRS)
Watson, Val; Lasinski, T. A. (Technical Monitor)
1995-01-01
3D graphics through the Internet needs to move beyond the current lowest common denominator of pre-computed movies, which consume bandwidth and are non-interactive. Panelists will demonstrate and compare 3D graphical tools for accessing, analyzing, and collaborating on information through the Internet and World-wide web. The "shoot-out" will illustrate which tools are likely to be the best for the various types of information, including dynamic scientific data, 3-D objects, and virtual environments. The goal of the panel is to encourage more effective use of the Internet by encouraging suppliers and users of information to adopt the next generation of graphical tools.
Sanhueza, Carlos A; Cartmell, Jonathan; El-Hawiet, Amr; Szpacenko, Adam; Kitova, Elena N; Daneshfar, Rambod; Klassen, John S; Lang, Dean E; Eugenio, Luiz; Ng, Kenneth K-S; Kitov, Pavel I; Bundle, David R
2015-01-07
A focused library of virtual heterobifunctional ligands was generated in silico and a set of ligands with recombined fragments was synthesized and evaluated for binding to Clostridium difficile toxins. The position of the trisaccharide fragment was used as a reference for filtering docked poses during virtual screening to match the trisaccharide ligand in a crystal structure. The peptoid, a diversity fragment probing the protein surface area adjacent to a known binding site, was generated by a multi-component Ugi reaction. Our approach combines modular fragment-based design with in silico screening of synthetically feasible compounds and lays the groundwork for future efforts in development of composite bifunctional ligands for large clostridial toxins.
Enhancing the pictorial content of digital holograms at 100 frames per second.
Tsang, P W M; Poon, T-C; Cheung, K W K
2012-06-18
We report a low-complexity, non-iterative method for enhancing the sharpness, brightness, and contrast of the pictorial content recorded in a digital hologram, without the need to re-generate the hologram from the original object scene. In our proposed method, the hologram is first back-projected to a 2-D virtual diffraction plane (VDP) located in close proximity to the original object points. Next, the field distribution on the VDP, which shares similar optical properties with the object scene, is enhanced. Subsequently, the processed VDP is expanded into a full hologram. We demonstrate two types of enhancement: a modified histogram equalization to improve brightness and contrast, and localized high-boost filtering (LHBF) to increase sharpness. Experimental results demonstrate that our proposed method is capable of enhancing a 2048x2048 hologram at a rate of around 100 frames per second. To the best of our knowledge, this is the first time real-time image enhancement has been considered in the context of digital holography.
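The high-boost filtering step can be sketched as unsharp masking applied to the VDP magnitude; the 3x3 mean blur and the boost factor below are assumptions for illustration, not the paper's exact LHBF kernel:

```python
import numpy as np

def high_boost(img, k=1.5):
    """Sharpen by adding back k times the high-frequency residual.

    Assumed form: out = img + k * (img - mean3x3(img)), edges replicated.
    """
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return img + k * (img - blur)

# A vertical step edge: values beyond [0, 1] after boosting show the
# overshoot that visually increases sharpness.
step = np.zeros((4, 4))
step[:, 2:] = 1.0
sharpened = high_boost(step, k=1.5)
```

In the paper's pipeline this would run on the back-projected VDP field rather than a plain image, which is what keeps the whole enhancement cheap enough for 100 frames per second.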
Mental Representation of Spatial Cues During Spaceflight (3D-SPACE)
NASA Astrophysics Data System (ADS)
Clement, Gilles; Lathan, Corinna; Skinner, Anna; Lorigny, Eric
2008-06-01
The 3D-SPACE experiment is a joint effort between ESA and NASA to develop a simple virtual reality platform to enable astronauts to complete a series of tests while aboard the International Space Station (ISS). These tests will provide insights into the effects of the space environment on: (a) depth perception, by presenting 2D geometric illusions and 3D objects that subjects adjust with a finger trackball; (b) distance perception, by presenting natural or computer-generated 3D scenes where subjects estimate and report absolute distances or adjust distances; and (c) handwriting/drawing, by analyzing trajectories and velocities when subjects write or draw memorized objects with an electronic pen on a digitizing tablet. The objective of these tasks is to identify problems associated with 3D perception in astronauts, with the goal of developing countermeasures to alleviate any associated performance risks. The equipment was uploaded to the ISS in April 2008, and the first measurements should take place during Increment 17.
Designing a successful HMD-based experience
NASA Technical Reports Server (NTRS)
Pierce, J. S.; Pausch, R.; Sturgill, C. B.; Christiansen, K. D.; Kaiser, M. K. (Principal Investigator)
1999-01-01
For entertainment applications, a successful virtual experience based on a head-mounted display (HMD) needs to overcome some or all of the following problems: entering a virtual world is a jarring experience, people do not naturally turn their heads or talk to each other while wearing an HMD, putting on the equipment is hard, and people do not realize when the experience is over. In the Electric Garden at SIGGRAPH 97, we presented the Mad Hatter's Tea Party, a shared virtual environment experienced by more than 1,500 SIGGRAPH attendees. We addressed these HMD-related problems with a combination of back story, see-through HMDs, virtual characters, continuity of real and virtual objects, and the layout of the physical and virtual environments.
Caring and Dominance Affect Participants’ Perceptions and Behaviors During a Virtual Medical Visit
Hall, Judith A.; Roter, Debra L.
2008-01-01
BACKGROUND Physician communication style affects patients’ perceptions and behaviors. Two aspects of physician communication style, caring and dominance, are often related in that a high caring physician is usually not dominant and vice versa. OBJECTIVE This research was aimed at testing the sole or joint impact of physician caring and physician dominance on participant perceptions and behavior during the medical visit. PARTICIPANTS AND DESIGN In an experimental design, analog patients (APs) (167 university students) interacted with a computer-generated virtual physician on a computer screen. Participants were randomly assigned to 1 of 4 experimental conditions (physician communication style: high dominance and low caring, high dominance and high caring, low dominance and low caring, or low dominance and high caring). The APs’ verbal and nonverbal behavior during the visit as well as their perception of the virtual physician were assessed. RESULTS Analog patients were able to distinguish dominance and caring dimensions of the virtual physician’s communication. Moreover, APs provided less medical information, spoke less, and agreed more when interacting with a high-dominant compared to a low-dominant physician. They also talked more about emotions and were quicker in taking their turn to speak when interacting with a high-caring compared to a low-caring physician. CONCLUSIONS Dominant and caring physicians elicit different emotional and behavioral responses from APs. Physician dominance reduces patient engagement in the medical dialog and produces submissiveness, whereas physician caring increases patient emotionality. Electronic supplementary material The online version of this article (doi:10.1007/s11606-008-0512-5) contains supplementary material, which is available to authorized users. PMID:18259824
Roth, Jeremy A; Wilson, Timothy D; Sandig, Martin
2015-01-01
Histology is a core subject in the anatomical sciences in which learners are challenged to interpret two-dimensional (2D) information (gained from histological sections) to extrapolate and understand the three-dimensional (3D) morphology of cells, tissues, and organs. In gross anatomical education, 3D models and learning tools have been associated with improved learning outcomes, but similar tools have not been created for histology education to visualize complex cellular structure-function relationships. This study outlines the steps in creating a virtual 3D model of the renal corpuscle from serial, semi-thin, histological sections obtained from epoxy resin-embedded kidney tissue. The virtual renal corpuscle model was generated by digital segmentation to identify: Bowman's capsule, nuclei of epithelial cells in the parietal capsule, afferent arteriole, efferent arteriole, proximal convoluted tubule, distal convoluted tubule, glomerular capillaries, podocyte nuclei, nuclei of extraglomerular mesangial cells, and nuclei of epithelial cells of the macula densa in the distal convoluted tubule. In addition to the imported images of the original sections, the software generates, and allows for visualization of, images of virtual sections in any desired orientation, thus serving as a "virtual microtome". These sections can be viewed separately or with the 3D model in transparency. This approach allows for the development of interactive e-learning tools designed to enhance histology education on microscopic structures with complex cellular interrelationships. Future studies will focus on testing the efficacy of interactive virtual 3D models for histology education. © 2015 American Association of Anatomists.
Integrated Data Visualization and Virtual Reality Tool
NASA Technical Reports Server (NTRS)
Dryer, David A.
1998-01-01
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
Real-time, rapidly updating severe weather products for virtual globes
NASA Astrophysics Data System (ADS)
Smith, Travis M.; Lakshmanan, Valliappa
2011-01-01
It is critical that weather forecasters are able to put severe weather information from a variety of observational and modeling platforms into a geographic context so that warning information can be effectively conveyed to the public, emergency managers, and disaster response teams. The availability of standards for the specification and transport of virtual globe data products has made it possible to generate spatially precise, geo-referenced images and to distribute these centrally created products via a web server to a wide audience. In this paper, we describe the data and methods for enabling severe weather threat analysis information inside a KML framework. The method of creating severe weather diagnosis products and translating them into KML and image files is described. We illustrate some of the practical applications of these data when they are integrated into a virtual globe display. The availability of standards for interoperable virtual globe clients has not completely alleviated the need for custom solutions. We conclude by pointing out several of the limitations of the general-purpose virtual globe clients currently available.
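The KML-translation step can be sketched as emitting a GroundOverlay that drapes a geo-referenced product image over the globe; the product name, file name, and bounding box below are illustrative:

```python
# Minimal sketch of wrapping a geo-referenced severe-weather image in a
# KML GroundOverlay (KML 2.2); values here are invented for illustration.

def ground_overlay_kml(name, href, north, south, east, west):
    """Return a KML document placing the image `href` over a lat/lon box."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{href}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

doc = ground_overlay_kml("hail-diagnosis", "mesh.png", 37.0, 34.5, -95.0, -99.5)
```

A server regenerating such documents on each update cycle is what gives virtual globe clients a rapidly refreshing, spatially precise product.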
NASA Technical Reports Server (NTRS)
Vranish, John M.
2006-01-01
The term "virtual feel" denotes a type of capaciflector (an advanced capacitive proximity sensor) and a methodology for designing and using a sensor of this type to guide a robot in manipulating a tool (e.g., a wrench socket) into alignment with a mating fastener (e.g., a bolt head) or other electrically conductive object. A capaciflector includes at least one sensing electrode, excited with an alternating voltage, that puts out a signal indicative of the capacitance between that electrode and a proximal object.
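As a rough illustration of how a capacitance reading can be turned into a standoff estimate, a parallel-plate approximation can be sketched; the C = εA/d model and the electrode area below are assumptions for illustration, not the capaciflector's actual calibration:

```python
# Hedged sketch: invert a simple parallel-plate model C = eps0 * A / d to
# estimate electrode-to-object distance. Real capaciflector geometry is more
# complex and would be calibrated empirically.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
AREA = 4e-4               # assumed effective electrode area, m^2

def standoff_from_capacitance(c_farads):
    """Estimated electrode-object distance in metres."""
    return EPS0 * AREA / c_farads

d = standoff_from_capacitance(EPS0 * AREA / 0.01)   # round-trips to 0.01 m
```

In a guidance loop, the robot would servo on such distance estimates from several electrodes to align the tool with the fastener.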
Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-09-01
This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can have a feedback function for a surgical robot. Due to the difficulty in utilizing real human organs in the experiment, the cyberspace that features the virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, which is a fast volumetric model and is suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object of the cyberspace and the haptic master of real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogeneous volumetric cubic object to demonstrate that the proposed model can be utilized in real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that tracking the control performance for torque trajectories from a virtual slave can be successfully achieved.
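The PID torque-tracking loop can be sketched in discrete time; the gains and the first-order plant standing in for the MR actuator are invented for illustration, not taken from the paper:

```python
# Hedged sketch: discrete PID tracking a constant torque setpoint against a
# simple first-order plant; all numbers are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid, torque = PID(kp=2.0, ki=5.0, kd=0.01, dt=0.001), 0.0
for _ in range(5000):
    # First-order plant: actuator torque relaxes toward the command each step.
    torque += (pid.step(1.0, torque) - torque) * 0.05
```

The integral term is what drives the steady-state tracking error to zero; in the paper the setpoint would be the time-varying torque trajectory induced by the virtual S-chain object.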
NASA Astrophysics Data System (ADS)
Ren, Yilong; Duan, Xitong; Wu, Lei; He, Jin; Xu, Wu
2017-06-01
With the development of the “VR+” era, traditional virtual assembly systems for power equipment can no longer satisfy our growing needs. Based on an analysis of traditional virtual assembly systems for electric power equipment and of the application of VR technology to such systems in our country, this paper proposes a scheme for establishing a virtual assembly system for power equipment. First, the power equipment information is acquired; then OpenGL and multi-texture techniques are used to build a 3D solid graphics library. Once the three-dimensional modelling is complete, the 3D solid graphics generation program is packaged as a dynamic link library (DLL), modularizing the power equipment model library and hiding its generation algorithm. With the 3D power equipment model database established, a virtual assembly system for 3D power equipment is set up that decouples assembly operations on the power equipment from physical space. At the same time, to address the shortcomings of traditional gesture recognition algorithms, we propose a gesture recognition algorithm for BP neural network data gloves based on an improved PSO algorithm. As a result, the virtual assembly system for power equipment achieves genuinely multi-channel interaction.
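The PSO-trained BP network idea can be sketched as a particle swarm searching the weight vector of a small feed-forward network; the network size, swarm parameters, and mock glove data are all illustrative, not the paper's improved PSO variant:

```python
import numpy as np

# Hedged sketch: plain PSO optimizing the weights of a tiny feed-forward
# (BP-style) network on mock data-glove features; all sizes are illustrative.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (40, 3))            # mock 3-feature glove samples
y = (X.sum(axis=1) > 0).astype(float)      # mock binary gesture label

def loss(w):
    """Mean squared error of a 3-4-1 tanh/sigmoid network with weights w."""
    W1, b1 = w[:12].reshape(3, 4), w[12:16]
    W2, b2 = w[16:20], w[20]
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return np.mean((p - y) ** 2)

dim, n = 21, 20
pos = rng.normal(0, 1, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]
initial_loss = pbest_val.min()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

best_loss = pbest_val.min()
```

An "improved" PSO, as the paper proposes, would typically adapt the inertia weight or inject diversity to avoid premature convergence; the skeleton above is the baseline it would modify.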
NASA Astrophysics Data System (ADS)
Acero, R.; Santolaria, J.; Pueo, M.; Aguilar, J. J.; Brau, A.
2015-11-01
High-range measuring equipment like laser trackers needs large-dimension calibrated reference artifacts in its calibration and verification procedures. In this paper, a new verification procedure for portable coordinate measuring instruments based on the generation and evaluation of virtual distances with an indexed metrology platform is developed. This methodology enables the definition of an unlimited number of reference distances without materializing them in a physical gauge to be used as a reference. The generation of the virtual points and the reference lengths derived from them is linked to the concept of the indexed metrology platform and to knowledge of the relative position and orientation of its upper and lower platforms with high accuracy. The measuring instrument, together with the indexed metrology platform, remains still while the virtual mesh is rotated around them. As a first step, the virtual distances technique is applied to a laser tracker in this work. The experimental verification procedure of the laser tracker with virtual distances is simulated and then compared with the conventional verification procedure of the laser tracker with the indexed metrology platform. The results obtained in terms of volumetric performance of the laser tracker prove the suitability of the virtual distances methodology in calibration and verification procedures for portable coordinate measuring instruments, broadening and expanding the possibilities for the definition of reference distances in these procedures.
Intelligent Motion and Interaction Within Virtual Environments
NASA Technical Reports Server (NTRS)
Ellis, Stephen R. (Editor); Slater, Mel (Editor); Alexander, Thomas (Editor)
2007-01-01
What makes virtual actors and objects in virtual environments seem real? How can the illusion of their reality be supported? What sorts of training or user-interface applications benefit from realistic user-environment interactions? These are some of the central questions that designers of virtual environments face. To be sure, simulation realism is not necessarily the major, or even a required, goal of a virtual environment intended to communicate specific information. But for some applications in entertainment, marketing, or aspects of vehicle simulation training, realism is essential. The following chapters examine how a sense of truly interacting with dynamic, intelligent agents may arise in users of virtual environments. These chapters are based on presentations at the conference on Intelligent Motion and Interaction within Virtual Environments, held at University College London, U.K., 15-17 September 2003.
Improving the discrimination of hand motor imagery via virtual reality based visual guidance.
Liang, Shuang; Choi, Kup-Sze; Qin, Jing; Pang, Wai-Man; Wang, Qiong; Heng, Pheng-Ann
2016-08-01
While research on the brain-computer interface (BCI) has been active in recent years, how to get high-quality electrical brain signals to accurately recognize human intentions for reliable communication and interaction is still a challenging task. The evidence has shown that visually guided motor imagery (MI) can modulate sensorimotor electroencephalographic (EEG) rhythms in humans, but how to design and implement efficient visual guidance during MI in order to produce better event-related desynchronization (ERD) patterns is still unclear. The aim of this paper is to investigate the effect of using object-oriented movements in a virtual environment as visual guidance on the modulation of sensorimotor EEG rhythms generated by hand MI. To improve the classification accuracy on MI, we further propose an algorithm to automatically extract subject-specific optimal frequency and time bands for the discrimination of ERD patterns produced by left and right hand MI. The experimental results show that the average classification accuracy of object-directed scenarios is much better than that of non-object-directed scenarios (76.87% vs. 69.66%). The result of the t-test measuring the difference between them is statistically significant (p = 0.0207). When compared to algorithms based on fixed frequency and time bands, contralateral dominant ERD patterns can be enhanced by using the subject-specific optimal frequency and the time bands obtained by our proposed algorithm. These findings have the potential to improve the efficacy and robustness of MI-based BCI applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A Stochastic Model of Plausibility in Live Virtual Constructive Environments
2017-09-14
...objective in virtual environment research and design is the maintenance of adequate consistency levels in the face of limited system resources such as... provides some commentary with regard to system design considerations and future research directions. II. SYSTEM MODEL: DVEs are often designed as a... exceed the system's requirements. Research into predictive models of virtual environment consistency is needed to provide designers the tools to...
Virtual Raters for Reproducible and Objective Assessments in Radiology
NASA Astrophysics Data System (ADS)
Kleesiek, Jens; Petersen, Jens; Döring, Markus; Maier-Hein, Klaus; Köthe, Ullrich; Wick, Wolfgang; Hamprecht, Fred A.; Bendszus, Martin; Biller, Armin
2016-04-01
Volumetric measurements in radiologic images are important for monitoring tumor growth and treatment response. To make these measurements more reproducible and objective we introduce the concept of virtual raters (VRs). A virtual rater is obtained by combining the knowledge of machine-learning algorithms trained with past annotations of multiple human raters with the instantaneous rating of one human expert. Thus, the single rater is virtually guided by several experts. To evaluate the approach we perform experiments with multi-channel magnetic resonance imaging (MRI) data sets. In addition to gross tumor volume (GTV), we also investigate subcategories such as edema, contrast-enhancing and non-enhancing tumor. The first data set consists of N = 71 longitudinal follow-up scans of 15 patients suffering from glioblastoma (GB). The second data set comprises N = 30 scans of low- and high-grade gliomas. For comparison we computed the Pearson correlation, the intra-class correlation coefficient (ICC) and the Dice score. Virtual raters always lead to an improvement with respect to inter- and intra-rater agreement. Comparing the 2D Response Assessment in Neuro-Oncology (RANO) measurements to the volumetric measurements of the virtual raters yields a deviating rating in one-third of the cases. Hence, we believe that our approach will have an impact on the evaluation of clinical studies as well as on routine imaging diagnostics.
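The Dice score used above for comparing segmentations is simple to compute. A minimal sketch, with toy binary masks rather than data from the study, is:

```python
# Dice coefficient between two binary segmentation masks:
# Dice(A, B) = 2|A ∩ B| / (|A| + |B|). Masks below are toy data.
import numpy as np

def dice_score(a, b):
    """Return the Dice overlap of two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

rater_1 = np.array([[1, 1, 0], [0, 1, 0]])
rater_2 = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(rater_1, rater_2))  # → 2*2/(3+3) ≈ 0.667
```

A Dice score of 1 indicates identical masks and 0 indicates no overlap, which is why it complements correlation-based agreement measures such as the ICC.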
Realistic generation of natural phenomena based on video synthesis
NASA Astrophysics Data System (ADS)
Wang, Changbo; Quan, Hongyan; Li, Chenhui; Xiao, Zhao; Chen, Xiao; Li, Peng; Shen, Liuwei
2009-10-01
Research on the generation of natural phenomena has many applications in movie special effects, battlefield simulation, virtual reality, etc. Based on video synthesis techniques, a new approach is proposed for synthesizing natural phenomena, including flowing water and fire flames. From source fire and flow videos, seamless video of arbitrary length is generated. The interaction between wind and the fire flame is then achieved through the skeleton of the flame. The flow is likewise synthesized by extending the video textures using an edge resample method. Finally, the synthesized natural phenomena can be integrated into a virtual scene.
Wearable Virtual White Cane Network for navigating people with visual impairment.
Gao, Yabiao; Chandrawanshi, Rahul; Nau, Amy C; Tse, Zion Tsz Ho
2015-09-01
Navigating the world with visual impairments presents inconveniences and safety concerns. Although a traditional white cane is the most commonly used mobility aid due to its low cost and acceptable functionality, electronic traveling aids can provide more functionality as well as additional benefits. The Wearable Virtual Cane Network is an electronic traveling aid that utilizes ultrasound sonar technology to scan the surrounding environment for spatial information. The Wearable Virtual Cane Network is composed of four sensing nodes: one on each of the user's wrists, one on the waist, and one on the ankle. The Wearable Virtual Cane Network employs vibration and sound to communicate object proximity to the user. While conventional navigation devices are typically hand-held and bulky, the hands-free design of our prototype allows the user to perform other tasks while using the Wearable Virtual Cane Network. When the Wearable Virtual Cane Network prototype was tested for distance resolution and range detection limits at various displacements and compared with a traditional white cane, all participants performed significantly above the control bar (p < 4.3 × 10⁻⁵, standard t-test) in distance estimation. Each sensor unit can detect an object with a surface area as small as 1 cm² (1 cm × 1 cm) located 70 cm away. Our results showed that the walking speed for an obstacle course was increased by 23% on average when subjects used the Wearable Virtual Cane Network rather than the white cane. The obstacle course experiment also shows that the use of the white cane in combination with the Wearable Virtual Cane Network can significantly improve navigation over using either the white cane or the Wearable Virtual Cane Network alone (p < 0.05, paired t-test). © IMechE 2015.
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, which presents parallax content from different angles through a lenticular lens array, is proposed. Compared with previous AR implementations based on two-dimensional (2D) panel displays with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved stereo-camera-based method realizes AR glass-free 3D display, with both the virtual objects and the real scene exhibiting realistic and obvious stereo performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zitney, S.E.; McCorkle, D.; Yang, C.
Process modeling and simulation tools are widely used for the design and operation of advanced power generation systems. These tools enable engineers to solve the critical process systems engineering problems that arise throughout the lifecycle of a power plant, such as designing a new process, troubleshooting a process unit or optimizing operations of the full process. To analyze the impact of complex thermal and fluid flow phenomena on overall power plant performance, the Department of Energy's (DOE) National Energy Technology Laboratory (NETL) has developed the Advanced Process Engineering Co-Simulator (APECS). The APECS system is an integrated software suite that combines process simulation (e.g., Aspen Plus) and high-fidelity equipment simulations such as those based on computational fluid dynamics (CFD), together with advanced analysis capabilities including case studies, sensitivity analysis, stochastic simulation for risk/uncertainty analysis, and multi-objective optimization. In this paper we discuss the initial phases of the integration of the APECS system with the immersive and interactive virtual engineering software, VE-Suite, developed at Iowa State University and Ames Laboratory. VE-Suite uses the ActiveX (OLE Automation) controls in the Aspen Plus process simulator wrapped by the CASI library developed by Reaction Engineering International to run process/CFD co-simulations and query for results. This integration represents a necessary step in the development of virtual power plant co-simulations that will ultimately reduce the time, cost, and technical risk of developing advanced power generation systems.
Borrel, Alexandre; Fourches, Denis
2017-12-01
There is a growing interest for the broad use of Augmented Reality (AR) and Virtual Reality (VR) in the fields of bioinformatics and cheminformatics to visualize complex biological and chemical structures. AR and VR technologies allow for stunning and immersive experiences, offering untapped opportunities for both research and education purposes. However, preparing 3D models ready to use for AR and VR is time-consuming and requires a technical expertise that severely limits the development of new contents of potential interest for structural biologists, medicinal chemists, molecular modellers and teachers. Herein we present the RealityConvert software tool and associated website, which allow users to easily convert molecular objects to high quality 3D models directly compatible for AR and VR applications. For chemical structures, in addition to the 3D model generation, RealityConvert also generates image trackers, useful to universally call and anchor that particular 3D model when used in AR applications. The ultimate goal of RealityConvert is to facilitate and boost the development and accessibility of AR and VR contents for bioinformatics and cheminformatics applications. http://www.realityconvert.com. dfourch@ncsu.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
2D virtual texture on 3D real object with coded structured light
NASA Astrophysics Data System (ADS)
Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick
2008-02-01
Augmented reality is used to improve color segmentation on the human body or on precious artifacts that cannot be touched. We propose a technique to project a synthesized texture onto a real object without contact; the technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the surface of the real object. We propose a global and automatic method to virtually texture a 3D real object.
Virtual reality for emergency training
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altinkemer, K.
1995-12-31
Virtual reality is a sequence of scenes generated by a computer in response to the five different senses: sight, sound, taste, touch and smell. Other senses that can be used in virtual reality include balance, pheromonal, and immunological senses. Application areas include leisure and entertainment, medicine, architecture, engineering, manufacturing, and training. Virtual reality is especially important when it is used for emergency training and management of natural disasters including earthquakes, floods, tornadoes and other situations which are hard to emulate. Classical training methods for these extraordinary environments lack the realistic surroundings that virtual reality can provide. In order for virtual reality to be a successful training tool, the design needs to address certain aspects, such as how realistic the virtual reality should be and how much fixed cost is entailed in setting up the virtual reality trainer. There are also pricing questions regarding the price per training session on the virtual reality trainer and the appropriate length of training sessions.
Virtual Worlds for Virtual Organizing
NASA Astrophysics Data System (ADS)
Rhoten, Diana; Lutters, Wayne
The members and resources of a virtual organization are dispersed across time and space, yet they function as a coherent entity through the use of technologies, networks, and alliances. As virtual organizations proliferate and become increasingly important in society, many may exploit the technical architectures of virtual worlds, which are the confluence of computer-mediated communication, telepresence, and virtual reality originally created for gaming. A brief socio-technical history describes their early origins and the waves of progress followed by stasis that brought us to the current period of renewed enthusiasm. Examination of contemporary examples demonstrates how three genres of virtual worlds have enabled new arenas for virtual organizing: developer-defined closed worlds, user-modifiable quasi-open worlds, and user-generated open worlds. Among expected future trends are an increase in collaboration born virtually rather than imported from existing organizations, a tension between high-fidelity recreations of the physical world and hyper-stylized imaginations of fantasy worlds, and the growth of specialized worlds optimized for particular sectors, companies, or cultures.
Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering
NASA Astrophysics Data System (ADS)
Jiang, Lu; Piao, Yan
2018-04-01
The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies in the development of integral imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained, with the sum of absolute differences (SAD) chosen as the similarity measure function. The reference image is then layered and the parallax calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and shifted. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its demand for a high-precision depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very impressive. On average, the method achieves satisfactory image quality: the average SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR value reaches 38.353 and the image histogram similarity reaches 93.77%.
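The SAD similarity measure mentioned above can be sketched as a one-dimensional disparity search along a scanline. The toy stereo pair and search parameters below are illustrative assumptions, not the paper's data:

```python
# Block matching with the sum of absolute differences (SAD):
# for each candidate disparity d, compare a reference block in the left
# image with a shifted block in the right image and keep the best match.
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

def best_disparity(left, right, row, col, size, max_d):
    """Return the disparity (0..max_d) minimizing SAD along the scanline."""
    ref = left[row:row + size, col:col + size]
    scores = []
    for d in range(max_d + 1):
        if col - d < 0:
            break
        cand = right[row:row + size, col - d:col - d + size]
        scores.append(sad(ref, cand))
    return int(np.argmin(scores))

# Toy stereo pair: the right image is the left image shifted by 2 pixels,
# so the recovered disparity should be 2.
left = np.tile(np.arange(16), (8, 1))
right = np.roll(left, -2, axis=1)
print(best_disparity(left, right, row=2, col=6, size=3, max_d=4))  # → 2
```

The recovered disparity is inversely related to depth, which is how the per-pixel depth map used for layering the reference image can be obtained.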
Goh, Rachel L Z; Kong, Yu Xiang George; McAlinden, Colm; Liu, John; Crowston, Jonathan G; Skalicky, Simon E
2018-01-01
To evaluate the use of smartphone-based virtual reality to objectively assess activity limitation in glaucoma. Cross-sectional study of 93 patients (54 mild, 22 moderate, 17 severe glaucoma). Sociodemographics, visual parameters, Glaucoma Activity Limitation-9 and Visual Function Questionnaire - Utility Index (VFQ-UI) were collected. Mean age was 67.4 ± 13.2 years; 52.7% were male; 65.6% were driving. A smartphone placed inside virtual reality goggles was used to administer the Virtual Reality Glaucoma Visual Function Test (VR-GVFT) to participants, consisting of three parts: stationary, moving ball, driving. Rasch analysis and classical validity tests were conducted to assess performance of VR-GVFT. Twenty-four of 28 stationary test items showed acceptable fit to the Rasch model (person separation 3.02, targeting 0). Eleven of 12 moving ball test items showed acceptable fit (person separation 3.05, targeting 0). No driving test items showed acceptable fit. Stationary test person scores showed good criterion validity, differentiating between glaucoma severity groups (P = 0.014); modest convergence validity, with mild to moderate correlation with VFQ-UI, better eye (BE) mean deviation, BE pattern deviation, BE central scotoma, worse eye (WE) visual acuity, and contrast sensitivity (CS) in both eyes (R = 0.243-0.381); and suboptimal divergent validity. Multivariate analysis showed that lower WE CS (P = 0.044) and greater age (P = 0.009) were associated with worse stationary test person scores. Smartphone-based virtual reality may be a portable objective simulation test of activity limitation related to glaucomatous visual loss. The use of simulated virtual environments could help better understand the activity limitations that affect patients with glaucoma.
Thomas, Thaddeus P.; Anderson, Donald D.; Willis, Andrew R.; Liu, Pengcheng; Marsh, J. Lawrence; Brown, Thomas D.
2010-01-01
Background Highly comminuted intra-articular fractures are complex and difficult injuries to treat. Once emergent care is rendered, the definitive treatment objective is to restore the original anatomy while minimizing surgically induced trauma. Operations that use limited or percutaneous approaches help preserve tissue vitality, but reduced visibility makes reconstruction more difficult. A pre-operative plan of how comminuted fragments would best be re-positioned to restore anatomy helps in executing a successful reduction. Methods In this study, methods for virtually reconstructing a tibial plafond fracture were developed and applied to clinical cases. Building upon previous benchtop work, novel image analysis techniques and puzzle solving algorithms were developed for clinical application. Specialty image analysis tools were used to segment the fracture fragment geometries from CT data. The original anatomy was then restored by matching fragment native (periosteal and subchondral) bone surfaces to an intact template, generated from the uninjured contralateral limb. Findings Virtual reconstructions obtained for ten tibial plafond fracture cases had average alignment errors of 0.39 (0.5 standard deviation) mm. In addition to precise reduction planning, 3D puzzle solutions can help identify articular deformities and bone loss. Interpretation The results from this study indicate that 3D puzzle solving provides a powerful new tool for planning the surgical reconstruction of comminuted articular fractures. PMID:21215501
New Dimensions of GIS Data: Exploring Virtual Reality (VR) Technology for Earth Science
NASA Astrophysics Data System (ADS)
Skolnik, S.; Ramirez-Linan, R.
2016-12-01
NASA's Science Mission Directorate (SMD) Earth Science Division (ESD) Earth Science Technology Office (ESTO) and Navteca are exploring virtual reality (VR) technology as an approach and technique related to the next generation of Earth science technology information systems. Having demonstrated the value of VR in viewing pre-visualized science data encapsulated in a movie representation of a time series, further investigation has led to the additional capability of permitting the observer to interact with the data, make selections, and view volumetric data in an innovative way. The primary objective of this project has been to investigate the use of commercially available VR hardware, the Oculus Rift and the Samsung Gear VR, for scientific analysis through an interface to ArcGIS to enable the end user to order and view data from the NASA Discover-AQ mission. A virtual console is presented through the VR interface that allows the user to select various layers of data from the server in 2D, 3D, and full 4π steradian views. By demonstrating the utility of VR in interacting with Discover-AQ flight mission measurements, and building on previous work done at the Atmospheric Science Data Center (ASDC) at NASA Langley supporting analysis of sources of CO2 during the Discover-AQ mission, the investigation team has shown the potential for VR as a science tool beyond simple visualization.
Cooperative storage of shared files in a parallel computing system with dynamic block size
Bent, John M.; Faibish, Sorin; Grider, Gary
2015-11-10
Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
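The block-size rule described above (total data divided by the number of parallel processes) can be sketched in a few lines. The helper names and example figures below are illustrative, not from the patent:

```python
# Dynamic block-size determination for parallel writes to a shared object:
# each process's share is the total payload divided by the writer count.
# Names and numbers are illustrative assumptions.

def block_size(total_bytes, num_procs):
    """Dynamically determined block size: total data / number of processes."""
    if num_procs <= 0:
        raise ValueError("need at least one process")
    # Round up so the final block absorbs any remainder.
    return -(-total_bytes // num_procs)  # ceiling division

def partition(total_bytes, num_procs):
    """Byte ranges (start, end) that each process writes to the shared object."""
    size = block_size(total_bytes, num_procs)
    return [(p * size, min((p + 1) * size, total_bytes))
            for p in range(num_procs) if p * size < total_bytes]

print(block_size(1000, 8))  # → 125
print(partition(10, 4))     # → [(0, 3), (3, 6), (6, 9), (9, 10)]
```

In the patented method each process would then exchange data with its neighbors until it holds exactly one such block before issuing a single aligned write to the file system.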
Three-dimensional (3D) printing and its applications for aortic diseases
Hangge, Patrick; Pershad, Yash; Witting, Avery A.; Albadawi, Hassan
2018-01-01
Three-dimensional (3D) printing is a process which generates prototypes from virtual objects in computer-aided design (CAD) software. Since 3D printing enables the creation of customized objects, it is a rapidly expanding field in an age of personalized medicine. We discuss the use of 3D printing in surgical planning, training, and creation of devices for the treatment of aortic diseases. 3D printing can provide operators with a hands-on model to interact with complex anatomy, enable prototyping of devices for implantation based upon anatomy, or even provide pre-procedural simulation. Potential exists to expand upon current uses of 3D printing to create personalized implantable devices such as grafts. Future studies should aim to demonstrate the impact of 3D printing on outcomes to make this technology more accessible to patients with complex aortic diseases. PMID:29850416
Social Media, Education and Data Sharing
NASA Astrophysics Data System (ADS)
King, T. A.; Walker, R. J.; Masters, A.
2011-12-01
Social media is a blending of technology and social interactions which allows for the creation and exchange of user-generated content. Social media started as conversations between groups of people; now companies use it to communicate with customers and politicians use it to communicate with their constituents. Social media is now finding uses in the science communities. This adoption is driven by students' expectation that technology will be an integral part of their research and that it will match the technology they use in their social lives. Students are using social media to keep informed and collaborate with others. They have also replaced notepads with smart mobile devices. We have been introducing social media components into Virtual Observatories as a way to quickly access and exchange information with a tap or a click. We discuss the use of Quick Response (QR) codes, Digital Object Identifiers (DOIs), unique identifiers, Twitter, Facebook and tiny URL redirects as ways to enable easier sharing of data and information. We also discuss what services and features are needed in a Virtual Observatory to make data sharing with social media possible.
Cappa, Paolo; Clerico, Andrea; Nov, Oded; Porfiri, Maurizio
2013-01-01
In this paper, we demonstrate that healthy adults respond differentially to the administration of force feedback and the presentation of scientific content in a virtual environment, where they interact with a low-cost haptic device. Subjects are tasked with controlling the movement of a cursor on a predefined trajectory that is superimposed on a map of New York City’s Bronx Zoo. The system is characterized in terms of a suite of objective indices quantifying the subjects’ dexterity in planning and generating the multijoint visuomotor tasks. We find that force feedback regulates the smoothness, accuracy, and duration of the subject’s movement, whereby converging or diverging force fields influence the range of variations of the hand speed. Finally, our findings provide preliminary evidence that using educational content increases subjects’ satisfaction. Improving the level of interest through the inclusion of learning elements can increase the time spent performing rehabilitation tasks and promote learning in a new context. PMID:24349562
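The abstract does not specify its smoothness index. One common jerk-based choice, shown here purely as an assumed illustration rather than the paper's actual metric, is the log dimensionless jerk of the hand-speed profile:

```python
# Log dimensionless jerk (LDJ) of a speed profile: a scale-free smoothness
# index where higher (less negative) values indicate smoother movement.
# The speed profiles below are synthetic examples.
import numpy as np

def log_dimensionless_jerk(speed, dt):
    """LDJ = -ln( (duration^3 / peak_speed^2) * integral of jerk^2 )."""
    jerk = np.gradient(np.gradient(speed, dt), dt)  # 2nd derivative of speed
    duration = dt * len(speed)
    peak = np.max(np.abs(speed))
    return -np.log((duration ** 3 / peak ** 2) * np.sum(jerk ** 2) * dt)

t = np.linspace(0, 1, 200)
smooth = np.sin(np.pi * t)                         # bell-shaped speed profile
jittery = smooth + 0.05 * np.sin(40 * np.pi * t)   # same profile plus tremor
dt = t[1] - t[0]
print(log_dimensionless_jerk(smooth, dt) > log_dimensionless_jerk(jittery, dt))  # → True
```

An index of this kind lets the effect of converging or diverging force fields on movement quality be compared across subjects independently of movement amplitude and duration.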
Designing and researching of the virtual display system based on the prism elements
NASA Astrophysics Data System (ADS)
Vasilev, V. N.; Grimm, V. A.; Romanova, G. E.; Smirnov, S. A.; Bakholdin, A. V.; Grishina, N. Y.
2014-05-01
The design of virtual display systems for augmented reality placed near the observer's eye (so-called head-worn displays) with light-guide prismatic elements is considered. An augmented reality system is a complex consisting of an image generator (most often a microdisplay with an illumination system, if the display is not self-luminous), an objective that forms the display image practically at infinity, and a combiner that splits the light so that an observer can see the information on the microdisplay and the surrounding environment as a background at the same time. This work deals with a system whose combiner is based on a composite structure of prism elements. Three cases of the prism combiner design are considered, and the results of modeling with optical design software are presented. In the model, the question of a large pupil zone was analyzed, and the discontinuous (mosaic) structure of the angular field in transmitting information from the microdisplay to the observer's eye through the prismatic structure is discussed.
Innovative application of virtual display technique in virtual museum
NASA Astrophysics Data System (ADS)
Zhang, Jiankang
2017-09-01
A virtual museum displays and simulates the functions of a real museum on the Internet in the form of three-dimensional (3D) virtual reality through interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and making it interact effectively with the offline museum depend on making full use of 3D panorama, virtual reality, and augmented reality techniques, and on innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration, and other key virtual reality techniques in the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. The virtual reality technique is a computer simulation system that can create, and let users experience, an interactive 3D dynamic visual world. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. These technologies make the virtual museum possible. It will not only bring better experience and convenience to the public, but also help improve the influence and cultural functions of the real museum.
ERIC Educational Resources Information Center
Jauregi, Kristi; Kuure, Leena; Bastian, Pim; Reinhardt, Dennis; Koivisto, Tuomo
2015-01-01
Within the European TILA project a case study was carried out where pupils from schools in Finland and the Netherlands engaged in debating sessions using the 3D virtual world of OpenSim once a week for a period of 5 weeks. The case study had two main objectives: (1) to study the impact that the discussion tasks undertaken in a virtual environment…
ERIC Educational Resources Information Center
Akhavan, Peyman; Arefi, Majid Feyz
2014-01-01
The purpose of this study is to obtain suitable quality criteria for evaluation of electronic content for virtual courses. We attempt to find the aspects which are important in developing e-content for virtual courses and to determine the criteria we need to judge for the quality and efficiency of learning objects and e-content. So we can classify…
Revisiting Parametric Types and Virtual Classes
NASA Astrophysics Data System (ADS)
Madsen, Anders Bach; Ernst, Erik
This paper presents a conceptually oriented updated view on the relationship between parametric types and virtual classes. The traditional view is that parametric types excel at structurally oriented composition and decomposition, and virtual classes excel at specifying mutually recursive families of classes whose relationships are preserved in derived families. Conversely, while class families can be specified using a large number of F-bounded type parameters, this approach is complex and fragile; and it is difficult to use traditional virtual classes to specify object composition in a structural manner, because virtual classes are closely tied to nominal typing. This paper adds new insight about the dichotomy between these two approaches; it illustrates how virtual constraints and type refinements, as recently introduced in gbeta and Scala, enable structural treatment of virtual types; finally, it shows how a novel kind of dynamic type check can detect compatibility among entire families of classes.
Acai, Anita; Sonnadara, Ranil R; O'Neill, Thomas A
2018-06-01
Concerns around the time and administrative burden of trainee promotion processes have been reported, making virtual meetings an attractive option for promotions committees in undergraduate and postgraduate medicine. However, whether such meetings can uphold the integrity of decision-making processes has yet to be explored. This narrative review aimed to summarize the literature on decision making in virtual teams, discuss ways to improve the effectiveness of virtual teams, and explore their implications for practice. In August 2017, the Web of Science platform was searched with the terms 'decision making' AND 'virtual teams' for articles published within the last 20 years. The search yielded 336 articles, which was narrowed down to a final set of 188 articles. A subset of these, subjectively deemed to be of high-quality and relevant to the work of promotions committees, was included in this review. Virtual team functioning was explored with respect to team composition and development, idea generation and selection, group memory, and communication. While virtual teams were found to potentially offer a number of key benefits over face-to-face meetings including convenience and scheduling flexibility, inclusion of members at remote sites, and enhanced idea generation and external storage, these benefits must be carefully weighed against potential challenges involving planning and coordination, integration of perspectives, and relational conflict among members, all of which can potentially reduce decision-making quality. Avenues to address these issues and maximize the outcomes of virtual promotions meetings are offered in light of the evidence.
An interactive VR system based on full-body tracking and gesture recognition
NASA Astrophysics Data System (ADS)
Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru
2016-10-01
Most current virtual reality (VR) interactions are realized with a hand-held input device, which leads to a low degree of presence. There are other solutions using sensors such as Leap Motion to recognize users' gestures in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only part of the user's body is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body that follows the movements of the tracked user. Movements of the feet are detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared to traditional navigation using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize users' gestures, such as swiping, pressing, and manipulating virtual objects. Combining full-body tracking and gesture recognition using Kinect, we achieve an interactive VR system in the Unity engine with a high degree of presence.
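The walking-state detection described here, activating the model's walk animation when foot movement is observed, can be sketched as a simple threshold test on tracked foot-joint heights. This is an illustrative assumption, not the authors' implementation: the function name, the sample window, and the 3 cm lift threshold are all hypothetical.

```python
def is_walking(foot_left_y, foot_right_y, lift_threshold=0.03):
    """Hypothetical sketch of walking-state detection from depth-camera
    joint tracking: flag walking when either foot joint rises above its
    resting height by more than lift_threshold (metres) within a short
    window of recent height samples."""
    lifted_left = max(foot_left_y) - min(foot_left_y) > lift_threshold
    lifted_right = max(foot_right_y) - min(foot_right_y) > lift_threshold
    # Either foot lifting is enough to start the walk animation; the
    # animation stops once both feet settle back within the threshold.
    return lifted_left or lifted_right
```

In an engine like Unity, a flag of this kind would typically drive an animator state transition each frame, decoupling locomotion from any hand-held controller.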
Learning Objects and Gerontology
ERIC Educational Resources Information Center
Weinreich, Donna M.; Tompkins, Catherine J.
2006-01-01
Virtual AGE (vAGE) is an asynchronous educational environment that utilizes learning objects focused on gerontology and a learning anytime/anywhere philosophy. This paper discusses the benefits of asynchronous instruction and the process of creating learning objects. Learning objects are "small, reusable chunks of instructional media" Wiley…
Vision-based augmented reality system
NASA Astrophysics Data System (ADS)
Chen, Jing; Wang, Yongtian; Shi, Qi; Yan, Dayuan
2003-04-01
The most promising aspect of augmented reality lies in its ability to integrate the virtual world of the computer with the real world of the user. Namely, users can interact with the real world subjects and objects directly. This paper presents an experimental augmented reality system with a video see-through head-mounted device to display visual objects, as if they were lying on the table together with real objects. In order to overlay virtual objects on the real world at the right position and orientation, the accurate calibration and registration are most important. A vision-based method is used to estimate CCD external parameters by tracking 4 known points with different colors. It achieves sufficient accuracy for non-critical applications such as gaming, annotation and so on.
NASA Astrophysics Data System (ADS)
Aberasturi, M.; Solano, E.; Martín, E.
2015-05-01
Low-mass stars and brown dwarfs (with spectral types M, L, T and Y) are the most common objects in the Milky Way. A complete census of these objects is necessary to test theories about their complex structure and formation processes. In order to increase the number of known objects in the Solar neighborhood (d<30 pc), we have made use of the Virtual Observatory, which allows efficient handling of the huge amount of information available in astronomical databases. We also used the WFC3 installed on the Hubble Space Telescope to look for T5+ dwarf binaries.
The Direct Lighting Computation in Global Illumination Methods
NASA Astrophysics Data System (ADS)
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem as an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for predicting the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment, which for the first time makes ray tracing feasible for highly complex environments.
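The direct lighting term, light reaching a surface from an emitter after exactly one bounce, can be illustrated with a minimal Monte Carlo estimator. The scene below (a downward-facing square emitter above a floor point, occlusion ignored) and all parameter values are illustrative assumptions, not the dissertation's method.

```python
import random

def direct_irradiance(px, py, light_size=1.0, height=2.0, emit=10.0,
                      n=4096, rng=random):
    """Monte Carlo estimate of direct irradiance at floor point (px, py, 0)
    with normal +z, from a square emitter of side light_size centred at
    (0, 0, height) and facing straight down. Uniform area sampling:
    E ~ (A / N) * sum of emit * cos_receiver * cos_light / r^2."""
    area = light_size * light_size
    total = 0.0
    for _ in range(n):
        # Uniformly sample a point on the emitter's surface.
        lx = (rng.random() - 0.5) * light_size
        ly = (rng.random() - 0.5) * light_size
        dx, dy, dz = lx - px, ly - py, height
        r2 = dx * dx + dy * dy + dz * dz
        cos_r = dz / r2 ** 0.5   # cosine at the receiver (normal +z)
        cos_l = dz / r2 ** 0.5   # cosine at the light (normal -z)
        total += emit * cos_r * cos_l / r2
    return area * total / n
```

Importance sampling and sample-count prediction, as discussed in the abstract, would replace the uniform sampler here with one weighted toward high-contribution regions of the light.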
Laboratory E-Notebooks: A Learning Object-Based Repository
ERIC Educational Resources Information Center
Abari, Ilior; Pierre, Samuel; Saliah-Hassane, Hamadou
2006-01-01
During distributed virtual laboratory experiment sessions, a major problem is to be able to collect, store, manage and share heterogeneous data (intermediate results, analysis, annotations, etc) manipulated simultaneously by geographically distributed teammates composing a virtual team. The electronic notebook is a possible response to this…
The Satirical Value of Virtual Worlds
ERIC Educational Resources Information Center
Baggaley, Jon
2010-01-01
Imaginary worlds have been devised by artists and commentators for centuries to focus satirical attention on society's problems. The increasing sophistication of three-dimensional graphics software is generating comparable "virtual worlds" for educational usage. Can such worlds play a satirical role suggesting developments in distance…
Virtual Environments in Scientific Visualization
NASA Technical Reports Server (NTRS)
Bryson, Steve; Lisinski, T. A. (Technical Monitor)
1994-01-01
Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conform to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk describes several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.
NASA Technical Reports Server (NTRS)
Hale, Joseph P.
1994-01-01
A virtual reality (VR) Applications Program has been under development at MSFC since 1989. Its objectives are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training, and science training. A variety of activities are under way within many of these areas. One ongoing macro-ergonomic application of VR relates to the design of the Space Station Freedom Payload Control Area (PCA), the control room from which onboard payload operations are managed. Several preliminary conceptual PCA layouts have been developed and modeled in VR. Various managers and potential end users have virtually 'entered' these rooms and provided valuable feedback. Before VR can be used with confidence in a particular application, it must be validated, or calibrated, for that class of applications. Two associated validation studies for macro-ergonomic applications are under way to help characterize possible distortions or filtering of relevant perceptions in a virtual world. In both studies, existing control rooms and their virtual counterparts will be empirically compared using distance and heading estimations to objects and subjective assessments. Approaches and findings of the PCA activities and details of the studies are presented.
NASA Astrophysics Data System (ADS)
Potter, Lucas; Arikatla, Sreekanth; Bray, Aaron; Webb, Jeff; Enquobahrie, Andinet
2017-03-01
Stenosis of the upper airway affects approximately 1 in 200,000 adults per year [1], and occurs in neonates as well [2]. Its treatment is often dictated by institutional factors and clinicians' experience or preferences [3]. Objective and quantitative methods of evaluating treatment options hold the potential to improve care in stenosis patients, and virtual surgical planning software tools are critically important for this. The Virtual Pediatric Airway Workbench (VPAW) is a software platform designed and evaluated for upper airway stenosis treatment planning. It incorporates CFD simulation and geometric authoring with objective metrics from both that help in informed evaluation and planning. However, this planner currently lacks physiological information, which could impact surgical planning outcomes. In this work, we integrated a lumped-parameter, model-based human physiology engine called BioGears with VPAW. We demonstrated the use of this physiology-informed virtual surgical planning platform for patient-specific stenosis treatment planning. The preliminary results show that incorporating patient-specific physiology in the pretreatment plan would play an important role in patient-specific surgical trainers and planners for airway surgery and other types of surgery that are significantly impacted by physiological conditions during surgery.
Virtual manufacturing in reality
NASA Astrophysics Data System (ADS)
Papstel, Jyri; Saks, Alo
2000-10-01
SMEs play an important role in the manufacturing industry, but from time to time there is a shortage of resources to complete a particular order in time. A number of systems have been introduced to produce digital information in order to support product and process development activities. The main problem is the lack of opportunity for direct data transfer between design system modules when a temporary extension of design capacity is needed (virtuality) or when integrated concurrent product development principles are to be implemented. Planning experience in the field is also weakly used. The concept of virtual manufacturing is a supporting idea for solving this problem. At the same time, a number of practical problems must be solved, such as information conformity, data transfer, and acceptance of unified technological concepts. This paper describes proposed ways to solve the practical problems of virtual manufacturing. The general objective is to introduce a knowledge-based CAPP system as the missing module for virtual manufacturing in the selected product domain. A surface-centered planning concept based on STEP-based modeling principles and a knowledge-based process planning methodology are used to reach the objectives. The expected result is a planning module supplied with design data through direct access, together with a supporting advising environment. A mould-producing SME would serve as the test basis.
Virtual reality for stroke rehabilitation.
Laver, Kate E; Lange, Belinda; George, Stacey; Deutsch, Judith E; Saposnik, Gustavo; Crotty, Maria
2017-11-20
Virtual reality and interactive video gaming have emerged as recent treatment approaches in stroke rehabilitation with commercial gaming consoles in particular, being rapidly adopted in clinical settings. This is an update of a Cochrane Review published first in 2011 and then again in 2015. Primary objective: to determine the efficacy of virtual reality compared with an alternative intervention or no intervention on upper limb function and activity.Secondary objectives: to determine the efficacy of virtual reality compared with an alternative intervention or no intervention on: gait and balance, global motor function, cognitive function, activity limitation, participation restriction, quality of life, and adverse events. We searched the Cochrane Stroke Group Trials Register (April 2017), CENTRAL, MEDLINE, Embase, and seven additional databases. We also searched trials registries and reference lists. Randomised and quasi-randomised trials of virtual reality ("an advanced form of human-computer interface that allows the user to 'interact' with and become 'immersed' in a computer-generated environment in a naturalistic fashion") in adults after stroke. The primary outcome of interest was upper limb function and activity. Secondary outcomes included gait and balance and global motor function. Two review authors independently selected trials based on pre-defined inclusion criteria, extracted data, and assessed risk of bias. A third review author moderated disagreements when required. The review authors contacted investigators to obtain missing information. We included 72 trials that involved 2470 participants. This review includes 35 new studies in addition to the studies included in the previous version of this review. Study sample sizes were generally small and interventions varied in terms of both the goals of treatment and the virtual reality devices used. The risk of bias present in many studies was unclear due to poor reporting. 
Thus, while there are a large number of randomised controlled trials, the evidence remains mostly low quality when rated using the GRADE system. Control groups usually received no intervention or therapy based on a standard-care approach. Results were not statistically significant for upper limb function (standardised mean difference (SMD) 0.07, 95% confidence interval (CI) -0.05 to 0.20, 22 studies, 1038 participants, low-quality evidence) when comparing virtual reality to conventional therapy. However, when virtual reality was used in addition to usual care (providing a higher dose of therapy for those in the intervention group) there was a statistically significant difference between groups (SMD 0.49, 95% CI 0.21 to 0.77, 10 studies, 210 participants, low-quality evidence). When compared to conventional therapy approaches there were no statistically significant effects for gait speed or balance. Results were statistically significant for the activities of daily living (ADL) outcome (SMD 0.25, 95% CI 0.06 to 0.43, 10 studies, 466 participants, moderate-quality evidence); however, we were unable to pool results for cognitive function, participation restriction, or quality of life. Twenty-three studies reported that they monitored for adverse events; across these studies there were few adverse events, and those reported were relatively mild. We found evidence that the use of virtual reality and interactive video gaming was not more beneficial than conventional therapy approaches in improving upper limb function. Virtual reality may be beneficial in improving upper limb function and activities of daily living function when used as an adjunct to usual care (to increase overall therapy time). There was insufficient evidence to reach conclusions about the effect of virtual reality and interactive video gaming on gait speed, balance, participation, or quality of life.
This review found that time since onset of stroke, severity of impairment, and the type of device (commercial or customised) were not strong influencers of outcome. There was a trend suggesting that higher dose (more than 15 hours of total intervention) was preferable as were customised virtual reality programs; however, these findings were not statistically significant.
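For reference, the standardised mean difference (SMD) pooled in reviews of this kind divides the between-group difference in means by the pooled standard deviation, allowing trials that used different outcome scales to be combined. A minimal sketch in Cohen's d form, with illustrative numbers rather than data from this review:

```python
def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d form): the difference in
    group means divided by the pooled standard deviation of the two groups."""
    pooled_sd = (((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                 / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd
```

On this scale, values such as the review's 0.07 are negligible effects, while 0.49 is conventionally read as a small-to-moderate effect.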
WE-FG-207B-11: Objective Image Characterization of Spectral CT with a Dual-Layer Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozguner, O; Halliburton, S; Dhanantwari, A
2016-06-15
Purpose: To obtain objective reference data for the spectral performance of a dual-layer detector CT platform (IQon, Philips) and compare virtual monoenergetic to conventional CT images. Methods: Scanning was performed using the hospital's clinical adult body protocol: helical acquisition at 120 kVp, with CTDIvol = 15 mGy. Multiple modules (591, 515, 528) of a CATPHAN 600 phantom and a 20 cm diameter cylindrical water phantom were scanned. No modifications to the standard protocol were necessary to enable spectral imaging. Both conventional and virtual monoenergetic images were generated from acquired data. Noise characteristics were assessed through Noise Power Spectra (NPS) and pixel standard deviation from water phantom images. Spatial resolution was evaluated using Modulation Transfer Functions (MTF) of a tungsten wire as well as resolution bars. Low-contrast detectability was studied using the contrast-to-noise ratio (CNR) of a low-contrast object. Results: MTF curves of monoenergetic and conventional images were almost identical. MTF 50%, 10%, and 5% levels for monoenergetic images agreed with conventional images within 0.05 lp/cm. These observations were verified by the resolution bars, which were clearly resolved at 7 lp/cm but started blurring at 8 lp/cm for this protocol in both conventional and 70 keV images. NPS curves indicated that, compared to conventional images, the noise power distribution of 70 keV monoenergetic images is similar (i.e. noise texture is similar) but exhibits a low-frequency peak at keVs higher and lower than 70 keV. Standard deviation measurements show monoenergetic images have lower noise except at 40 keV, where it is slightly higher. CNR of monoenergetic images is mostly flat across keV values and is superior to that of conventional images. Conclusion: Values for standard image quality metrics are the same or better for monoenergetic images compared to conventional images.
Results indicate virtual monoenergetic images can be used without any loss in image quality or noise penalties relative to conventional images. This study was performed as part of a research agreement among Philips Healthcare, University Hospitals of Cleveland, and Case Western Reserve University.
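The MTF 50%, 10%, and 5% levels reported above are the spatial frequencies at which the measured MTF curve falls to those fractions of its zero-frequency value. A minimal sketch of reading them off a sampled curve by linear interpolation; the function name and the sample data are illustrative, not the study's measurements:

```python
def mtf_levels(freqs, mtf, levels=(0.5, 0.10, 0.05)):
    """For a monotonically decreasing sampled MTF curve, linearly
    interpolate the spatial frequency (e.g. lp/cm) at which the curve
    crosses each requested level."""
    out = {}
    for level in levels:
        # Walk consecutive sample pairs until the curve brackets the level.
        for (f0, m0), (f1, m1) in zip(zip(freqs, mtf),
                                      zip(freqs[1:], mtf[1:])):
            if m0 >= level >= m1:
                out[level] = f0 + (m0 - level) * (f1 - f0) / (m0 - m1)
                break
    return out
```

Agreement "within 0.05 lp/cm" then means the interpolated crossing frequencies for the monoenergetic and conventional curves differ by at most that amount at each level.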
Language-driven anticipatory eye movements in virtual reality.
Eichert, Nicole; Peeters, David; Hagoort, Peter
2018-06-01
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
Application of image processing to calculate the number of fish seeds using raspberry-pi
NASA Astrophysics Data System (ADS)
Rahmadiansah, A.; Kusumawardhani, A.; Duanto, F. N.; Qoonita, F.
2018-03-01
Many fish cultivators in Indonesia have suffered losses because the number of fish seeds sold did not match the agreed amount, since fish seeds are still counted manually. To overcome this problem, this study designed a system that counts fish seeds automatically and in real time using image processing based on the Raspberry Pi. Image processing was used because it can count moving objects and suppress noise. The image processing method used to count moving objects is the virtual loop detector (virtual detector) method, with a "double difference image" approach. The double-difference approach uses information from the previous frame and the next frame to estimate the shape and position of the object. Using this method and approach, the results obtained were quite good, with an average error of 1.0% for 300 individuals in a test with a virtual detector 96 pixels wide and a test-plane slope of 1 degree.
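The double-difference idea, keeping only pixels that differ from both the previous and the next frame, can be sketched as follows. The function names, the intensity threshold, and the detector-stripe helper are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def double_difference(prev_frame, curr_frame, next_frame, threshold=25):
    """Moving-object mask via the double-difference approach: a pixel is
    marked as moving only if it differs from BOTH the previous and the
    next frame by more than `threshold` grey levels. The logical AND
    suppresses the ghosting each single frame difference produces."""
    d1 = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > threshold
    d2 = np.abs(next_frame.astype(int) - curr_frame.astype(int)) > threshold
    return d1 & d2

def detector_occupied(mask, x0, width=96):
    """A virtual detector is a fixed stripe of pixels; counting happens on
    transitions of this occupancy flag from False to True."""
    return bool(mask[:, x0:x0 + width].any())
```

A counter would then increment once per empty-to-occupied transition of the stripe as fish seeds swim across it.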
European Pharmacy Students' Experience With Virtual Patient Technology
Madeira, Filipe
2012-01-01
Objective. To describe how virtual patients are being used to simulate real-life clinical scenarios in undergraduate pharmacy education in Europe. Methods. One hundred ninety-four participants at the 2011 Congress of the European Pharmaceutical Students Association (EPSA) completed an exploratory cross-sectional survey instrument. Results. Of the 46 universities and 23 countries represented at the EPSA Congress, only 12 students from 6 universities in 6 different countries reported having experience with virtual patient technology. The students were satisfied with the virtual patient technology and considered it more useful as a teaching and learning tool than an assessment tool. Respondents who had not used virtual patient technology expressed support regarding its potential benefits in pharmacy education. French and Dutch students were significantly less interested in virtual patient technology than were their counterparts from other European countries. Conclusion. The limited use of virtual patients in pharmacy education in Europe suggests the need for initiatives to increase the use of virtual patient technology and the benefits of computer-assisted learning in pharmacy education. PMID:22919082
Surgery applications of virtual reality
NASA Technical Reports Server (NTRS)
Rosen, Joseph
1994-01-01
Virtual reality is a computer-generated technology which allows information to be displayed in a simulated, but lifelike, environment. In this simulated 'world', users can move and interact as if they were actually a part of that world. This new technology will be useful in many different fields, including the field of surgery. Virtual reality systems can be used to teach surgical anatomy, diagnose surgical problems, plan operations, simulate and perform surgical procedures (telesurgery), and predict the outcomes of surgery. The authors of this paper describe the basic components of a virtual reality surgical system: the virtual world, the virtual tools, the anatomical model, the software platform, the host computer, the interface, and the head-coupled display. They also review the progress toward using virtual reality for surgical training, planning, telesurgery, and predicting outcomes. Finally, the authors present a training system being developed for the practice of new procedures in abdominal surgery.
NASA Astrophysics Data System (ADS)
Priego-Roche, Luz-María; Rieu, Dominique; Front, Agnès
Nowadays, organizations aiming to be successful in an increasingly competitive market tend to group together into virtual organizations. Designing the information system (IS) of such a virtual organization on the basis of the ISs of the participating organizations is a real challenge. The IS of a virtual organization plays an important role in the collaboration and cooperation of the participating organizations and in reaching the common goal. This article proposes criteria allowing virtual organizations to be identified and classified at an intentional level, as well as the information necessary for designing the organizations' IS. Instantiating the criteria for a specific virtual organization and its participants allows simple graphical models to be generated in a modelling tool. The models are used as bases for IS design at the organizational and operational levels. The approach is illustrated by the example of the virtual organization UGRT (a regional stockbreeders' union in Tabasco, Mexico).
Educational Uses of Virtual Reality Technology.
1998-01-01
It is affordable in that a basic level of the technology can be achieved on most existing personal computers at either no cost or some minimal cost. The sense of actually being present in a virtual environment is termed "presence" and is an artifact of being visually immersed in the computer-generated virtual world. Programs surveyed include VREL at East Carolina University (teachers, 1996 onward) and VR in Education at the University of Illinois, National Center for Supercomputing Applications.
NASA Astrophysics Data System (ADS)
Maiwald, F.; Vietze, T.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.
2017-02-01
Historical photographs contain high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-) automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for the use in the humanities, urban research and history sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs have not been created specifically for documentation purposes and so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of available images determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion evaluation (SfM). Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.
Multiple Semantic Matching on Augmented N-partite Graph for Object Co-segmentation.
Wang, Chuan; Zhang, Hua; Yang, Liang; Cao, Xiaochun; Xiong, Hongkai
2017-09-08
Recent methods for object co-segmentation focus on discovering a single co-occurring relation among candidate regions representing the foreground of multiple images. However, region extraction based only on low- and middle-level information often includes a large area of background without the help of semantic context. In addition, seeking a single matching solution very likely leads to discovering only local parts of common objects. To cope with these deficiencies, we present a new object co-segmentation framework, which takes advantage of semantic information and globally explores multiple co-occurring matching cliques based on an N-partite graph structure. To this end, we first propose to incorporate candidate generation with semantic context. Based on the regions extracted from semantic segmentation of each image, we design a merging mechanism to hierarchically generate candidates with high semantic responses. Secondly, all candidates are taken into consideration to globally formulate multiple maximum weighted matching cliques, which complements the discovery of parts of the common objects induced by a single clique. To facilitate the discovery of multiple matching cliques, an N-partite graph, which inherently excludes intra-links between candidates from the same image, is constructed to separate multiple cliques without additional constraints. Further, we augment the graph with an additional virtual node in each part to handle irrelevant matches when the similarity between two candidates is too small. Finally, with the explored multiple cliques, we statistically compute a pixel-wise co-occurrence map for each image. Experimental results on two benchmark datasets, i.e., the iCoseg and MSRC datasets, achieve desirable performance and demonstrate the effectiveness of our proposed framework.
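The role of the virtual node, absorbing candidates whose best similarity falls below a cutoff so they stay unmatched, can be illustrated in the two-image (bipartite) case. The brute-force search, the similarity matrix, and the 0.3 cutoff below are illustrative simplifications of the paper's N-partite formulation, not its algorithm:

```python
from itertools import permutations

def best_matching_with_virtual(sim, min_sim=0.3):
    """Maximum-similarity matching between candidate regions of two images
    (rows and columns of the square matrix `sim`). A pairing whose
    similarity falls below `min_sim` is redirected to a virtual node
    (recorded as None), so irrelevant candidates contribute nothing
    instead of forcing a weak match. Brute force for clarity only."""
    n = len(sim)
    best_score, best_pairs = -1.0, None
    for perm in permutations(range(n)):
        pairs = [(i, j) if sim[i][j] >= min_sim else (i, None)
                 for i, j in enumerate(perm)]
        score = sum(sim[i][j] for i, j in pairs if j is not None)
        if score > best_score:
            best_score, best_pairs = score, pairs
    return best_score, best_pairs
```

In the full framework, the same idea generalizes to one virtual node per part of the N-partite graph, and multiple disjoint cliques are extracted rather than a single matching.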
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, M.; Grimshaw, A.
1996-12-31
The Legion project at the University of Virginia is an architecture for designing and building system services that provide the illusion of a single virtual machine to users: a virtual machine that provides secure shared object and shared name spaces, application-adjustable fault tolerance, improved response time, and greater throughput. Legion targets wide-area assemblies of workstations, supercomputers, and parallel supercomputers. Legion tackles problems not solved by existing workstation-based parallel processing tools; the system will enable fault tolerance, wide-area parallel processing, interoperability, heterogeneity, a single global name space, protection, security, efficient scheduling, and comprehensive resource management. This paper describes the core Legion object model, which specifies the composition and functionality of Legion's core objects, i.e., those objects that cooperate to create, locate, manage, and remove objects in the Legion system. The object model facilitates a flexible, extensible implementation, provides a single global name space, grants site autonomy to participating organizations, and scales to millions of sites and trillions of objects.
California Cultures: Implementing a Model for Virtual Collections
ERIC Educational Resources Information Center
Guerard, Genie; Chandler, Robin L.
2006-01-01
This article highlights the California Cultures Project as a case study examining the architecture and framework required to support the deployment of digital objects as virtual collections at the California Digital Library. Chronologically arranged, it describes the Online Archive of California (OAC) Working Group's functional requirements for…
Virtual reality interventions for rehabilitation: considerations for developing protocols.
Boechler, Patricia; Krol, Andrea; Raso, Jim; Blois, Terry
2009-01-01
This paper is a preliminary report on a work in progress that explores the existence of practice effects in early use of virtual reality environments for rehabilitation purposes and the effects of increases in level of difficulty as defined by rate of on-screen objects.
Dizziness Can Be a Drag: Coping with Balance Disorders
... now in clinical trials, scientists have created a “virtual reality” grocery store. It allows people with balance disorders to walk safely on a treadmill through computer-generated store aisles. While ... reach for items on virtual shelves. By doing this, they safely learn how ...
Virtual Visits and Patient-Centered Care: Results of a Patient Survey and Observational Study.
McGrail, Kimberlyn Marie; Ahuja, Megan Alyssa; Leaver, Chad Andrew
2017-05-26
Virtual visits are clinical interactions in health care that do not involve the patient and provider being in the same room at the same time. The use of virtual visits is growing rapidly in health care. Some health systems are integrating virtual visits into primary care as a complement to existing modes of care, in part reflecting a growing focus on patient-centered care. There is, however, limited empirical evidence about how patients view this new form of care and how it affects overall health system use. The descriptive objectives were to characterize users and providers of virtual visits, including the reasons patients give for use. The analytic objective was to assess empirically the influence of virtual visits on overall primary care use and costs, including whether virtual care is with a known or a new primary care physician. The study took place in British Columbia (BC), Canada, where virtual visits have been publicly funded since October 2012. A survey of patients who used virtual visits and an observational study of users and nonusers of virtual visits were conducted. Two comparison groups were used: (1) all other BC residents, and (2) a group matched (3:1) to the cohort. The first virtual visit was used as the intervention, and the main outcome measures were total primary care visits and costs. During 2013-2014, there were 7286 virtual visit encounters, involving 5441 patients and 144 physicians. Younger patients and physicians were more likely to use and provide virtual visits (P<.001), with no differences by sex. Older and sicker patients were more likely to see a known provider, whereas the lowest socioeconomic groups were the least likely (P<.001). The survey of 399 virtual visit patients indicated that virtual visits were well liked: 372 (93.2%) of respondents said their virtual visit was of high quality, and 364 (91.2%) reported their virtual visit was "very" or "somewhat" helpful in resolving their health issue.
Segmented regression analysis and the corresponding parameter estimates suggested that virtual visits have the potential to decrease primary care costs by approximately Can $4 per quarter (Can -$3.79, P=.12), but that the benefit is most associated with seeing a known provider (Can -$8.68, P<.001). Virtual visits may be one means of making the health system more patient-centered, but careful attention needs to be paid to how these services are integrated into existing health care delivery systems. ©Kimberlyn Marie McGrail, Megan Alyssa Ahuja, Chad Andrew Leaver. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.05.2017.
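Segmented (interrupted time-series) regression of this kind fits a level change and a slope change at the intervention point. A minimal sketch with synthetic data (the model form is standard; the data here are made up, not the study's):

```python
import numpy as np

def segmented_fit(t, y, t0):
    """Interrupted time-series regression:
        y = b0 + b1*t + b2*post + b3*(t - t0)*post,  post = 1 for t >= t0.
    b2 is the level change and b3 the slope change at the intervention t0.
    Returns the least-squares coefficient vector (b0, b1, b2, b3)."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

A full analysis would add standard errors and autocorrelation handling (e.g., via a statistical modeling package) to obtain the P values reported above.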
Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar
2014-01-01
Background To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with phobia of small animals. Objective The objective of our study was to evaluate the brain activations associated with phobia of small animals through the use of virtual environments. This context has the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. Methods We analyzed the brain activation in a group of phobic people while they navigated in a virtual environment that included the small animals that were the object of their phobia. Results We found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to the enhanced visual attention to the phobic stimuli; and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), an area previously related to the feeling of self-awareness. Conclusions In our opinion, these results demonstrate that virtual stimuli can evoke brain activations consistent with those of previous studies using still images, but in an environment closer to the real situations subjects face in their daily lives. PMID:25654753
Z-depth integration: a new technique for manipulating z-depth properties in composited scenes
NASA Astrophysics Data System (ADS)
Steckel, Kayla; Whittinghill, David
2014-02-01
This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.
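The core operation ZeDI automates, deciding per pixel which overlapping element is visible from procedurally extracted z-depth data, reduces to a depth test. A minimal sketch of that compositing step (not the AfterEffects plug-in itself; array shapes are assumed):

```python
import numpy as np

def zdepth_composite(color_a, depth_a, color_b, depth_b):
    """Composite two layers using per-pixel z-depth maps.

    color_* : (H, W, 3) arrays; depth_* : (H, W) arrays where smaller
    values are closer to the camera. The closer layer wins each pixel.
    """
    front_a = depth_a <= depth_b                       # True where layer A is closer
    color = np.where(front_a[..., None], color_a, color_b)
    depth = np.minimum(depth_a, depth_b)               # merged depth for further layers
    return color, depth
```

Deep-image compositing in OpenEXR generalizes this by storing multiple depth samples per pixel, which also allows correct blending of semi-transparent elements.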
Proceedings of the Next Generation Exploration Conference
NASA Technical Reports Server (NTRS)
Schingler, Robbie (Editor); Lynch, Kennda
2006-01-01
The Next Generation Exploration Conference (NGEC) brought together the emerging next generation of space leaders over three intensive days of collaboration and planning. The participants extended the ongoing work of national space agencies to draft a common strategic framework for lunar exploration, to include other destinations in the solar system. NGEC is the first conference to bring together emerging leaders to comment on and contribute to these activities. The majority of the three-day conference looked beyond the moon and focused on the "next destination": Asteroids, Cis-Lunar, Earth 3.0, Mars Science and Exploration, Mars Settlement and Society, and Virtual Worlds and Virtual Exploration.
Using voice input and audio feedback to enhance the reality of a virtual experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miner, N.E.
1994-04-01
Virtual Reality (VR) is a rapidly emerging technology which allows participants to experience a virtual environment through stimulation of the participant's senses. Intuitive and natural interactions with the virtual world help to create a realistic experience. Typically, a participant is immersed in a virtual environment through the use of a 3-D viewer. Realistic, computer-generated environment models and accurate tracking of a participant's view are important factors for adding realism to a virtual experience. Stimulating a participant's sense of sound and providing a natural form of communication for interacting with the virtual world are equally important. This paper discusses the advantages and importance of incorporating voice recognition and audio feedback capabilities into a virtual world experience. Various approaches and levels of complexity are discussed. Examples of the use of voice and sound are presented through the description of a research application developed in the VR laboratory at Sandia National Laboratories.
Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge
2008-01-01
This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. Implementation of the developed techniques consists of developing a virtual bone-drilling software program, which allows the user to manipulate a virtual drill to make holes with the use of a PHANToM device on a bone model derived from real CT scan data.
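The material-removal step, Boolean subtraction of the tool model from the bone volume, can be sketched on a binary voxel grid. This is a simplified illustration of the technique (a spherical drill tip on an isotropic grid is assumed, not the paper's implementation):

```python
import numpy as np

def drill(volume, center, radius):
    """Boolean subtraction of a spherical tool from a binary voxel volume.

    volume : (Z, Y, X) bool array, True where bone material is present.
    center : (x, y, z) tool-tip position in voxel coordinates.
    radius : tool radius in voxels.
    Returns a new volume with voxels inside the sphere removed.
    """
    zz, yy, xx = np.indices(volume.shape)
    dist2 = (xx - center[0])**2 + (yy - center[1])**2 + (zz - center[2])**2
    out = volume.copy()
    out[dist2 <= radius**2] = False    # carve out the tool's footprint
    return out
```

In an interactive simulator this subtraction runs every frame at the haptic device's tool position, with an adaptive (e.g., octree/quadtree) structure limiting the update to the affected region.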
Template-based combinatorial enumeration of virtual compound libraries for lipids
2012-01-01
A variety of software packages are available for the combinatorial enumeration of virtual libraries for small molecules, starting from specifications of core scaffolds with attachments points and lists of R-groups as SMILES or SD files. Although SD files include atomic coordinates for core scaffolds and R-groups, it is not possible to control 2-dimensional (2D) layout of the enumerated structures generated for virtual compound libraries because different packages generate different 2D representations for the same structure. We have developed a software package called LipidMapsTools for the template-based combinatorial enumeration of virtual compound libraries for lipids. Virtual libraries are enumerated for the specified lipid abbreviations using matching lists of pre-defined templates and chain abbreviations, instead of core scaffolds and lists of R-groups provided by the user. 2D structures of the enumerated lipids are drawn in a specific and consistent fashion adhering to the framework for representing lipid structures proposed by the LIPID MAPS consortium. LipidMapsTools is lightweight, relatively fast and contains no external dependencies. It is an open source package and freely available under the terms of the modified BSD license. PMID:23006594
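Template-based enumeration of this kind is, at its core, a Cartesian product of chain abbreviations substituted into a headgroup template. A minimal sketch (the template string and chain lists are illustrative, not LipidMapsTools code):

```python
from itertools import product

# Hypothetical glycerophosphocholine template with two acyl-chain slots;
# chains are given as carbon:double-bond abbreviations.
TEMPLATE = "PC({sn1}/{sn2})"
SN1_CHAINS = ["16:0", "18:0"]
SN2_CHAINS = ["18:1", "18:2", "20:4"]

def enumerate_library(template, sn1_list, sn2_list):
    """Enumerate every combination of chains substituted into the template."""
    return [template.format(sn1=a, sn2=b) for a, b in product(sn1_list, sn2_list)]
```

The real package additionally generates consistent 2D structure layouts for each abbreviation, which is the part plain string substitution cannot provide.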
Searching Fragment Spaces with feature trees.
Lessel, Uta; Wellenzohn, Bernd; Lilienthal, Markus; Claussen, Holger
2009-02-01
Virtual combinatorial chemistry easily produces billions of compounds, for which conventional virtual screening cannot be performed even with the fastest methods available. An efficient solution for such a scenario is the generation of Fragment Spaces, which encode huge numbers of virtual compounds by their fragments/reagents and rules of how to combine them. Similarity-based searches can be performed in such spaces without ever fully enumerating all virtual products. Here we describe the generation of a huge Fragment Space encoding about 5 × 10^11 compounds based on established in-house synthesis protocols for combinatorial libraries, i.e., we encode practically evaluated combinatorial chemistry protocols in a machine readable form, rendering them accessible to in silico search methods. We show how such searches in this Fragment Space can be integrated as a first step in an overall workflow. It reduces the extremely huge number of virtual products by several orders of magnitude so that the resulting list of molecules becomes more manageable for further more elaborated and time-consuming analysis steps. Results of a case study are presented and discussed, which lead to some general conclusions for an efficient expansion of the chemical space to be screened in pharmaceutical companies.
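The reason a Fragment Space can encode ~5 × 10^11 products compactly is that its size is a product of reagent-list sizes, computed without enumeration. A small sketch of that bookkeeping (the scheme structure is an assumed simplification of real combinatorial protocols):

```python
def fragment_space_size(schemes):
    """Number of virtual products encoded by a Fragment Space.

    schemes : list of combinatorial schemes, each a list of reagent
    lists (one list per attachment point). The total is the sum over
    schemes of the product of reagent-list sizes; no product molecule
    is ever enumerated.
    """
    total = 0
    for reagent_lists in schemes:
        n = 1
        for reagents in reagent_lists:
            n *= len(reagents)
        total += n
    return total
```

Similarity searching in such a space likewise works on the fragments and combination rules directly, so runtime scales with the number of fragments rather than the number of products.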
Template-based combinatorial enumeration of virtual compound libraries for lipids.
Sud, Manish; Fahy, Eoin; Subramaniam, Shankar
2012-09-25
Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3
NASA Astrophysics Data System (ADS)
Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.
2014-12-01
The system "Immersive Virtual Moon Scene" is used to show a virtual environment of the Moon's surface in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere according to the panorama orientation coordinates and camera parameters to build the virtual scene. Because stars are visible from the Moon at any time, the sun, planets and stars are rendered according to the time and the rover's location, based on the Hipparcos catalogue, as the background on the sphere. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system consists of four high-lumen projectors and a large curved screen, 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact with the scene and mark interesting objects, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the laboratory housing this system will be open to the public. In addition to this application, stereo animations of Moon terrain based on Chang'E-1 and Chang'E-2 data will be shown to the public on the large screen in the laboratory. Based on lunar exploration data, we will create more immersive virtual Moon scenes and animations to help the public learn more about the Moon in the future.
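Projecting a stitched panorama onto the inside of a sphere amounts to mapping each equirectangular pixel to a direction vector on the unit sphere. A minimal sketch of that mapping (a standard equirectangular convention is assumed; the system's actual orientation coordinates would add a rotation):

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit direction.

    Longitude spans [-pi, pi] across the image width, latitude
    [pi/2, -pi/2] from top to bottom. Returns (x, y, z) with y up.
    """
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

Rendering then textures the sphere's interior with the panorama, while stars and planets are drawn on the same sphere from catalogue positions computed for the rover's time and location.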
Monte Carlo calculations for reporting patient organ doses from interventional radiology
NASA Astrophysics Data System (ADS)
Huo, Wanli; Feng, Mang; Pi, Yifei; Chen, Zhi; Gao, Yiming; Xu, X. George
2017-09-01
This paper describes a project to generate organ dose data for the purpose of extending the VirtualDose software from CT imaging to interventional radiology (IR) applications. A library of 23 mesh-based anthropometric patient phantoms was used in Monte Carlo simulations for the database calculations. Organ doses and effective doses were obtained for IR procedures with specific beam projections, fields of view (FOV) and beam qualities for all parts of the body. Comparing organ doses generated by VirtualDose-IR across different beam qualities, beam projections, patient ages and body mass indexes (BMIs), significant discrepancies were observed. Because IR procedures involve relatively long exposure times, IR doses depend on beam quality, beam direction and patient size. Therefore, VirtualDose-IR, which is based on the latest anatomically realistic patient phantoms, can generate accurate doses for IR procedures. The software is suitable for clinical IR dose management as an effective tool to estimate patient doses and optimize IR treatment plans.
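Once Monte Carlo organ doses are tabulated, the effective dose is a tissue-weighted sum over organs. A minimal sketch of that final step (the weights below are illustrative placeholders that sum to 1, not the ICRP tissue-weighting factors the software would use):

```python
def effective_dose(organ_doses, tissue_weights):
    """Effective dose as the tissue-weighted sum of organ doses:
        E = sum_T w_T * D_T
    assuming a radiation weighting factor of 1 (as for photons), so
    organ absorbed dose in mGy maps to equivalent dose in mSv.
    Weights are expected to sum to 1."""
    assert abs(sum(tissue_weights.values()) - 1.0) < 1e-9
    return sum(w * organ_doses.get(organ, 0.0)
               for organ, w in tissue_weights.items())
```

In a dose-management tool the organ doses themselves come from the precomputed phantom database, indexed by beam quality, projection, FOV and patient model.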
A Proposed Framework for Collaborative Design in a Virtual Environment
NASA Astrophysics Data System (ADS)
Breland, Jason S.; Shiratuddin, Mohd Fairuz
This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space. This paper also discusses a proposed test to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and presents results from a pilot test.
Noncontact Tactile Display Based on Radiation Pressure of Airborne Ultrasound.
Hoshi, T; Takahashi, M; Iwamoto, T; Shinoda, H
2010-01-01
This paper describes a tactile display which provides unrestricted tactile feedback in air without any mechanical contact. It controls ultrasound and produces a stress field in a 3D space. The principle is based on a nonlinear phenomenon of ultrasound: Acoustic radiation pressure. The fabricated prototype consists of 324 airborne ultrasound transducers, and the phase and intensity of each transducer are controlled individually to generate a focal point. The DC output force at the focal point is 16 mN and the diameter of the focal point is 20 mm. The prototype produces vibrations up to 1 kHz. An interaction system including the prototype is also introduced, which enables users to see and touch virtual objects.
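Steering an ultrasound focal point with a phased array comes down to driving each transducer with a phase that compensates its path length to the focus, so all waves arrive in phase. A minimal sketch of that calculation (element layout, focus position and drive frequency are assumed example values):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C

def focusing_phases(elements, focus, freq_hz):
    """Phase offset (radians) per transducer to focus at a point.

    elements : list of (x, y, z) transducer positions in meters.
    focus    : (x, y, z) focal point in meters.
    phi_i = k * (d_max - d_i) mod 2*pi, with k = 2*pi*f/c, so the wave
    from every element arrives at the focus with the same phase.
    """
    k = 2.0 * math.pi * freq_hz / SPEED_OF_SOUND
    dists = [math.dist(e, focus) for e in elements]
    d_max = max(dists)
    return [(k * (d_max - d)) % (2.0 * math.pi) for d in dists]
```

Moving the focal point in 3D is then just recomputing these phases for a new focus position each update cycle.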
Integration of Problem-based Learning and Innovative Technology Into a Self-Care Course
2013-01-01
Objective. To assess the integration of problem-based learning and technology into a self-care course. Design. Problem-based learning (PBL) activities were developed and implemented in place of lectures in a self-care course. Students used technology, such as computer-generated virtual patients and iPads, during the PBL sessions. Assessments. Students’ scores on post-case quizzes were higher than on pre-case quizzes used to assess baseline knowledge. Student satisfaction with problem-based learning and the use of technology in the course remained consistent throughout the semester. Conclusion. Integrating problem-based learning and technology into a self-care course enabled students to become active learners. PMID:23966730
Matta, Ragai-Edward; von Wilmowsky, Cornelius; Neuhuber, Winfried; Lell, Michael; Neukam, Friedrich W; Adler, Werner; Wichmann, Manfred; Bergauer, Bastian
2016-05-01
Multi-slice computed tomography (MSCT) and cone beam computed tomography (CBCT) are indispensable imaging techniques in advanced medicine. The possibility of creating virtual and corporal three-dimensional (3D) models enables detailed planning in craniofacial and oral surgery. The objective of this study was to evaluate the impact of different scan protocols for CBCT and MSCT on virtual 3D model accuracy using a software-based evaluation method that excludes human measurement errors. MSCT and CBCT scans with different manufacturers' predefined scan protocols were obtained from a human lower jaw and were superimposed with a master model generated by an optical scan of an industrial noncontact scanner. To determine the accuracy, the mean and standard deviations were calculated, and t-tests were used for comparisons between the different settings. Averaged over 10 repeated X-ray scans per method and 19 measurement points per scan (n = 190), it was found that the MSCT scan protocol 140 kV delivered the most accurate virtual 3D model, with a mean deviation of 0.106 mm compared to the master model. Only the CBCT scans with 0.2-voxel resolution delivered a similar accurate 3D model (mean deviation 0.119 mm). Within the limitations of this study, it was demonstrated that the accuracy of a 3D model of the lower jaw depends on the protocol used for MSCT and CBCT scans. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Virtual reality: teaching tool of the twenty-first century?
Hoffman, H; Vu, D
1997-12-01
Virtual reality (VR) is gaining recognition for its enormous educational potential. While not yet in the mainstream of academic medical training, many prototype and first-generation VR applications are emerging, with target audiences ranging from first- and second-year medical students to residents in advanced clinical training. Visualization tools that take advantage of VR technologies are being designed to provide engaging and intuitive environments for learning visually and spatially complex topics such as human anatomy, biochemistry, and molecular biology. These applications present dynamic, three-dimensional views of structures and their spatial relationships, enabling users to move beyond "real-world" experiences by interacting with or altering virtual objects in ways that would otherwise be difficult or impossible. VR-based procedural and surgical simulations, often compared with flight simulators in aviation, hold significant promise for revolutionizing medical training. Already a wide range of simulations, representing diverse content areas and utilizing a variety of implementation strategies, are either under development or in their early implementation stages. These new systems promise to make broad-based training experiences available for students at all levels, without the risks and ethical concerns typically associated with using animal and human subjects. Medical students could acquire proficiency and gain confidence in the ability to perform a wide variety of techniques long before they need to use them clinically. Surgical residents could rehearse and refine operative procedures, using an unlimited pool of virtual patients manifesting a wide range of anatomic variations, traumatic wounds, and disease states. 
Those simulated encounters, in combination with existing opportunities to work with real patients, could increase the depth and breadth of learners' exposure to medical problems, ensure uniformity of training experiences, and enhance the acquisition of clinical skills.
Simulating Microdosimetry of Environmental Chemicals for EPA’s Virtual Liver
US EPA Virtual Liver (v-Liver) is a cellular systems model of hepatic tissues aimed at predicting chemical-induced adverse effects through agent-based modeling. A primary objective of the project is to extrapolate in vitro data to in vivo outcomes. Agent-based approaches to tissu...