Measuring the Effects of Virtual Pair Programming in an Introductory Programming Java Course
ERIC Educational Resources Information Center
Zacharis, N. Z.
2011-01-01
This study investigated the effectiveness of virtual pair programming (VPP) on student performance and satisfaction in an introductory Java course. Students used online tools that integrated desktop sharing and real-time communication, and the metrics examined showed that VPP is an acceptable alternative to individual programming experience.…
Korocsec, D; Holobar, A; Divjak, M; Zazula, D
2005-12-01
Medicine is a difficult thing to learn. Experimenting with real patients should not be the only option; simulation deserves special attention here. Virtual Reality Modelling Language (VRML) as a tool for building virtual objects and scenes has a good record of educational applications in medicine, especially for static and animated visualisations of body parts and organs. However, the level of interactivity and dynamics required to create computer simulations resembling situations in real environments is difficult to achieve. In the present paper we describe some approaches and techniques which we used to push the limits of current VRML technology further toward dynamic 3D representation of virtual environments (VEs). Our demonstration is based on the implementation of a virtual baby model, whose vital signs can be controlled from an external Java application. The main contributions of this work are: (a) an outline and evaluation of the three-level VRML/Java implementation of the dynamic virtual environment, (b) a proposal for a modified VRML TimeSensor node, which greatly improves the overall control of system performance, and (c) the architecture of a prototype distributed virtual environment for training in neonatal resuscitation comprising the interactive virtual newborn, an active bedside monitor for vital signs, and a full 3D representation of the surgery room.
Real-time optimizations for integrated smart network camera
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois
2005-02-01
We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and some network capabilities. The application detects events of interest in visual scenes, highlights alarms and computes statistics. The system also produces meta-data information that can be shared with other cameras in a network. We describe the requirements of such a system and then show how the design of the system is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking and event detection must be highly optimized and simplified to run on this hardware. To achieve a good match between hardware and software in this lightweight embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi alliance. We can easily integrate software and hardware in complex environments thanks to the Real-Time Specification for Java for the virtual machine and some network- and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies of such a camera, such as counter-flow detection.
Shared virtual environments for telerehabilitation.
Popescu, George V; Burdea, Grigore; Boian, Rares
2002-01-01
Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared Virtual Environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process. These controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring--the monitoring portal for hand telerehabilitation--was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal gives real-time performance on off-the-shelf desktop workstations.
EvoluZion: A Computer Simulator for Teaching Genetic and Evolutionary Concepts
ERIC Educational Resources Information Center
Zurita, Adolfo R.
2017-01-01
EvoluZion is a forward-in-time genetic simulator developed in Java and designed to perform real time simulations on the evolutionary history of virtual organisms. These model organisms harbour a set of 13 genes that codify an equal number of phenotypic features. These genes change randomly during replication, and mutant genes can have null,…
Real-Time Lunar Prospector Data Visualization Using Web-Based Java
NASA Technical Reports Server (NTRS)
Deardorff, D. Glenn; Green, Bryan D.; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
The Lunar Prospector was co-developed by NASA Ames Research Center and Lockheed Martin, and was launched on January 6th, 1998. Its mission is to search for water ice and various elements in the Moon's surface, map its magnetic and gravity fields, and detect volcanic activity. For the first time, the World Wide Web is being used to graphically display near-real-time data from a planetary exploration mission to the global public. Science data from the craft's instruments, as well as engineering data for the spacecraft subsystems, are continuously displayed in time-varying XY plots. The craft's current location is displayed relative to the whole Moon, and as an off-craft observer would see in the reference frame of the craft, with the lunar terrain scrolling underneath. These features are implemented as Java applets. Analyzed data (element and mass distribution) are presented as 3D lunar maps using VRML and JavaScript. During the development phase, implementations of the Java Virtual Machine were just beginning to mature enough to adequately accommodate our target feature set; incomplete and varying implementations were the biggest bottleneck to our ideal of ubiquitous browser access. Bottlenecks notwithstanding, the reaction from the Internet community was overwhelmingly enthusiastic.
Component Composition for Embedded Systems Using Semantic Aspect-Oriented Programming
2004-10-01
real-time systems for the defense community. Our research focused on Real-Time Java implementation and analysis techniques. Real-Time Java is important for the defense community because it holds out the promise of enabling developers to apply COTS Java technology to specialized military embedded systems. It also promises to allow the defense community to utilize a large Java-literate workforce for building defense systems. Our research has delivered several techniques that may make Real-Time Java a better platform for developing embedded
Construction of a Virtual Scanning Electron Microscope (VSEM)
NASA Technical Reports Server (NTRS)
Fried, Glenn; Grosser, Benjamin
2004-01-01
The Imaging Technology Group (ITG) proposed to develop a Virtual SEM (VSEM) application and supporting materials as the first installed instrument in NASA's Virtual Laboratory Project. The instrument was to be a simulator modeled after an existing SEM, and was to mimic that real instrument as closely as possible. Virtual samples would be developed and provided along with the instrument, which would be written in Java.
The state of the Java universe
Gosling, James
2018-05-22
Speaker Bio: James Gosling received a B.Sc. in computer science from the University of Calgary, Canada in 1977. He received a Ph.D. in computer science from Carnegie-Mellon University in 1983. The title of his thesis was The Algebraic Manipulation of Constraints. He has built satellite data acquisition systems, a multiprocessor version of UNIX®, several compilers, mail systems, and window managers. He has also built a WYSIWYG text editor, a constraint-based drawing editor, and a text editor called Emacs, for UNIX systems. At Sun his early activity was as lead engineer of the NeWS window system. He did the original design of the Java programming language and implemented its original compiler and virtual machine. He has recently been a contributor to the Real-Time Specification for Java.
Model Checking Real Time Java Using Java PathFinder
NASA Technical Reports Server (NTRS)
Lindstrom, Gary; Mehlitz, Peter C.; Visser, Willem
2005-01-01
The Real Time Specification for Java (RTSJ) is an augmentation of Java for real-time applications of various degrees of hardness. The central features of RTSJ are real-time threads; user-defined schedulers; asynchronous events, handlers, and control transfers; a priority-inheritance-based default scheduler; non-heap memory areas such as immortal and scoped; and non-heap real-time threads whose execution is not impeded by garbage collection. The Robust Software Systems group at NASA Ames Research Center has Java PathFinder (JPF), a Java model checker, under development. JPF at its core is a state-exploring JVM which can examine alternative paths in a Java program (e.g., via backtracking) by trying all nondeterministic choices, including thread scheduling order. This paper describes our implementation of an RTSJ profile (subset) in JPF, including requirements, design decisions, and current implementation status. Two examples are analyzed: jobs on a multiprogramming operating system, and a complex resource contention example involving autonomous vehicles crossing an intersection. The utility of JPF in finding logic and timing errors is illustrated, and the remaining challenges in supporting all of RTSJ are assessed.
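To make the model-checking idea concrete, here is a minimal, assumed sketch (plain Java, not the RTSJ profile described above) of the kind of program a state-exploring checker such as JPF can analyze: two threads race on a shared counter, and only some interleavings violate the assertion.

```java
// Minimal sketch of a program a state-exploring model checker such as JPF
// can analyze. Plain Java; assumes the checker is pointed at this class and
// assertions are enabled. Not the RTSJ profile described in the paper.
public class RaceExample {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = () -> {
            int tmp = counter;   // read
            counter = tmp + 1;   // write: a lost update is possible in between
        };
        Thread t1 = new Thread(inc);
        Thread t2 = new Thread(inc);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // A model checker explores all schedules and reports the one in which
        // both threads read 0, making this assertion fail.
        assert counter == 2 : "lost update found in some interleaving";
    }
}
```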
Programming with non-heap memory in the real time specification for Java
NASA Technical Reports Server (NTRS)
Bollella, G.; Canham, T.; Carson, V.; Champlin, V.; Dvorak, D.; Giovannoni, B.; Indictor, M.; Meyer, K.; Reinholtz, A.; Murray, K.
2003-01-01
The Real-Time Specification for Java (RTSJ) provides facilities for deterministic, real-time execution in a language that is otherwise subject to variable latencies in memory allocation and garbage collection.
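As a hedged illustration of the non-heap facilities this entry refers to, the sketch below allocates in immortal and scoped memory areas; it assumes an RTSJ implementation providing the javax.realtime package is available and is not taken from the cited work.

```java
// Hedged sketch of RTSJ memory areas (assumed usage, not the cited flight code):
// allocation performed inside enter() comes from the given memory area rather
// than the garbage-collected heap. Requires an RTSJ implementation providing
// javax.realtime; scoped memory must be entered from a real-time thread, so
// the logic runs inside a RealtimeThread.
import javax.realtime.ImmortalMemory;
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;

public class NonHeapDemo {
    public static void main(String[] args) {
        RealtimeThread rt = new RealtimeThread() {
            @Override
            public void run() {
                // Objects allocated here live for the lifetime of the VM and
                // are never garbage collected.
                ImmortalMemory.instance().enter(() -> {
                    byte[] telemetryBuffer = new byte[1024];
                    System.out.println("immortal buffer: " + telemetryBuffer.length + " bytes");
                });

                // Scoped memory: objects allocated inside are reclaimed when
                // the last thread leaves the scope, with no GC involvement.
                LTMemory scope = new LTMemory(16 * 1024, 16 * 1024);
                scope.enter(() -> {
                    int[] scratch = new int[256];
                    scratch[0] = 42;
                });
            }
        };
        rt.start();
    }
}
```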
Real-time Java for flight applications: an update
NASA Technical Reports Server (NTRS)
Dvorak, D.
2003-01-01
The RTSJ is a specification for supporting real-time execution in the Java programming language. The specification has been shaped by several guiding principles, particularly: predictable execution as the first priority in all tradeoffs, no syntactic extensions to Java, and backward compatibility.
NASA Astrophysics Data System (ADS)
Lammers, M.
2016-12-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
NASA Technical Reports Server (NTRS)
Lammers, Matthew
2016-01-01
Advancements in the capabilities of JavaScript frameworks and web browsing technology make online visualization of large geospatial datasets viable. Commonly this is done using static image overlays, pre-rendered animations, or cumbersome geoservers. These methods can limit interactivity and/or place a large burden on server-side post-processing and storage of data. Geospatial data, and satellite data specifically, benefit from being visualized both on and above a three-dimensional surface. The open-source JavaScript framework CesiumJS, developed by Analytical Graphics, Inc., leverages the WebGL protocol to do just that. It has entered the void left by the abandonment of the Google Earth Web API, and it serves as a capable and well-maintained platform upon which data can be displayed. This paper will describe the technology behind the two primary products developed as part of the NASA Precipitation Processing System STORM website: GPM Near Real Time Viewer (GPMNRTView) and STORM Virtual Globe (STORM VG). GPMNRTView reads small post-processed CZML files derived from various Level 1 through 3 near real-time products. For swath-based products, several brightness temperature channels or precipitation-related variables are available for animating in virtual real-time as the satellite observed them on and above the Earth's surface. With grid-based products, only precipitation rates are available, but the grid points are visualized in such a way that they can be interactively examined to explore raw values. STORM VG reads values directly off the HDF5 files, converting the information into JSON on the fly. All data points both on and above the surface can be examined here as well. Both the raw values and, if relevant, elevations are displayed. Surface and above-ground precipitation rates from select Level 2 and 3 products are shown. Examples from both products will be shown, including visuals from high impact events observed by GPM constellation satellites.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gosling, James
Speaker Bio: James Gosling received a B.Sc. in computer science from the University of Calgary, Canada in 1977. He received a Ph.D. in computer science from Carnegie-Mellon University in 1983. The title of his thesis was The Algebraic Manipulation of Constraints. He has built satellite data acquisition systems, a multiprocessor version of UNIX®, several compilers, mail systems, and window managers. He has also built a WYSIWYG text editor, a constraint-based drawing editor, and a text editor called Emacs, for UNIX systems. At Sun his early activity was as lead engineer of the NeWS window system. He did the original design of the Java programming language and implemented its original compiler and virtual machine. He has recently been a contributor to the Real-Time Specification for Java.
Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-02-21
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented.
Virtual and Remote Robotic Laboratory Using EJS, MATLAB and Lab VIEW
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-01-01
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578
Hard Real-Time: C++ Versus RTSJ
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Reinholtz, William K.
2004-01-01
In the domain of hard real-time systems, which language is better: C++ or the Real-Time Specification for Java (RTSJ)? Although ordinary Java provides a more productive programming environment than C++ due to its automatic memory management, that benefit does not apply to RTSJ when using NoHeapRealtimeThread and non-heap memory areas. As a result, RTSJ programmers must manage non-heap memory explicitly. While that's not a deterrent for veteran real-time programmers, where explicit memory management is common, the lack of certain language features in RTSJ (and Java) makes that manual memory management harder to accomplish safely than in C++. This paper illustrates the problem for practitioners in the context of moving data and managing memory in a real-time producer/consumer pattern. The relative ease of implementation and safety of the C++ programming model suggests that RTSJ has a struggle ahead in the domain of hard real-time applications, despite its other attractive features.
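A minimal sketch, assuming plain Java and not the paper's code, of the allocation-free producer/consumer style the paper discusses: all message buffers are preallocated, so moving data in the steady state creates no garbage for a collector to reclaim.

```java
// Minimal sketch (assumed, not the paper's code) of an allocation-free
// producer/consumer ring buffer: every message slot is preallocated up front,
// so offering and polling data performs no heap allocation at runtime.
public final class PreallocatedRing {
    private final double[][] slots;      // fixed pool of message buffers
    private int head = 0, tail = 0, count = 0;

    public PreallocatedRing(int capacity, int messageSize) {
        slots = new double[capacity][messageSize];
    }

    /** Producer copies data into the next free slot; returns false if full. */
    public synchronized boolean offer(double[] src) {
        if (count == slots.length) return false;
        System.arraycopy(src, 0, slots[tail], 0, src.length);
        tail = (tail + 1) % slots.length;
        count++;
        return true;
    }

    /** Consumer copies the oldest slot into dst; returns false if empty. */
    public synchronized boolean poll(double[] dst) {
        if (count == 0) return false;
        System.arraycopy(slots[head], 0, dst, 0, dst.length);
        head = (head + 1) % slots.length;
        count--;
        return true;
    }
}
```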
Design Virtual Reality Scene Roam for Tour Animations Base on VRML and Java
NASA Astrophysics Data System (ADS)
Cao, Zaihui; Hu, Zhongyan
Virtual reality has been involved in a wide range of academic and commercial applications. It can give users a natural feeling of the environment by creating realistic virtual worlds. Implementing a virtual tour through a model of a tourist area on the web has become fashionable. In this paper, we present a web-based application that allows a user to walk through, see, and interact with a fully three-dimensional model of the tourist area. Issues regarding navigation and disorientation are addressed, and we suggest a combination of the metro map and an intuitive navigation system. Finally we present a prototype which implements our ideas. The application of VR techniques integrates the visualization and animation of three-dimensional modelling into landscape analysis. The use of the VRML format makes it possible to obtain views of the 3D model and to explore it in real time. It is an important goal for the spatial information sciences.
Yu, Zhengyang; Zheng, Shusen; Chen, Huaiqing; Wang, Jianjun; Xiong, Qingwen; Jing, Wanjun; Zeng, Yu
2006-10-01
This research studies the process of dynamic concision and 3D reconstruction from medical body data using VRML and JavaScript, focusing on how to realize the dynamic concision of a 3D medical model built with VRML. The 2D medical digital images are first modified and manipulated with 2D image software. Then, based on these images, a 3D model is built with VRML and JavaScript. After programming in JavaScript to control the 3D model, the function of dynamic concision is realized through the Script and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be formed with quality close to that obtained by traditional methods. In this way, with the function of dynamic concision, a VRML browser can offer better windows for man-computer interaction in a real-time environment than before. 3D reconstruction and dynamic concision with VRML can meet the requirements of medical observation of 3D reconstructions and have a promising prospect in the field of medical imaging.
NASA Astrophysics Data System (ADS)
Pedro Sánchez, Juan; Sáenz, Jacobo; de la Torre, Luis; Carreras, Carmen; Yuste, Manuel; Heradio, Rubén; Dormido, Sebastián
2016-05-01
This work describes two experiments: "study of the diffraction of light: Fraunhofer approximation" and "the photoelectric effect". Each of them includes a virtual, simulated version of the experiment as well as a real one that can be operated remotely. The two virtual and remote labs (built using Easy Java(script) Simulations) are integrated in UNILabs, a network of online interactive laboratories based on the free Learning Management System Moodle. In this web environment, students can find not only the virtual and remote labs but also manuals with related theory, the user interface description for each application, and so on.
Java simulations of embedded control systems.
Farias, Gonzalo; Cervin, Anton; Arzén, Karl-Erik; Dormido, Sebastián; Esquembre, Francisco
2010-01-01
This paper introduces a new Open Source Java library suited for the simulation of embedded control systems. The library is based on the ideas and architecture of TrueTime, a Matlab toolbox devoted to this topic, and allows Java programmers to simulate the performance of control processes which run in a real-time environment. Such simulations can considerably improve the learning and design of multitasking real-time systems. The choice of Java considerably increases the usability of our library, because many educators already program in this language, and also because the library can be easily used by Easy Java Simulations (EJS), a popular modeling and authoring tool that is increasingly used in the field of Control Education. EJS allows instructors, students, and researchers with less programming experience to create advanced interactive simulations in Java. The paper describes the ideas, implementation, and sample use of the new library both for pure Java programmers and for EJS users. The JTT library and some examples are available online at http://lab.dia.uned.es/jtt.
Java Simulations of Embedded Control Systems
Farias, Gonzalo; Cervin, Anton; Årzén, Karl-Erik; Dormido, Sebastián; Esquembre, Francisco
2010-01-01
This paper introduces a new Open Source Java library suited for the simulation of embedded control systems. The library is based on the ideas and architecture of TrueTime, a Matlab toolbox devoted to this topic, and allows Java programmers to simulate the performance of control processes which run in a real-time environment. Such simulations can considerably improve the learning and design of multitasking real-time systems. The choice of Java considerably increases the usability of our library, because many educators already program in this language, and also because the library can be easily used by Easy Java Simulations (EJS), a popular modeling and authoring tool that is increasingly used in the field of Control Education. EJS allows instructors, students, and researchers with less programming experience to create advanced interactive simulations in Java. The paper describes the ideas, implementation, and sample use of the new library both for pure Java programmers and for EJS users. The JTT library and some examples are available online at http://lab.dia.uned.es/jtt. PMID:22163674
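As a rough, assumed illustration of what such a simulation does (this is not the JTT/TrueTime API), the sketch below advances a simulated clock in fixed steps, releases a periodic control task, charges it simulated execution time, and reports any deadline miss.

```java
// Generic sketch (not the JTT/TrueTime API) of a kernel-level simulation of a
// periodic control task: the simulated clock advances in fixed steps, the task
// is released every `period`, consumes `executionTime` of simulated CPU, and a
// deadline miss is reported if a job finishes after its implicit deadline.
public class PeriodicTaskSim {
    public static void main(String[] args) {
        double period = 0.010;        // 10 ms sampling period
        double executionTime = 0.004; // 4 ms worst-case execution time
        double step = 0.001;          // 1 ms simulation step
        double remaining = 0.0;       // CPU time still owed to the current job
        double nextRelease = 0.0;
        double deadline = Double.MAX_VALUE;

        for (double t = 0.0; t < 0.100; t += step) {
            if (t >= nextRelease) {              // release a new job
                remaining += executionTime;
                deadline = nextRelease + period; // implicit deadline
                nextRelease += period;
            }
            if (remaining > 0.0) {               // the task holds the CPU this step
                remaining -= step;
                if (remaining <= 0.0 && t + step > deadline) {
                    System.out.printf("deadline miss at t=%.3f s%n", t + step);
                }
            }
        }
        System.out.println("simulation finished");
    }
}
```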
Project Golden Gate: towards real-time Java in space missions
NASA Technical Reports Server (NTRS)
Dvorak, Daniel; Bollella, Greg; Canham, Tim; Carson, Vanessa; Champlin, Virgil; Giovannoni, Brian; Indictor, Mark; Meyer, Kenny; Murray, Alex; Reinholtz, Kirk
2004-01-01
This paper describes the problem domain and our experimentation with the first commercial implementation of the Real Time Specification for Java. The two main issues explored in this report are: (1) the effect of RTSJ's non-heap memory on the programming model, and (2) performance benchmarking of RTSJ/Linux relative to C++/VxWorks.
The Unidata Integrated Data Viewer
NASA Astrophysics Data System (ADS)
Weber, W. J.; Ho, Y.
2016-12-01
The Unidata Integrated Data Viewer (IDV) is a free and open source, virtual globe, software application that enables three-dimensional viewing of earth science data. The Unidata IDV is data agnostic and can display and analyze disparate data in a single view. This capability facilitates cross-discipline research and allows multiple observation platforms to be displayed simultaneously for any given event. The Unidata IDV is a mature application, written in Java, and has been serving the earth science community for over 15 years. This demonstration will focus on near-real-time global satellite observations, the integration of the COSMIC radio occultation data set that profiles the atmosphere, and high-resolution numerical weather prediction.
Interactive Plasma Physics Education Using Data from Fusion Experiments
NASA Astrophysics Data System (ADS)
Calderon, Brisa; Davis, Bill; Zwicker, Andrew
2010-11-01
The Internet Plasma Physics Education Experience (IPPEX) website was created in 1996 to give users access to data from plasma and fusion experiments. Interactive material on electricity, magnetism, matter, and energy was presented to generate interest and prepare users to understand data from a fusion experiment. Initially, users were allowed to analyze real-time and archival data from the Tokamak Fusion Test Reactor (TFTR) experiment. IPPEX won numerous awards for its novel approach of allowing users to participate in ongoing research. However, the latest revisions of IPPEX were in 2001 and the interactive material is no longer functional on modern browsers. Also, access to real-time data was lost when TFTR was shut down. The interactive material on IPPEX is being rewritten in ActionScript3.0, and real-time and archival data from the National Spherical Tokamak Experiment (NSTX) will be made available to users. New tools like EFIT animations, fast cameras, and plots of important plasma parameters will be included along with an existing Java-based ``virtual tokamak.'' Screenshots from the upgraded website and future directions will be presented.
Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun
2005-07-01
This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using the virtual reality modelling language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with the Script and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be built with such high quality that they are better than those obtained from traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirements of medical observation of 3D reconstructions and have a promising prospect in the field of medical imaging.
Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun
2005-01-01
This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using the virtual reality modelling language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with the Script and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be built with such high quality that they are better than those obtained from traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirements of medical observation of 3D reconstructions and have a promising prospect in the field of medical imaging. PMID:15973760
Airlift Operation Modeling Using Discrete Event Simulation (DES)
2009-12-01
The models were constructed in Java using the Simkit discrete-event simulation library and run on the Java Virtual Machine (JVM) of the Java Runtime Environment (JRE). The report's acronym list also defines LAM (Load Allocation Mode), LRM (Landing Spot Reassignment Mode), and LEGO (Listener Event Graph Objects).
Prototyping Faithful Execution in a Java virtual machine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarman, Thomas David; Campbell, Philip LaRoche; Pierson, Lyndon George
2003-09-01
This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, ''Principles of Faithful Execution in the Implementation of Trusted Objects'' (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. The extended class loader and JVM we refer to collectively as the Sandia Faithfully Executing Java architecture (or JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques which we intend to implement in hardware.
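The sketch below illustrates only the general mechanism: a custom Java class loader that transforms protected bytecode before defining the class. The XOR step is a placeholder assumption, not the cryptographic scheme of the Sandia report.

```java
// Illustrative sketch only: a class loader that applies a byte-level transform
// to protected class files before handing them to the JVM. The XOR step below
// is a placeholder for a real cryptographic integrity/confidentiality
// mechanism and is NOT the scheme described in the Sandia report.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TransformingClassLoader extends ClassLoader {
    private final Path protectedDir;
    private final byte key;

    public TransformingClassLoader(Path protectedDir, byte key) {
        this.protectedDir = protectedDir;
        this.key = key;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            byte[] bytes = Files.readAllBytes(
                    protectedDir.resolve(name.replace('.', '/') + ".class.enc"));
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] ^= key;                 // placeholder "decryption"
            }
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}
```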
New Web Server - the Java Version of Tempest - Produced
NASA Technical Reports Server (NTRS)
York, David W.; Ponyik, Joseph G.
2000-01-01
A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.
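For readers unfamiliar with embedded Java web servers, the following minimal sketch uses the JDK's built-in com.sun.net.httpserver package; it only illustrates the concept of a small, pure-Java web server and is not Tempest's API.

```java
// Minimal embedded web server sketch using the JDK's built-in
// com.sun.net.httpserver package (available since Java 6). Illustration of
// the concept only; this is not Tempest's API.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TinyWebServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = "Hello from an embedded Java web server"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Listening on http://localhost:8080/");
    }
}
```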
A kickball game for ankle rehabilitation by JAVA, JNI, and VRML
NASA Astrophysics Data System (ADS)
Choi, Hyungjeen; Ryu, Jeha; Lee, Chansu
2004-03-01
This paper presents the development of a virtual environment that can be applied to the ankle rehabilitation procedure. We developed a virtual football stadium to intrigue the patient, where a two-degree-of-freedom (DOF) plate-shaped object is oriented to kick a ball falling from the sky in accordance with data from the ankle's dorsiflexion/plantarflexion and inversion/eversion motion on the moving platform of the K-Platform. This Kickball Game is implemented in the Virtual Reality Modeling Language (VRML). To control virtual objects, data from the K-Platform are transmitted through a communication module implemented in C++. Java, the Java Native Interface (JNI) and a VRML plug-in are combined so as to interface the communication module with the virtual environment in VRML. This game may be applied to the Active Range of Motion (AROM) exercise procedure, which is one of the ankle rehabilitation procedures.
Visualization of Real-Time Data
NASA Technical Reports Server (NTRS)
Stansifer, Ryan; Engrand, Peter
1996-01-01
In this project we explored various approaches to presenting real-time data from the numerous systems monitored on the space shuttle to computer users. We examined the approach that several projects at the Kennedy Space Center (KSC) used to accomplish this. We undertook to build a prototype system to demonstrate that the Internet and the Java programming language could be used to present the real-time data conveniently. Several Java programs were developed that presented real-time data in different forms including one form that emulated the display screens of the PC GOAL system which is familiar to many at KSC. Also, we developed several communications programs to supply the data continuously. Furthermore, a framework was created using the World Wide Web (WWW) to organize the collection and presentation of the real-time data. We believe our demonstration project shows the great flexibility of the approach. We had no particular use of the data in mind, instead we wanted the most general and the least complex framework possible. People who wish to view data need only know how to use a WWW browser and the address (the URL). People wanting to build WWW documents containing real-time data need only know the values of a few parameters, they do not need to program in Java or any other language. These are stunning advantages over more monolithic systems.
An Analysis Platform for Mobile Ad Hoc Network (MANET) Scenario Execution Log Data
2016-01-01
The backend technologies are Java 1.8, mysql-connector-java-5.0.8.jar, Tomcat, VirtualBox, and the Kali MANET virtual machine; the frontend uses LAMPP, and the database is MySQL Server. The SEDAP database settings and structure are described in the report, and the component that contains all the backend Java functionality, including the web services, should be placed in the webapps directory inside the Tomcat installation.
NASA Technical Reports Server (NTRS)
Benowitz, E.; Niessner, A.
2003-01-01
This work involves developing representative mission-critical spacecraft software using the Real-Time Specification for Java (RTSJ). It currently leverages actual flight software from NASA's Deep Space 1 (DS1) mission, which flew in 1998.
Data-driven approach to human motion modeling with Lua and gesture description language
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Koptyra, Katarzyna; Ogiela, Marek R.
2017-03-01
The aim of this paper is to present a novel human motion modelling and recognition approach that enables real-time MoCap signal evaluation. By motion (action) recognition we mean classification. The role of this approach is to propose a syntactic description procedure that can be easily understood, learnt and used in various motion modelling and recognition tasks in all MoCap systems, whether they are vision based or wearable-sensor based. To do so we have prepared an extension of the Gesture Description Language (GDL) methodology that enables movement description and real-time recognition, so that it can use not only positional coordinates of body joints but virtually any type of discretely measured MoCap output signal, such as accelerometer, magnetometer or gyroscope data. We have also prepared and evaluated a cross-platform implementation of this approach using the Lua scripting language and Java technology. This implementation is called Data Driven GDL (DD-GDL). In the tested scenarios the average execution speed is above 100 frames per second, which is comparable to the acquisition rate of many popular MoCap solutions.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and with no JNI bridging code required.
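For contrast, the snippet below shows the conventional JNI pattern that the HPO model aims to replace; the library name and method are hypothetical examples, not part of the HPO-Dalvik work.

```java
// Conventional JNI pattern that the HPO model aims to avoid: the Java side
// declares a native method and loads a shared library; a matching C/C++
// function (generated from javac -h headers) bridges the call.
// The library name "carbridge" and the method are hypothetical examples.
public class NativeClock {
    static {
        System.loadLibrary("carbridge");   // loads libcarbridge.so / carbridge.dll
    }

    // Implemented in native code, e.g.:
    //   JNIEXPORT jlong JNICALL Java_NativeClock_nanoTimeNative(JNIEnv*, jclass);
    public static native long nanoTimeNative();

    public static void main(String[] args) {
        System.out.println("native clock: " + nanoTimeNative() + " ns");
    }
}
```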
Introduction of Virtualization Technology to Multi-Process Model Checking
NASA Technical Reports Server (NTRS)
Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu
2009-01-01
Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.
Program Synthesizes UML Sequence Diagrams
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2006-01-01
A computer program called "Rational Sequence" generates Universal Modeling Language (UML) sequence diagrams of a target Java program running on a Java virtual machine (JVM). Rational Sequence thereby performs a reverse engineering function that aids in the design documentation of the target Java program. Whereas previously, the construction of sequence diagrams was a tedious manual process, Rational Sequence generates UML sequence diagrams automatically from the running Java code.
Real-time Java simulations of multiple interference dielectric filters
NASA Astrophysics Data System (ADS)
Kireev, Alexandre N.; Martin, Olivier J. F.
2008-12-01
An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters as well as the state-of-the-art ones are embedded in this platform-independent applet which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk.
Program summary
Program title: Transmittance
Catalogue identifier: AEBQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5778
No. of bytes in distributed program, including test data, etc.: 90 474
Distribution format: tar.gz
Programming language: Java
Computer: Developed on PC-Pentium platform
Operating system: Any Java-enabled OS. Applet was tested on Windows ME, XP, Sun Solaris, Mac OS
RAM: Variable
Classification: 18
Nature of problem: Sophisticated wavelength-selective multiple interference filters can include some tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter.
Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use and the embedded filter library can serve educational purposes. Also, its ability to handle complex structures will be appreciated as a useful research and development tool.
Running time: Real-time simulations
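As a worked example of the physics such a simulator computes (assumed here; it is not the applet's source code), the following Java program applies the standard characteristic-matrix method to a lossless quarter-wave stack at normal incidence and prints its transmittance across the visible range.

```java
// Worked example (assumed, not the applet's source code) of the standard
// characteristic-matrix method for thin-film filters: each lossless layer
// contributes the matrix [[cos d, i sin d / n], [i n sin d, cos d]] with phase
// thickness d = 2*pi*n*t/lambda; the stack matrix gives [B; C] = M * [1; nSub]
// and T = 4*n0*nSub / |n0*B + C|^2 at normal incidence.
public class QuarterWaveStack {
    public static void main(String[] args) {
        double n0 = 1.0;               // incident medium (air)
        double nSub = 1.52;            // substrate (glass)
        double nH = 2.35, nL = 1.38;   // high/low index materials
        double lambda0 = 550e-9;       // design wavelength, quarter-wave thicknesses
        double tH = lambda0 / (4 * nH), tL = lambda0 / (4 * nL);

        // (HL)^4 H stack: 9 layers, alternating high and low index.
        double[] n = new double[9], t = new double[9];
        for (int i = 0; i < 9; i++) {
            boolean high = (i % 2 == 0);
            n[i] = high ? nH : nL;
            t[i] = high ? tH : tL;
        }

        for (double lambda = 400e-9; lambda <= 700e-9; lambda += 50e-9) {
            // Track M = [[a, i*b], [i*c, d]]; a, b, c, d stay real for lossless layers.
            double a = 1, b = 0, c = 0, d = 1;
            for (int j = 0; j < n.length; j++) {
                double delta = 2 * Math.PI * n[j] * t[j] / lambda;
                double ca = Math.cos(delta), sa = Math.sin(delta);
                double la = ca, lb = sa / n[j], lc = n[j] * sa, ld = ca; // layer matrix
                double na = a * la - b * lc, nb = a * lb + b * ld;
                double nc = c * la + d * lc, nd = d * ld - c * lb;
                a = na; b = nb; c = nc; d = nd;
            }
            double reBC = n0 * a + d * nSub;   // real part of n0*B + C
            double imBC = n0 * b * nSub + c;   // imaginary part of n0*B + C
            double T = 4 * n0 * nSub / (reBC * reBC + imBC * imBC);
            System.out.printf("lambda = %.0f nm  T = %.4f%n", lambda * 1e9, T);
        }
    }
}
```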
Java Mission Evaluation Workstation System
NASA Technical Reports Server (NTRS)
Pettinger, Ross; Watlington, Tim; Ryley, Richard; Harbour, Jeff
2006-01-01
The Java Mission Evaluation Workstation System (JMEWS) is a collection of applications designed to retrieve, display, and analyze both real-time and recorded telemetry data. This software is currently being used by both the Space Shuttle Program (SSP) and the International Space Station (ISS) program. JMEWS was written in the Java programming language to satisfy the requirement of platform independence. An object-oriented design was used to satisfy additional requirements and to make the software easily extendable. By virtue of its platform independence, JMEWS can be used on the UNIX workstations in the Mission Control Center (MCC) and on office computers. JMEWS includes an interactive editor that allows users to easily develop displays that meet their specific needs. The displays can be developed and modified while viewing data. By simply selecting a data source, the user can view real-time, recorded, or test data.
Real-Time Payload Control and Monitoring on the World Wide Web
NASA Technical Reports Server (NTRS)
Sun, Charles; Windrem, May; Givens, John J. (Technical Monitor)
1998-01-01
World Wide Web (W3) technologies such as the Hypertext Transfer Protocol (HTTP) and the Java object-oriented programming environment offer a powerful, yet relatively inexpensive, framework for distributed application software development. This paper describes the design of a real-time payload control and monitoring system that was developed with W3 technologies at NASA Ames Research Center. Based on the Java Development Kit (JDK) 1.1, the system uses an event-driven "publish and subscribe" approach to inter-process communication and graphical user-interface construction. A C Language Integrated Production System (CLIPS)-compatible inference engine provides the back-end intelligent data processing capability, while the Oracle Relational Database Management System (RDBMS) provides the data management function. Preliminary evaluation shows acceptable performance for some classes of payloads, with Java's portability and multimedia support identified as the most significant benefits.
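A minimal, assumed sketch of the event-driven publish-and-subscribe style described above (topic and class names are hypothetical illustrations, not the NASA system's API):

```java
// Minimal in-process publish/subscribe sketch: handlers subscribe to a topic
// and every published value is delivered to all handlers for that topic.
// Names are hypothetical; this is not the payload control system's actual API.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class EventBus {
    private final Map<String, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<Object> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, Object payload) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // A GUI gauge and a rule engine both subscribe to the same telemetry topic.
        bus.subscribe("payload/temperature", v -> System.out.println("gauge shows " + v));
        bus.subscribe("payload/temperature", v -> {
            if ((Double) v > 40.0) System.out.println("rule fired: over-temperature");
        });
        bus.publish("payload/temperature", 42.7);
    }
}
```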
Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages
2013-01-02
Simulation for Dynamic Situation Awareness and Prediction III
2010-03-01
The tools include an open-source Java library for capturing and sending network packets, and Groovy (version 1.6 or newer), an open-source, Java-based dynamic scripting language for the Java Virtual Machine that is consistent with Java syntax and is used by the DMOTH Analyzer application. The report also describes relationships between temperature, pressure, wind and relative humidity, and a precipitation editing algorithm; the Editor can be used to prepare scripted changes.
So Wide a Web, So Little Time.
ERIC Educational Resources Information Center
McConville, David; And Others
1996-01-01
Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…
CPU Performance Counter-Based Problem Diagnosis for Software Systems
2009-09-01
Of the available application servers and implementation techniques, this thesis used only the Enterprise Java Bean (EJB) SessionBean version of RUBiS rather than the PHP and Servlet versions. Collection statistics at the Java Virtual Machine (JVM) level can be reused for any Java application; other examples of gray-box instrumentation include path tracing. For example, PinPoint [11, 14] and [29] use request tracing to diagnose Java exceptions, endless calls, and null calls.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and with no JNI bridging code required. PMID:25110745
Della Mea, V; Beltrami, C A
2000-01-01
The last five years' experience has definitely demonstrated the possible applications of the Internet for telepathology. They may be listed as follows: (a) teleconsultation via multimedia e-mail; (b) teleconsultation via web-based tools; (c) distant education by means of the World Wide Web; (d) virtual microscope management through Web and Java interfaces; (e) real-time consultations through Internet-based videoconferencing. Such applications have led to the recognition of some important limits of the Internet when dealing with telemedicine: (i) no guarantees on the quality of service (QoS); (ii) inadequate security and privacy; (iii) for some countries, low bandwidth and thus low responsiveness for real-time applications. Currently, there are several innovations in the world of the Internet. Different initiatives have been aimed at improving the Internet protocols in order to provide quality of service, multimedia support, security and other advanced services, together with greater bandwidth. The forthcoming Internet improvements, although driven by electronic commerce, video on demand, and other commercial needs, are also of real interest for telemedicine, because they address the limits currently slowing down the use of the Internet. When such new services become available, telepathology applications may switch quickly from research to daily practice.
The Design and Development of a Web-Interface for the Software Engineering Automation System
2001-09-01
Subject terms: computer-aided prototyping, real-time systems, Java. Developing the entire system only to find that it does not meet the customer's needs is a tremendous waste of time. Software prototyping is an iterative software development methodology utilized to improve the analysis and design of real-time systems [2].
Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient real-time communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.
World Reaction to Virtual Space
NASA Technical Reports Server (NTRS)
1999-01-01
DRaW Computing developed virtual reality software for the International Space Station. Open Worlds, as the software has been named, can be made to support Java scripting and virtual reality hardware devices. Open Worlds permits the use of VRML script nodes to add virtual reality capabilities to the user's applications.
Time multiplexing for increased FOV and resolution in virtual reality
NASA Astrophysics Data System (ADS)
Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj
2017-06-01
We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density, the field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the virtual image. Each partial real image uses the full set of physical pixels available in the display. The partial real images are successively formed and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time multiplexing strategy requires real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time multiplexing scheme in a compact format are shown. This time multiplexing scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.
Dormido, Raquel; Sánchez, José; Duro, Natividad; Dormido-Canto, Sebastián; Guinaldo, María; Dormido, Sebastián
2014-03-06
This paper describes an interactive virtual laboratory for experimenting with an outdoor tubular photobioreactor (henceforth PBR for short). This virtual laboratory makes it possible to: (a) accurately reproduce the structure of a real plant (the PBR designed and built by the Department of Chemical Engineering of the University of Almería, Spain); (b) simulate a generic tubular PBR by changing the PBR geometry; (c) simulate the effects of changing different operating parameters such as the conditions of the culture (pH, biomass concentration, dissolved O2, injected CO2, etc.); (d) simulate the PBR in its environmental context; it is possible to change the geographic location of the system or the solar irradiation profile; (e) apply different control strategies to adjust variables such as the CO2 injection, culture circulation rate or culture temperature in order to maximize biomass production; (f) simulate the harvesting. In this way, users can learn in an intuitive way how productivity is affected by any change in the design. It facilitates the learning of how to manipulate the variables essential for microalgae growth in order to design an optimal PBR. The simulator has been developed with Easy Java Simulations, a freeware open-source tool developed in Java, specifically designed for the creation of interactive dynamic simulations.
Dormido, Raquel; Sánchez, José; Duro, Natividad; Dormido-Canto, Sebastián; Guinaldo, María; Dormido, Sebastián
2014-01-01
This paper describes an interactive virtual laboratory for experimenting with an outdoor tubular photobioreactor (henceforth PBR for short). This virtual laboratory makes it possible to: (a) accurately reproduce the structure of a real plant (the PBR designed and built by the Department of Chemical Engineering of the University of Almería, Spain); (b) simulate a generic tubular PBR by changing the PBR geometry; (c) simulate the effects of changing different operating parameters such as the conditions of the culture (pH, biomass concentration, dissolved O2, injected CO2, etc.); (d) simulate the PBR in its environmental context; it is possible to change the geographic location of the system or the solar irradiation profile; (e) apply different control strategies to adjust variables such as the CO2 injection, culture circulation rate or culture temperature in order to maximize biomass production; (f) simulate the harvesting. In this way, users can learn in an intuitive way how productivity is affected by any change in the design. It facilitates the learning of how to manipulate the variables essential for microalgae growth in order to design an optimal PBR. The simulator has been developed with Easy Java Simulations, a freeware open-source tool developed in Java, specifically designed for the creation of interactive dynamic simulations. PMID:24662450
Ganier, Franck; Hoareau, Charlotte; Tisseau, Jacques
2014-01-01
Virtual reality opens new opportunities for operator training in complex tasks. It lowers costs and has fewer constraints than traditional training. The ultimate goal of virtual training is to transfer knowledge gained in a virtual environment to an actual real-world setting. This study tested whether a maintenance procedure could be learnt equally well through virtual-environment and conventional training. Forty-two adults were divided into three equally sized groups: virtual training (GVT® [generic virtual training]), conventional training (using a real tank suspension and preparation station) and control (no training). Participants then performed the procedure individually in the real environment. Both training types (conventional and virtual) produced similar levels of performance when the procedure was carried out in real conditions. Performance of the two trained groups was better than that of the control group in terms of success, time taken to complete the task, time spent consulting job instructions and number of times the instructor provided guidance.
VERSE - Virtual Equivalent Real-time Simulation
NASA Technical Reports Server (NTRS)
Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel
2005-01-01
Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
Virtual reality applied to teletesting
NASA Astrophysics Data System (ADS)
van den Berg, Thomas J.; Smeenk, Roland J. M.; Mazy, Alain; Jacques, Patrick; Arguello, Luis; Mills, Simon
2003-05-01
The activity "Virtual Reality applied to Teletesting" is related to a wider European Space Agency (ESA) initiative of cost reduction, in particular the reduction of test costs. Reduction of costs of space related projects have to be performed on test centre operating costs and customer company costs. This can accomplished by increasing the automation and remote testing ("teletesting") capabilities of the test centre. Main problems related to teletesting are a lack of situational awareness and the separation of control over the test environment. The objective of the activity is to evaluate the use of distributed computing and Virtual Reality technology to support the teletesting of a payload under vacuum conditions, and to provide a unified man-machine interface for the monitoring and control of payload, vacuum chamber and robotics equipment. The activity includes the development and testing of a "Virtual Reality Teletesting System" (VRTS). The VRTS is deployed at one of the ESA certified test centres to perform an evaluation and test campaign using a real payload. The VRTS is entirely written in the Java programming language, using the J2EE application model. The Graphical User Interface runs as an applet in a Web browser, enabling easy access from virtually any place.
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
Morrison, James J; Hostetter, Jason; Wang, Kenneth; Siegel, Eliot L
2015-02-01
Real-time mining of large research trial datasets enables development of case-based clinical decision support tools. Several applicable research datasets exist including the National Lung Screening Trial (NLST), a dataset unparalleled in size and scope for studying population-based lung cancer screening. Using these data, a clinical decision support tool was developed which matches patient demographics and lung nodule characteristics to a cohort of similar patients. The NLST dataset was converted into Structured Query Language (SQL) tables hosted on a web server, and a web-based JavaScript application was developed which performs real-time queries. JavaScript is used for both the server-side and client-side language, allowing for rapid development of a robust client interface and server-side data layer. Real-time data mining of user-specified patient cohorts achieved a rapid return of cohort cancer statistics and lung nodule distribution information. This system demonstrates the potential of individualized real-time data mining using large high-quality clinical trial datasets to drive evidence-based clinical decision-making.
Multimedia consultation session recording and playback using Java-based browser in global PACS
NASA Astrophysics Data System (ADS)
Martinez, Ralph; Shah, Pinkesh J.; Yu, Yuan-Pin
1998-07-01
The current version of the Global PACS software system uses a Java-based implementation of the Remote Consultation and Diagnosis (RCD) system. The Java RCD includes a multimedia consultation session between physicians that includes text, static image, image annotation, and audio data. The Java RCD allows 2-4 physicians to collaborate on a patient case. It allows physicians to join the session via WWW Java-enabled browsers or a stand-alone RCD application. The RCD system includes a distributed database archive system for archiving and retrieving patient and session data. The RCD system can be used for store-and-forward scenarios, case reviews, and interactive RCD multimedia sessions. The RCD system operates over the Internet, telephone lines, or in a private Intranet. A multimedia consultation session can be recorded, and then played back at a later time for review, comments, and education. A session can be played back using Java-enabled WWW browsers on any operating system platform. The Java RCD system shows that a case diagnosis can be captured digitally and played back with the original real-time temporal relationships between data streams. In this paper, we describe the design and implementation of RCD session playback.
Prins, Pjotr; Goto, Naohisa; Yates, Andrew; Gautier, Laurent; Willis, Scooter; Fields, Christopher; Katayama, Toshiaki
2012-01-01
Open-source software (OSS) encourages computer programmers to reuse software components written by others. In evolutionary bioinformatics, OSS comes in a broad range of programming languages, including C/C++, Perl, Python, Ruby, Java, and R. To avoid writing the same functionality multiple times for different languages, it is possible to share components by bridging computer languages and Bio* projects, such as BioPerl, Biopython, BioRuby, BioJava, and R/Bioconductor. In this chapter, we compare the two principal approaches for sharing software between different programming languages: either by remote procedure call (RPC) or by sharing a local call stack. RPC provides a language-independent protocol over a network interface; examples are RSOAP and Rserve. The local call stack provides a between-language mapping not over the network interface, but directly in computer memory; examples are R bindings, RPy, and languages sharing the Java Virtual Machine stack. This functionality provides strategies for sharing of software between Bio* projects, which can be exploited more often. Here, we present cross-language examples for sequence translation, and measure throughput of the different options. We compare calling into R through native R, RSOAP, Rserve, and RPy interfaces, with the performance of native BioPerl, Biopython, BioJava, and BioRuby implementations, and with call stack bindings to BioJava and the European Molecular Biology Open Software Suite. In general, call stack approaches outperform native Bio* implementations and these, in turn, outperform RPC-based approaches. To test and compare strategies, we provide a downloadable BioNode image with all examples, tools, and libraries included. The BioNode image can be run on VirtualBox-supported operating systems, including Windows, OSX, and Linux.
Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph
2017-01-01
In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient realtime communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515
Distributed Object Technology with CORBA and Java: Key Concepts and Implications.
1997-06-01
…This power is not derived from the language per se, but from the architecture-neutral approach used by Java. The Java Virtual Machine… pattern that is focused on performance considerations, the PCo architecture also uses CORBA interface definition language (IDL) to model the…
NASA Astrophysics Data System (ADS)
Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.
2012-12-01
Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.
CrocoBLAST: Running BLAST efficiently in the age of next-generation sequencing.
Tristão Ramos, Ravi José; de Azevedo Martins, Allan Cézar; da Silva Delgado, Gabrielle; Ionescu, Crina-Maria; Ürményi, Turán Peter; Silva, Rosane; Koca, Jaroslav
2017-11-15
CrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss. CrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine. jkoca@ceitec.cz. Supplementary data are available at Bioinformatics online.
Web-GIS-based SARS epidemic situation visualization
NASA Astrophysics Data System (ADS)
Lu, Xiaolin
2004-03-01
In order to study, statistically analyse and broadcast information on the SARS epidemic situation according to the relevant spatial positions, this paper proposes a unified global visualization information platform for the SARS epidemic situation based on Web-GIS and scientific visualization technology. To set up this unified global visual information platform, the architecture of a Web-GIS based interoperable information system is adopted, enabling the public to report SARS virus information to health care centres visually by using web visualization technology. A GIS Java applet is used to visualize the relationship between spatial graphical data and virus distribution, and other web-based graphics such as curves, bars, maps and multi-dimensional figures are used to visualize the relationship of SARS virus tendency with time, patient numbers or locations. The platform is designed to display SARS information in real time, visually simulate the real epidemic situation, and offer analysis tools for health departments and policy-making government departments to support decision-making in preventing the SARS epidemic. It could be used to analyze the virus situation through a visual graphics interface, isolate the areas around virus sources, and control the epidemic within the shortest time. It could be applied in the field of SARS prevention systems for SARS information broadcasting, data management, statistical analysis, and decision support.
Web-based three-dimensional geo-referenced visualization
NASA Astrophysics Data System (ADS)
Lin, Hui; Gong, Jianhua; Wang, Freeman
1999-12-01
This paper addresses several approaches to implementing web-based, three-dimensional (3-D), geo-referenced visualization. The discussion focuses on the relationship between multi-dimensional data sets and applications, as well as the thick/thin client and heavy/light server structure. Two models of data sets are addressed in this paper. One is the use of traditional 3-D data format such as 3-D Studio Max, Open Inventor 2.0, Vis5D and OBJ. The other is modelled by a web-based language such as VRML. Also, traditional languages such as C and C++, as well as web-based programming tools such as Java, Java3D and ActiveX, can be used for developing applications. The strengths and weaknesses of each approach are elaborated. Four practical solutions for using VRML and Java, Java and Java3D, VRML and ActiveX and Java wrapper classes (Java and C/C++), to develop applications are presented for web-based, real-time interactive and explorative visualization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wickstrom, Gregory Lloyd; Gale, Jason Carl; Ma, Kwok Kee
The Sandia Secure Processor (SSP) is a new native Java processor that has been specifically designed for embedded applications. The SSP's design is a system composed of a core Java processor that directly executes Java bytecodes, on-chip intelligent IO modules, and a suite of software tools for simulation and compiling executable binary files. The SSP is unique in that it provides a way to control real-time IO modules for embedded applications. The system software for the SSP is a 'class loader' that takes Java .class files (created with your favorite Java compiler), links them together, and compiles a binary. The complete SSP system provides very powerful functionality with very light hardware requirements, with the potential to be used in a wide variety of small-system embedded applications. This paper gives a detailed description of the Sandia Secure Processor and its unique features.
JMS Proxy and C/C++ Client SDK
NASA Technical Reports Server (NTRS)
Wolgast, Paul; Pechkam, Paul
2007-01-01
JMS Proxy and C/C++ Client SDK (JMS signifies "Java messaging service" and "SDK" signifies "software development kit") is a software package for developing interfaces that enable legacy programs (here denoted "clients") written in the C and C++ languages to communicate with each other via a JMS broker. This package consists of two main components: the JMS proxy server component and the client C library SDK component. The JMS proxy server component implements a native Java process that receives and responds to requests from clients. This component can run on any computer that supports Java and a JMS client. The client C library SDK component is used to develop a JMS client program running in each affected C or C++ environment, without need for running a Java virtual machine in the affected computer. A C client program developed by use of this SDK has most of the quality-of-service characteristics of standard Java-based client programs, including the following: Durable subscriptions; Asynchronous message receipt; Such standard JMS message qualities as "TimeToLive," "Message Properties," and "DeliveryMode" (as the quoted terms are defined in previously published JMS documentation); and Automatic reconnection of a JMS proxy to a restarted JMS broker.
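As a point of reference, the standard Java-side counterpart of what the C client library exposes looks roughly like the hedged sketch below; it uses only the public javax.jms API, and the JNDI names and message content are assumptions, not part of the described package:

    // Sketch of a plain JMS producer showing the qualities the proxy preserves for C clients
    // (DeliveryMode, TimeToLive, message properties). JNDI names and payload are assumed.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.Destination;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class JmsProducerSketch {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory"); // assumed JNDI name
            Destination destination = (Destination) jndi.lookup("telemetry.topic");           // assumed JNDI name
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(destination);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT); // one of the qualities listed above
                producer.setTimeToLive(60000);                     // expire undelivered messages after 60 s
                TextMessage message = session.createTextMessage("housekeeping sample");
                message.setStringProperty("subsystem", "EPS");     // example Message Property
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }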
Thin client (web browser)-based collaboration for medical imaging and web-enabled data.
Le, Tuong Huu; Malhi, Nadeem
2002-01-01
Utilizing thin client software and open source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components, such as DHTML (dynamic hypertext markup language), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and Java Server Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators, the passengers, and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing for the client to remain thin and more accessible.
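A minimal sketch of the kind of server-side piece such a thin-client design relies on is given below; it uses the standard Java Servlet API, but the class name, JSON shape, and polling approach are illustrative assumptions rather than details of the system described:

    // Hedged sketch of a servlet that hands the latest shared markup state to polling browsers.
    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class MarkupStateServlet extends HttpServlet {
        // Latest markup for the shared session, updated by the "driver" client.
        private static volatile String latestMarkupJson = "{\"shapes\":[]}";

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("application/json");
            resp.getWriter().write(latestMarkupJson);   // "passengers" poll this endpoint
        }

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Driver posts a new markup state; a real system would validate and broadcast it.
            latestMarkupJson = new String(req.getInputStream().readAllBytes(), "UTF-8");
            resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
        }
    }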
Evaluation of the cognitive effects of travel technique in complex real and virtual environments.
Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F
2010-01-01
We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Foster, I.; Gawor, J.
In this paper we report on the features of the Java Commodity Grid Kit. The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus protocols, allowing the Java CoG Kit to communicate also with the C Globus reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise, and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure Java resource management system that enables one to run Globus jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Utah Virtual Lab: JAVA interactivity for teaching science and statistics on line.
Malloy, T E; Jensen, G C
2001-05-01
The Utah on-line Virtual Lab is a Java program run dynamically off a database. It is embedded in StatCenter (www.psych.utah.edu/learn/statsampler.html), an on-line collection of tools and text for teaching and learning statistics. Instructors author a statistical virtual reality that simulates theories and data in a specific research focus area by defining independent, predictor, and dependent variables and the relations among them. Students work in an on-line virtual environment to discover the principles of this simulated reality: they go to a library, read theoretical overviews and scientific puzzles, and then go to a lab, design a study, collect and analyze data, and write a report. Each student's design and data analysis decisions are computer-graded and recorded in a database; the written research report can be read by the instructor or by other students in peer groups simulating scientific conventions.
Real Time Monitor of Grid job executions
NASA Astrophysics Data System (ADS)
Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.
2010-04-01
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache Web Server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only the job state (i.e. Scheduled, Waiting, Running or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue - if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a Web Server. This decouples the RTM server database from the clients, removing the bottleneck problem caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java-based client, with live job data either being overlaid onto a two-dimensional map of the world or rendered in three dimensions over a globe map using OpenGL.
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen an impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a piece of virtual object is projected into the real world with which researchers could interact. There are several limitations to the purely VR or AR application when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) should be created off-line from a multitude of images using image processing techniques to generate 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real-time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information -i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique will blend the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real-time into the virtual environment. Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
PrismTech Data Distribution Service Java API Evaluation
NASA Technical Reports Server (NTRS)
Riggs, Cortney
2008-01-01
My internship duties with Launch Control Systems required me to start performance testing of an implementation of the Object Management Group's (OMG) Data Distribution Service (DDS) specification by PrismTech Limited, through the Java programming language application programming interface (API). DDS is a networking middleware for Real-Time Data Distribution. The performance testing involves latency, redundant publishers, extended duration, redundant failover, and read performance. Time constraints allowed only for a data throughput test. I have designed the testing applications to perform all performance tests when time allows. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate at which data can be sent across the network. Performance rates are better on Linux platforms than on AIX and Sun platforms. Compared with the previous C++ API, the performance evaluation also shows the language differences for the implementation: the Java API of the DDS has a lower throughput performance than the C++ API.
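The throughput side of such an evaluation can be sketched generically in Java as shown below; the SampleWriter interface is a stand-in for a DDS DataWriter and does not reproduce the PrismTech API:

    // Generic throughput-measurement harness; the writer below is a placeholder, not the DDS API.
    public class ThroughputHarness {
        interface SampleWriter { void write(byte[] payload); }

        static double measureMbps(SampleWriter writer, int payloadBytes, int samples) {
            byte[] payload = new byte[payloadBytes];
            long start = System.nanoTime();
            for (int i = 0; i < samples; i++) {
                writer.write(payload);
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            return (payloadBytes * 8.0 * samples) / (seconds * 1e6);  // megabits per second
        }

        public static void main(String[] args) {
            SampleWriter loopback = payload -> { /* replace with a real DataWriter call */ };
            System.out.printf("Loopback throughput: %.1f Mbit/s%n", measureMbps(loopback, 1024, 1_000_000));
        }
    }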
A low-cost mobile adaptive tracking system for chronic pulmonary patients in home environment.
Işik, Ali Hakan; Güler, Inan; Sener, Melahat Uzel
2013-01-01
The main objective of this study is to present a real-time mobile adaptive tracking system for patients diagnosed with diseases such as asthma or chronic obstructive pulmonary disease, together with the results of its application at home. The main role of the system is to support and track chronic pulmonary patients in real time who are comfortable in their home environment. It is not intended to replace the doctor, regular treatment, or diagnosis. In this study, the Java 2 Micro Edition-based system is integrated with portable spirometry, a smartphone, extensible markup language-based Web services, a Web server, and Web pages for visualizing pulmonary function test results. The Bluetooth(®) (Bluetooth SIG, Kirkland, WA) virtual serial port protocol is used to obtain the test results from the spirometer. General packet radio service, wireless local area network, or third-generation wireless networks are used to send the test results from a smartphone to the remote database. The system provides real-time classification of test results with the back-propagation artificial neural network algorithm on a mobile smartphone. It also generates appropriate short message service-based notifications and sends all data to the Web server. In this study, the test results of 486 patients, obtained from Atatürk Chest Diseases and Thoracic Surgery Training and Research Hospital in Ankara, Turkey, are used as the training and test set for the algorithm. The algorithm has 98.7% accuracy, 97.83% specificity, 97.63% sensitivity, and 0.946 correlation values. The results show that the system is cheap (900 Euros) and reliable. The developed real-time system provides improvement in classification accuracy and facilitates tracking of chronic pulmonary patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Laszewski, G.; Gawor, J.; Lane, P.
In this paper we report on the features of the Java Commodity Grid Kit (Java CoG Kit). The Java CoG Kit provides middleware for accessing Grid functionality from the Java framework. Java CoG Kit middleware is general enough to design a variety of advanced Grid applications with quite different user requirements. Access to the Grid is established via Globus Toolkit protocols, allowing the Java CoG Kit to also communicate with the services distributed as part of the C Globus Toolkit reference implementation. Thus, the Java CoG Kit provides Grid developers with the ability to utilize the Grid, as well as numerous additional libraries and frameworks developed by the Java community to enable network, Internet, enterprise and peer-to-peer computing. A variety of projects have successfully used the client libraries of the Java CoG Kit to access Grids driven by the C Globus Toolkit software. In this paper we also report on the efforts to develop server-side Java CoG Kit components. As part of this research we have implemented a prototype pure Java resource management system that enables one to run Grid jobs on platforms on which a Java virtual machine is supported, including Windows NT machines.
Dynamic Data-Driven Prognostics and Condition Monitoring of On-board Electronics
2012-08-27
…of functionality and accessibility; it is an open language unlike Java or Visual, meaning that it is also free. It is also one of the most popular… and C# are able to run without the use of a virtual machine like Java. 4.2.1.5 Implementation: for building an OSA-CBM system, the primer… documentation [7] recommends the following steps: 1. Choose a middleware technology (DCOM, CORBA, Web Services, Java RMI, etc.). 2. Transform OSA-CBM UML…
Java 3D Interactive Visualization for Astrophysics
NASA Astrophysics Data System (ADS)
Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.
2003-05-01
We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
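For readers unfamiliar with the API, a minimal Java 3D scene of the kind such tools are built on looks like the sketch below; it is a generic rotatable-cube starter, not the galaxy-collision model itself:

    // Minimal Java 3D scene: a mouse-rotatable cube in a SimpleUniverse (generic starter only).
    import com.sun.j3d.utils.universe.SimpleUniverse;
    import com.sun.j3d.utils.geometry.ColorCube;
    import com.sun.j3d.utils.behaviors.mouse.MouseRotate;
    import javax.media.j3d.BoundingSphere;
    import javax.media.j3d.BranchGroup;
    import javax.media.j3d.TransformGroup;

    public class SimpleScene {
        public static void main(String[] args) {
            SimpleUniverse universe = new SimpleUniverse();
            BranchGroup scene = new BranchGroup();
            TransformGroup spin = new TransformGroup();
            spin.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE); // let the mouse behavior rotate it
            spin.addChild(new ColorCube(0.3));
            MouseRotate rotator = new MouseRotate(spin);
            rotator.setSchedulingBounds(new BoundingSphere());
            scene.addChild(spin);
            scene.addChild(rotator);
            universe.getViewingPlatform().setNominalViewingTransform(); // back the camera away from the origin
            universe.addBranchGraph(scene);
        }
    }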
ERIC Educational Resources Information Center
Martinez, Guadalupe; Naranjo, Francisco L.; Perez, Angel L.; Suero, Maria Isabel; Pardo, Pedro J.
2011-01-01
This study compared the educational effects of computer simulations developed in a hyper-realistic virtual environment with the educational effects of either traditional schematic simulations or a traditional optics laboratory. The virtual environment was constructed on the basis of Java applets complemented with a photorealistic visual output.…
Avatars, Virtual Reality Technology, and the U.S. Military: Emerging Policy Issues
2008-04-09
called “Sentient Worldwide Simulation,” which will “mirror” real life and automatically follow real-world events in real time. Some virtual world… cities, with the final goal of creating a fully functioning virtual model of the entire world, which will be known as the Sentient Worldwide Simulation
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real time depth detection to exert virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene mixing into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems. PMID: 27930663
Eng, J
1997-01-01
Java is a programming language that runs on a "virtual machine" built into World Wide Web (WWW)-browsing programs on multiple hardware platforms. Web pages were developed with Java to enable Web-browsing programs to overlay transparent graphics and text on displayed images so that the user could control the display of labels and annotations on the images, a key feature not available with standard Web pages. This feature was extended to include the presentation of normal radiologic anatomy. Java programming was also used to make Web browsers compatible with the Digital Imaging and Communications in Medicine (DICOM) file format. By enhancing the functionality of Web pages, Java technology should provide greater incentive for using a Web-based approach in the development of radiology teaching material.
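The overlay idea generalizes beyond applets; a hedged Swing sketch of the same pattern (image drawn first, transparent labels painted on top so they can be toggled) is shown below, with the label text and coordinates purely illustrative:

    // Sketch of the overlay idea in plain Swing: the image is drawn, then labels are painted over it.
    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.image.BufferedImage;
    import javax.swing.JPanel;

    public class AnnotatedImagePanel extends JPanel {
        private final BufferedImage image;
        private boolean showLabels = true;

        public AnnotatedImagePanel(BufferedImage image) { this.image = image; }

        public void setShowLabels(boolean show) { showLabels = show; repaint(); }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.drawImage(image, 0, 0, null);          // the radiologic image itself
            if (showLabels) {                        // annotations drawn over, never burned into the image
                g.setColor(Color.YELLOW);
                g.drawString("Left ventricle", 120, 140);   // hypothetical label and position
                g.drawOval(100, 120, 60, 45);
            }
        }
    }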
NASA Technical Reports Server (NTRS)
Mehhtz, Peter
2005-01-01
JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a Swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.
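As an illustration of the kind of defect such a model checker is pointed at, the small Java target below contains a classic lock-ordering deadlock that only appears in some interleavings; the class name is made up and the JPF launch configuration needed to check it is not shown:

    // Target program with a potential lock-ordering deadlock, the kind of defect a model checker
    // can expose by exploring all thread interleavings rather than a single run.
    public class DiningPair {
        static final Object forkA = new Object();
        static final Object forkB = new Object();

        public static void main(String[] args) {
            Thread t1 = new Thread(() -> { synchronized (forkA) { synchronized (forkB) { } } });
            Thread t2 = new Thread(() -> { synchronized (forkB) { synchronized (forkA) { } } });
            t1.start();
            t2.start();
        }
    }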
Visualization of Vgi Data Through the New NASA Web World Wind Virtual Globe
NASA Astrophysics Data System (ADS)
Brovelli, M. A.; Kilsedar, C. E.; Zamboni, G.
2016-06-01
GeoWeb 2.0, laying the foundations of Volunteered Geographic Information (VGI) systems, has led to platforms where users can contribute to geographic knowledge that is open to access. Moreover, as a result of the advancements in 3D visualization, virtual globes able to visualize geographic data even in browsers have emerged. However, the integration of VGI systems and virtual globes has not been fully realized. The study presented aims to visualize volunteered data in 3D, considering also ease of use for the general public, using Free and Open Source Software (FOSS). The new Application Programming Interface (API) of NASA, Web World Wind, written in JavaScript and based on the Web Graphics Library (WebGL), is cross-platform and cross-browser, so that a virtual globe created using this API can be accessed through any WebGL-supported browser on different operating systems and devices, as a result not requiring any installation or configuration on the client side and making the collected data more usable. This is not the case with World Wind for Java, where installation and configuration of the Java Virtual Machine (JVM) is required. Furthermore, the data collected through various VGI platforms might be in different formats, stored in a traditional relational database or in a NoSQL database. The project developed aims to visualize and query data collected through the Open Data Kit (ODK) platform and a cross-platform application, where the data are stored in a relational PostgreSQL database and a NoSQL CouchDB database, respectively.
2004-09-01
…Rosetti, USN, U.S. Navy, Chesterton, IN; Erik Chaum, NUWC, Newport, RI; David Bellino, NPRI, Newport, RI; Dick Nadolink, NUWC, Newport, RI… found at http://www.parallelgraphics.com/products/cortona. G. JFreeChart: JFreeChart is an open source Java API created by David Gilbert and… http://www.xj3d.org/. Accessed 3 September 2004. Hunter, David, Kurt Cagle, and Chris Dix, eds. Beginning XML, Second Edition. Indianapolis, IN…
NASA Astrophysics Data System (ADS)
Gintautas, Vadas; Hubler, Alfred
2006-03-01
As worldwide computer resources increase in power and decrease in cost, real-time simulations of physical systems are becoming increasingly prevalent, from laboratory models to stock market projections and entire "virtual worlds" in computer games. Often, these systems are meticulously designed to match real-world systems as closely as possible. We study the limiting behavior of a virtual horizontally driven pendulum coupled to its real-world counterpart, where the interaction occurs on a time scale that is much shorter than the time scale of the dynamical system. We find that if the physical parameters of the virtual system match those of the real system within a certain tolerance, there is a qualitative change in the behavior of the two-pendulum system as the strength of the coupling is increased. Applications include a new method to measure the physical parameters of a real system and the use of resonance spectroscopy to refine a computer model. As virtual systems better approximate real ones, even very weak interactions may produce unexpected and dramatic behavior. The research is supported by the National Science Foundation Grant No. NSF PHY 01-40179, NSF DMS 03-25939 ITR, and NSF DGE 03-38215.
Virtual Proprioception for eccentric training.
LeMoyne, Robert; Mastroianni, Timothy
2017-07-01
Wireless inertial sensors enable quantified feedback, which can be applied to evaluate the efficacy of therapy and rehabilitation. In particular, eccentric training promotes a beneficial rehabilitation and strength training strategy. Virtual Proprioception for eccentric training applies real-time feedback from a wireless gyroscope platform enabled through a software application for a smartphone. Virtual Proprioception for eccentric training is applied to the eccentric phase of biceps brachii strength training and contrasted with a biceps brachii strength training scenario without feedback. During the operation of Virtual Proprioception for eccentric training, the intent is to not exceed a prescribed gyroscope signal threshold, based on the real-time presentation of the gyroscope signal, in order to promote the eccentric aspect of the strength training endeavor. The experimental trial data are transmitted wirelessly over the Internet as an email attachment for remote post-processing. A feature set is derived from the gyroscope signal for machine learning classification of the two scenarios of Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback. Considerable classification accuracy is achieved through the application of a multilayer perceptron neural network for distinguishing between the Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback.
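The threshold logic behind such feedback can be sketched as below; this is an illustration only, and the 1.5 rad/s ceiling is an assumed value rather than the study's prescribed parameter:

    // Compare each gyroscope sample's angular rate against a prescribed ceiling and flag the user
    // when the eccentric phase is performed too fast. Threshold value is an assumption.
    public class EccentricFeedback {
        private static final double MAX_ANGULAR_RATE = 1.5; // rad/s, assumed prescribed threshold

        /** Returns true when the current sample exceeds the prescribed rate. */
        public static boolean exceedsThreshold(double gyroX, double gyroY, double gyroZ) {
            double magnitude = Math.sqrt(gyroX * gyroX + gyroY * gyroY + gyroZ * gyroZ);
            return magnitude > MAX_ANGULAR_RATE;
        }

        public static void main(String[] args) {
            System.out.println(exceedsThreshold(0.4, 0.2, 0.1)); // within threshold -> false
            System.out.println(exceedsThreshold(1.2, 0.9, 0.3)); // too fast -> true
        }
    }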
Workload-Driven Design and Evaluation of Large-Scale Data-Centric Systems
2012-05-09
in the batch zone in and out of a low-power state, e.g., sending a "hibernate" command via ssh and using Wake-on-LAN or related technologies [85]. If… parameter values for experiments with stand-alone jobs. The mapred.child.java.opts parameter sets the maximum virtual memory of the Java child processes
Architecture, Design, and Development of an HTML/JavaScript Web-Based Group Support System.
ERIC Educational Resources Information Center
Romano, Nicholas C., Jr.; Nunamaker, Jay F., Jr.; Briggs, Robert O.; Vogel, Douglas R.
1998-01-01
Examines the need for virtual workspaces and describes the architecture, design, and development of GroupSystems for the World Wide Web (GSWeb), an HTML/JavaScript Web-based Group Support System (GSS). GSWeb, an application interface similar to a Graphical User Interface (GUI), is currently used by teams around the world and relies on user…
SimPackJ/S: a web-oriented toolkit for discrete event simulation
NASA Astrophysics Data System (ADS)
Park, Minho; Fishwick, Paul A.
2002-07-01
SimPackJ/S is the JavaScript and Java version of SimPack, which means SimPackJ/S is a collection of JavaScript and Java libraries and executable programs for computer simulations. The main purpose of creating SimPackJ/S is to allow existing SimPack users to expand their simulation areas and to provide future users with a freeware simulation toolkit for simulating and modeling systems in web environments. One of the goals of this paper is to introduce SimPackJ/S. The other goal is to propose translation rules for converting C to JavaScript and Java. Most parts demonstrate the translation rules with examples. In addition, we discuss a 3D dynamic system model and overview an approach to 3D dynamic systems using SimPackJ/S. We explain an interface between SimPackJ/S and the 3D language, the Virtual Reality Modeling Language (VRML). This paper documents how to translate C to JavaScript and Java and how to utilize SimPackJ/S within a 3D web environment.
A Security-façade Library for Virtual-observatory Software
NASA Astrophysics Data System (ADS)
Rixon, G.
2009-09-01
The security-façade library implements, for Java, IVOA's security standards. It supports the authentication mechanisms for SOAP and REST web-services, the sign-on mechanisms (with MyProxy, AstroGrid Accounts protocol or local credential-caches), the delegation protocol, and RFC3820-enabled HTTPS for Apache Tomcat. Using the façade, a developer who is not a security specialist can easily add access control to a virtual-observatory service and call secured services from an application. The library has been an internal part of AstroGrid software for some time and it is now offered for use by other developers.
Two web-based laboratories of the FisL@bs network: Hooke's and Snell's laws
NASA Astrophysics Data System (ADS)
de la Torre, L.; Sánchez, J.; Dormido, S.; Sánchez, J. P.; Yuste, M.; Carreras, C.
2011-03-01
FisL@bs is a network of remote and virtual laboratories for university physics education via the Internet that offers students the possibility of performing hands-on experiments in different fields of physics in two ways: simulation and real remote operation. This paper gives a detailed account of a novel way in which distance learning students of physics can gain practical experience autonomously. FisL@bs uses the same structure as AutomatL@bs, a network of virtual and remote laboratories for the learning/teaching of control engineering, which has been in operation for four years. Students can experiment with the laboratories offered using an Internet connection and a Java-compatible web browser. This paper, especially intended for university educators but easily comprehensible even for undergraduate students, explains how the portal works and the hardware and software tools used to create it. In addition, it also describes two physics experiments already available: spring elasticity and the laws of reflection and refraction.
ERIC Educational Resources Information Center
Liu, S.; Tang, J.; Deng, C.; Li, X.-F.; Gaudiot, J.-L.
2011-01-01
Java Virtual Machine (JVM) education has become essential in training embedded software engineers as well as virtual machine researchers and practitioners. However, due to the lack of suitable instructional tools, it is difficult for students to obtain any kind of hands-on experience and to attain any deep understanding of JVM design. To address…
An Online Virtual Laboratory of Electricity
ERIC Educational Resources Information Center
Gómez Tejedor, J. A.; Moltó Martínez, G.; Barros Vidaurre, C.
2008-01-01
In this article, we describe a Java-based virtual laboratory, accessible via the Internet by means of a Web browser. This remote laboratory enables the students to build both direct and alternating current circuits. The program includes a graphical user interface which resembles the connection board, and also the electrical components and tools…
A Java application for tissue section image analysis.
Kamalov, R; Guillaud, M; Haskins, D; Harrison, A; Kemp, R; Chiu, D; Follen, M; MacAulay, C
2005-02-01
The medical industry has taken advantage of Java and Java technologies over the past few years, in large part due to the language's platform-independence and object-oriented structure. As such, Java provides powerful and effective tools for developing tissue section analysis software. The background and execution of this development are discussed in this publication. Object-oriented structure allows for the creation of "Slide", "Unit", and "Cell" objects to simulate the corresponding real-world objects. Different functions may then be created to perform various tasks on these objects, thus facilitating the development of the software package as a whole. At the current time, substantial parts of the initially planned functionality have been implemented. Getafics 1.0 is fully operational and currently supports a variety of research projects; however, there are certain features of the software that currently introduce unnecessary complexity and inefficiency. In the future, we hope to include features that obviate these problems.
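A hedged sketch of how such a Slide / Unit / Cell object hierarchy can be laid out in Java is shown below; the class names come from the abstract, but the fields, measurements and helper method are illustrative assumptions rather than Getafics internals:

    // Illustrative Slide / Unit / Cell object structure for tissue section analysis.
    import java.util.ArrayList;
    import java.util.List;

    class Cell {
        double area;            // segmented cell area, arbitrary units (assumed measurement)
        double meanIntensity;   // example per-cell measurement (assumed)
        Cell(double area, double meanIntensity) { this.area = area; this.meanIntensity = meanIntensity; }
    }

    class Unit {
        final List<Cell> cells = new ArrayList<>();   // e.g. one tissue region on the slide
        double meanCellArea() {
            return cells.stream().mapToDouble(c -> c.area).average().orElse(0.0);
        }
    }

    class Slide {
        final List<Unit> units = new ArrayList<>();   // the whole tissue section image
    }

    public class SlideDemo {
        public static void main(String[] args) {
            Slide slide = new Slide();
            Unit unit = new Unit();
            unit.cells.add(new Cell(54.0, 0.42));
            unit.cells.add(new Cell(61.5, 0.38));
            slide.units.add(unit);
            System.out.printf("Mean cell area in first unit: %.1f%n", slide.units.get(0).meanCellArea());
        }
    }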
Quadrado, Virgínia Helena; Silva, Talita Dias da; Favero, Francis Meire; Tonks, James; Massetti, Thais; Monteiro, Carlos Bandeira de Mello
2017-11-10
To examine whether performance improvements in the virtual environment generalize to the natural environment, we studied 64 individuals, 32 of whom were individuals with DMD and 32 of whom were typically developing individuals. The groups practiced two coincidence timing tasks. In the more tangible button-press task, the individuals were required to 'intercept' a falling virtual object at the moment it reached the interception point by pressing a key on the computer. In the more abstract task, they were instructed to 'intercept' the virtual object by making a hand movement in a virtual environment using a webcam. For individuals with DMD, conducting a coincidence timing task in a virtual environment facilitated transfer to the real environment. However, we emphasize that a task practiced in a virtual environment should have a higher level of difficulty than a task practiced in a real environment. IMPLICATIONS FOR REHABILITATION: Virtual environments can be used to promote improved performance in "real-world" environments. Virtual environments offer the opportunity to create paradigms similar to "real-life" tasks; however, task complexity and difficulty levels can be manipulated, graded and enhanced to increase the likelihood of success in transfer of learning and performance. Individuals with DMD, in particular, showed immediate performance benefits after using virtual reality.
The ASSERT Virtual Machine Kernel: Support for Preservation of Temporal Properties
NASA Astrophysics Data System (ADS)
Zamorano, J.; de la Puente, J. A.; Pulido, J. A.; Urueña
2008-08-01
A new approach to building embedded real-time software has been developed in the ASSERT project. One of its key elements is the concept of a virtual machine preserving the non-functional properties of the system, and especially real-time properties, all the way from high-level design models down to executable code. The paper describes one instance of the virtual machine concept that provides support for the preservation of temporal properties both at the source code level —by accepting only "legal" entities, i.e. software components with statically analysable real-time behaviour— and at run time —by monitoring the temporal behaviour of the system. The virtual machine has been validated on several pilot projects carried out by aerospace companies in the framework of the ASSERT project.
Design and Development of a Virtual Facility Tour Using iPIX(TM) Technology
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2002-01-01
The capabilities of the iPIX virtual tour software, in conjunction with a web-based interface, are demonstrated to create a unique and valuable system that provides users with an efficient virtual means of touring facilities while acquiring the necessary technical content. A user's guide to the Mechanics and Durability Branch's virtual tour is presented. The guide provides the user with instructions on operating both scripted and unscripted tours, as well as a discussion of the tours of Buildings 1148, 1205 and 1256 at NASA Langley Research Center. Furthermore, an in-depth discussion is presented on how to develop a virtual tour using the iPIX software interface with conventional HTML and JavaScript. The main aspects discussed are the network and computing issues associated with using this capability. A discussion of how to take the iPIX pictures, manipulate them and bond them together to form hemispherical images is also presented. Linking of images with additional multimedia content is discussed. Finally, a method to integrate the iPIX software with conventional HTML and JavaScript to facilitate linking with multimedia is presented.
RTSJ Memory Areas and Their Affects on the Performance of a Flight-Like Attitude Control System
NASA Technical Reports Server (NTRS)
Niessner, Albert F.; Benowitz, Edward G.
2003-01-01
The two most important factors in improving performance in any software system, but especially a real-time, embedded system, are knowing which components are the low performers and knowing what can be done to improve their performance. The word performance with respect to a real-time, embedded system does not necessarily mean fast execution, which is the common definition when discussing non-real-time systems. It also includes meeting all of the specified execution deadlines and executing at the correct time without sacrificing non-real-time performance. Using a Java prototype of an existing control system used on Deep Space 1 [1], the effects of adding memory areas are measured and evaluated with respect to improving performance.
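For context, an RTSJ memory area of the kind evaluated here is used roughly as in the hedged sketch below; it requires an RTSJ implementation on the classpath, and the region size and the work done inside the scope are placeholders, not details of the Deep Space 1 prototype:

    // Work allocated inside a scoped memory region is reclaimed when the scope exits,
    // avoiding garbage-collector pauses in the control loop. Sizes are placeholders.
    import javax.realtime.LTMemory;
    import javax.realtime.RealtimeThread;

    public class ScopedControlStep {
        public static void main(String[] args) {
            final LTMemory scope = new LTMemory(64 * 1024, 64 * 1024); // initial and maximum size (assumed)
            RealtimeThread controller = new RealtimeThread() {
                @Override
                public void run() {
                    scope.enter(() -> {
                        double[] stateEstimate = new double[6];  // temporaries live only inside the scope
                        // ... attitude estimation and control law would run here ...
                    });
                }
            };
            controller.start();
        }
    }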
Distribution Locational Real-Time Pricing Based Smart Building Control and Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen
This paper proposes a real-virtual parallel computing scheme for smart building operations aimed at augmenting overall social welfare. The University of Denver's campus power grid and Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of the building operation, based on a social-science-based working productivity model, a numerical-experiment-based building energy consumption model and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual systems, enlarged social welfare, including monetary cost reduction and energy saving as well as working productivity improvements, can be achieved.
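The abstract does not give the cost model's equations. Purely as a hedged illustration of the kind of social-cost bookkeeping described (a monetary energy term at the real-time price plus a monetized productivity term), a toy calculation might look like the following; every coefficient and name is hypothetical.

```java
public class SocialCostSketch {
    /**
     * Toy social-cost estimate for one operating interval: energy cost at the
     * real-time price plus the monetized loss of worker productivity.
     * All coefficients are made up for illustration.
     */
    static double socialCost(double energyKwh, double pricePerKwh,
                             double productivityLossFraction, int occupants,
                             double hourlyValuePerOccupant, double hours) {
        double energyCost = energyKwh * pricePerKwh;
        double productivityCost =
                productivityLossFraction * occupants * hourlyValuePerOccupant * hours;
        return energyCost + productivityCost;
    }

    public static void main(String[] args) {
        // Example: 500 kWh at $0.12/kWh, 5% productivity loss for 200 occupants
        // valued at $30/hour over a one-hour interval.
        System.out.printf("social cost = $%.2f%n",
                socialCost(500, 0.12, 0.05, 200, 30.0, 1.0));
    }
}
```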
Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj
2008-03-01
The real-time requirement means that the simulation should be able to follow the actions of the user, who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires development of appropriate hardware and software to connect the medical instrumentarium with the computer and to operate the computer by the thus-connected instrumentarium and sophisticated multimedia interfaces.
Success Factors for Adoption of Real-Time Java
2010-04-01
Briefing slides on success factors for adopting real-time Java; only fragments of the text are recoverable. Recoverable points: organizations must plan in order to use Java effectively to achieve their objectives, and in order to make effective use of object-oriented programming practices. [Figure residue: execution-time plot, "Temps d'execution des algos", approximately 400-700 ms, "TacticalPicture".] The garbage collector is not guaranteed to find all garbage, nor to defragment the available free pool, and common operations may have surprising effects (e.g. entering a mutual exclusion region).
Adding Automatic Evaluation to Interactive Virtual Labs
ERIC Educational Resources Information Center
Farias, Gonzalo; Muñoz de la Peña, David; Gómez-Estern, Fabio; De la Torre, Luis; Sánchez, Carlos; Dormido, Sebastián
2016-01-01
Automatic evaluation is a challenging field that has been addressed by the academic community in order to reduce the assessment workload. In this work we present a new element for the authoring tool Easy Java Simulations (EJS). This element, which is named automatic evaluation element (AEE), provides automatic evaluation to virtual and remote…
Internet, Multimedia and Virtual Laboratories in a 'Third World' Environment.
ERIC Educational Resources Information Center
Monge-Najera, Julian Antonio; Rivas Rossi, Marta; Mendez-Estrada, Victor Hugo
2001-01-01
Describes the development of low-cost multimedia courses and materials for use on the Internet, as well as virtual laboratories, at the Universidad Estatal a Distancia (Costa Rica). Explains how simultaneous production of traditional printed materials and online courses, outsourcing, and the use of HTML and Java can reduce costs for developing…
2016-03-01
Report excerpt; only glossary fragments and part of one paragraph are recoverable. Glossary fragments: IT (information technology), JBOD (just a bunch of disks), JDBC (Java database connectivity), JPME (Joint Professional Military Education), JSO (Joint Service Officer), JVM (Java virtual machine), MPP (massively parallel processing), MPTE (Manpower, Personnel, Training, and Education), NAVMAC (Navy...). Body fragment: the connector works with an external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015); connectors optimize the data transfer by obtaining metadata
2015-09-28
Report excerpt; only fragments are recoverable. The performance of log-and-replay can degrade significantly for VMs configured with multiple virtual CPUs, since the shared memory communication... Whether based on checkpoint replication or log-and-replay, existing HA approaches use in-memory backups; the backup VM sits in the memory of a... Subject terms: high-availability virtual machines, live migration, memory and traffic overheads, application suspension, Java
Progress in using real-time GPS for seismic monitoring of the Cascadia megathrust
NASA Astrophysics Data System (ADS)
Szeliga, W. M.; Melbourne, T. I.; Santillan, V. M.; Scrivner, C.; Webb, F.
2014-12-01
We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman filter based, on-line stream editor. Positions are estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built streaming software. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a REST-ful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, vector displacement, and contoured peak ground displacement. We have also implemented continuous estimation of finite fault slip along the Cascadia megathrust using an NIF approach. The resulting continuous slip distributions are combined with pre-computed tsunami Green's functions to generate real-time tsunami run-up estimates for the entire Cascadia coastal margin. This Java-based front-end is available for download through the PANGA website. We currently analyze 80 PBO and PANGA stations along the Cascadia margin and are gearing up to process all 400+ real-time stations operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim. In addition, we are developing methodologies to combine our real-time solutions with those from Scripps Institute of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
Building Geospatial Web Services for Ecological Monitoring and Forecasting
NASA Astrophysics Data System (ADS)
Hiatt, S. H.; Hashimoto, H.; Melton, F. S.; Michaelis, A. R.; Milesi, C.; Nemani, R. R.; Wang, W.
2008-12-01
The Terrestrial Observation and Prediction System (TOPS) at NASA Ames Research Center is a modeling system that generates a suite of gridded data products in near real-time that are designed to enhance management decisions related to floods, droughts, forest fires, human health, as well as crop, range, and forest production. While these data products introduce great possibilities for assisting management decisions and informing further research, realization of their full potential is complicated by their sheer volume and by the need for an infrastructure for remotely browsing, visualizing, and analyzing the data. In order to address these difficulties we have built an OGC-compliant WMS and WCS server based on an open source software stack that provides standardized access to our archive of data. This server is built using the open source Java library GeoTools, which achieves efficient I/O and image rendering through Java Advanced Imaging. We developed spatio-temporal raster management capabilities using the PostGrid raster indexation engine. We provide visualization and browsing capabilities through a customized Ajax web interface derived from the kaMap project. This interface allows resource managers to quickly assess ecosystem conditions and identify significant trends and anomalies from within their web browser without the need to download source data or install special software. Our standardized web services also expose TOPS data to a range of potential clients, from web mapping applications to virtual globes and desktop GIS packages. However, support for managing the temporal dimension of our data is currently limited in existing software systems. Future work will attempt to overcome this shortcoming by building time-series visualization and analysis tools that can be integrated with existing geospatial software.
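To make the standardized access pattern concrete, a hedged client-side sketch follows: a rendered map is fetched from a WMS endpoint with an ordinary GetMap request using the Java 11 HTTP client. The endpoint URL, layer name and bounding box are placeholders, not the actual TOPS server.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class WmsGetMapSketch {
    public static void main(String[] args) throws Exception {
        // Standard WMS 1.1.1 GetMap parameters; endpoint and layer are hypothetical.
        String url = "https://example.org/wms"
                + "?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap"
                + "&LAYERS=tops:gpp"
                + "&SRS=EPSG:4326&BBOX=-125,32,-114,42"
                + "&WIDTH=512&HEIGHT=512&FORMAT=image/png";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<byte[]> response =
                client.send(request, HttpResponse.BodyHandlers.ofByteArray());

        // Save the rendered map so a browser or desktop GIS can display it.
        Files.write(Path.of("gpp.png"), response.body());
        System.out.println("HTTP " + response.statusCode() + ", "
                + response.body().length + " bytes");
    }
}
```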
Modular VO oriented Java EE service deployer
NASA Astrophysics Data System (ADS)
Molinaro, Marco; Cepparo, Francesco; De Marco, Marco; Knapic, Cristina; Apollo, Pietro; Smareglia, Riccardo
2014-07-01
The International Virtual Observatory Alliance (IVOA) has produced many standards and recommendations whose aim is to generate an architecture that starts from astrophysical resources, in a general sense, and ends up in deployed consumable services (which are themselves astrophysical resources). Focusing on the Data Access Layer (DAL) system architecture that these standards define, in recent years a web-based application has been developed and maintained at INAF-OATs IA2 (Italian National Institute for Astrophysics - Astronomical Observatory of Trieste, Italian center of Astronomical Archives) to deploy and manage multiple VO (Virtual Observatory) services in a uniform way: VO-Dance. However, a set of criticalities has arisen since the VO-Dance idea was conceived, and some major changes have taken place and are taking place at the IVOA DAL layer (and related standards); this urged IA2 to identify a new solution for its own service layer. Keeping the basic ideas from VO-Dance (simple service configuration, service instantiation at call time and modularity) while switching to different software technologies (e.g. dismissing Java Reflection in favour of an Enterprise Java Bean, EJB, based solution), the new solution has been sketched out and tested for feasibility. Here we present the results of this feasibility study. The main constraints for the new project come from various directions: a better homogenized solution arising from IVOA DAL standards, for example the new DALI (Data Access Layer Interface) specification that acts as a common interface system for previous and upcoming access protocols; the need for a modular system in which each component is based upon a single VO specification, allowing services to rely on common capabilities instead of homogenizing them inside service components directly; and the search for a scalable system that takes advantage of distributed systems. These constraints are addressed by the adopted solutions sketched hereafter. Developing the new system with Java Enterprise technologies lets it benefit from existing libraries to build up the single tokens implementing the IVOA standards. Each component can be built from a single standard, and each deployed service (i.e. instantiation of service components) can consume the other components' exposed methods and services without the need to homogenize them in dedicated libraries. Scalability can be achieved more easily by deploying components or sets of services in a distributed environment and using JNDI (Java Naming and Directory Interface) and RMI (Remote Method Invocation) technologies. Single service configuration will not differ significantly from the VO-Dance solution, given that the Java class instantiation that relied on Java Reflection is simply moved to Java EJB pooling (and not, e.g., embedded in bundles for subsequent deployment).
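As a hedged sketch of the EJB-based modularity described above (the component names and the cone-search method are invented for illustration and are not taken from the IA2 code base), one capability can be exposed as a stateless session bean and consumed by another class through a portable JNDI lookup:

```java
import javax.ejb.Stateless;
import javax.naming.InitialContext;
import javax.naming.NamingException;

/** One deployable component implementing a single DAL-style capability. */
@Stateless
public class ConeSearchService {
    /** Hypothetical query method: returns a VOTable document as a string. */
    public String search(double raDeg, double decDeg, double radiusDeg) {
        // ... query the archive and serialize the result as a VOTable ...
        return "<VOTABLE/>";
    }
}

/** Another component consuming the capability through a portable JNDI lookup. */
class PortalFacade {
    public String quickLook(double ra, double dec) throws NamingException {
        // Portable global JNDI name; module and bean names here are hypothetical.
        ConeSearchService coneSearch = (ConeSearchService) new InitialContext()
                .lookup("java:global/vo-services/ConeSearchService");
        return coneSearch.search(ra, dec, 0.1);
    }
}
```

Because stateless beans are pooled by the container and can also be exposed remotely, the same pattern extends to the distributed deployment via JNDI and RMI that the abstract mentions.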
Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor
NASA Technical Reports Server (NTRS)
Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn
2009-01-01
The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion Laboratory in the joint development of a Tropical Cyclone Integrated Data Exchange and Analysis System (TC IDEAS) which will serve as a web portal for access to tropical cyclone data, visualizations and model output.
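The abstract notes that individual flight legs are calculated on the fly from the planned waypoints. A minimal, hedged version of such a leg calculation (great-circle distance between two waypoints on an assumed spherical Earth; the coordinates in the example are made up) is sketched below.

```java
public class FlightLegSketch {
    private static final double EARTH_RADIUS_KM = 6371.0;

    /** Great-circle (haversine) distance between two waypoints in kilometres. */
    static double legLengthKm(double lat1Deg, double lon1Deg,
                              double lat2Deg, double lon2Deg) {
        double lat1 = Math.toRadians(lat1Deg);
        double lat2 = Math.toRadians(lat2Deg);
        double dLat = Math.toRadians(lat2Deg - lat1Deg);
        double dLon = Math.toRadians(lon2Deg - lon1Deg);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(lat1) * Math.cos(lat2)
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Example leg between two made-up waypoints off the West African coast.
        System.out.printf("leg length = %.1f km%n",
                legLengthKm(16.73, -22.95, 14.90, -23.51));
    }
}
```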
VESL: The Virtual Earth System Laboratory for Ice Sheet Modeling and Visualization
NASA Astrophysics Data System (ADS)
Cheng, D. L. C.; Larour, E. Y.; Quinn, J. D.; Halkides, D. J.
2017-12-01
We present the Virtual Earth System Laboratory (VESL), a scientific modeling and visualization tool delivered through an integrated web portal. This allows for the dissemination of data, simulation of physical processes, and promotion of climate literacy. The current iteration leverages NASA's Ice Sheet System Model (ISSM), a state-of-the-art polar ice sheet dynamics model developed at the Jet Propulsion Lab and UC Irvine. We utilize the Emscripten source-to-source compiler to convert the C/C++ ISSM engine core to JavaScript, and bundled pre/post-processing JS scripts to be compatible with the existing ISSM Python/Matlab API. Researchers using VESL will be able to effectively present their work for public dissemination with little-to-no additional post-processing. Moreover, the portal allows for real-time visualization and editing of models, cloud-based computational simulation, and downloads of relevant data. This allows for faster publication in peer-reviewed journals and adaptation of results for educational applications. Through application of this concept to multiple aspects of the Earth System, VESL is able to broaden data applications in the geosciences and beyond. At this stage, we still seek feedback from the greater scientific and public outreach communities regarding the ease of use and feature set of VESL. As we plan its expansion, we aim to achieve more rapid communication and presentation of scientific results.
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. For practical implementation we perform sentiment analysis on live Twitter feeds, scoring each individual tweet. To analyze sentiments we train our data model on SentiWordNet, a polarity-annotated WordNet sample from Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
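No listing is given in the paper; the described pipeline (distributing tweets and scoring each one against a word-polarity lexicon) can be sketched with Spark's Java API roughly as follows. The in-line map stands in for SentiWordNet, and the local master and sample tweets are for demonstration only.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class TweetSentimentSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("tweet-sentiment").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Tiny stand-in for a SentiWordNet-style polarity lexicon.
            Map<String, Double> lexicon = new HashMap<>();
            lexicon.put("good", 0.7);
            lexicon.put("great", 0.9);
            lexicon.put("bad", -0.6);
            Broadcast<Map<String, Double>> polarities = sc.broadcast(lexicon);

            // In the real system these would arrive from a live Twitter stream.
            JavaRDD<String> tweets = sc.parallelize(Arrays.asList(
                    "spark is great for big data",
                    "traffic was bad today"));

            // Score each tweet by summing the polarity of its known words.
            JavaRDD<Double> scores = tweets.map(text -> {
                double score = 0.0;
                for (String word : text.toLowerCase().split("\\s+")) {
                    score += polarities.value().getOrDefault(word, 0.0);
                }
                return score;
            });

            scores.collect().forEach(System.out::println);
        }
    }
}
```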
Virtual Boutique: a 3D modeling and content-based management approach to e-commerce
NASA Astrophysics Data System (ADS)
Paquet, Eric; El-Hakim, Sabry F.
2000-12-01
The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC. This scanner allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to manipulate and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, can run in mono and stereo mode, and has been optimized to allow high-quality rendering.
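As a hedged reminder of what "entirely written in Java 3D" means at the API level (this is a generic minimal scene using the classic javax.media.j3d and com.sun.j3d utility packages, not the Virtual Boutique source), a scene graph is assembled in a BranchGroup and attached to a SimpleUniverse:

```java
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Vector3d;

public class MinimalSceneSketch {
    public static void main(String[] args) {
        // Scene graph root; in the Virtual Boutique this would hold the
        // photogrammetric decor meshes and the scanned merchandise.
        BranchGroup scene = new BranchGroup();

        Transform3D shift = new Transform3D();
        shift.setTranslation(new Vector3d(0.0, 0.0, -2.0));
        TransformGroup placed = new TransformGroup(shift);
        placed.addChild(new ColorCube(0.3)); // placeholder geometry
        scene.addChild(placed);
        scene.compile();

        SimpleUniverse universe = new SimpleUniverse();
        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(scene);
    }
}
```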
NASA Technical Reports Server (NTRS)
Montgomery, Kevin; Bruyns, Cynthia D.
2002-01-01
We present schemes for real-time generalized interactions such as probing, piercing, cauterizing and ablating virtual tissues. These methods have been implemented in a robust, real-time (haptic rate) surgical simulation environment allowing us to model procedures including animal dissection, microsurgery, hysteroscopy, and cleft lip repair.
Efficiently Scheduling Multi-core Guest Virtual Machines on Multi-core Hosts in Network Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoginath, Srikanth B; Perumalla, Kalyan S
2011-01-01
Virtual machine (VM)-based simulation is a method used by network simulators to incorporate realistic application behaviors by executing actual VMs as high-fidelity surrogates for simulated end-hosts. A critical requirement in such a method is the simulation time-ordered scheduling and execution of the VMs. Prior approaches such as time dilation are less efficient due to the high degree of multiplexing possible when multiple multi-core VMs are simulated on multi-core host systems. We present a new simulation time-ordered scheduler to efficiently schedule multi-core VMs on multi-core real hosts, with a virtual clock realized on each virtual core. The distinguishing features of our approach are: (1) customizable granularity of the VM scheduling time unit on the simulation time axis, (2) ability to take arbitrary leaps in virtual time by VMs to maximize the utilization of host (real) cores when guest virtual cores idle, and (3) empirically determinable optimality in the tradeoff between total execution (real) time and time-ordering accuracy levels. Experiments show that it is possible to get nearly perfect time-ordered execution, with a slight cost in total run time, relative to optimized non-simulation VM schedulers. Interestingly, with our time-ordered scheduler, it is also possible to reduce the time-ordering error from over 50% with a non-simulation scheduler to less than 1% with our scheduler, with almost the same run time efficiency as that of the highly efficient non-simulation VM schedulers.
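A hedged, much-simplified sketch of the central idea (not the authors' hypervisor scheduler): each guest virtual core carries its own virtual clock, and the scheduler always dispatches the core that is furthest behind in virtual time for one quantum, which is the invariant that keeps execution simulation time-ordered.

```java
import java.util.PriorityQueue;

public class TimeOrderedSchedulerSketch {
    /** One guest virtual core with its own virtual clock (microseconds). */
    static final class VirtualCore {
        final String name;
        long virtualClockUs = 0;
        VirtualCore(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // Always pick the core that is furthest behind in virtual time.
        PriorityQueue<VirtualCore> ready =
                new PriorityQueue<>((a, b) -> Long.compare(a.virtualClockUs, b.virtualClockUs));
        ready.add(new VirtualCore("vm0-core0"));
        ready.add(new VirtualCore("vm0-core1"));
        ready.add(new VirtualCore("vm1-core0"));

        final long quantumUs = 100; // customizable scheduling granularity
        for (int step = 0; step < 10; step++) {
            VirtualCore next = ready.poll();
            // Run the guest core for one quantum (simulated here by advancing its clock).
            // An idling core could instead take an arbitrary leap in virtual time.
            next.virtualClockUs += quantumUs;
            System.out.printf("ran %s up to t = %d us%n", next.name, next.virtualClockUs);
            ready.add(next);
        }
    }
}
```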
Ma, Hui-Ing; Hwang, Wen-Juh; Fang, Jing-Jing; Kuo, Jui-Kun; Wang, Ching-Yi; Leong, Iat-Fai; Wang, Tsui-Ying
2011-10-01
To investigate whether practising reaching for virtual moving targets would improve motor performance in people with Parkinson's disease. Randomized pretest-posttest control group design. A virtual reality laboratory in a university setting. Thirty-three adults with Parkinson's disease. The virtual reality training required 60 trials of reaching for fast-moving virtual balls with the dominant hand. The control group had 60 practice trials turning pegs with their non-dominant hand. Pretest and posttest required reaching with the dominant hand to grasp real stationary balls and balls moving at different speeds down a ramp. Success rates and kinematic data (movement time, peak velocity and percentage of movement time for acceleration phase) from pretest and posttest were recorded to determine the immediate transfer effects. Compared with the control group, the virtual reality training group became faster (F = 9.08, P = 0.005) and more forceful (F = 9.36, P = 0.005) when reaching for real stationary balls. However, there was no significant difference in success rate or movement kinematics between the two groups when reaching for real moving balls. A short virtual reality training programme improved the movement speed of discrete aiming tasks when participants reached for real stationary objects. However, the transfer effect was minimal when reaching for real moving objects.
Zhou, Xiangmin; Zhang, Nan; Sha, Desong; Shen, Yunhe; Tamma, Kumar K; Sweet, Robert
2009-01-01
The inability to render realistic soft-tissue behavior in real time has remained a barrier to the face and content validity of many virtual reality surgical training systems. Biophysically based models are suitable not only for training purposes but also for patient-specific clinical applications, physiological modeling and surgical planning. Considering the existing approaches for modeling soft tissue for virtual reality surgical simulation, the computer graphics-based approach lacks predictive capability, the mass-spring model (MSM) based approach lacks biophysically realistic soft-tissue dynamic behavior, and the finite element method (FEM) approaches fail to meet the real-time requirement. The present development stems from the first law of thermodynamics: for a space-discrete dynamic system, it directly formulates the space-discrete but time-continuous governing equation with an embedded material constitutive relation, resulting in a discrete mechanics framework that possesses a unique balance between computational effort and physically realistic soft-tissue dynamic behavior. We describe the development of the discrete mechanics framework with focused attention towards a virtual laparoscopic nephrectomy application.
ERIC Educational Resources Information Center
Wee, Loo Kang; Ning, Hwee Tiang
2014-01-01
This paper presents the customization of Easy Java Simulation models, used with actual laboratory instruments, to create active experiential learning for measurements. The laboratory instruments are the vernier caliper and the micrometer. Three computer model design ideas that complement real equipment are discussed. These ideas involve (1) a…
Progress on the CWU READI Analysis Center
NASA Astrophysics Data System (ADS)
Melbourne, T. I.; Szeliga, W. M.; Santillan, V. M.; Scrivner, C.
2015-12-01
Real-time GPS position streams are desirable for a variety of seismic monitoring and hazard mitigation applications. We report on progress in our development of a comprehensive real-time GPS-based seismic monitoring system for the Cascadia subduction zone. This system is based on 1 Hz point position estimates computed in the ITRF08 reference frame. Convergence from phase and range observables to point position estimates is accelerated using a Kalman filter based, on-line stream editor that produces independent estimations of carrier phase integer biases and other parameters. Positions are then estimated using a short-arc approach and algorithms from JPL's GIPSY-OASIS software with satellite clock and orbit products from the International GNSS Service (IGS). The resulting positions show typical RMS scatter of 2.5 cm in the horizontal and 5 cm in the vertical with latencies below 2 seconds. To facilitate the use of these point position streams for applications such as seismic monitoring, we broadcast real-time positions and covariances using custom-built aggregation-distribution software based on RabbitMQ messaging platform. This software is capable of buffering 24-hour streams for hundreds of stations and providing them through a REST-ful web interface. To demonstrate the power of this approach, we have developed a Java-based front-end that provides a real-time visual display of time-series, displacement vector fields, and map-view, contoured, peak ground displacement. This Java-based front-end is available for download through the PANGA website. We are currently analyzing 80 PBO and PANGA stations along the Cascadia margin and gearing up to process all 400+ real-time stations that are operating in the Pacific Northwest, many of which are currently telemetered in real-time to CWU. These will serve as milestones towards our over-arching goal of extending our processing to include all of the available real-time streams from the Pacific rim. In addition, we have developed a Kalman filter to combine CWU real-time PPP solutions with those from Scripps Institute of Oceanography's PPP-AR real-time solutions as well as real-time solutions from the USGS. These combined products should improve the robustness and reliability of real-time point-position streams in the near future.
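The combination filter is not spelled out in the abstract. As a hedged illustration of the underlying idea, two independent estimates of the same station coordinate can be merged by inverse-variance weighting, the scalar single-epoch special case of a Kalman measurement update; the numbers below are invented.

```java
public class SolutionCombinationSketch {
    /**
     * Inverse-variance weighted merge of two independent estimates of the
     * same quantity; a one-dimensional stand-in for a full Kalman update.
     */
    static double[] combine(double x1, double var1, double x2, double var2) {
        double w1 = 1.0 / var1;
        double w2 = 1.0 / var2;
        double combined = (w1 * x1 + w2 * x2) / (w1 + w2);
        double combinedVar = 1.0 / (w1 + w2);
        return new double[] {combined, combinedVar};
    }

    public static void main(String[] args) {
        // Example: east displacement of one station from two analysis centers,
        // 1.2 cm +/- 2.5 cm and 0.8 cm +/- 3.0 cm (made-up numbers).
        double[] merged = combine(1.2, 2.5 * 2.5, 0.8, 3.0 * 3.0);
        System.out.printf("combined = %.2f cm, sigma = %.2f cm%n",
                merged[0], Math.sqrt(merged[1]));
    }
}
```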
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
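A hedged sketch of the mapping step described above, in which the tracked haptic device position is translated into the space of the reference CT image by looking up the motion model's displacement field; the nearest-voxel lookup and the all-zero example field are simplifications (the paper's models would supply interpolated, phase-dependent displacements).

```java
public class DeviceToReferenceSketch {
    // Displacement field on a voxel grid: [z][y][x][component], in millimetres.
    // In the real system this would come from the respiratory motion model
    // evaluated at the current breathing phase.
    final double[][][][] displacement;
    final double voxelSizeMm;

    DeviceToReferenceSketch(double[][][][] displacement, double voxelSizeMm) {
        this.displacement = displacement;
        this.voxelSizeMm = voxelSizeMm;
    }

    /** Map a device position in the animated (warped) space back to reference space. */
    double[] toReferenceSpace(double[] devicePosMm) {
        // Nearest-voxel lookup; trilinear interpolation would be used in practice.
        int x = clamp((int) Math.round(devicePosMm[0] / voxelSizeMm), displacement[0][0].length - 1);
        int y = clamp((int) Math.round(devicePosMm[1] / voxelSizeMm), displacement[0].length - 1);
        int z = clamp((int) Math.round(devicePosMm[2] / voxelSizeMm), displacement.length - 1);
        double[] d = displacement[z][y][x];
        // Subtract the displacement that warped the reference image to the current phase.
        return new double[] {devicePosMm[0] - d[0], devicePosMm[1] - d[1], devicePosMm[2] - d[2]};
    }

    private static int clamp(int v, int max) {
        return Math.max(0, Math.min(v, max));
    }

    public static void main(String[] args) {
        double[][][][] zeroField = new double[4][4][4][3]; // all-zero displacements
        DeviceToReferenceSketch map = new DeviceToReferenceSketch(zeroField, 1.0);
        double[] ref = map.toReferenceSpace(new double[] {1.2, 2.0, 3.1});
        System.out.printf("reference-space position: %.1f %.1f %.1f%n", ref[0], ref[1], ref[2]);
    }
}
```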
Relaunch of the Interactive Plasma Physics Educational Experience (IPPEX)
NASA Astrophysics Data System (ADS)
Dominguez, A.; Rusaitis, L.; Zwicker, A.; Stotler, D. P.
2015-11-01
In the late 1990s, PPPL's Science Education Department developed an innovative online site called the Interactive Plasma Physics Educational Experience (IPPEX). It featured (among other modules) two Java-based applications that simulated tokamak physics: a steady-state tokamak (SST) and a time-dependent tokamak (TDT). The physics underlying the SST and the TDT is based on the ASPECT code, a global power balance code developed to evaluate the performance of fusion reactor designs. We have relaunched the IPPEX site with updated modules and functionality: the site itself is now dynamic on all platforms; the graphic design of the site has been modified to current standards; the virtual tokamak programming has been redone in JavaScript, taking advantage of the speed and compactness of the code; and the GUI of the tokamak has been completely redesigned, including more intuitive representations of changes in the plasma, e.g., particles moving along magnetic field lines. The use of GPU-accelerated computation provides accurate and smooth visual representations of the plasma. We will present the current version of IPPEX as well as near-term plans for incorporating real-time NSTX-U data into the simulation.
Popova, A Yu; Kuzkin, B P; Demina, Yu V; Dubyansky, V M; Kulichenko, A N; Maletskaya, O V; Shayakhmetov, O Kh; Semenko, O V; Nazarenko, Yu V; Agapitov, D S; Mezentsev, V M; Kharchenko, T V; Efremenko, D V; Oroby, V G; Klindukhov, V P; Grechanaya, T V; Nikolaevich, P N; Tesheva, S Ch; Rafeenko, G K
2015-01-01
To improve sanitary and epidemiological surveillance at the Olympic Games, a GIS system was developed for monitoring objects and situations in the Sochi region. The system is based on the ArcGIS software package, version 10.2 for Server, with web Java objects, the Apache web server, and software developed in Java. During its operation the following tasks are solved: stratification of the Olympic Games region by individual and aggregate epidemiological risk of OCI of various etiologies, ranking of epidemiologically important facilities by their sanitary and hygienic conditions, and monitoring of infectious diseases (in real time, according to the preliminary diagnosis). GIS monitoring has shown its effectiveness: information received from various sources was consolidated on one portal, was available in real time to all the specialists involved in ensuring epidemiological well-being, and was used in their work during the Olympic Games in Sochi.
Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion
Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo
2017-01-01
In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145
Jia, Shiyu; Zhang, Weizhong; Yu, Xiaokang; Pan, Zhenkuan
2015-09-01
Surgical simulators need to simulate interactive cutting of deformable objects in real time. The goal of this work was to design an interactive cutting algorithm that eliminates traditional cutting state classification and can work simultaneously with real-time GPU-accelerated deformation without affecting its numerical stability. A modified virtual node method for cutting is proposed. The deformable object is modeled as a real tetrahedral mesh embedded in a virtual tetrahedral mesh; the former is used for graphics rendering and collision, while the latter is used for deformation. The cutting algorithm first subdivides real tetrahedrons to eliminate all face and edge intersections, then splits faces, edges and vertices along the cutting tool trajectory to form cut surfaces. Next, virtual tetrahedrons containing more than one connected real tetrahedral fragment are duplicated, and connectivity between virtual tetrahedrons is updated. Finally, the embedding relationship between the real and virtual tetrahedral meshes is updated. The co-rotational linear finite element method is used for deformation. Cutting and collision are processed by the CPU, while deformation is carried out by the GPU using OpenCL. Efficiency of the GPU-accelerated deformation algorithm was tested using block models with varying numbers of tetrahedrons. Effectiveness of our cutting algorithm under multiple cuts and self-intersecting cuts was tested using a block model and a cylinder model. Cutting of a more complex liver model was performed, and detailed performance characteristics of cutting, deformation and collision were measured and analyzed. Our cutting algorithm can produce continuous cut surfaces when the traditional minimal-element-creation algorithm fails. Our GPU-accelerated deformation algorithm remains stable with a constant time step under multiple arbitrary cuts and works on both NVIDIA and AMD GPUs. The GPU-CPU speed ratio can be as high as 10 for models with 80,000 tetrahedrons. Forty to sixty percent real-time performance and a 100-200 Hz simulation rate are achieved for the liver model with 3,101 tetrahedrons. The major bottlenecks for simulation efficiency are cutting, collision processing and CPU-GPU data transfer. Future work needs to improve on these areas.
Virtual Reality Enhanced Instructional Learning
ERIC Educational Resources Information Center
Nachimuthu, K.; Vijayakumari, G.
2009-01-01
Virtual Reality (VR) is a creation of virtual 3D world in which one can feel and sense the world as if it is real. It is allowing engineers to design machines and Educationists to design AV [audiovisual] equipment in real time but in 3-dimensional hologram as if the actual material is being made and worked upon. VR allows a least-cost (energy…
Virtual interactive presence and augmented reality (VIPAR) for remote surgical assistance.
Shenai, Mahesh B; Dillavou, Marcus; Shum, Corey; Ross, Douglas; Tubbs, Richard S; Shih, Alan; Guthrie, Barton L
2011-03-01
Surgery is a highly technical field that combines continuous decision-making with the coordination of spatiovisual tasks. We designed a virtual interactive presence and augmented reality (VIPAR) platform that allows a remote surgeon to deliver real-time virtual assistance to a local surgeon, over a standard Internet connection. The VIPAR system consisted of a "local" and a "remote" station, each situated over a surgical field and a blue screen, respectively. Each station was equipped with a digital viewpiece, composed of 2 cameras for stereoscopic capture, and a high-definition viewer displaying a virtual field. The virtual field was created by digitally compositing selected elements within the remote field into the local field. The viewpieces were controlled by workstations mutually connected by the Internet, allowing virtual remote interaction in real time. Digital renderings derived from volumetric MRI were added to the virtual field to augment the surgeon's reality. For demonstration, a fixed-formalin cadaver head and neck were obtained, and a carotid endarterectomy (CEA) and pterional craniotomy were performed under the VIPAR system. The VIPAR system allowed for real-time, virtual interaction between a local (resident) and remote (attending) surgeon. In both carotid and pterional dissections, major anatomic structures were visualized and identified. Virtual interaction permitted remote instruction for the local surgeon, and MRI augmentation provided spatial guidance to both surgeons. Camera resolution, color contrast, time lag, and depth perception were identified as technical issues requiring further optimization. Virtual interactive presence and augmented reality provide a novel platform for remote surgical assistance, with multiple applications in surgical training and remote expert assistance.
Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach
Tian, Yuan; Guan, Tao; Wang, Cheng
2010-01-01
To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278
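The final compositing step of the occlusion-handling stage can be sketched as follows; this is a hedged per-pixel version in which, wherever the tracked object's mask is set, the camera pixel is redrawn over the augmented frame so that the real object correctly occludes the virtual one. The tiny images created in main are placeholders.

```java
import java.awt.image.BufferedImage;

public class OcclusionCompositeSketch {
    /**
     * Redraw the pixels of the tracked real object on top of the augmented
     * frame, so the virtual object appears behind it.
     *
     * @param cameraFrame    the unprocessed camera image
     * @param augmentedFrame the camera image with the virtual object rendered in
     * @param objectMask     true where the tracked occluding object is present
     */
    static BufferedImage composite(BufferedImage cameraFrame,
                                   BufferedImage augmentedFrame,
                                   boolean[][] objectMask) {
        int w = cameraFrame.getWidth();
        int h = cameraFrame.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = objectMask[y][x] ? cameraFrame.getRGB(x, y)
                                           : augmentedFrame.getRGB(x, y);
                out.setRGB(x, y, rgb);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int w = 4, h = 4;
        BufferedImage cam = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        BufferedImage aug = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        boolean[][] mask = new boolean[h][w];
        mask[1][1] = true; // pretend the tracked object covers one pixel
        BufferedImage result = composite(cam, aug, mask);
        System.out.println("composited " + result.getWidth() + "x" + result.getHeight() + " frame");
    }
}
```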
Massetti, Thais; Fávero, Francis Meire; Menezes, Lilian Del Ciello de; Alvarez, Mayra Priscila Boscolo; Crocetta, Tânia Brusque; Guarnieri, Regiani; Nunes, Fátima L S; Monteiro, Carlos Bandeira de Mello; Silva, Talita Dias da
2018-04-01
To evaluate whether people with Duchenne muscular dystrophy (DMD) practicing a task in a virtual environment could improve performance given a similar task in a real environment, as well as distinguishing whether there is transference between performing the practice in virtual environment and then a real environment and vice versa. Twenty-two people with DMD were evaluated and divided into two groups. The goal was to reach out and touch a red cube. Group A began with the real task and had to touch a real object, and Group B began with the virtual task and had to reach a virtual object using the Kinect system. ANOVA showed that all participants decreased the movement time from the first (M = 973 ms) to the last block of acquisition (M = 783 ms) in both virtual and real tasks and motor learning could be inferred by the short-term retention and transfer task (with increasing distance of the target). However, the evaluation of task performance demonstrated that the virtual task provided an inferior performance when compared to the real task in all phases of the study, and there was no effect for sequence. Both virtual and real tasks promoted improvement of performance in the acquisition phase, short-term retention, and transfer. However, there was no transference of learning between environments. In conclusion, it is recommended that the use of virtual environments for individuals with DMD needs to be considered carefully.
Collaborative Resource Allocation
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Wax, Allan; Lam, Raymond; Baldwin, John; Borden, Chester
2007-01-01
Collaborative Resource Allocation Networking Environment (CRANE) Version 0.5 is a prototype created to prove the newest concept of using a distributed environment to schedule Deep Space Network (DSN) antenna times in a collaborative fashion. This program is for all space-flight and terrestrial science project users and DSN schedulers to perform scheduling activities and conflict resolution, both synchronously and asynchronously. Project schedulers can, for the first time, participate directly in scheduling their tracking times into the official DSN schedule, and negotiate directly with other projects in an integrated scheduling system. A master schedule covers long-range, mid-range, near-real-time, and real-time scheduling time frames all in one, rather than the current method of separate functions that are supported by different processes and tools. CRANE also provides private workspaces (both dynamic and static), data sharing, scenario management, user control, rapid messaging (based on Java Message Service), data/time synchronization, workflow management, notification (including emails), conflict checking, and a linkage to a schedule generation engine. The data structure with corresponding database design combines object trees with multiple associated mortal instances and relational database to provide unprecedented traceability and simplify the existing DSN XML schedule representation. These technologies are used to provide traceability, schedule negotiation, conflict resolution, and load forecasting from real-time operations to long-range loading analysis up to 20 years in the future. CRANE includes a database, a stored procedure layer, an agent-based middle tier, a Web service wrapper, a Windows Integrated Analysis Environment (IAE), a Java application, and a Web page interface.
Bowman, Ellen Lambert; Liu, Lei
2017-01-01
Virtual reality has great potential in training road safety skills to individuals with low vision but the feasibility of such training has not been demonstrated. We tested the hypotheses that low vision individuals could learn useful skills in virtual streets and could apply them to improve real street safety. Twelve participants, whose vision was too poor to use the pedestrian signals were taught by a certified orientation and mobility specialist to determine the safest time to cross the street using the visual and auditory signals made by the start of previously stopped cars at a traffic-light controlled street intersection. Four participants were trained in real streets and eight in virtual streets presented on 3 projection screens. The crossing timing of all participants was evaluated in real streets before and after training. The participants were instructed to say "GO" at the time when they felt the safest to cross the street. A safety score was derived to quantify the GO calls based on its occurrence in the pedestrian phase (when the pedestrian sign did not show DON'T WALK). Before training, > 50% of the GO calls from all participants fell in the DON'T WALK phase of the traffic cycle and thus were totally unsafe. 20% of the GO calls fell in the latter half of the pedestrian phase. These calls were unsafe because one initiated crossing this late might not have sufficient time to walk across the street. After training, 90% of the GO calls fell in the early half of the pedestrian phase. These calls were safer because one initiated crossing in the pedestrian phase and had at least half of the pedestrian phase for walking across. Similar safety changes occurred in both virtual street and real street trained participants. An ANOVA showed a significant increase of the safety scores after training and there was no difference in this safety improvement between the virtual street and real street trained participants. This study demonstrated that virtual reality-based orientation and mobility training could be as efficient as real street training in improving street safety in individuals with severely impaired vision.
Monitoring and Acquisition Real-time System (MARS)
NASA Technical Reports Server (NTRS)
Holland, Corbin
2013-01-01
MARS is a graphical user interface (GUI) written in MATLAB and Java, allowing the user to configure and control the Scalable Parallel Architecture for Real-Time Acquisition and Analysis (SPARTAA) data acquisition system. SPARTAA not only acquires data, but also allows for complex algorithms to be applied to the acquired data in real time. The MARS client allows the user to set up and configure all settings regarding the data channels attached to the system, as well as have complete control over starting and stopping data acquisition. It provides a unique "Test" programming environment, allowing the user to create tests consisting of a series of alarms, each of which contains any number of data channels. Each alarm is configured with a particular algorithm, determining the type of processing that will be applied on each data channel and tested against a defined threshold. Tests can be uploaded to SPARTAA, thereby teaching it how to process the data. The uniqueness of MARS is in its capability to be adaptable easily to many test configurations. MARS sends and receives protocols via TCP/IP, which allows for quick integration into almost any test environment. The use of MATLAB and Java as the programming languages allows for developers to integrate the software across multiple operating platforms.
Kozlov, Michail D; Johansen, Mark K
2010-12-01
The purpose of this research was to illustrate the broad usefulness of simple video-game-based virtual environments (VEs) for psychological research on real-world behavior. To this end, this research explored several high-level social phenomena in a simple, inexpensive computer-game environment: the reduced likelihood of helping under time pressure and the bystander effect, which is reduced helping in the presence of bystanders. In the first experiment, participants had to find the exit in a virtual labyrinth under either high or low time pressure. They encountered rooms with and without virtual bystanders, and in each room, a virtual person requested assistance. Participants helped significantly less frequently under time pressure but the presence/absence of a small number of bystanders did not significantly moderate helping. The second experiment increased the number of virtual bystanders, and participants were instructed to imagine that these were real people. Participants helped significantly less in rooms with large numbers of bystanders compared to rooms with no bystanders, thus demonstrating a bystander effect. These results indicate that even sophisticated high-level social behaviors can be observed and experimentally manipulated in simple VEs, thus implying the broad usefulness of this paradigm in psychological research as a good compromise between experimental control and ecological validity.
LVC interaction within a mixed-reality training system
NASA Astrophysics Data System (ADS)
Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio
2012-03-01
The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainee interaction as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed reality environment. This system was developed and tested in an immersive, reconfigurable, and mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and developed game engines. Evaluation involving military-trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real-time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real-time across many distributed systems.
Wireless and wearable EEG system for evaluating driver vigilance.
Lin, Chin-Teng; Chuang, Chun-Hsiang; Huang, Chih-Sheng; Tsai, Shu-Fang; Lu, Shao-Wei; Chen, Yen-Hsuan; Ko, Li-Wei
2014-04-01
Brain activity associated with attention sustained on the task of safe driving has received considerable attention recently in many neurophysiological studies. Those investigations have also accurately estimated shifts in drivers' levels of arousal, fatigue, and vigilance, as evidenced by variations in their task performance, by evaluating electroencephalographic (EEG) changes. However, monitoring the neurophysiological activities of automobile drivers poses a major measurement challenge when using a laboratory-oriented biosensor technology. This work presents a novel dry EEG sensor based mobile wireless EEG system (referred to herein as Mindo) to monitor in real time a driver's vigilance status in order to link the fluctuation of driving performance with changes in brain activities. The proposed Mindo system incorporates the use of a wireless and wearable EEG device to record EEG signals from hairy regions of the driver conveniently. Additionally, the proposed system can process EEG recordings and translate them into the vigilance level. The study compares the system performance between different regression models. Moreover, the proposed system is implemented using JAVA programming language as a mobile application for online analysis. A case study involving 15 study participants assigned a 90 min sustained-attention driving task in an immersive virtual driving environment demonstrates the reliability of the proposed system. Consistent with previous studies, power spectral analysis results confirm that the EEG activities correlate well with the variations in vigilance. Furthermore, the proposed system demonstrated the feasibility of predicting the driver's vigilance in real time.
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality heavily builds on latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man machine interfaces for complex applications, not only the Virtual Reality part has to be considered but also the capabilities of the underlying robot and automation controller are of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man machine interfaces. The architecture does not just provide a well suited framework for the real-time control of a multi robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's job to command and supervise a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real-time helps to make the realized automation system very responsive to real world changes. In this paper, the architecture will be described comprehensively, its main building blocks will be discussed and one realization that is built based on an open source real-time operating system will be presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real world applications will be explained. Furthermore its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is only one example which will be described.
Functional performance comparison between real and virtual tasks in older adults
Bezerra, Ítalla Maria Pinheiro; Crocetta, Tânia Brusque; Massetti, Thais; da Silva, Talita Dias; Guarnieri, Regiani; Meira, Cassio de Miranda; Arab, Claudia; de Abreu, Luiz Carlos; de Araujo, Luciano Vieira; Monteiro, Carlos Bandeira de Mello
2018-01-01
Introduction: Ageing is usually accompanied by deterioration of physical abilities, such as muscular strength, sensory sensitivity, and functional capacity, making chronic diseases and the well-being of older adults new challenges for global public health. Objective: The purpose of this study was to evaluate whether a task practiced in a virtual environment could promote better performance and enable transfer to the same task in a real environment. Method: The study evaluated 65 older adults of both genders, aged 60 to 82 years (M = 69.6, SD = 6.3). A coincident timing task was applied to measure the perceptual-motor ability to perform a motor response. The participants were divided into 2 groups: one started in a real interface and the other in a virtual interface. Results: All subjects improved their performance during the practice, but improvement was not observed for the real interface, as the participants were near maximum performance from the beginning of the task. However, there was no transfer of performance from the virtual to the real environment or vice versa. Conclusions: The virtual environment was shown to provide improvement of performance with a short-term motor learning protocol in a coincident timing task. This result suggests that the practice of tasks in a virtual environment is a promising tool for the assessment and training of healthy older adults, even though there was no transfer of performance to a real environment. Trial registration: ISRCTN02960165. Registered 8 November 2016. PMID:29369177
A Proposed Framework for Collaborative Design in a Virtual Environment
NASA Astrophysics Data System (ADS)
Breland, Jason S.; Shiratuddin, Mohd Fairuz
This paper describes a proposed framework for collaborative design in a virtual environment. The framework consists of components that support true collaborative design in a real-time 3D virtual environment. In support of the proposed framework, a prototype application is being developed. The authors envision that the framework will have, but not be limited to, the following features: (1) real-time manipulation of 3D objects across the network, (2) support for multi-designer activities and information access, and (3) co-existence within the same virtual space. This paper also discusses proposed testing to determine the possible benefits of collaborative design in a virtual environment over other forms of collaboration, and results from a pilot test.
ERIC Educational Resources Information Center
Francescucci, Anthony; Foster, Mary
2013-01-01
Previous research on blended course offerings focuses on the addition of asynchronous online content to an existing course. While some explore synchronous communication, few control for differences between treatment groups. This study investigates the impact of teaching a blended course, using a virtual, interactive, real-time, instructor-led…
NASA Astrophysics Data System (ADS)
Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry
2016-03-01
In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. These techniques, however, often complicate interventions by requiring additional steps to manually define and initialize virtual models. Furthermore, overlaying virtual elements into real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics have achieved comparable performance to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.
Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks
NASA Astrophysics Data System (ADS)
Karpov, Kirill; Fedotova, Irina; Siemens, Eduard
2017-07-01
In this paper we present a measurement study to characterize the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and Network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM have been chosen as commonly used examples of hypervisor- and host-based models. Based on statistical parameters of retrieved distributions, our results provide a very good estimation of timing behavior. It is essential for real-time and performance-critical applications such as image processing or real-time control.
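As a concrete illustration of the kind of timing probe such a study relies on, the following minimal Java sketch requests a fixed sleep and records how far the actual wake-up overshoots it. It is illustrative only, not the authors' benchmark code; the 1 ms sleep and 10,000-sample count are arbitrary choices.

    import java.util.Arrays;

    /** Minimal sleep-precision probe: request a 1 ms sleep repeatedly and
     *  record how far the actual wake-up overshoots the request. */
    public class SleepJitterProbe {
        public static void main(String[] args) throws InterruptedException {
            final int samples = 10_000;
            final long requestedNanos = 1_000_000L;      // 1 ms
            long[] overshoot = new long[samples];

            for (int i = 0; i < samples; i++) {
                long start = System.nanoTime();
                Thread.sleep(1);                          // requested 1 ms sleep
                overshoot[i] = (System.nanoTime() - start) - requestedNanos;
            }

            Arrays.sort(overshoot);
            System.out.printf("median overshoot: %d ns%n", overshoot[samples / 2]);
            System.out.printf("99th percentile : %d ns%n", overshoot[(int) (samples * 0.99)]);
        }
    }

Run once on the host and once inside the virtual machine, the resulting distributions can be compared in the manner the study describes.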
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real-world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, so that they can easily design the living room interior without placing real furniture and articles, viewing the scene from many different locations and orientations in real time. In our system, two base images of the real-world space are captured from two different views to define a projective coordinate frame for the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real-world space is captured with a hand-held camera while non-metric feature points are tracked for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the projective relationship between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene nearly at video rate (20 frames per second).
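A core operation behind this kind of image-to-image transfer is applying a projective (homography) mapping to point coordinates. The following minimal Java sketch shows that single step; the matrix values are illustrative, and the actual system estimates the inter-image relationship from tracked feature points rather than using a fixed matrix.

    /** Apply a 3x3 planar homography H to a pixel (u, v): the basic operation
     *  for transferring points between two views of a (near-)planar scene. */
    public class HomographyTransfer {
        /** Returns the mapped point {u', v'} in the second image. */
        static double[] apply(double[][] h, double u, double v) {
            double x = h[0][0] * u + h[0][1] * v + h[0][2];
            double y = h[1][0] * u + h[1][1] * v + h[1][2];
            double w = h[2][0] * u + h[2][1] * v + h[2][2];
            return new double[] { x / w, y / w };   // homogeneous normalisation
        }

        public static void main(String[] args) {
            double[][] h = {                         // illustrative homography
                { 1.02, 0.01, 4.0 },
                { -0.01, 0.98, 7.5 },
                { 1e-5, 2e-5, 1.0 }
            };
            double[] p = apply(h, 320, 240);
            System.out.printf("(320, 240) -> (%.2f, %.2f)%n", p[0], p[1]);
        }
    }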
Renaud, Patrice; Joyal, Christian; Stoleru, Serge; Goyette, Mathieu; Weiskopf, Nikolaus; Birbaumer, Niels
2011-01-01
This chapter proposes a prospective view on using a real-time functional magnetic resonance imaging (rt-fMRI) brain-computer interface (BCI) application as a new treatment for pedophilia. Neurofeedback mediated by interactive virtual stimuli is presented as the key process in this new BCI application. Results on the diagnostic discriminant power of virtual characters depicting sexual stimuli relevant to pedophilia are given. Finally, practical and ethical implications are briefly addressed. Copyright © 2011 Elsevier B.V. All rights reserved.
Intercepting real and simulated falling objects: what is the difference?
Baurès, Robin; Benguigui, Nicolas; Amorim, Michel-Ange; Hecht, Heiko
2009-10-30
The use of virtual reality is nowadays common in many studies in the field of human perception and movement control, particularly in interceptive actions. However, the ecological validity of the simulation is often taken for granted without having been formally established. If participants were to perceive the real situation and its virtual equivalent in a different fashion, the generalization of the results obtained in virtual reality to real life would be highly questionable. We tested the ecological validity of virtual reality in this context by comparing the timing of interceptive actions based upon actually falling objects and their simulated counterparts. The results show very limited differences as a function of whether participants were confronted with a real ball or a simulation thereof. And when present, such differences were limited to the first trial only. This result validates the use of virtual reality when studying interceptive actions of accelerated stimuli.
Research on 3D virtual campus scene modeling based on 3ds Max and VRML
NASA Astrophysics Data System (ADS)
Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue
2015-12-01
With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A 3D virtual campus model can not only represent real-world objects in a natural, realistic and vivid way, but can also extend the campus beyond its real temporal and spatial dimensions, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels and other objects. Dynamic interactive functions are then realized by programming the object models from 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for the real-time processing involved in the scene design workflow. The approach preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.
Extending body space in immersive virtual reality: a very long arm illusion.
Kilteni, Konstantina; Normand, Jean-Marie; Sanchez-Vives, Maria V; Slater, Mel
2012-01-01
Recent studies have shown that a fake body part can be incorporated into human body representation through synchronous multisensory stimulation on the fake and corresponding real body part - the most famous example being the Rubber Hand Illusion. However, the extent to which gross asymmetries in the fake body can be assimilated remains unknown. Participants experienced, through a head-tracked stereo head-mounted display a virtual body coincident with their real body. There were 5 conditions in a between-groups experiment, with 10 participants per condition. In all conditions there was visuo-motor congruence between the real and virtual dominant arm. In an Incongruent condition (I), where the virtual arm length was equal to the real length, there was visuo-tactile incongruence. In four Congruent conditions there was visuo-tactile congruence, but the virtual arm lengths were either equal to (C1), double (C2), triple (C3) or quadruple (C4) the real ones. Questionnaire scores and defensive withdrawal movements in response to a threat showed that the overall level of ownership was high in both C1 and I, and there was no significant difference between these conditions. Additionally, participants experienced ownership over the virtual arm up to three times the length of the real one, and less strongly at four times the length. The illusion did decline, however, with the length of the virtual arm. In the C2-C4 conditions although a measure of proprioceptive drift positively correlated with virtual arm length, there was no correlation between the drift and ownership of the virtual arm, suggesting different underlying mechanisms between ownership and drift. Overall, these findings extend and enrich previous results that multisensory and sensorimotor information can reconstruct our perception of the body shape, size and symmetry even when this is not consistent with normal body proportions.
Enhancements and Evolution of the Real Time Mission Monitor
NASA Technical Reports Server (NTRS)
Goodman, Michael; Blakeslee, Richard; Hardin, Danny; Hall, John; He, Yubin; Regner, Kathryn
2008-01-01
The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the National Aeronautics and Space Administration (NASA) Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM has proven extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, mission scientists, instrument scientists and program managers alike appreciate the contributions that RTMM makes to their flight projects. We have received numerous plaudits from a wide variety of scientists who used RTMM during recent field campaigns including the 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) missions, the 2007-2008 NOAA-NASA Aerosonde Hurricane flights and the 2008 Soil Moisture Active-Passive Validation Experiment (SMAP-VEX). Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated for altitude, latitude, longitude, flight leg distance, cumulative distance, flight leg time, cumulative time, and satellite overpass intersections. The resultant flight plan is then generated in KML and quickly posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and then compare it to the actual real time flight progress. A description of the system architecture, components, and applications along with reviews and animations of RTMM during the field campaigns, plus planned enhancements and future opportunities will be presented.
Enhancements and Evolution of the Real Time Mission Monitor
NASA Astrophysics Data System (ADS)
Goodman, M.; Blakeslee, R.; Hardin, D.; Hall, J.; He, Y.; Regner, K.
2008-12-01
The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the National Aeronautics and Space Administration (NASA) Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual earth application. RTMM has proven extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, mission scientists, instrument scientists and program managers alike appreciate the contributions that RTMM makes to their flight projects. RTMM has received numerous plaudits from a wide variety of scientists who used RTMM during recent field campaigns including the 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) missions, the 2007-2008 NOAA-NASA Aerosonde Hurricane flights and the 2008 Soil Moisture Active-Passive Validation Experiment (SMAP-VEX). Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated for altitude, latitude, longitude, flight leg distance, cumulative distance, flight leg time, cumulative time, and satellite overpass intersections. The resultant flight plan is then generated in KML and quickly posted to the Google Earth-based RTMM for planning discussions, as well as comparisons to real time flight tracks in progress. A description of the system architecture, components, and applications along with reviews and animations of RTMM during the field campaigns, plus planned enhancements and future opportunities will be presented.
Safe, Multiphase Bounds Check Elimination in Java
2010-01-28
production of mobile code from source code, JIT compilation in the virtual machine, and application code execution. The code producer uses ... invariants, and inequality constraint analysis) to identify and prove redundancy of bounds checks. During class-loading and JIT compilation, the virtual ... unoptimized code if the speculated invariants do not hold. The combined effect of the multiple phases is to shift the effort associated with bounds …
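The report's subject, eliminating provably redundant array bounds checks, can be illustrated with a tiny example. The following Java sketch is illustrative only and is not the report's multiphase algorithm: the loop index is provably within the array bounds, so a compiler that can establish the hoisted guard may drop the implicit per-iteration checks.

    /** The kind of loop whose per-access bounds checks a JIT can prove
     *  redundant; the explicit guard mirrors the idea of hoisting one
     *  range check outside the loop. */
    public class BoundsCheckExample {
        static int sum(int[] a, int n) {
            if (n > a.length) {                       // one hoisted range guard
                throw new ArrayIndexOutOfBoundsException(n - 1);
            }
            int s = 0;
            for (int i = 0; i < n; i++) {
                s += a[i];   // i is provably in [0, n) <= [0, a.length), so the
            }                // implicit per-iteration check is redundant
            return s;
        }

        public static void main(String[] args) {
            System.out.println(sum(new int[] {1, 2, 3, 4}, 4));   // prints 10
        }
    }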
FROG: Time Series Analysis for the Web Service Era
NASA Astrophysics Data System (ADS)
Allan, A.
2005-12-01
The FROG application is part of the next generation Starlink{http://www.starlink.ac.uk} software work (Draper et al. 2005) and is released under the GNU Public License{http://www.gnu.org/copyleft/gpl.html} (GPL). Written in Java, it has been designed for the Web and Grid Service era as an extensible, pluggable tool for time series analysis and display. With an integrated SOAP server, the package's functionality is exposed to users for use in their own code, and can be used remotely over the Grid as part of the Virtual Observatory (VO).
Estimating Distance in Real and Virtual Environments: Does Order Make a Difference?
Ziemer, Christine J.; Plumert, Jodie M.; Cremer, James F.; Kearney, Joseph K.
2010-01-01
This investigation examined how the order in which people experience real and virtual environments influences their distance estimates. Participants made two sets of distance estimates in one of the following conditions: 1) real environment first, virtual environment second; 2) virtual environment first, real environment second; 3) real environment first, real environment second; or 4) virtual environment first, virtual environment second. In Experiment 1, participants imagined how long it would take to walk to targets in real and virtual environments. Participants’ first estimates were significantly more accurate in the real than in the virtual environment. When the second environment was the same as the first environment (real-real and virtual-virtual), participants’ second estimates were also more accurate in the real than in the virtual environment. When the second environment differed from the first environment (real-virtual and virtual-real), however, participants’ second estimates did not differ significantly across the two environments. A second experiment in which participants walked blindfolded to targets in the real environment and imagined how long it would take to walk to targets in the virtual environment replicated these results. These subtle, yet persistent order effects suggest that memory can play an important role in distance perception. PMID:19525540
Virtual Cerebral Aneurysm Clipping with Real-Time Haptic Force Feedback in Neurosurgical Education.
Gmeiner, Matthias; Dirnberger, Johannes; Fenz, Wolfgang; Gollwitzer, Maria; Wurm, Gabriele; Trenkler, Johannes; Gruber, Andreas
2018-04-01
Realistic, safe, and efficient modalities for simulation-based training are highly warranted to enhance the quality of surgical education, and they should be incorporated in resident training. The aim of this study was to develop a patient-specific virtual cerebral aneurysm-clipping simulator with haptic force feedback and real-time deformation of the aneurysm and vessels. A prototype simulator was developed from 2012 to 2016. Evaluation of virtual clipping by blood flow simulation was integrated in this software, and the prototype was evaluated by 18 neurosurgeons. In 4 patients with different middle cerebral artery aneurysms, virtual clipping was performed after real-life surgery, and surgical results were compared regarding clip application, surgical trajectory, and blood flow. After head positioning and craniotomy, bimanual virtual aneurysm clipping with an original forceps was performed. Blood flow simulation demonstrated residual aneurysm filling or branch stenosis. The simulator improved anatomic understanding for 89% of neurosurgeons. Simulation of head positioning and craniotomy was considered realistic by 89% and 94% of users, respectively. Most participants agreed that this simulator should be integrated into neurosurgical education (94%). Our illustrative cases demonstrated that virtual aneurysm surgery was possible using the same trajectory as in real-life cases. Both virtual clipping and blood flow simulation were realistic in broad-based but not calcified aneurysms. Virtual clipping of a calcified aneurysm could be performed using the same surgical trajectory, but not the same clip type. We have successfully developed a virtual aneurysm-clipping simulator. Next, we will prospectively evaluate this device for surgical procedure planning and education. Copyright © 2018 Elsevier Inc. All rights reserved.
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual reality systems can construct virtual environments that provide an interactive walkthrough experience. Traditionally, a walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The images can be recorded by a camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, moving left/right and looking around through 360 degrees. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in simulating forward/backward walking.
Yang, Tsun-Po; Beazley, Claude; Montgomery, Stephen B; Dimas, Antigone S; Gutierrez-Arcelus, Maria; Stranger, Barbara E; Deloukas, Panos; Dermitzakis, Emmanouil T
2010-10-01
Genevar (GENe Expression VARiation) is a database and Java tool designed to integrate multiple datasets, and provides analysis and visualization of associations between sequence variation and gene expression. Genevar allows researchers to investigate expression quantitative trait loci (eQTL) associations within a gene locus of interest in real time. The database and application can be installed on a standard computer in database mode and, in addition, on a server to share discoveries among affiliations or the broader community over the Internet via web services protocols. http://www.sanger.ac.uk/resources/software/genevar.
NASA Astrophysics Data System (ADS)
Meertens, C. M.; Murray, D.; McWhirter, J.
2004-12-01
Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV and curricula for atmospheric science education intended for the high school to graduate student levels.
NASA Technical Reports Server (NTRS)
Jefferson, David; Beckman, Brian
1986-01-01
This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
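To make the paradigm concrete, the following minimal, single-process Java sketch (an illustration of the virtual time idea only, not the Time Warp Operating System itself) executes events in timestamp order, remembers the state each event saw, and rolls back and re-executes when a straggler event arrives with a timestamp earlier than the local virtual time.

    import java.util.ArrayDeque;
    import java.util.Comparator;
    import java.util.Deque;
    import java.util.PriorityQueue;

    /** Optimistic virtual-time sketch: execute in timestamp order, keep a
     *  rollback history, undo and replay when a straggler arrives. */
    public class VirtualTimeSketch {
        record Event(long timestamp, int delta) {}
        record Processed(Event event, long lvtBefore, int stateBefore) {}

        private final PriorityQueue<Event> queue =
                new PriorityQueue<>(Comparator.comparingLong(Event::timestamp));
        private final Deque<Processed> history = new ArrayDeque<>();
        private long lvt = 0;     // local virtual time
        private int state = 0;    // the process's simulated state

        void receive(Event e) {
            if (e.timestamp() < lvt) {                       // straggler: undo optimistic work
                while (!history.isEmpty()
                        && history.peek().event().timestamp() >= e.timestamp()) {
                    Processed p = history.pop();
                    lvt = p.lvtBefore();
                    state = p.stateBefore();
                    queue.add(p.event());                    // re-execute later, in order
                }
            }
            queue.add(e);
        }

        void run() {
            while (!queue.isEmpty()) {
                Event e = queue.poll();
                history.push(new Processed(e, lvt, state));  // save state for rollback
                lvt = e.timestamp();
                state += e.delta();                          // "execute" the event
            }
            System.out.printf("LVT=%d state=%d%n", lvt, state);
        }

        public static void main(String[] args) {
            VirtualTimeSketch p = new VirtualTimeSketch();
            p.receive(new Event(10, 1));
            p.receive(new Event(30, 5));
            p.run();                        // LVT=30 state=6
            p.receive(new Event(20, 2));    // straggler: rolls back past timestamp 30
            p.run();                        // LVT=30 state=8 after replay
        }
    }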
JavaScript Access to DICOM Network and Objects in Web Browser.
Drnasin, Ivan; Grgić, Mislav; Gogić, Goran
2017-10-01
The Digital Imaging and Communications in Medicine (DICOM) 3.0 standard provides the baseline for picture archiving and communication systems (PACS). The development of the Internet and various communication media initiated demand for non-DICOM access to PACS systems. Ever-increasing utilization of web browsers, laptops and handheld devices, as opposed to desktop applications and static organizational computers, led to the development of different web technologies, which the DICOM standard officials subsequently accepted as tools of alternative access. This paper provides an overview of the current state of development of web access technology to DICOM repositories. It presents a different approach that uses HTML5 features of web browsers, through the JavaScript language and the WebSocket protocol, to enable real-time communication with DICOM repositories. A JavaScript DICOM network library, a DICOM-to-WebSocket proxy and a proof-of-concept web application that qualifies as a DICOM 3.0 device were developed.
Innovative application of virtual display technique in virtual museum
NASA Astrophysics Data System (ADS)
Zhang, Jiankang
2017-09-01
A virtual museum displays and simulates the functions of a real museum on the Internet in the form of 3D virtual reality through interactive programs. Building a virtual museum based on the Virtual Reality Modeling Language, and making it interact effectively with the offline museum, relies on making full use of 3D panorama, virtual reality and augmented reality techniques, and on innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration and other key virtual reality techniques to underpin the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography, is based on static images of reality. Virtual reality is a computer simulation technique that creates an interactive 3D dynamic visual world for the user to experience. Augmented reality, also known as mixed reality, is a technique which simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience directly in reality. These technologies make the virtual museum possible. It will not only bring a better experience and more convenience to the public, but also help improve the influence and cultural functions of the real museum.
2015-05-01
application,1 while the simulated PLC software is the open source ModbusPal Java application. When queried using the Modbus TCP protocol, ModbusPal reports ... and programmable logic controller (PLC) components. The HMI and PLC components were instantiated with software and installed in multiple virtual ... creating and capturing HMI–PLC network traffic over a 24-h period in the virtualized network and inspect the packets for errors. Test the …
Real-time visual simulation of APT system based on RTW and Vega
NASA Astrophysics Data System (ADS)
Xiong, Shuai; Fu, Chengyu; Tang, Tao
2012-10-01
The Matlab/Simulink simulation model of an APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that the simulation result of running the C code is the same as that of running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system. With it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add the necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders running on the programmable GPU are used. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation quality.
Liu, Lei
2017-01-01
Virtual reality has great potential for training road safety skills in individuals with low vision, but the feasibility of such training has not been demonstrated. We tested the hypotheses that low vision individuals could learn useful skills in virtual streets and could apply them to improve real street safety. Twelve participants, whose vision was too poor to use the pedestrian signals, were taught by a certified orientation and mobility specialist to determine the safest time to cross the street using the visual and auditory signals made by the start of previously stopped cars at a traffic-light controlled street intersection. Four participants were trained in real streets and eight in virtual streets presented on 3 projection screens. The crossing timing of all participants was evaluated in real streets before and after training. The participants were instructed to say “GO” at the time when they felt it was safest to cross the street. A safety score was derived to quantify the GO calls based on their occurrence in the pedestrian phase (when the pedestrian sign did not show DON’T WALK). Before training, more than 50% of the GO calls from all participants fell in the DON’T WALK phase of the traffic cycle and thus were totally unsafe. 20% of the GO calls fell in the latter half of the pedestrian phase; these calls were unsafe because someone initiating crossing this late might not have sufficient time to walk across the street. After training, 90% of the GO calls fell in the early half of the pedestrian phase; these calls were safer because crossing was initiated within the pedestrian phase, with at least half of the phase remaining for walking across. Similar safety changes occurred in both virtual street and real street trained participants. An ANOVA showed a significant increase of the safety scores after training, and there was no difference in this safety improvement between the virtual street and real street trained participants. This study demonstrated that virtual reality-based orientation and mobility training could be as efficient as real street training in improving street safety in individuals with severely impaired vision. PMID:28445540
Control of vertical posture while elevating one foot to avoid a real or virtual obstacle.
Ida, Hirofumi; Mohapatra, Sambit; Aruin, Alexander
2017-06-01
The purpose of this study is to investigate the control of vertical posture during obstacle avoidance in a real versus a virtual reality (VR) environment. Ten healthy participants stood upright and lifted one leg to avoid colliding with a real obstacle sliding on the floor toward a participant and with its virtual image. Virtual obstacles were delivered by a head mounted display (HMD) or a 3D projector. The acceleration of the foot, center of pressure, and electrical activity of the leg and trunk muscles were measured and analyzed during the time intervals typical for early postural adjustments (EPAs), anticipatory postural adjustments (APAs), and compensatory postural adjustments (CPAs). The results showed that the peak acceleration of foot elevation in the HMD condition decreased significantly when compared with that of the real and 3D projector conditions. Reduced activity of the leg and trunk muscles was seen when dealing with virtual obstacles (HMD and 3D projector) as compared with that seen when dealing with real obstacles. These effects were more pronounced during APAs and CPAs. The onsets of muscle activities in the supporting limb were seen during EPAs and APAs. The observed modulation of muscle activity and altered patterns of movement seen while avoiding a virtual obstacle should be considered when designing virtual rehabilitation protocols.
Rule-Based Runtime Verification
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2003-01-01
We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.
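As a concrete illustration of state-by-state monitoring without storing the trace, the Java sketch below checks a simple past-time property ("every WRITE is preceded by an OPEN with no intervening CLOSE") by updating a small summary state per event. It illustrates the monitoring style only and is not the EAGLE logic or its Java library.

    /** Toy state-by-state monitor: keeps one boolean of past summary state
     *  instead of the execution trace itself. */
    public class PastTimeMonitor {
        enum Ev { OPEN, WRITE, CLOSE }

        private boolean opened = false;   // summary of the past, updated per state
        private boolean violated = false;

        void step(Ev e) {
            switch (e) {
                case OPEN  -> opened = true;
                case CLOSE -> opened = false;
                case WRITE -> violated |= !opened;   // WRITE while not open
            }
        }

        public static void main(String[] args) {
            PastTimeMonitor m = new PastTimeMonitor();
            for (Ev e : new Ev[] { Ev.OPEN, Ev.WRITE, Ev.CLOSE, Ev.WRITE }) {
                m.step(e);
            }
            System.out.println(m.violated ? "property violated" : "property holds");
        }
    }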
A Smart Itsy Bitsy Spider for the Web.
ERIC Educational Resources Information Center
Chen, Hsinchun; Chung, Yi-Ming; Ramsey, Marshall; Yang, Christopher C.
1998-01-01
This study tested two Web personal spiders (i.e., agents that take users' requests and perform real-time customized searches) based on best-first search and genetic-algorithm techniques. The results were comparable and complementary, although the genetic algorithm obtained a higher recall value. The Java-based interface was found to be necessary…
Evaluating real-time Java for mission-critical large-scale embedded systems
NASA Technical Reports Server (NTRS)
Sharp, D. C.; Pla, E.; Luecke, K. R.; Hassan, R. J.
2003-01-01
This paper describes benchmarking results on an RT JVM. This paper extends previously published results by including additional tests, by being run on a recently available pre-release version of the first commercially supported RTSJ implementation, and by assessing results based on our experience with avionics systems in other languages.
A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
Web GIS in practice V: 3-D interactive and real-time mapping in Second Life
Boulos, Maged N Kamel; Burden, David
2007-01-01
This paper describes technologies from Daden Limited for geographically mapping and accessing live news stories/feeds, as well as other real-time, real-world data feeds (e.g., Google Earth KML feeds and GeoRSS feeds) in the 3-D virtual world of Second Life, by plotting and updating the corresponding Earth location points on a globe or some other suitable form (in-world), and further linking those points to relevant information and resources. This approach enables users to visualise, interact with, and even walk or fly through, the plotted data in 3-D. Users can also do the reverse: put pins on a map in the virtual world, and then view the data points on the Web in Google Maps or Google Earth. The technologies presented thus serve as a bridge between mirror worlds like Google Earth and virtual worlds like Second Life. We explore the geo-data display potential of virtual worlds and their likely convergence with mirror worlds in the context of the future 3-D Internet or Metaverse, and reflect on the potential of such technologies and their future possibilities, e.g. their use to develop emergency/public health virtual situation rooms to effectively manage emergencies and disasters in real time. The paper also covers some of the issues associated with these technologies, namely user interface accessibility and individual privacy. PMID:18042275
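Plotting a live geographic feed on an in-world globe ultimately comes down to converting each point's latitude and longitude into coordinates on a sphere. The following minimal Java sketch shows that conversion; it is illustrative only and is not Daden's implementation, and the globe radius and example coordinates are made up.

    /** Convert latitude/longitude (degrees) to Cartesian coordinates on a
     *  sphere of radius r (in-world units) for plotting on a globe. */
    public class GlobePlot {
        static double[] toCartesian(double latDeg, double lonDeg, double r) {
            double lat = Math.toRadians(latDeg);
            double lon = Math.toRadians(lonDeg);
            double x = r * Math.cos(lat) * Math.cos(lon);
            double y = r * Math.cos(lat) * Math.sin(lon);
            double z = r * Math.sin(lat);
            return new double[] { x, y, z };
        }

        public static void main(String[] args) {
            double[] p = toCartesian(51.5, -0.13, 10.0);   // London on a 10-unit globe
            System.out.printf("x=%.3f y=%.3f z=%.3f%n", p[0], p[1], p[2]);
        }
    }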
Augmenting breath regulation using a mobile driven virtual reality therapy framework.
Abushakra, Ahmad; Faezipour, Miad
2014-05-01
This paper presents a conceptual framework of a virtual reality therapy to assist individuals, especially lung cancer patients or those with breathing disorders, in regulating their breath through real-time analysis of respiration movements using a smartphone. Virtual reality technology is an attractive means for medical simulation and treatment, particularly for patients with cancer. The theories, methodologies, approaches and real-world dynamic content for all the components of this smartphone-based virtual reality therapy (VRT) framework will be discussed. The architecture and technical aspects of the offshore platform of the virtual environment will also be presented.
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place and how the jobs are executed. This article describes a parallel distributed movable robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function which we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the movable robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed movable robot simulator was developed to simulate the space of a movable robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual movable robot and the virtual movable robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
The Way Point Planning Tool: Real Time Flight Planning for Airborne Science
NASA Technical Reports Server (NTRS)
He, Yubin; Blakeslee, Richard; Goodman, Michael; Hall, John
2012-01-01
Airborne real-time observations are a major component of NASA's Earth science research and satellite ground validation studies. For mission scientists, planning a research aircraft mission within the context of meeting the science objectives is a complex task because it requires real-time situational awareness of the weather conditions that affect the aircraft track. Multiple aircraft are often involved in NASA field campaigns, and the coordination of the aircraft with satellite overpasses, other airplanes and the constantly evolving weather conditions often determines the success of the campaign. A flight planning tool is needed to provide situational awareness information to the mission scientists and help them plan and modify the flight tracks successfully. Scientists at the University of Alabama in Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool (WPT), an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints), with point-and-click mouse capabilities on a digital map filled with real-time raster and vector data. The development of this Waypoint Planning Tool demonstrates the significance of mission support in responding to the challenges presented during NASA field campaigns. Analyses during and after each campaign helped identify both issues and new requirements, initiating the next wave of development. Currently the Waypoint Planning Tool has gone through three rounds of development and analysis. The development of this waypoint tool is directly affected by advances in GIS/mapping technologies. From the standalone Google Earth application and simple KML functionality, to the Google Earth plugin and Java Web Start/applets on the web platform, and on to the rising open source GIS tools with new JavaScript frameworks, the Waypoint Planning Tool has entered its third phase of technology advancement. The newly innovated, cross-platform, modularly designed, JavaScript-controlled Waypoint tool is planned to be integrated with the NASA Airborne Science Mission Tool Suite. Adapting new technologies for the Waypoint Planning Tool ensures its success in helping scientists reach their mission objectives. This presentation will discuss the development process of the Waypoint Planning Tool in responding to field campaign challenges, identifying new information technologies, and describing the capabilities and features of the Waypoint Planning Tool, with its real-time aspect, interactive nature, and the resultant benefits to the airborne science community.
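The per-leg quantities the tool computes (distance, time, cumulative totals) follow from standard great-circle geometry. The following minimal Java sketch shows that calculation for a handful of made-up waypoints at an assumed constant ground speed; it is illustrative only and is not the Waypoint Planning Tool's code.

    /** Compute leg distance (haversine great-circle) and leg time between
     *  consecutive waypoints, plus cumulative totals. The 200 m/s ground
     *  speed and the waypoint coordinates are made-up parameters. */
    public class FlightLegs {
        static final double EARTH_RADIUS_M = 6_371_000.0;

        static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
            double p1 = Math.toRadians(lat1), p2 = Math.toRadians(lat2);
            double dp = Math.toRadians(lat2 - lat1), dl = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dp / 2) * Math.sin(dp / 2)
                     + Math.cos(p1) * Math.cos(p2) * Math.sin(dl / 2) * Math.sin(dl / 2);
            return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
        }

        public static void main(String[] args) {
            double[][] waypoints = { {34.7, -86.6}, {35.2, -85.9}, {36.0, -86.8} };
            double groundSpeed = 200.0;                      // m/s, assumed
            double totalDist = 0, totalTime = 0;
            for (int i = 1; i < waypoints.length; i++) {
                double d = haversineMeters(waypoints[i - 1][0], waypoints[i - 1][1],
                                           waypoints[i][0], waypoints[i][1]);
                double t = d / groundSpeed;
                totalDist += d;
                totalTime += t;
                System.out.printf("leg %d: %.1f km, %.1f min (cumulative %.1f km, %.1f min)%n",
                        i, d / 1000, t / 60, totalDist / 1000, totalTime / 60);
            }
        }
    }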
Della Mea, V.; Beltrami, C. A.
2000-01-01
The last five years' experience has clearly demonstrated the possible applications of the Internet for telepathology. They may be listed as follows: (a) teleconsultation via multimedia e‐mail; (b) teleconsultation via web‐based tools; (c) distance education by means of the World Wide Web; (d) virtual microscope management through Web and Java interfaces; (e) real‐time consultations through Internet‐based videoconferencing. Such applications have led to the recognition of some important limits of the Internet when dealing with telemedicine: (i) no guarantees on quality of service (QoS); (ii) inadequate security and privacy; (iii) for some countries, low bandwidth and thus low responsiveness for real‐time applications. Currently, there are several innovations in the world of the Internet. Different initiatives have been aimed at improving the Internet protocols in order to provide quality of service, multimedia support, security and other advanced services, together with greater bandwidth. The forthcoming Internet improvements, although driven by electronic commerce, video on demand and other commercial needs, are also of real interest for telemedicine, because they address the limits currently slowing down the use of the Internet. When such new services become available, telepathology applications may switch quickly from research to daily practice. PMID:11339559
Realistic Real-Time Outdoor Rendering in Augmented Reality
Kolivand, Hoshang; Sunar, Mohd Shahrizal
2014-01-01
Realistic rendering for outdoor Augmented Reality (AR) has been an attractive topic for the last two decades, judging by the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which relate to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a unique technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through their effects on virtual objects in the AR system is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing the problem of realistic AR systems. PMID:25268480
Spectroscopic analysis in the virtual observatory environment with SPLAT-VO
NASA Astrophysics Data System (ADS)
Škoda, P.; Draper, P. W.; Neves, M. C.; Andrešič, D.; Jenness, T.
2014-11-01
SPLAT-VO is a powerful graphical tool for displaying, comparing, modifying and analysing astronomical spectra, as well as searching and retrieving spectra from services around the world using Virtual Observatory (VO) protocols and services. The development of SPLAT-VO started in 1999, as part of the Starlink StarJava initiative, sometime before that of the VO, so initial support for the VO was necessarily added once VO standards and services became available. Further developments were supported by the Joint Astronomy Centre, Hawaii until 2009. Since end of 2011 development of SPLAT-VO has been continued by the German Astrophysical Virtual Observatory, and the Astronomical Institute of the Academy of Sciences of the Czech Republic. From this time several new features have been added, including support for the latest VO protocols, along with new visualization and spectra storing capabilities. This paper presents the history of SPLAT-VO, its capabilities, recent additions and future plans, as well as a discussion on the motivations and lessons learned up to now.
NASA Astrophysics Data System (ADS)
He, M.; Hardin, D. M.; Goodman, M.; Blakeslee, R.
2008-05-01
The Real Time Mission Monitor (RTMM) is an interactive visualization application based on Google Earth, that provides situational awareness and field asset management during NASA field campaigns. The RTMM can integrate data and imagery from numerous sources including GOES-12, GOES-10, and TRMM satellites. Simultaneously, it can display data and imagery from surface observations including Nexrad, NPOL and SMART- R radars. In addition to all these it can display output from models and real-time flight tracks of all aircraft involved in the experiment. In some instances the RTMM can also display measurements from scientific instruments as they are being flown. All data are recorded and archived in an on-line system enabling playback and review of all sorties. This is invaluable in preparing for future deployments and in exercising case studies. The RTMM facilitates pre-flight planning, in-flight monitoring, development of adaptive flight strategies and post- flight data analyses and assessments. Since the RTMM is available via the internet - during the actual experiment - project managers, scientists and mission planners can collaborate no matter where they are located as long as they have a viable internet connection. In addition, the system is open so that the general public can also view the experiment, in-progress, with Google Earth. Predecessors of RTMM were originally deployed in 2002 as part of the Altus Cumulus Electrification Study (ACES) to monitor uninhabited aerial vehicles near thunderstorms. In 2005 an interactive Java-based web prototype supported the airborne Lightning Instrument Package (LIP) during the Tropical Cloud Systems and Processes (TCSP) experiment. In 2006 the technology was adapted to the 3D Google Earth virtual globe and in 2007 its capabilities were extended to support multiple NASA aircraft (ER-2, WB-57, DC-8) during Tropical Composition, Clouds and Climate Coupling (TC4) experiment and 2007 Summer Aerosonde field study. In April 2008 the RTMM will be flown in the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) experiment to study the atmospheric composition in the Arctic.
Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo
2014-09-01
The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.
NASA Technical Reports Server (NTRS)
Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)
2001-01-01
We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models of high-resolution 3D data (>10,000 nodes), haptic real time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused in the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real time computations, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
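The solver the abstract proposes to parallelise, a Jacobi (diagonal) preconditioned conjugate gradient iteration, can be sketched compactly. The dense Java version below is illustrative only: a real soft-tissue solver would operate on sparse, surface-decomposed systems and run on specialised hardware, and the small test matrix here is made up.

    /** Jacobi-preconditioned conjugate gradient for a symmetric
     *  positive-definite system A x = b (dense, didactic version). */
    public class JacobiPcg {
        static double[] solve(double[][] a, double[] b, int maxIter, double tol) {
            int n = b.length;
            double[] x = new double[n];
            double[] r = b.clone();                               // r = b - A*0
            double[] z = new double[n];
            for (int i = 0; i < n; i++) z[i] = r[i] / a[i][i];    // Jacobi preconditioner
            double[] p = z.clone();
            double rz = dot(r, z);

            for (int k = 0; k < maxIter; k++) {
                double[] ap = multiply(a, p);
                double alpha = rz / dot(p, ap);
                for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
                if (Math.sqrt(dot(r, r)) < tol) break;            // converged
                for (int i = 0; i < n; i++) z[i] = r[i] / a[i][i];
                double rzNew = dot(r, z);
                double beta = rzNew / rz;
                for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
                rz = rzNew;
            }
            return x;
        }

        static double dot(double[] u, double[] v) {
            double s = 0;
            for (int i = 0; i < u.length; i++) s += u[i] * v[i];
            return s;
        }

        static double[] multiply(double[][] a, double[] v) {
            double[] out = new double[v.length];
            for (int i = 0; i < a.length; i++) out[i] = dot(a[i], v);
            return out;
        }

        public static void main(String[] args) {
            double[][] a = { { 4, 1, 0 }, { 1, 3, 1 }, { 0, 1, 5 } };   // SPD test matrix
            double[] b = { 1, 2, 3 };
            double[] x = solve(a, b, 100, 1e-10);
            System.out.printf("x = [%.4f, %.4f, %.4f]%n", x[0], x[1], x[2]);
        }
    }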
Virtual reality welder training
NASA Astrophysics Data System (ADS)
White, Steven A.; Reiners, Dirk; Prachyabrued, Mores; Borst, Christoph W.; Chambers, Terrence L.
2010-01-01
This document describes the Virtual Reality Simulated MIG Lab (sMIG), a system for Virtual Reality welder training. It is designed to reproduce the experience of metal inert gas (MIG) welding faithfully enough to be used as a teaching tool for beginning welding students. To make the experience as realistic as possible it employs physically accurate and tracked input devices, a real-time welding simulation, real-time sound generation and a 3D display for output. Thanks to being a fully digital system it can go beyond providing just a realistic welding experience by giving interactive and immediate feedback to the student to avoid learning wrong movements from day 1.
ERIC Educational Resources Information Center
Yang, Mau-Tsuen; Liao, Wan-Che
2014-01-01
The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…
Extending DoD Modeling and Simulation with Web 2.0, Ajax and X3D
2007-09-01
Supported by Gavin King, who created the well-known and industry-respected Hibernate, an Object/Relational (O/R) Mapping tool, which binds Java ... most likely a Hibernate derivative). The preceding is where eBay differs from a pure Java EE specification “by the book” implementation. A truly... Java language has come a long way in providing real world case studies and scalable solutions for the enterprise that are currently in production on
Fiorelli, Alfonso; Raucci, Antonio; Cascone, Roberto; Reginelli, Alfonso; Di Natale, Davide; Santoriello, Carlo; Capuozzo, Antonio; Grassi, Roberto; Serra, Nicola; Polverino, Mario; Santini, Mario
2017-04-01
We proposed a new virtual bronchoscopy tool to improve the accuracy of traditional transbronchial needle aspiration for mediastinal staging. Chest computed tomographic images (1 mm thickness) were reconstructed with Osirix software to produce a virtual bronchoscopic simulation. The target adenopathy was identified by measuring its distance from the carina on multiplanar reconstruction images. The static images were uploaded into iMovie software, which produced a virtual bronchoscopic movie from the images; the movie was then transferred to a tablet computer to provide real-time guidance during a biopsy. To test the validity of our tool, we retrospectively divided all consecutive patients undergoing transbronchial needle aspiration into two groups based on whether the biopsy was guided by virtual bronchoscopy (virtual bronchoscopy group) or not (traditional group). The intergroup diagnostic yields were statistically compared. Our analysis included 53 patients in the traditional and 53 in the virtual bronchoscopy group. The sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy for the traditional group were 66.6%, 100%, 100%, 10.53% and 67.92%, respectively, and for the virtual bronchoscopy group were 84.31%, 100%, 100%, 20% and 84.91%, respectively. The sensitivity (P = 0.011) and diagnostic accuracy (P = 0.011) of sampling the paratracheal station were better for the virtual bronchoscopy group than for the traditional group; no significant differences were found for the subcarinal lymph node. Our tool is simple, economical and available in all centres. It provided real-time guidance for needle insertion, thereby improving the accuracy of traditional transbronchial needle aspiration, especially when target lesions are located in a difficult site like the paratracheal station. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.
Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz
2012-01-01
Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.
Wong, Kit Fai
2011-01-01
Virtual blood bank is the computer-controlled, electronically linked information management system that allows online ordering and real-time, remote delivery of blood for transfusion. It connects the site of testing to the point of care at a remote site in a real-time fashion with networked computers thus maintaining the integrity of immunohematology test results. It has taken the advantages of information and communication technologies to ensure the accuracy of patient, specimen and blood component identification and to enhance personnel traceability and system security. The built-in logics and process constraints in the design of the virtual blood bank can guide the selection of appropriate blood and minimize transfusion risk. The quality of blood inventory is ascertained and monitored, and an audit trail for critical procedures in the transfusion process is provided by the paperless system. Thus, the virtual blood bank can help ensure that the right patient receives the right amount of the right blood component at the right time. PMID:21383930
Development of real-time motion capture system for 3D on-line games linked with virtual character
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck
2004-10-01
With the development of 3-D virtual reality, motion tracking has become an essential part of the entertainment, medical, sports, education and industrial fields. Virtual human characters in digital animation and game applications have typically been controlled by interfacing devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link its data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.; Fisher, Scott S.; Stone, Philip K.; Foster, Scott H.
1991-01-01
The real time acoustic display capabilities are described which were developed for the Virtual Environment Workstation (VIEW) Project at NASA-Ames. The acoustic display is capable of generating localized acoustic cues in real time over headphones. An auditory symbology, a related collection of representational auditory 'objects' or 'icons', can be designed using ACE (Auditory Cue Editor), which links both discrete and continuously varying acoustic parameters with information or events in the display. During a given display scenario, the symbology can be dynamically coordinated in real time with 3-D visual objects, speech, and gestural displays. The types of displays feasible with the system range from simple warnings and alarms to the acoustic representation of multidimensional data or events.
McCorkle, Doug
2017-12-27
Ames Laboratory scientist Doug McCorkle explains osgBullet, a 3-D virtual simulation software, and how it helps engineers design complex products and systems in a realistic, real-time virtual environment.
A Visual Editor in Java for View
NASA Technical Reports Server (NTRS)
Stansifer, Ryan
2000-01-01
In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously a special-purpose program had been written to display data on emulations of character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues our earlier work, in which we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on newly released Java 2 graphical user interface (GUI) libraries. The result has been a more visually appealing appearance and a more robust application.
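As a hedged, much-simplified illustration of the kind of bit-mapped real-time display screen described above (and not the project's editor itself), the sketch below uses the Java 2 Swing libraries to repaint a panel from a periodically updated value; the simulated data source and the "PRESSURE" label are hypothetical stand-ins for the real telemetry feed.

```java
import java.awt.Font;
import java.awt.Graphics;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;
import javax.swing.Timer;

/** A trivial Swing panel that repaints a simulated real-time value a few times per second. */
public class RealTimeValuePanel extends JPanel {
    private double value;                                    // latest sample from the (simulated) data feed

    RealTimeValuePanel() {
        // A Swing Timer fires on the event-dispatch thread, so updating and repainting here is safe.
        new Timer(250, e -> { value = 100.0 * Math.random(); repaint(); }).start();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.setFont(new Font(Font.MONOSPACED, Font.BOLD, 32));
        g.drawString(String.format("PRESSURE: %6.2f", value), 20, 60);   // hypothetical telemetry label
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Real-time display sketch");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new RealTimeValuePanel());
            frame.setSize(400, 150);
            frame.setVisible(true);
        });
    }
}
```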
Zhou, Y; Murata, T; Defanti, T A
2000-01-01
Despite their attractive properties, networked virtual environments (net-VEs) are notoriously difficult to design, implement, and test due to the concurrency, real-time and networking features in these systems. Net-VEs demand high quality-of-service (QoS) requirements on the network to maintain natural and real-time interactions among users. The current practice for net-VE design is basically trial and error, empirical, and totally lacks formal methods. This paper proposes to apply a Petri net formal modeling technique to a net-VE, NICE (narrative immersive constructionist/collaborative environment), predict the net-VE performance based on simulation, and improve the net-VE performance. NICE is essentially a network of collaborative virtual reality systems called the CAVE (CAVE Automatic Virtual Environment). First, we introduce extended fuzzy-timing Petri net (EFTN) modeling and analysis techniques. Then, we present EFTN models of the CAVE, NICE, and the transport layer protocol used in NICE: transmission control protocol (TCP). We show the possibility analysis based on the EFTN model for the CAVE. Then, by using these models and design/CPN as the simulation tool, we conducted various simulations to study real-time behavior, network effects and performance (latencies and jitters) of NICE. Our simulation results are consistent with experimental data.
Development of Way Point Planning Tool in Response to NASA Field Campaign Challenges
NASA Astrophysics Data System (ADS)
He, M.; Hardin, D. M.; Conover, H.; Graves, S. J.; Meyer, P.; Blakeslee, R. J.; Goodman, M. L.
2012-12-01
Airborne real time observations are a major component of NASA's Earth Science research and satellite ground validation studies. For mission scientists, planning a research aircraft mission within the context of meeting the science objectives is a complex task because it requires real time situational awareness of the weather conditions that affect the aircraft track. Multiple aircrafts are often involved in NASA field campaigns. The coordination of the aircrafts with satellite overpasses, other airplanes and the constantly evolving, dynamic weather conditions often determines the success of the campaign. A flight planning tool is needed to provide situational awareness information to the mission scientists, and help them plan and modify the flight tracks. Scientists at the University of Alabama-Huntsville and the NASA Marshall Space Flight Center developed the Waypoint Planning Tool, an interactive software tool that enables scientists to develop their own flight plans (also known as waypoints) with point-and-click mouse capabilities on a digital map filled with real time raster and vector data. The development of this Waypoint Planning Tool demonstrates the significance of mission support in responding to the challenges presented during NASA field campaigns. Analysis during and after each campaign helped identify both issues and new requirements, and initiated the next wave of development. Currently the Waypoint Planning Tool has gone through three rounds of development and analysis processes. The development of this waypoint tool is directly affected by the technology advances on GIS/Mapping technologies. From the standalone Google Earth application and simple KML functionalities, to Google Earth Plugin and Java Web Start/Applet on web platform, and to the rising open source GIS tools with new JavaScript frameworks, the Waypoint Planning Tool has entered its third phase of technology advancement. The newly innovated, cross-platform, modular designed JavaScript-controlled Way Point Tool is planned to be integrated with NASA Airborne Science Mission Tool Suite. Adapting new technologies for the Waypoint Planning Tool ensures its success in helping scientists reach their mission objectives. This presentation will discuss the development processes of the Waypoint Planning Tool in responding to field campaign challenges, identify new information technologies, and describe the capabilities and features of the Waypoint Planning Tool with the real time aspect, interactive nature, and the resultant benefits to the airborne science community.
NASA Astrophysics Data System (ADS)
Knight, Claire; Munro, Malcolm
2001-07-01
Distributed component-based systems seem to be the immediate future for software development. The use of such techniques, object-oriented languages, and the combination with ever more powerful higher-level frameworks has led to the rapid creation and deployment of such systems to cater for the demand of internet- and service-driven business systems. This diversity of solutions, through both the components utilised and the physical/virtual locations of those components, can provide powerful responses to the new demand. The problem lies in the comprehension and maintenance of such systems, because they then have inherent uncertainty. The components combined at any given time for a solution may differ, the messages generated, sent, and/or received may differ, and the physical/virtual locations cannot be guaranteed. Trying to account for this uncertainty and to build it into analysis and comprehension tools is important for both development and maintenance activities.
Model Checker for Java Programs
NASA Technical Reports Server (NTRS)
Visser, Willem
2007-01-01
Java Pathfinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing. JPF consists of a custom-made Java Virtual Machine (JVM) that interprets bytecode, combined with a search interface to allow the complete behavior of a Java program to be analyzed, including interleavings of concurrent programs. JPF is implemented in Java, and its architecture is highly modular to support rapid prototyping of new features. JPF is an explicit-state model checker, because it enumerates all visited states and, therefore, suffers from the state-explosion problem inherent in analyzing large programs. It is suited to analyzing programs of less than 10 kLOC, but has been successfully applied to finding errors in concurrent programs up to 100 kLOC. When an error is found, a trace from the initial state to the error is produced to guide the debugging. JPF works at the bytecode level, meaning that all of Java can be model-checked. By default, the software checks for all runtime errors (uncaught exceptions), assertion violations (supporting Java's assert), and deadlocks. JPF uses garbage collection and symmetry reductions of the heap during model checking to reduce state-explosion, as well as dynamic partial order reductions to lower the number of interleavings analyzed. JPF is capable of symbolic execution of Java programs, including symbolic execution of complex data such as linked lists and trees. JPF is extensible as it allows for the creation of listeners that can subscribe to events during searches. The creation of dedicated code to be executed in place of regular classes is supported and allows users to easily handle native calls and to improve the efficiency of the analysis.
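As a hedged illustration of the kind of target program and property such explicit-state search is aimed at (this is an ordinary Java program written for this summary, not JPF code or JPF's API), the sketch below contains an unsynchronized shared counter and an assertion that only fails under particular thread interleavings.

```java
/** Illustrative target program with an interleaving-dependent assertion failure. */
public class RacyCounter {
    static int counter = 0;                 // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = () -> {
            int local = counter;            // read
            counter = local + 1;            // write: a lost update is possible if interleaved
        };
        Thread t1 = new Thread(inc);
        Thread t2 = new Thread(inc);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Holds in most ordinary runs, but fails in the interleaving where both threads
        // read the same initial value; an explicit-state search over thread schedules,
        // as described above, finds that failing trace deterministically.
        assert counter == 2 : "lost update, counter = " + counter;
    }
}
```

Run with assertions enabled (java -ea RacyCounter), the failure is rare in normal execution, which is exactly why exhaustive exploration of interleavings is valuable.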
Virtual Acoustics: Evaluation of Psychoacoustic Parameters
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
Current virtual acoustic displays for teleconferencing and virtual reality are usually limited to very simple or non-existent renderings of reverberation, a fundamental part of the acoustic environmental context that is encountered in day-to-day hearing. Several research efforts have produced results that suggest that environmental cues dramatically improve perceptual performance within virtual acoustic displays, and that it is possible to manipulate signal processing parameters to effectively reproduce important aspects of virtual acoustic perception in real-time. However, the computational resources for rendering reverberation remain formidable. Our efforts at NASA Ames have focused on using several perceptual threshold metrics to determine how various "trade-offs" might be made in real-time acoustic rendering. This includes both original work and confirmation of existing data that was obtained in real rather than virtual environments. The talk will consider the importance of using individualized versus generalized pinnae cues (the "Head-Related Transfer Function"); the use of head movement cues; threshold data for early reflections and late reverberation; and consideration of the necessary accuracy for measuring and rendering octave-band absorption characteristics of various wall surfaces. In addition, a consideration of the analysis-synthesis of the reverberation within "everyday spaces" (offices, conference rooms) will be contrasted to the commonly used paradigm of concert hall spaces.
2D and 3D Traveling Salesman Problem
ERIC Educational Resources Information Center
Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt
2011-01-01
When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…
A temporal bone surgery simulator with real-time feedback for surgical training.
Wijewickrema, Sudanthi; Ioannou, Ioanna; Zhou, Yun; Piromchai, Patorn; Bailey, James; Kennedy, Gregor; O'Leary, Stephen
2014-01-01
Timely feedback on surgical technique is an important aspect of surgical skill training in any learning environment, be it virtual or otherwise. Feedback on technique should be provided in real-time to allow trainees to recognize and amend their errors as they occur. Expert surgeons have typically carried out this task, but they have limited time available to spend with trainees. Virtual reality surgical simulators offer effective, repeatable training at relatively low cost, but their benefits may not be fully realized while they still require the presence of experts to provide feedback. We attempt to overcome this limitation by introducing a real-time feedback system for surgical technique within a temporal bone surgical simulator. Our evaluation study shows that this feedback system performs exceptionally well with respect to accuracy and effectiveness.
Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee
2012-05-01
Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real-time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
Tangible display systems: bringing virtual surfaces into the real world
NASA Astrophysics Data System (ADS)
Ferwerda, James A.
2012-03-01
We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of surfaces with complex textures and material properties illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real-time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.
VLP Simulation: An Interactive Simple Virtual Model to Encourage Geoscience Skill about Volcano
NASA Astrophysics Data System (ADS)
Hariyono, E.; Liliasari; Tjasyono, B.; Rosdiana, D.
2017-09-01
The purpose of this study was to describe physics students' predicting skills after geoscience learning using the VLP (Volcano Learning Project) simulation. The research was conducted with 24 physics students at a state university in East Java, Indonesia. The method used is descriptive analysis based on the students' answers related to predicting skills about volcanic activity. The results showed that learning with the VLP simulation has strong potential to develop physics students' predicting skills. Students were able to explain volcanic activity logically and to predict potential eruptions based on visualization of real data. It can be concluded that the VLP simulation is well suited to physics students' needs in developing geoscience skills and is recommended as an alternative medium for educating society about volcanic phenomena.
Productive confusions: learning from simulations of pandemic virus outbreaks in Second Life
NASA Astrophysics Data System (ADS)
Cárdenas, Micha; Greci, Laura S.; Hurst, Samantha; Garman, Karen; Hoffman, Helene; Huang, Ricky; Gates, Michael; Kho, Kristen; Mehrmand, Elle; Porteous, Todd; Calvitti, Alan; Higginbotham, Erin; Agha, Zia
2011-03-01
Users of immersive virtual reality environments have reported a wide variety of side and after effects including the confusion of characteristics of the real and virtual worlds. Perhaps this side effect of confusing the virtual and real can be turned around to explore the possibilities for immersion with minimal technological support in virtual world group training simulations. This paper will describe observations from my time working as an artist/researcher with the UCSD School of Medicine (SoM) and Veterans Administration San Diego Healthcare System (VASDHS) to develop trainings for nurses, doctors and Hospital Incident Command staff that simulate pandemic virus outbreaks. By examining moments of slippage between realities, both into and out of the virtual environment, moments of the confusion of boundaries between real and virtual, we can better understand methods for creating immersion. I will use the mixing of realities as a transversal line of inquiry, borrowing from virtual reality studies, game studies, and anthropological studies to better understand the mechanisms of immersion in virtual worlds. Focusing on drills conducted in Second Life, I will examine moments of training to learn the software interface, moments within the drill and interviews after the drill.
Demonstration of a real-time implementation of the ICVision holographic stereogram display
NASA Astrophysics Data System (ADS)
Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel
1995-07-01
There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of the systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits are created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interocular distance are filled with stereoscopic pair images, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves 1 eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating that is exactly the one required to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper will discuss the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings and the AMLCD modulates the light diffracted by the gratings.
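Each partial-pixel grating steers diffracted light toward one viewing slit, and the required pitch follows from the first-order grating equation d·sin(θ) = m·λ. The sketch below is only a rough illustration of that geometry with assumed wavelength and deflection angles, not the ICVision design data.

```java
/** Rough first-order grating-pitch calculation from d * sin(theta) = m * lambda. */
public class GratingPitch {
    /** Returns the grating period (metres) that steers the m-th diffraction order to angle thetaDeg. */
    static double pitch(double lambdaMetres, double thetaDeg, int order) {
        return order * lambdaMetres / Math.sin(Math.toRadians(thetaDeg));
    }

    public static void main(String[] args) {
        double lambda = 550e-9;                       // assumed green illumination, 550 nm
        for (double thetaDeg = 1; thetaDeg <= 5; thetaDeg++) {
            double d = pitch(lambda, thetaDeg, 1);
            System.out.printf("slit at %.0f deg -> pitch %.2f um%n", thetaDeg, d * 1e6);
        }
    }
}
```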
Adamovich, S.V.; August, K.; Merians, A.; Tunik, E.
2017-01-01
Purpose Emerging evidence shows that interactive virtual environments (VEs) may be a promising tool for studying sensorimotor processes and for rehabilitation. However, the potential of VEs to recruit action observation-execution neural networks is largely unknown. For the first time, a functional MRI-compatible virtual reality system (VR) has been developed to provide a window into studying brain-behavior interactions. This system is capable of measuring the complex span of hand-finger movements and simultaneously streaming this kinematic data to control the motion of representations of human hands in virtual reality. Methods In a blocked fMRI design, thirteen healthy subjects observed, with the intent to imitate (OTI), finger sequences performed by the virtual hand avatar seen in 1st person perspective and animated by pre-recorded kinematic data. Following this, subjects imitated the observed sequence while viewing the virtual hand avatar animated by their own movement in real-time. These blocks were interleaved with rest periods during which subjects viewed static virtual hand avatars and control trials in which the avatars were replaced with moving non-anthropomorphic objects. Results We show three main findings. First, both observation with intent to imitate and imitation with real-time virtual avatar feedback, were associated with activation in a distributed frontoparietal network typically recruited for observation and execution of real-world actions. Second, we noted a time-variant increase in activation in the left insular cortex for observation with intent to imitate actions performed by the virtual avatar. Third, imitation with virtual avatar feedback (relative to the control condition) was associated with a localized recruitment of the angular gyrus, precuneus, and extrastriate body area, regions which are (along with insular cortex) associated with the sense of agency. Conclusions Our data suggest that the virtual hand avatars may have served as disembodied training tools in the observation condition and as embodied “extensions” of the subject’s own body (pseudo-tools) in the imitation. These data advance our understanding of the brain-behavior interactions when performing actions in VE and have implications in the development of observation- and imitation-based VR rehabilitation paradigms. PMID:19531876
Innovative Technology for Teaching Introductory Astronomy
NASA Astrophysics Data System (ADS)
Guidry, Mike
The application of state-of-the-art technology (primarily Java and Flash MX ActionScript on the client side, and Java, PHP, PERL, XML and SQL databasing on the server side) to the teaching of introductory astronomy will be discussed. A completely online syllabus in introductory astronomy built around more than 350 interactive animations, called "Online Journey through Astronomy," and a new set of 20 online virtual laboratories in astronomy that we are currently developing will be used as illustration. In addition to demonstration of the technology, our experience using these technologies to teach introductory astronomy to thousands of students in settings ranging from traditional classrooms to full distance learning will be summarized. Recent experiments using Java and vector graphics programming of handheld devices (Personal Digital Assistants and cell phones) with wireless wide-area connectivity for applications in astronomy education will also be described.
[Temperature Measurement with Bluetooth under Android Platform].
Wang, Shuai; Shen, Hao; Luo, Changze
2015-03-01
The aim is to realize real-time transmission and display of temperature data using a smartphone and Bluetooth. An Arduino Uno R3 acquires temperature data from a DS18B20 digital temperature sensor and transmits it through an HC-05 Bluetooth module to an Android smartphone, realizing the transmission of temperature data. An application written in Java under the Android development environment provides real-time display and storage of the temperature data and plots the temperature fluctuations. The temperature sensor was experimentally verified to meet the precision and accuracy requirements of body temperature measurement. This paper can provide a reference for the development of other smartphone-based mobile medical products.
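The DS18B20 reports temperature as a signed 16-bit two's-complement value in units of 1/16 °C in its default 12-bit mode. As a hedged sketch of how the two raw scratchpad bytes received over the Bluetooth link could be decoded on the phone side (the byte order and framing here are assumptions, not the paper's protocol):

```java
/** Decodes a DS18B20 temperature reading (LSB first, 1/16 degC per count in 12-bit mode). */
public class Ds18b20Decoder {
    /** lsb and msb are the first two scratchpad bytes as received over the serial/Bluetooth link. */
    static double toCelsius(int lsb, int msb) {
        int raw = (short) (((msb & 0xFF) << 8) | (lsb & 0xFF));   // sign-extend the 16-bit value
        return raw / 16.0;                                        // 0.0625 degC per count
    }

    public static void main(String[] args) {
        System.out.println(toCelsius(0x50, 0x05));   // 0x0550 -> 85.0 (the sensor's power-on value)
        System.out.println(toCelsius(0x50, 0x02));   // 0x0250 -> 37.0, a typical body temperature
    }
}
```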
Advanced Visualization of Experimental Data in Real Time Using LiveView3D
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2006-01-01
LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.
An Optimized Trajectory Planning for Welding Robot
NASA Astrophysics Data System (ADS)
Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao
2018-03-01
In order to improve welding efficiency and quality, this paper studies the combined planning of welding parameters and spatial trajectory for a welding robot and proposes a trajectory planning method with high real-time performance, strong controllability and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual joint model is established and the welding process parameters are represented by the virtual joint variables. The trajectory planning is carried out in the robot joint space, which makes control of the welding process parameters more intuitive and convenient. By using the virtual joint model together with the affine invariance of B-spline curves, the welding process parameters are controlled indirectly by controlling the motion curves of the real joints. With minimum time as the optimization goal, the welding process parameters and the joint-space trajectory are planned jointly and optimized.
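The planning above drives joint (and virtual-joint) values along B-spline curves. As a hedged illustration of the underlying curve evaluation only, and not the paper's planner, the sketch below evaluates one uniform cubic B-spline segment from four consecutive control values, which could equally be joint angles or virtual-joint welding parameters; the control values are made up.

```java
/** Evaluates one uniform cubic B-spline segment from four consecutive control values. */
public class CubicBSpline {
    /** t in [0,1] within the segment defined by control values p0..p3. */
    static double evaluate(double p0, double p1, double p2, double p3, double t) {
        double t2 = t * t, t3 = t2 * t;
        double b0 = (1 - t) * (1 - t) * (1 - t) / 6.0;
        double b1 = (3 * t3 - 6 * t2 + 4) / 6.0;
        double b2 = (-3 * t3 + 3 * t2 + 3 * t + 1) / 6.0;
        double b3 = t3 / 6.0;
        return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3;   // basis functions sum to 1 for any t
    }

    public static void main(String[] args) {
        // Hypothetical joint-angle control values (degrees); sample the segment at a few parameters.
        double[] q = {10.0, 20.0, 35.0, 40.0};
        for (double t = 0.0; t <= 1.0; t += 0.25) {
            System.out.printf("t=%.2f  q(t)=%.3f%n", t, evaluate(q[0], q[1], q[2], q[3], t));
        }
    }
}
```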
A model for flexible tools used in minimally invasive medical virtual environments.
Soler, Francisco; Luzon, M Victoria; Pop, Serban R; Hughes, Chris J; John, Nigel W; Torres, Juan Carlos
2011-01-01
Within the limits of current technology, many applications of a virtual environment will trade off accuracy for speed. This is not an acceptable compromise in a medical training application where both are essential. Efficient algorithms must therefore be developed. The purpose of this project is the development and validation of a novel physics-based real-time tool manipulation model, which is easy to integrate into any medical virtual environment that requires support for the insertion of long flexible tools into complex geometries. This encompasses medical specialities such as vascular interventional radiology, endoscopy, and laparoscopy, where training, prototyping of new instruments/tools and mission rehearsal can all be facilitated by using an immersive medical virtual environment. Our model recognises and accurately uses patient-specific data and adapts to the geometrical complexity of the vessel in real time.
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates local illumination with a shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] and add the indirect illumination to the local illumination on the GPU. With the limited computing resources in mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously and reduces the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
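Instant radiosity accumulates indirect light at a shaded point from each virtual point light (VPL). The sketch below shows the standard VPL gathering term (flux times the two cosine factors over squared distance, with a distance clamp to suppress the near-field singularity, a common practice); the array-based vector layout and clamp value are illustrative assumptions, not the paper's GPU splatting code.

```java
/** Standard instant-radiosity gathering term for a virtual point light (VPL). */
public class VplGather {
    static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
    static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    /**
     * Indirect contribution at point p (normal n) from one VPL at vplPos with normal vplN and flux.
     * minDist2 clamps the squared distance to avoid the singularity when p is very close to the VPL.
     */
    static double vplContribution(double[] p, double[] n, double[] vplPos, double[] vplN,
                                  double flux, double minDist2) {
        double[] d = sub(vplPos, p);
        double dist2 = Math.max(dot(d, d), minDist2);       // clamped squared distance
        double invLen = 1.0 / Math.sqrt(dist2);
        double[] dir = { d[0] * invLen, d[1] * invLen, d[2] * invLen };  // direction from p toward the VPL
        double cosAtP   = Math.max(0.0, dot(n, dir));       // cosine at the receiving surface
        double cosAtVpl = Math.max(0.0, -dot(vplN, dir));   // cosine at the VPL's surface
        return flux * cosAtP * cosAtVpl / dist2;
    }

    public static void main(String[] args) {
        double[] p = {0, 0, 0}, n = {0, 0, 1};              // shaded point on the floor, facing up
        double[] vplPos = {0, 0, 2}, vplN = {0, 0, -1};     // a VPL on the ceiling, facing down
        System.out.println(vplContribution(p, n, vplPos, vplN, 1.0, 0.01));   // prints 0.25
    }
}
```

Summing this term over all generated VPLs (by splatting or gathering) yields the indirect layer that is added to the local illumination.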
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrated that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
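The method derives the Sun's position from the user's location and the time of day. As a hedged sketch of a standard low-precision solar-position calculation (declination from day of year, elevation and azimuth from the hour angle), accurate to roughly a degree and not the paper's exact model, with assumed site and time values:

```java
/** Low-precision solar elevation/azimuth from latitude, day of year and local solar time. */
public class SunPosition {
    /** Returns {elevationDeg, azimuthDegFromNorth}. solarTimeHours is apparent solar time (12 = solar noon). */
    static double[] compute(double latitudeDeg, int dayOfYear, double solarTimeHours) {
        double decl = Math.toRadians(-23.44 * Math.cos(Math.toRadians(360.0 / 365.0 * (dayOfYear + 10))));
        double hourAngle = Math.toRadians(15.0 * (solarTimeHours - 12.0));
        double lat = Math.toRadians(latitudeDeg);

        double sinAlt = Math.sin(lat) * Math.sin(decl)
                      + Math.cos(lat) * Math.cos(decl) * Math.cos(hourAngle);
        double alt = Math.asin(sinAlt);

        double cosAz = (Math.sin(decl) - sinAlt * Math.sin(lat)) / (Math.cos(alt) * Math.cos(lat));
        double az = Math.toDegrees(Math.acos(Math.max(-1.0, Math.min(1.0, cosAz))));
        if (hourAngle > 0) az = 360.0 - az;                 // afternoon: Sun is west of the meridian

        return new double[]{ Math.toDegrees(alt), az };
    }

    public static void main(String[] args) {
        double[] sun = compute(41.0, 172, 15.5);            // assumed mid-latitude site, late June, mid-afternoon
        System.out.printf("elevation %.1f deg, azimuth %.1f deg%n", sun[0], sun[1]);
    }
}
```

The resulting direction, combined with the device's gyroscope, compass and accelerometer readings, determines where virtual shadows should fall in the AR view.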
An Integrated Simulation Module for Cyber-Physical Automation Systems †
Ferracuti, Francesco; Freddi, Alessandro; Monteriù, Andrea; Prist, Mariorosario
2016-01-01
The integration of Wireless Sensors Networks (WSNs) into Cyber Physical Systems (CPSs) is an important research problem to solve in order to increase the performances, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called “GILOO” (Graphical Integration of Labview and cOOja). It allows one to develop and to debug control strategies over the WSN both using virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new “Advanced Sky GUI” have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performances of the proposed GILOO software module, several experimental tests have been made, and interesting preliminary results are reported. The GILOO module has been applied to a smart home mock-up where a networked control has been developed for the LED lighting system. PMID:27164109
A remote patient monitoring system using a Java-enabled 3G mobile phone.
Zhang, Pu; Kogure, Yuichi; Matsuoka, Hiroki; Akutagawa, Masatake; Kinouchi, Yohsuke; Zhang, Qinyu
2007-01-01
Telemedicine systems have become an important support for medical staff. With the development of mobile phones, it is possible to make mobile phones part of telemedicine systems. We developed an innovative remote patient monitoring system using a Java-enabled 3G mobile phone. With this system, doctors can monitor the vital biosignals of patients in the ICU/CCU, such as ECG, RESP, SpO2 and EtCO2, using the real-time waveform and data monitoring and trend-list monitoring functions of the Java jiglet application installed on the mobile phone. Furthermore, doctors can check patient information using the patient information checking function. The 3G mobile phone used can run the application at the same time as it is being used to make a voice call. Therefore, the doctor can obtain information both by browsing the screen of the mobile phone and by communicating with the medical staff who are beside the patients and the monitors. The system can be used to evaluate the diagnostic accuracy, efficiency, and safety of telediagnosis.
Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.
Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh
2011-01-01
We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input. © 2011 IEEE Published by the IEEE Computer Society
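Given a car's recorded position and time at two sensor locations, the constant acceleration consistent with an assumed entry speed follows from basic kinematics (Δx = v0·t + ½·a·t²). The sketch below illustrates only that elementary step with made-up sensor values, not the authors' full constrained reconstruction across lanes and neighbouring cars.

```java
/** Elementary kinematics used when reconstructing a car's motion between two sensor readings. */
public class SegmentKinematics {
    /** Constant acceleration that covers distanceMetres in travelSeconds starting at entrySpeed (m/s). */
    static double requiredAcceleration(double distanceMetres, double travelSeconds, double entrySpeed) {
        // From distance = v0*t + 0.5*a*t^2, solved for a.
        return 2.0 * (distanceMetres - entrySpeed * travelSeconds) / (travelSeconds * travelSeconds);
    }

    static double exitSpeed(double entrySpeed, double acceleration, double travelSeconds) {
        return entrySpeed + acceleration * travelSeconds;
    }

    public static void main(String[] args) {
        double distance = 500.0;          // metres between the two sensor locations (assumed)
        double travelTime = 20.0;         // seconds between the two timestamps (assumed)
        double entrySpeed = 22.0;         // m/s at the first sensor (assumed)
        double a = requiredAcceleration(distance, travelTime, entrySpeed);
        System.out.printf("acceleration %.3f m/s^2, exit speed %.2f m/s%n",
                a, exitSpeed(entrySpeed, a, travelTime));   // 0.300 m/s^2, 28.00 m/s
    }
}
```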
Virtual healthcare delivery: defined, modeled, and predictive barriers to implementation identified.
Harrop, V M
2001-01-01
Provider organizations lack: 1. a definition of "virtual" healthcare delivery relative to the products, services, and processes offered by dot.coms, web-compact disk healthcare content providers, telemedicine, and telecommunications companies, and 2. a model for integrating real and virtual healthcare delivery. This paper defines virtual healthcare delivery as asynchronous, outsourced, and anonymous, then proposes a 2x2 Real-Virtual Healthcare Delivery model focused on real and virtual patients and real and virtual provider organizations. Using this model, provider organizations can systematically deconstruct healthcare delivery in the real world and reconstruct appropriate pieces in the virtual world. Observed barriers to virtual healthcare delivery are: resistance to telecommunication integrated delivery networks and outsourcing; confusion over virtual infrastructure requirements for telemedicine and full-service web portals, and the impact of integrated delivery networks and outsourcing on extant cultural norms and revenue generating practices. To remain competitive provider organizations must integrate real and virtual healthcare delivery.
Real-time recording and classification of eye movements in an immersive virtual environment.
Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary
2013-10-10
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
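One of the listed algorithms computes the angular distance between the gaze direction and the direction from the eye to a virtual object. A minimal hedged sketch of that computation in 3D (normalized dot product and arccosine); the vector layout and the example eye, gaze and object positions are assumptions, not the toolkit's code.

```java
/** Angular distance between a gaze direction and the direction from the eye to a virtual object. */
public class GazeAngle {
    static double norm(double[] v) { return Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]); }

    static double angularDistanceDeg(double[] eyePos, double[] gazeDir, double[] objectPos) {
        double[] toObj = { objectPos[0]-eyePos[0], objectPos[1]-eyePos[1], objectPos[2]-eyePos[2] };
        double cos = (gazeDir[0]*toObj[0] + gazeDir[1]*toObj[1] + gazeDir[2]*toObj[2])
                   / (norm(gazeDir) * norm(toObj));
        cos = Math.max(-1.0, Math.min(1.0, cos));        // guard against rounding outside [-1, 1]
        return Math.toDegrees(Math.acos(cos));
    }

    public static void main(String[] args) {
        double[] eye = {0, 1.6, 0};                      // assumed eye height in metres
        double[] gaze = {0, 0, -1};                      // looking straight ahead down -Z
        double[] target = {0.5, 1.6, -5};                // a virtual object slightly to the right
        System.out.printf("gaze-to-object angle: %.2f deg%n", angularDistanceDeg(eye, gaze, target));
    }
}
```

Thresholding this angle over time is the usual basis for deciding whether a fixation lands on the object of interest.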
Real-time WebRTC-based design for a telepresence wheelchair.
Van Kha Ly Ha; Rifai Chai; Nguyen, Hung T
2017-07-01
This paper presents a novel approach to the telepresence wheelchair system which is capable of real-time video communication and remote interaction. The investigation of this emerging technology aims at providing a low-cost and efficient way for assisted-living of people with disabilities. The proposed system has been designed and developed by deploying the JavaScript with Hyper Text Markup Language 5 (HTML5) and Web Real-time Communication (WebRTC) in which the adaptive rate control algorithm for video transmission is invoked. We conducted experiments in real-world environments, and the wheelchair was controlled from a distance using the Internet browser to compare with existing methods. The results show that the adaptively encoded video streaming rate matches the available bandwidth. The video streaming is high-quality with approximately 30 frames per second (fps) and round trip time less than 20 milliseconds (ms). These performance results confirm that the WebRTC approach is a potential method for developing a telepresence wheelchair system.
Minimizing Input-to-Output Latency in Virtual Environment
NASA Technical Reports Server (NTRS)
Adelstein, Bernard D.; Ellis, Stephen R.; Hill, Michael I.
2009-01-01
A method and apparatus were developed to minimize latency (time delay) in virtual environment (VE) and other discrete-time computer-based systems that require real-time display in response to sensor inputs. Latency in such systems is due to the sum of the finite time required for information processing and communication within and between sensors, software, and displays.
Challenges and solutions for realistic room simulation
NASA Astrophysics Data System (ADS)
Begault, Durand R.
2002-05-01
Virtual room acoustic simulation (auralization) techniques have traditionally focused on answering questions related to speech intelligibility or musical quality, typically in large volumetric spaces. More recently, auralization techniques have been found to be important for the externalization of headphone-reproduced virtual acoustic images. Although externalization can be accomplished using a minimal simulation, data indicate that realistic auralizations need to be responsive to head motion cues for accurate localization. Computational demands increase when providing for the simulation of coupled spaces, small rooms lacking meaningful reverberant decays, or reflective surfaces in outdoor environments. Auditory threshold data for both early reflections and late reverberant energy levels indicate that much of the information captured in acoustical measurements is inaudible, minimizing the intensive computational requirements of real-time auralization systems. Results are presented for early reflection thresholds as a function of azimuth angle, arrival time, and sound-source type, and reverberation thresholds as a function of reverberation time and level within 250-Hz-2-kHz octave bands. Good agreement is found between data obtained in virtual room simulations and those obtained in real rooms, allowing a strategy for minimizing computational requirements of real-time auralization systems.
Applications of virtual reality technology in pathology.
Grimes, G J; McClellan, S A; Goldman, J; Vaughn, G L; Conner, D A; Kujawski, E; McDonald, J; Winokur, T; Fleming, W
1997-01-01
TelePath(SM) is a telerobotic system utilizing virtual microscope concepts based on high-quality still digital imaging and aimed at real-time support for surgery by remote diagnosis of frozen sections. Many hospitals and clinics have an application for the remote practice of pathology, particularly in the area of reading frozen sections in support of surgery, commonly called anatomic pathology. The goal is to project the expertise of the pathologist into the remote setting by giving the pathologist access to the microscope slides with an image quality and human interface comparable to what the pathologist would experience at a real rather than a virtual microscope. A working prototype of a virtual microscope has been defined and constructed which has the needed performance in both the image quality and human interface areas for a pathologist to work remotely. This is accomplished through the use of telerobotics and an image quality which gives the virtual microscope the same diagnostic capabilities as a real microscope. The examination of frozen sections is performed in a two-dimensional world. The remote pathologist is in a virtual world with the same capabilities as a "real" microscope, but response times may be slower depending on the specific computing and telecommunication environments. The TelePath system has capabilities far beyond a normal biological microscope, such as the ability to create a low-power image of the entire sample using multiple images digitally matched together; the ability to digitally retrace a viewing trajectory; and the ability to archive images using CD-ROM and other mass storage devices.
An efficient framework for Java data processing systems in HPC environments
NASA Astrophysics Data System (ADS)
Fries, Aidan; Castañeda, Javier; Isasi, Yago; Taboada, Guillermo L.; Portell de Mora, Jordi; Sirvent, Raül
2011-11-01
Java is a commonly used programming language, although its use in High Performance Computing (HPC) remains relatively low. One of the reasons is a lack of libraries offering specific HPC functions to Java applications. In this paper we present a Java-based framework, called DpcbTools, designed to provide a set of functions that fill this gap. It includes a set of efficient data communication functions based on message-passing, thus providing, when a low latency network such as Myrinet is available, higher throughputs and lower latencies than standard solutions used by Java. DpcbTools also includes routines for the launching, monitoring and management of Java applications on several computing nodes by making use of JMX to communicate with remote Java VMs. The Gaia Data Processing and Analysis Consortium (DPAC) is a real case where scientific data from the ESA Gaia astrometric satellite will be entirely processed using Java. In this paper we describe the main elements of DPAC and its usage of the DpcbTools framework. We also assess the usefulness and performance of DpcbTools through its performance evaluation and the analysis of its impact on some DPAC systems deployed in the MareNostrum supercomputer (Barcelona Supercomputing Center).
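DpcbTools itself is not shown in the abstract, so the sketch below only illustrates, with plain java.nio, the kind of low-level primitive such a message-passing layer wraps: length-prefixed framing and a write loop that fully drains the buffer. Class and method names are illustrative.

```java
// Minimal sketch of point-to-point message passing over java.nio channels; it is not
// the DpcbTools API, only the sort of primitive such a framework builds on.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public final class MessageSender {
    public static void send(String host, int port, byte[] payload) throws IOException {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress(host, port))) {
            ByteBuffer buffer = ByteBuffer.allocate(4 + payload.length);
            buffer.putInt(payload.length);   // length-prefixed framing
            buffer.put(payload);
            buffer.flip();
            while (buffer.hasRemaining()) {  // a single write may be partial; loop until drained
                channel.write(buffer);
            }
        }
    }
}
```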
High-Performance Java Codes for Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.
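As a hedged illustration (not code from LAURA or 2D_TAG), the kernel below shows one idiom commonly behind well-performing Java numerics: fields kept in flat primitive arrays and an inner loop free of object allocation, which gives the JIT a chance to approach Fortran/C loop performance.

```java
// Illustrative only: a Jacobi-style 5-point stencil over an nx-by-ny grid stored
// row-major in a 1-D primitive array, with no object allocation in the inner loop.
public final class StencilKernel {
    public static void smooth(double[] src, double[] dst, int nx, int ny) {
        for (int j = 1; j < ny - 1; j++) {
            int row = j * nx;
            for (int i = 1; i < nx - 1; i++) {
                int k = row + i;
                dst[k] = 0.25 * (src[k - 1] + src[k + 1] + src[k - nx] + src[k + nx]);
            }
        }
    }
}
```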
Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang
2013-09-01
Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization and interaction with the virtual reality modeling language model, a digital laryngeal anatomy instruction was constructed using HTML and JavaScript languages. The volume larynx models can thus display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system of the digital laryngeal anatomy can be read remotely, displayed locally, and manipulated interactively.
JADOPPT: java based AutoDock preparing and processing tool.
García-Pérez, Carlos; Peláez, Rafael; Therón, Roberto; Luis López-Pérez, José
2017-02-15
AutoDock is a very popular software package for docking and virtual screening. However, currently it is hard work to visualize more than one result from the virtual screening at a time. To overcome this limitation we have designed JADOPPT, a tool for automatically preparing and processing multiple ligand-protein docked poses obtained from AutoDock. It allows the simultaneous visual assessment and comparison of multiple poses through clustering methods. Moreover, it permits the representation of reference ligands with known binding modes, binding site residues, highly scoring regions for the ligand, and the calculated binding energy of the best ranked results. JADOPPT, supplementary material (Case Studies 1 and 2) and video tutorials are available at http://visualanalytics.land/cgarcia/JADOPPT.html. carlosgarcia@usal.es or pelaez@usal.es. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
The 10 Hottest Technologies in Telecom.
ERIC Educational Resources Information Center
Flanagan, Patrick
1996-01-01
Synthesizes opinions of experts regarding technologies deemed most likely to enter the telecommunications mainstream by 1998, including: (1) the Java programming language; (2) voice over frame relay; (3) virtual local area networks (LANs); (4) cable modems; (5) gigabit LANs; (6) Internet appliances; (7) personal satellite phones; (8) intranets;…
Parallel Computing Using Web Servers and "Servlets".
ERIC Educational Resources Information Center
Lo, Alfred; Bloor, Chris; Choi, Y. K.
2000-01-01
Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
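A hedged sketch of the "multiple servers, single client" model mentioned above: a client splits a job into chunks and posts each chunk to a different web server in parallel. The /compute endpoint and the use of the modern java.net.http client are illustrative stand-ins for the servlet and CGI endpoints the article discusses.

```java
// Illustrative fan-out client: each chunk of work is posted to one of several servers
// concurrently and the partial results are collected. Endpoint names are hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public final class ParallelClient {
    public static List<String> farmOut(List<String> serverUrls, List<String> chunks) {
        HttpClient client = HttpClient.newHttpClient();
        List<CompletableFuture<String>> futures = IntStream.range(0, chunks.size())
            .mapToObj(i -> {
                HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(serverUrls.get(i % serverUrls.size()) + "/compute"))
                    .POST(HttpRequest.BodyPublishers.ofString(chunks.get(i)))
                    .build();
                return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                             .thenApply(HttpResponse::body);   // keep only the body text
            })
            .collect(Collectors.toList());
        return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }
}
```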
Shared virtual environments for aerospace training
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Voss, Mark
1994-01-01
Virtual environments have the potential to significantly enhance the training of NASA astronauts and ground-based personnel for a variety of activities. A critical requirement is the need to share virtual environments, in real or near real time, between remote sites. It has been hypothesized that the training of international astronaut crews could be done more cheaply and effectively by utilizing such shared virtual environments in the early stages of mission preparation. The Software Technology Branch at NASA's Johnson Space Center has developed the capability for multiple users to simultaneously share the same virtual environment. Each user generates the graphics needed to create the virtual environment. All changes of object position and state are communicated to all users so that each virtual environment maintains its 'currency.' Examples of these shared environments will be discussed and plans for the utilization of the Department of Defense's Distributed Interactive Simulation (DIS) protocols for shared virtual environments will be presented. Finally, the impact of this technology on training and education in general will be explored.
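The abstract notes that all changes of object position and state are communicated to every participant. The Java sketch below shows one plausible shape for such an update message and its wire encoding; the field layout is invented for illustration and is not the format used by the Johnson Space Center system or the DIS protocols.

```java
// Hedged sketch of a shared-VE state update: each site broadcasts position/orientation
// changes so every copy of the scene stays current. The encoding is illustrative only.
import java.nio.ByteBuffer;

public final class ObjectStateUpdate {
    public final int objectId;
    public final float x, y, z;            // position
    public final float yaw, pitch, roll;   // orientation

    public ObjectStateUpdate(int objectId, float x, float y, float z,
                             float yaw, float pitch, float roll) {
        this.objectId = objectId;
        this.x = x; this.y = y; this.z = z;
        this.yaw = yaw; this.pitch = pitch; this.roll = roll;
    }

    /** Pack the update into a fixed-size datagram payload. */
    public byte[] toBytes() {
        ByteBuffer b = ByteBuffer.allocate(4 + 6 * 4);
        b.putInt(objectId).putFloat(x).putFloat(y).putFloat(z)
         .putFloat(yaw).putFloat(pitch).putFloat(roll);
        return b.array();
    }

    public static ObjectStateUpdate fromBytes(byte[] data) {
        ByteBuffer b = ByteBuffer.wrap(data);
        return new ObjectStateUpdate(b.getInt(), b.getFloat(), b.getFloat(), b.getFloat(),
                                     b.getFloat(), b.getFloat(), b.getFloat());
    }
}
```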
Programming Models for Concurrency and Real-Time
NASA Astrophysics Data System (ADS)
Vitek, Jan
Modern real-time applications are increasingly large, complex and concurrent systems which must meet stringent performance and predictability requirements. Programming those systems requires fundamental advances in programming languages and runtime systems. This talk presents our work on Flexotasks, a programming model for concurrent, real-time systems inspired by stream-processing and concurrent active objects. Some of the key innovations in Flexotasks are that it supports both real-time garbage collection and region-based memory with an ownership type system for static safety. Communication between tasks is performed by channels with a linear type discipline to avoid copying messages, and by a non-blocking transactional memory facility. We have evaluated our model empirically within two distinct implementations, one based on Purdue's Ovm research virtual machine framework and the other on Websphere, IBM's production real-time virtual machine. We have written a number of small programs, as well as a 30 KLOC avionics collision detector application. We show that Flexotasks are capable of executing periodic threads at 10 KHz with a standard deviation of 1.2 us and have performance competitive with hand-coded C programs.
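Flexotasks' own API (with region-based memory and linear-typed channels) is not reproduced here; the following plain-Java sketch only illustrates the basic pattern of a periodic producer task feeding a consumer through a bounded channel, which is the kind of structure the model makes safe and predictable.

```java
// Not the Flexotask API: a plain-Java sketch of a periodic task handing samples to a
// consumer through a bounded queue. In a hard real-time runtime the periodic body
// would need to be allocation-free or served by a real-time garbage collector.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public final class PeriodicTaskDemo {
    public static void main(String[] args) {
        BlockingQueue<Long> channel = new ArrayBlockingQueue<>(1024);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Periodic producer: emits a timestamp once per millisecond.
        scheduler.scheduleAtFixedRate(
            () -> channel.offer(System.nanoTime()), 0, 1, TimeUnit.MILLISECONDS);

        // Consumer drains the channel without blocking the producer.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    Long t = channel.take();
                    // ... process the sample produced at time t ...
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
    }
}
```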
Virtual faces expressing emotions: an initial concomitant and construct validity study.
Joyal, Christian C; Jacob, Laurence; Cigna, Marie-Hélène; Guay, Jean-Pierre; Renaud, Patrice
2014-01-01
Facial expressions of emotions represent classic stimuli for the study of social cognition. Developing virtual dynamic facial expressions of emotions, however, would open-up possibilities, both for fundamental and clinical research. For instance, virtual faces allow real-time Human-Computer retroactions between physiological measures and the virtual agent. The goal of this study was to initially assess concomitants and construct validity of a newly developed set of virtual faces expressing six fundamental emotions (happiness, surprise, anger, sadness, fear, and disgust). Recognition rates, facial electromyography (zygomatic major and corrugator supercilii muscles), and regional gaze fixation latencies (eyes and mouth regions) were compared in 41 adult volunteers (20 ♂, 21 ♀) during the presentation of video clips depicting real vs. virtual adults expressing emotions. Emotions expressed by each set of stimuli were similarly recognized, both by men and women. Accordingly, both sets of stimuli elicited similar activation of facial muscles and similar ocular fixation times in eye regions from man and woman participants. Further validation studies can be performed with these virtual faces among clinical populations known to present social cognition difficulties. Brain-Computer Interface studies with feedback-feedforward interactions based on facial emotion expressions can also be conducted with these stimuli.
NASA Technical Reports Server (NTRS)
Smith, Jeffrey
2003-01-01
The Bio-Visualization, Imaging and Simulation (BioVIS) Technology Center at NASA's Ames Research Center is dedicated to developing and applying advanced visualization, computation and simulation technologies to support NASA Space Life Sciences research and the objectives of the Fundamental Biology Program. Research ranges from high resolution 3D cell imaging and structure analysis, virtual environment simulation of fine sensory-motor tasks, computational neuroscience and biophysics to biomedical/clinical applications. Computer simulation research focuses on the development of advanced computational tools for astronaut training and education. Virtual Reality (VR) and Virtual Environment (VE) simulation systems have become important training tools in many fields from flight simulation to, more recently, surgical simulation. The type and quality of training provided by these computer-based tools ranges widely, but the value of real-time VE computer simulation as a method of preparing individuals for real-world tasks is well established. Astronauts routinely use VE systems for various training tasks, including Space Shuttle landings, robot arm manipulations and extravehicular activities (space walks). Currently, there are no VE systems to train astronauts for basic and applied research experiments which are an important part of many missions. The Virtual Glovebox (VGX) is a prototype VE system for real-time physically-based simulation of the Life Sciences Glovebox where astronauts will perform many complex tasks supporting research experiments aboard the International Space Station. The VGX consists of a physical display system utilizing dual LCD projectors and circular polarization to produce a desktop-sized 3D virtual workspace. Physically-based modeling tools (Arachi Inc.) provide real-time collision detection, rigid body dynamics, physical properties and force-based controls for objects. The human-computer interface consists of two magnetic tracking devices (Ascention Inc.) attached to instrumented gloves (Immersion Inc.) which co-locate the user's hands with hand/forearm representations in the virtual workspace. Force-feedback is possible in a work volume defined by a Phantom Desktop device (SensAble Inc.). Graphics are written in OpenGL. The system runs on a 2.2 GHz Pentium 4 PC. The prototype VGX provides astronauts and support personnel with a real-time physically-based VE system to simulate basic research tasks both on Earth and in the microgravity of Space. The immersive virtual environment of the VGX also makes it a useful tool for virtual engineering applications including CAD development, procedure design and simulation of human-system interactions in a desktop-sized work volume.
Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes
NASA Astrophysics Data System (ADS)
Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.
2016-12-01
We are using screen-based virtual reality landscapes, created using the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. This means the game does not structure students' interaction with the information; it is through experience that students learn the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork, but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, these virtual landscapes offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom, rather than field, setting, and make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field as basic skills are already embedded. 70% of students report increased confidence with how to map boundaries and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation. This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all our virtual landscapes are freely available online at www.see.leeds.ac.uk/virtual-landscapes/.
Research on inosculation between master of ceremonies or players and virtual scene in virtual studio
NASA Astrophysics Data System (ADS)
Li, Zili; Zhu, Guangxi; Zhu, Yaoting
2003-04-01
A technical principle for the construction of a virtual studio is proposed, in which an orientation tracker and a telemeter are used to augment a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. A formula is derived to compute the foreground and background frame buffer images of the virtual scene, whose boundary is based on the depth information of the target point of the real BETACAM pickup camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the computer-generated image sequences of the virtual scene in spatial position, perspective relationship and image object masking. The experimental results show that the proposed scheme for constructing a virtual studio is feasible, and more practical and effective than the existing technology of building a virtual studio based on colour-keying and image synthesis with the background using non-linear video editing techniques.
NASA Technical Reports Server (NTRS)
Lehmer, R.; Ingram, C.; Jovic, S.; Alderete, J.; Brown, D.; Carpenter, D.; LaForce, S.; Panda, R.; Walker, J.; Chaplin, P.;
2006-01-01
The Virtual Airspace Simulation Technology - Real-Time (VAST-RT) Project, an element of NASA's Virtual Airspace Modeling and Simulation (VAMS) Project, has been developing a distributed simulation capability that supports an extensible and expandable real-time, human-in-the-loop airspace simulation environment. The VAST-RT system architecture is based on the DoD High Level Architecture (HLA) and the VAST-RT HLA Toolbox, a common interface implementation that incorporates a number of novel design features. The scope of the initial VAST-RT integration activity (Capability 1) included the high-fidelity human-in-the-loop simulation facilities located at NASA/Ames Research Center and medium-fidelity pseudo-piloted target generators, such as the Airspace Traffic Generator (ATG) being developed as part of VAST-RT, as well as other real-time tools. This capability has been demonstrated in a gate-to-gate simulation. VAST-RT's Capability 2A has recently been completed, and this paper discusses the improved integration of the real-time assets into VAST-RT, including the development of tools to integrate data collected across the simulation environment into a single data set for the researcher. Current plans for the completion of the VAST-RT distributed simulation environment (Capability 2B) and its use to evaluate future airspace capacity enhancing concepts being developed by VAMS are discussed. Additionally, the simulation environment's application to other airspace and airport research projects is addressed.
Man, mind, and machine: the past and future of virtual reality simulation in neurologic surgery.
Robison, R Aaron; Liu, Charles Y; Apuzzo, Michael L J
2011-11-01
To review virtual reality in neurosurgery, including the history of simulation and virtual reality and some of the current implementations; to examine some of the technical challenges involved; and to propose a potential paradigm for the development of virtual reality in neurosurgery going forward. A search was made on PubMed using key words surgical simulation, virtual reality, haptics, collision detection, and volumetric modeling to assess the current status of virtual reality in neurosurgery. Based on previous results, investigators extrapolated the possible integration of existing efforts and potential future directions. Simulation has a rich history in surgical training, and there are numerous currently existing applications and systems that involve virtual reality. All existing applications are limited to specific task-oriented functions and typically sacrifice visual realism for real-time interactivity or vice versa, owing to numerous technical challenges in rendering a virtual space in real time, including graphic and tissue modeling, collision detection, and direction of the haptic interface. With ongoing technical advancements in computer hardware and graphic and physical rendering, incremental or modular development of a fully immersive, multipurpose virtual reality neurosurgical simulator is feasible. The use of virtual reality in neurosurgery is predicted to change the nature of neurosurgical education, and to play an increased role in surgical rehearsal and the continuing education and credentialing of surgical practitioners. Copyright © 2011 Elsevier Inc. All rights reserved.
nodeGame: Real-time, synchronous, online experiments in the browser.
Balietti, Stefano
2017-10-01
nodeGame is a free, open-source JavaScript/HTML5 framework for conducting synchronous experiments online and in the lab directly in the browser window. It is specifically designed to support behavioral research along three dimensions: (i) larger group sizes, (ii) real-time (but also discrete time) experiments, and (iii) batches of simultaneous experiments. nodeGame has a modular source code, and defines an API (application programming interface) through which experimenters can create new strategic environments and configure the platform. With zero-install, nodeGame can run on a great variety of devices, from desktop computers to laptops, smartphones, and tablets. The current version of the software is 3.0, and extensive documentation is available on the wiki pages at http://nodegame.org.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programmes, a virtual studio that performs the task of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In this model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are focused on depth extraction from captured integral 3D images. The method for calculating depth from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
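As a hedged illustration of the depth-extraction step, the sketch below computes a sum-of-squared-differences (SSD) disparity along one scanline; the window size and search range are arbitrary, and the actual system extends this to colour SSD and a multiple-baseline search across elemental images.

```java
// Illustrative SSD block matching along one scanline, the core operation behind
// disparity-based depth extraction. Not the paper's implementation.
public final class SsdMatcher {
    /** Returns the disparity (pixel shift) that minimises SSD for a window at column x. */
    public static int bestDisparity(double[] leftRow, double[] rightRow,
                                    int x, int halfWindow, int maxDisparity) {
        int best = 0;
        double bestCost = Double.MAX_VALUE;
        for (int d = 0; d <= maxDisparity; d++) {
            double cost = 0;
            for (int w = -halfWindow; w <= halfWindow; w++) {
                int xl = x + w, xr = x + w - d;
                if (xl < 0 || xr < 0 || xl >= leftRow.length || xr >= rightRow.length) {
                    cost = Double.MAX_VALUE;   // window falls outside the image
                    break;
                }
                double diff = leftRow[xl] - rightRow[xr];
                cost += diff * diff;
            }
            if (cost < bestCost) { bestCost = cost; best = d; }
        }
        return best;
    }
}
```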
Model Checking JAVA Programs Using Java Pathfinder
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Pressburger, Thomas
2000-01-01
This paper describes a translator called JAVA PATHFINDER from JAVA to PROMELA, the "programming language" of the SPIN model checker. The purpose is to establish a framework for verification and debugging of JAVA programs based on model checking. This work should be seen as part of a broader attempt to make formal methods applicable "in the loop" of programming within NASA's areas such as space, aviation, and robotics. Our main goal is to create automated formal methods such that programmers themselves can apply these in their daily work (in the loop) without the need for specialists to manually reformulate a program into a different notation in order to analyze the program. This work is a continuation of an effort to formally verify, using SPIN, a multi-threaded operating system programmed in Lisp for the Deep-Space 1 spacecraft, and of previous work in applying existing model checkers and theorem provers to real applications.
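For context, the snippet below is the kind of small multi-threaded Java program such a model-checking framework targets: the unsynchronized counter contains a lost-update race that is rarely triggered by ordinary testing but is exposed by exhaustively exploring thread interleavings. The program is illustrative, not an example from the paper.

```java
// A deliberately buggy program: two threads perform an unsynchronized read-modify-write
// on a shared field, so the final assertion can fail under some interleavings.
public class RacyCounter {
    static int counter = 0;   // shared, unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> { counter = counter + 1; };  // read-modify-write race
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join();  t2.join();
        assert counter == 2 : "lost update: counter = " + counter;  // violated in rare interleavings
    }
}
```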
ERIC Educational Resources Information Center
Heiner, Cecily
2009-01-01
Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This dissertation analyzes 411 questions from an introductory Java programming course by reducing the…
Virtually Exploring A Pillar Of Experimental Physics: The Hertz Experiment
NASA Astrophysics Data System (ADS)
Bonanno, A.; Sapia, P.; Camarca, M.; Oliva, A.
2008-05-01
In the present work we report on the implementation and early assessment of a multimedia learning object, developed using the Java programming language, which also integrates in a creative way some internet freely available educational resources, intended to support the teaching/learning process of the historical Hertz experiment.
ERIC Educational Resources Information Center
Gonzalez-Perez, Maria Alejandra; Velez-Calle, Andres; Cathro, Virginia; Caprar, Dan V.; Taras, Vasyl
2014-01-01
The increasing importance of global virtual teams in business is reflected in the classroom by the increased adoption of activities that facilitate real-time cross-cultural interaction. This article documents the experience of students from two Colombian universities who participated in a collaborative international project using virtual teams as…
Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality
Cushman, Laura A.; Stein, Karen; Duffy, Charles J.
2008-01-01
Background: Older adults get lost, in many cases because of recognized or incipient Alzheimer disease (AD). In either case, getting lost can be a threat to individual and public safety, as well as to personal autonomy and quality of life. Here we compare our previously described real-world navigation test with a virtual reality (VR) version simulating the same navigational environment. Methods: Quantifying real-world navigational performance is difficult and time-consuming. VR testing is a promising alternative, but it has not been compared with closely corresponding real-world testing in aging and AD. We have studied navigation using both real-world and virtual environments in the same subjects: young normal controls (YNCs, n = 35), older normal controls (ONCs, n = 26), patients with mild cognitive impairment (MCI, n = 12), and patients with early AD (EAD, n = 14). Results: We found close correlations between real-world and virtual navigational deficits that increased across groups from YNC to ONC, to MCI, and to EAD. Analyses of subtest performance showed similar profiles of impairment in real-world and virtual testing in all four subject groups. The ONC, MCI, and EAD subjects all showed greatest difficulty in self-orientation and scene localization tests. MCI and EAD patients also showed impaired verbal recall about both test environments. Conclusions: Virtual environment testing provides a valid assessment of navigational skills. Aging and Alzheimer disease (AD) share the same patterns of difficulty in associating visual scenes and locations, which is complicated in AD by the accompanying loss of verbally mediated navigational capacities. We conclude that virtual navigation testing reveals deficits in aging and AD that are associated with potentially grave risks to our patients and the community. GLOSSARY AD = Alzheimer disease; EAD = early Alzheimer disease; MCI = mild cognitive impairment; MMSE = Mini-Mental State Examination; ONC = older normal control; std. wt. = standardized weight; THSD = Tukey honestly significant difference; VR = virtual reality; YNC = young normal control. PMID:18794491
Khurana, Meetika; Walia, Shefali; Noohu, Majumi M
2017-01-01
Objective: To determine whether there is any difference between virtual reality game–based balance training and real-world task-specific balance training in improving sitting balance and functional performance in individuals with paraplegia. Methods: The study was a pre test–post test experimental design. There were 30 participants (28 males, 2 females) with traumatic spinal cord injury randomly assigned to 2 groups (group A and B). The levels of spinal injury of the participants were between T6 and T12. The virtual reality game–based balance training and real-world task-specific balance training were used as interventions in groups A and B, respectively. The total duration of the intervention was 4 weeks, with a frequency of 5 times a week; each training session lasted 45 minutes. The outcome measures were modified Functional Reach Test (mFRT), t-shirt test, and the self-care component of the Spinal Cord Independence Measure–III (SCIM-III). Results: There was a significant difference for time (p = .001) and Time × Group effect (p = .001) in mFRT scores, group effect (p = .05) in t-shirt test scores, and time effect (p = .001) in the self-care component of SCIM-III. Conclusions: Virtual reality game–based training is better in improving balance and functional performance in individuals with paraplegia than real-world task-specific balance training. PMID:29339902
A COTS-Based Replacement Strategy for Aging Avionics Computers
2001-12-01
[Extraction residue from report figures; recoverable content: the strategy replaces aging avionics computers with a COTS microprocessor running a real-time operating system, hosting new native code objects and threads alongside legacy functions in a virtual component environment through context-switch thunks and add-in replacements; labels also reference a Communication Control Unit.]
Hierarchical emotion calculation model for virtual human modelling - biomed 2010.
Zhao, Yue; Wright, David
2010-01-01
This paper introduces a new emotion generation method for virtual human modelling. The method includes a novel hierarchical emotion structure, a group of emotion calculation equations and a simple heuristics-based decision making mechanism, which enables virtual humans to perform emotionally in real time according to their internal and external factors. The emotion calculation equations used in this research were derived from psychological emotion measurements. Virtual humans can utilise the information in virtual memory and the emotion calculation equations to generate their own numerical emotion states within the hierarchical emotion structure. Those emotion states are important internal references for virtual humans to adopt appropriate behaviours and also key cues for their decision making. A simple heuristics theory is introduced and integrated into the decision making process in order to make the virtual human's decision making more like a real human's. A data interface which connects the emotion calculation and the decision making structure together has also been designed and simulated to test the method in the Virtools environment.
Using a virtual world for robot planning
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Monaco, John V.; Lin, Yixia; Funk, Christopher; Lyons, Damian
2012-06-01
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
Interreality: The Experiential Use of Technology in the Treatment of Obesity
G, Riva; B.K, Wiederhold; F, Mantovani; A, Gaggioli
2011-01-01
For many of us, obesity is the outcome of an energy imbalance: more energy input than expenditure. However, our waistlines are growing in spite of the huge amount of diets and fat-free/low-calorie products available to cope with this issue. Even when we are able to reduce our waistlines, maintaining the new size is very difficult: in the year after the end of a nutritional and/or behavioral treatment obese persons typically regain from 30% to 50% of their initial losses. A possible strategy for improving the treatment of obesity is the use of advanced information technologies. In the past, different technologies (internet, virtual reality, mobile phones) have shown promising effects in producing a healthy lifestyle in obese patients. Here we suggest that a new technological paradigm - Interreality – that integrates assessment and treatment within a hybrid experiential environment - including both virtual and real worlds - has the potential to improve the clinical outcome of obesity treatments. The potential advantages offered by this approach are: (a) an extended sense of presence: Interreality uses advanced simulations (virtual experiences) to transform health guidelines and provisions in experiences; (b) an extended sense of community: Interreality uses virtual communities to provide users with targeted – but also anonymous, if required - social support in both real and virtual worlds; (c) real-time feedback between physical and virtual worlds: Interreality uses bio and activity sensors and devices (smartphones) both to track in real time the behavior/health status of the user, and to provide targeted suggestions and guidelines. This paper describes in detail the different technologies involved in the Interreality vision. In order to illustrate the concept of Interreality in practice, a clinical scenario is also presented and discussed: Daniela, a 35-year-old fast-food worker with obesity problems. PMID:21559236
Augmenting the thermal flux experiment: A mixed reality approach with the HoloLens
NASA Astrophysics Data System (ADS)
Strzys, M. P.; Kapp, S.; Thees, M.; Kuhn, J.; Lukowicz, P.; Knierim, P.; Schmidt, A.
2017-09-01
In the field of Virtual Reality (VR) and Augmented Reality (AR), technologies have made huge progress during the last years and also reached the field of education. The virtuality continuum, ranging from pure virtuality on one side to the real world on the other, has been successfully covered by the use of immersive technologies like head-mounted displays, which allow one to embed virtual objects into the real surroundings, leading to a Mixed Reality (MR) experience. In such an environment, digital and real objects do not only coexist, but moreover are also able to interact with each other in real time. These concepts can be used to merge human perception of reality with digitally visualized sensor data, thereby making the invisible visible. As a first example, in this paper we introduce alongside the basic idea of this column an MR experiment in thermodynamics for a laboratory course for freshman students in physics or other science and engineering subjects that uses physical data from mobile devices for analyzing and displaying physical phenomena to students.
Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback.
Alaraj, Ali; Luciano, Cristian J; Bailey, Daniel P; Elsenousi, Abdussalam; Roitberg, Ben Z; Bernardo, Antonio; Banerjee, P Pat; Charbel, Fady T
2015-03-01
With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. To develop and evaluate the usefulness of a new haptic-based virtual reality simulator in the training of neurosurgical residents. A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the ImmersiveTouch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomographic angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-dimensional immersive virtual reality environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from 3 residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Residents thought that the simulation would be useful in preparing for real-life surgery. About two-thirds of the residents thought that the 3-dimensional immersive anatomic details provided a close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They thought the simulation was useful for preoperative surgical rehearsal and neurosurgical training. A third of the residents thought that the technology in its current form provided realistic haptic feedback for aneurysm surgery. Neurosurgical residents thought that the novel immersive VR simulator is helpful in their training, especially because they do not get a chance to perform aneurysm clippings until late in their residency programs.
Research on vehicles and cargos matching model based on virtual logistics platform
NASA Astrophysics Data System (ADS)
Zhuang, Yufeng; Lu, Jiang; Su, Zhiyuan
2018-04-01
The highway less-than-truckload (LTL) vehicle and cargo matching problem is a joint optimization problem combining vehicle routing and loading, and it is an active topic in operations research. Based on the requirements of a virtual logistics platform, this article establishes a matching model between idle vehicles and transportation orders for highway LTL transportation and designs a corresponding genetic algorithm. The algorithm is then implemented in Java. The simulation results show that the solution is satisfactory.
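The article's matching model and GA parameters are not reproduced in the abstract, so the Java sketch below is only a generic illustration of the approach: each gene assigns an order to a vehicle, fitness rewards served weight and penalizes capacity violations, and tournament selection, one-point crossover, and random mutation evolve the population.

```java
// Hedged, generic GA sketch for order-to-vehicle assignment; encoding, fitness, and
// all parameters are illustrative, not the article's matching model.
import java.util.Arrays;
import java.util.Random;

public final class VehicleOrderGa {
    static final Random RNG = new Random(42);

    /** Fitness: total assigned weight minus a penalty for exceeding vehicle capacity. */
    static double fitness(int[] gene, double[] orderWeight, double[] vehicleCapacity) {
        double[] load = new double[vehicleCapacity.length];
        double served = 0;
        for (int i = 0; i < gene.length; i++) { load[gene[i]] += orderWeight[i]; served += orderWeight[i]; }
        double penalty = 0;
        for (int v = 0; v < load.length; v++) penalty += Math.max(0, load[v] - vehicleCapacity[v]);
        return served - 10.0 * penalty;
    }

    static int[] evolve(double[] orderWeight, double[] vehicleCapacity, int popSize, int generations) {
        int orders = orderWeight.length, vehicles = vehicleCapacity.length;
        int[][] pop = new int[popSize][orders];
        for (int[] g : pop) for (int i = 0; i < orders; i++) g[i] = RNG.nextInt(vehicles);

        for (int gen = 0; gen < generations; gen++) {
            int[][] next = new int[popSize][];
            for (int k = 0; k < popSize; k++) {
                int[] a = pop[RNG.nextInt(popSize)], b = pop[RNG.nextInt(popSize)];
                int[] parent = fitness(a, orderWeight, vehicleCapacity)
                             >= fitness(b, orderWeight, vehicleCapacity) ? a : b;       // tournament selection
                int[] child = Arrays.copyOf(parent, orders);
                int cut = RNG.nextInt(orders);                                          // one-point crossover
                int[] other = pop[RNG.nextInt(popSize)];
                for (int i = cut; i < orders; i++) child[i] = other[i];
                if (RNG.nextDouble() < 0.1) child[RNG.nextInt(orders)] = RNG.nextInt(vehicles); // mutation
                next[k] = child;
            }
            pop = next;
        }
        int[] best = pop[0];
        for (int[] g : pop)
            if (fitness(g, orderWeight, vehicleCapacity) > fitness(best, orderWeight, vehicleCapacity)) best = g;
        return best;
    }
}
```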
A novel scene management technology for complex virtual battlefield environment
NASA Astrophysics Data System (ADS)
Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan
2018-04-01
Efficient scene management of a virtual environment is an important research topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and spatial data structure methods. Using the idea of separating management from rendering, a loose object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-oriented quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
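A hedged Java sketch of the spatial-indexing half of this idea: entities inserted into a quad-tree over the ground plane can be queried against the camera's view region each frame to decide what to traverse and render. The class is a generic quad-tree, not the paper's data layout.

```java
// Generic quad-tree over the X-Z plane: insert entity positions, then query an
// axis-aligned view region to obtain candidate entity ids for rendering.
import java.util.ArrayList;
import java.util.List;

final class QuadTree {
    private static final int MAX_ITEMS = 8;
    private final double cx, cz, halfSize;                   // square node bounds
    private final List<double[]> items = new ArrayList<>();  // each item: {x, z, id}
    private QuadTree[] children;                              // null while this node is a leaf

    QuadTree(double cx, double cz, double halfSize) { this.cx = cx; this.cz = cz; this.halfSize = halfSize; }

    void insert(double x, double z, double id) {
        if (children != null) { child(x, z).insert(x, z, id); return; }
        items.add(new double[] {x, z, id});
        if (items.size() > MAX_ITEMS && halfSize > 1.0) split();
    }

    /** Collect ids of stored items falling inside the axis-aligned query box. */
    void query(double minX, double maxX, double minZ, double maxZ, List<Double> out) {
        if (cx + halfSize < minX || cx - halfSize > maxX
            || cz + halfSize < minZ || cz - halfSize > maxZ) return;      // node misses the box
        for (double[] it : items)
            if (it[0] >= minX && it[0] <= maxX && it[1] >= minZ && it[1] <= maxZ) out.add(it[2]);
        if (children != null) for (QuadTree c : children) c.query(minX, maxX, minZ, maxZ, out);
    }

    private void split() {
        double h = halfSize / 2;
        children = new QuadTree[] {
            new QuadTree(cx - h, cz - h, h), new QuadTree(cx + h, cz - h, h),
            new QuadTree(cx - h, cz + h, h), new QuadTree(cx + h, cz + h, h)};
        for (double[] it : items) child(it[0], it[1]).insert(it[0], it[1], it[2]);
        items.clear();
    }

    private QuadTree child(double x, double z) {
        return children[(x >= cx ? 1 : 0) + (z >= cz ? 2 : 0)];
    }
}
```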
Kibria, Muhammad Golam; Ali, Sajjad; Jarwar, Muhammad Aslam; Kumar, Sunil; Chong, Ilyoung
2017-09-22
Due to the very large number of connected virtual objects in the surrounding environment, intelligent service features in the Internet of Things require the reuse of existing virtual objects and composite virtual objects. If a new virtual object were created for each new service request, the number of virtual objects would increase exponentially. The Web of Objects applies the principle of service modularity in terms of virtual objects and composite virtual objects. Service modularity is a key concept in the Web Objects-Enabled Internet of Things (IoT) environment which allows for the reuse of existing virtual objects and composite virtual objects in heterogeneous ontologies. In the case of similar service requests occurring at the same or different locations, the already-instantiated virtual objects and their composites that exist in the same or different ontologies can be reused. In this case, similar types of virtual objects and composite virtual objects are searched and matched. Their reuse avoids duplication under similar circumstances, and reduces the time it takes to search for and instantiate them from their repositories, where similar functionalities are provided by similar types of virtual objects and their composites. Controlling and maintaining a virtual object means controlling and maintaining a real-world object in the real world. Even though the functional costs of virtual objects are just a fraction of those for deploying and maintaining real-world objects, this article focuses on reusing virtual objects and composite virtual objects, and discusses similarity matching of virtual objects and composite virtual objects. This article proposes a logistic model that supports service modularity for the promotion of reusability in the Web Objects-enabled IoT environment. The necessary functional components and a flowchart of an algorithm for reusing composite virtual objects are discussed. Also, to realize the service modularity, a use case scenario is studied and implemented. PMID:28937590
Classification and overview of research in real-time imaging
NASA Astrophysics Data System (ADS)
Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.
1996-10-01
Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.
Smart-Grid Backbone Network Real-Time Delay Reduction via Integer Programming.
Pagadrai, Sasikanth; Yilmaz, Muhittin; Valluri, Pratyush
2016-08-01
This research investigates an optimal delay-based virtual topology design using integer linear programming (ILP), which is applied to the current backbone networks such as smart-grid real-time communication systems. A network traffic matrix is applied and the corresponding virtual topology problem is solved using the ILP formulations that include a network delay-dependent objective function and lightpath routing, wavelength assignment, wavelength continuity, flow routing, and traffic loss constraints. The proposed optimization approach provides an efficient deterministic integration of intelligent sensing and decision making, and network learning features for superior smart grid operations by adaptively responding the time-varying network traffic data as well as operational constraints to maintain optimal virtual topologies. A representative optical backbone network has been utilized to demonstrate the proposed optimization framework whose simulation results indicate that superior smart-grid network performance can be achieved using commercial networks and integer programming.
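The paper's exact formulation is not given in the abstract; the generic delay-minimising virtual-topology ILP below (in LaTeX) is included only to illustrate the shape of such models, with f^{sd}_{ij} the fraction of demand λ^{sd} routed over lightpath (i,j), D_{ij} its delay, b_{ij} a binary lightpath-existence variable, and C the lightpath capacity. Wavelength-assignment, wavelength-continuity, and traffic-loss constraints would be layered on top of this core.

```latex
% Generic delay-minimising virtual-topology formulation (illustrative only).
\begin{align}
\min \quad & \sum_{s,d} \sum_{(i,j)} \lambda^{sd}\, f^{sd}_{ij}\, D_{ij} \\
\text{s.t.} \quad
& \sum_{j} f^{sd}_{ij} - \sum_{j} f^{sd}_{ji} =
  \begin{cases} 1 & i = s \\ -1 & i = d \\ 0 & \text{otherwise} \end{cases}
  && \forall\, s, d, i \quad \text{(flow conservation)} \\
& \sum_{s,d} \lambda^{sd} f^{sd}_{ij} \le C\, b_{ij} && \forall\, (i,j) \quad \text{(lightpath capacity)} \\
& f^{sd}_{ij} \in [0,1], \qquad b_{ij} \in \{0,1\}
\end{align}
```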
Borehole radar interferometry revisited
Liu, Lanbo; Ma, Chunguang; Lane, John W.; Joesten, Peter K.
2014-01-01
Single-hole, multi-offset borehole-radar reflection (SHMOR) is an effective technique for fracture detection. However, commercial radar system limitations hinder the acquisition of multi-offset reflection data in a single borehole. Transforming cross-hole transmission mode radar data to virtual single-hole, multi-offset reflection data using a wave interferometric virtual source (WIVS) approach has been proposed but not fully demonstrated. In this study, we compare WIVS-derived virtual single-hole, multi-offset reflection data to real SHMOR radar reflection profiles using cross-hole and single-hole radar data acquired in two boreholes located at the University of Connecticut (Storrs, CT USA). The field data results are similar to full-waveform numerical simulations developed for a two-borehole model. The reflection from the adjacent borehole is clearly imaged by both the real and WIVS-derived virtual reflection profiles. Reflector travel-time changes induced by deviation of the two boreholes from the vertical can also be observed on the real and virtual reflection profiles. The results of this study demonstrate the potential of the WIVS approach to improve bedrock fracture imaging for hydrogeological and petroleum reservoir development applications.
Augmented Virtual Reality Laboratory
NASA Technical Reports Server (NTRS)
Tully-Hanson, Benjamin
2015-01-01
Real-time motion tracking hardware has for the most part been cost prohibitive for research to regularly take place until recently. With the release of the Microsoft Kinect in November 2010, researchers now have access to a device that for a few hundred dollars is capable of providing red-green-blue (RGB), depth, and skeleton data. It is also capable of tracking multiple people in real time. For its original intended purpose, i.e. gaming with the Xbox 360 and eventually the Xbox One, it performs quite well. However, researchers soon found that although the sensor is versatile, it has limitations in real-world applications. I was brought aboard this summer by William Little in the Augmented Virtual Reality (AVR) Lab at Kennedy Space Center to find solutions to these limitations.
Add Java extensions to your wiki: Java applets can bring dynamic functionality to your wiki pages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarberry, Randall E.
Virtually everyone familiar with today's world wide web has encountered the free online encyclopedia Wikipedia many times. What you may not know is that Wikipedia is driven by an excellent open-source product called MediaWiki which is available to anyone for free. This has led to a proliferation of wiki sites devoted to just about any topic one can imagine. Users of a wiki can add content -- all that is required of them is that they type their additions into their web browsers using the simple markup language called wikitext. Even better, the developers of wikitext made it extensible. With a little server-side development of your own, you can add your own custom syntax. Users aware of your extensions can then utilize them on their wiki pages with a few simple keystrokes. These extensions can be custom decorations, formatting, web applications, and even instances of the venerable old Java applet. One example of a Java applet extension is the Jmol extension (REF), used to embed a 3-D molecular viewer. This article will walk you through the deployment of a fairly elaborate applet via a MediaWiki extension. By no means exhaustive -- an entire book would be required for that -- it will demonstrate how to give the applet resize handles using a little Javascript and CSS coding and some popular Javascript libraries. It even describes how a user may customize the extension somewhat using a wiki template. Finally, it explains a rudimentary persistence mechanism which allows applets to save data directly to the wiki pages on which they reside.
Can Virtual Science Foster Real Skills? A Study of Inquiry Skills in a Virtual World
ERIC Educational Resources Information Center
Dodds, Heather E.
2013-01-01
Online education has grown into a part of the educational market answering the demand for learning at the learner's choice of time and place. Inquiry skills such as observing, questioning, collecting data, and devising fair experiments are an essential element of 21st-century online science coursework. Virtual immersive worlds such as Second Life…
Emerging Conceptual Understanding of Complex Astronomical Phenomena by Using a Virtual Solar System
ERIC Educational Resources Information Center
Gazit, Elhanan; Yair, Yoav; Chen, David
2005-01-01
This study describes high school students' conceptual development of the basic astronomical phenomena during real-time interactions with a Virtual Solar System (VSS). The VSS is a non-immersive virtual environment which has a dynamic frame of reference that can be altered by the user. Ten 10th grade students were given tasks containing a set of…
Virtual Reality as Innovative Approach to the Interior Designing
NASA Astrophysics Data System (ADS)
Kaleja, Pavol; Kozlovská, Mária
2017-06-01
We can observe significant potential of information and communication technologies (ICT) in the interior designing field through the development of software and hardware virtual reality tools. Using ICT tools offers a realistic perception of the proposal from its initial idea (the study). A group of real-time visualization tools, supported by hardware such as the Oculus Rift and HTC Vive, provides free walkthrough and movement in the virtual interior with the possibility of virtual designing. By improving ICT software tools for designing in virtual reality we can achieve a still more realistic virtual environment. The contribution presents a proposal for an innovative approach to interior designing in virtual reality, using the latest software and hardware ICT virtual reality technologies.
Direct manipulation of virtual objects
NASA Astrophysics Data System (ADS)
Nguyen, Long K.
Interacting with a Virtual Environment (VE) generally requires the user to correctly perceive the relative position and orientation of virtual objects. For applications requiring interaction in personal space, the user may also need to accurately judge the position of the virtual object relative to that of a real object, for example, a virtual button and the user's real hand. This is difficult since VEs generally only provide a subset of the cues experienced in the real world. Complicating matters further, VEs presented by currently available visual displays may be inaccurate or distorted due to technological limitations. Fundamental physiological and psychological aspects of vision as they pertain to the task of object manipulation were thoroughly reviewed. Other sensory modalities -- proprioception, haptics, and audition -- and their cross-interactions with each other and with vision are briefly discussed. Visual display technologies, the primary component of any VE, were canvassed and compared. Current applications and research were gathered and categorized by different VE types and object interaction techniques. While object interaction research abounds in the literature, pockets of research gaps remain. Direct, dexterous, manual interaction with virtual objects in Mixed Reality (MR), where the real, seen hand accurately and effectively interacts with virtual objects, has not yet been fully quantified. An experimental test bed was designed to provide the highest accuracy attainable for salient visual cues in personal space. Optical alignment and user calibration were carefully performed. The test bed accommodated the full continuum of VE types and sensory modalities for comprehensive comparison studies. Experimental designs included two sets, each measuring depth perception and object interaction. The first set addressed the extreme end points of the Reality-Virtuality (R-V) continuum -- Immersive Virtual Environment (IVE) and Reality Environment (RE). This validated, linked, and extended several previous research findings, using one common test bed and participant pool. The results provided a proven method and solid reference points for further research. The second set of experiments leveraged the first to explore the full R-V spectrum and included additional, relevant sensory modalities. It consisted of two full-factorial experiments providing for rich data and key insights into the effect of each type of environment and each modality on accuracy and timeliness of virtual object interaction. The empirical results clearly showed that mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed. Likewise, mean error for the simple task of pushing a button was less than four millimeters whether the button was real or virtual. Mean task completion time was less than one second. Key to the high accuracy and quick task performance time observed was the correct presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance results already near optimal level with accurate visual cues presented, adding proprioception, audio, and haptic cues did not significantly improve performance. Recommendations for future research include enhancement of the visual display and further experiments with more complex tasks and additional control variables.
Ntasis, Efthymios; Maniatis, Theofanis A; Nikita, Konstantina S
2003-01-01
A secure framework is described for real-time tele-collaboration on Virtual Simulation procedure of Radiation Treatment Planning. An integrated approach is followed clustering the security issues faced by the system into organizational issues, security issues over the LAN and security issues over the LAN-to-LAN connection. The design and the implementation of the security services are performed according to the identified security requirements, along with the need for real time communication between the collaborating health care professionals. A detailed description of the implementation is given, presenting a solution, which can directly be tailored to other tele-collaboration services in the field of health care. The pilot study of the proposed security components proves the feasibility of the secure environment, and the consistency with the high performance demands of the application.
Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-09-01
This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can have a feedback function for a surgical robot. Due to the difficulty in utilizing real human organs in the experiment, the cyberspace that features the virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, which is a fast volumetric model and is suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object of the cyberspace and the haptic master of real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogeneous volumetric cubic object to demonstrate that the proposed model can be utilized in a real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that the tracking control performance for torque trajectories from a virtual slave can be successfully achieved.
Design of virtual three-dimensional instruments for sound control
NASA Astrophysics Data System (ADS)
Mulder, Axel Gezienus Elith
An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.
Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults.
Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee
2015-12-01
Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes.
Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions
Maruthapillai, Vasanthan; Murugappan, Murugappan
2016-01-01
In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject’s face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject’s face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network. PMID:26859884
Near Real-Time Processing of Proteomics Data Using Hadoop.
Hillman, Chris; Ahmad, Yasmeen; Whitehorn, Mark; Cobley, Andy
2014-03-01
This article presents a near real-time processing solution using MapReduce and Hadoop. The solution is aimed at some of the data management and processing challenges facing the life sciences community. Research into genes and their product proteins generates huge volumes of data that must be extensively preprocessed before any biological insight can be gained. In order to carry out this processing in a timely manner, we have investigated the use of techniques from the big data field. These are applied specifically to process data resulting from mass spectrometers in the course of proteomic experiments. Here we present methods of handling the raw data in Hadoop, and then we investigate a process for preprocessing the data using Java code and the MapReduce framework to identify 2D and 3D peaks.
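As a rough illustration of the kind of MapReduce pass described above, the Java sketch below groups mass-spectrometry readings by scan and keeps only high-intensity candidates. The record layout, class names, and intensity threshold are assumptions for illustration only, not the authors' actual pipeline.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Illustrative only: assumes one text record per (scanId, m/z, intensity) reading.
    public class PeakPickingSketch {

      public static class ReadingMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
          // Assumed record layout: scanId \t mz \t intensity
          String[] f = line.toString().split("\t");
          if (f.length == 3) {
            ctx.write(new Text(f[0]), new Text(f[1] + "\t" + f[2])); // group readings by scan
          }
        }
      }

      public static class PeakReducer extends Reducer<Text, Text, Text, Text> {
        private static final double MIN_INTENSITY = 1000.0; // assumed noise cut-off
        @Override
        protected void reduce(Text scanId, Iterable<Text> readings, Context ctx)
            throws IOException, InterruptedException {
          // Keep the most intense reading per scan as a crude stand-in for a detected peak.
          double bestMz = 0, bestIntensity = -1;
          for (Text r : readings) {
            String[] f = r.toString().split("\t");
            double mz = Double.parseDouble(f[0]);
            double in = Double.parseDouble(f[1]);
            if (in > bestIntensity) { bestIntensity = in; bestMz = mz; }
          }
          if (bestIntensity >= MIN_INTENSITY) {
            ctx.write(scanId, new Text(bestMz + "\t" + bestIntensity));
          }
        }
      }
    }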
PRAIS: Distributed, real-time knowledge-based systems made easy
NASA Technical Reports Server (NTRS)
Goldstein, David G.
1990-01-01
This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.
Agentless Cloud-Wide Monitoring of Virtual Disk State
2015-10-01
packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the Software...Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many
The virtues of virtual reality in exposure therapy.
Gega, Lina
2017-04-01
Virtual reality can be more effective and less burdensome than real-life exposure. Optimal virtual reality delivery should incorporate in situ direct dialogues with a therapist, discourage safety behaviours, allow for a mismatch between virtual and real exposure tasks, and encourage self-directed real-life practice between and beyond virtual reality sessions. © The Royal College of Psychiatrists 2017.
Our Experiment in Online, Real-Time Reference.
ERIC Educational Resources Information Center
Broughton, Kelly
2001-01-01
Describes experiences in providing real-time online reference services to users with remote Web access at the Bowling Green State University library. Discusses the decision making process first used to select HumanClick software to communicate via chat; and the selection of a fee-based customer service product, Virtual Reference Desk. (LRW)
Extending the Virtual Solar Observatory (VSO) to Incorporate Data Analysis Capabilities (III)
NASA Astrophysics Data System (ADS)
Csillaghy, A.; Etesi, L.; Dennis, B.; Zarro, D.; Schwartz, R.; Tolbert, K.
2008-12-01
We will present a progress report on our activities to extend the data analysis capabilities of the VSO. Our efforts to date have focused on three areas: 1. Extending the data retrieval capabilities by developing a centralized data processing server. The server is built with Java, IDL (Interactive Data Language), and the SSW (Solar SoftWare) package with all SSW-related instrument libraries and required calibration data. When a user requests VSO data that requires preprocessing, the data are transparently sent to the server, processed, and returned to the user's IDL session for viewing and analysis. It is possible to have any Java or IDL client connect to the server. An IDL prototype for preparing and calibrating SOHO/EIT data will be demonstrated. 2. Improving the solar data search in SHOW SYNOP, a graphical user tool connected to VSO in IDL. We introduce the Java-IDL interface that allows a flexible, dynamic, and extendable way of searching the VSO, where all communication with the VSO is managed dynamically by standard Java tools. 3. Improving image overlay capability to support coregistration of solar disk observations obtained from different orbital view angles, position angles, and distances - such as from the twin STEREO spacecraft.
Access Control of Web and Java Based Applications
NASA Technical Reports Server (NTRS)
Tso, Kam S.; Pajevski, Michael J.; Johnson, Bryan
2011-01-01
Cyber security has gained national and international attention as a result of near continuous headlines from financial institutions, retail stores, government offices and universities reporting compromised systems and stolen data. Concerns continue to rise as threats of service interruption, and spreading of viruses become ever more prevalent and serious. Controlling access to application layer resources is a critical component in a layered security solution that includes encryption, firewalls, virtual private networks, antivirus, and intrusion detection. In this paper we discuss the development of an application-level access control solution, based on an open-source access manager augmented with custom software components, to provide protection to both Web-based and Java-based client and server applications.
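The paper's solution is built on an open-source access manager augmented with custom components; as a generic, hypothetical illustration of application-layer access control in Java (not the authors' actual software), a servlet filter could gate protected resources like this:

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative servlet filter: rejects requests that lack an authorized role.
    // The protected path prefix and role name are assumptions for this sketch.
    public class AccessControlFilter implements Filter {
      @Override public void init(FilterConfig cfg) { }
      @Override public void destroy() { }

      @Override
      public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
          throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        HttpServletResponse out = (HttpServletResponse) res;
        boolean protectedResource = http.getRequestURI().startsWith("/secure/");
        if (protectedResource && !http.isUserInRole("mission-ops")) { // hypothetical role
          out.sendError(HttpServletResponse.SC_FORBIDDEN, "Access denied");
          return;
        }
        chain.doFilter(req, res); // public or authorized requests pass through
      }
    }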
Virtual geotechnical laboratory experiments using a simulator
NASA Astrophysics Data System (ADS)
Penumadu, Dayakar; Zhao, Rongda; Frost, David
2000-04-01
The details of a test simulator that provides a realistic environment for performing virtual laboratory experiments in soil mechanics are presented. A computer program, Geo-Sim, that can be used to perform virtual experiments and allows for real-time observation of material response is presented. The results of experiments, for a given set of input parameters, are obtained with the test simulator using well-trained artificial neural-network-based soil models for different soil types and stress paths. Multimedia capabilities are integrated in Geo-Sim, using software that links and controls a laser disc player with real-time parallel processing ability. During the simulation of a virtual experiment, relevant portions of the video image of a previously recorded test on an actual soil specimen are displayed along with the graphical presentation of the response from the feedforward ANN model predictions. The pilot simulator developed to date includes all aspects related to performing a triaxial test on cohesionless soil under undrained and drained conditions. The benefits of the test simulator are also presented.
Wavelets and Elman Neural Networks for monitoring environmental variables
NASA Astrophysics Data System (ADS)
Ciarlini, Patrizia; Maniscalco, Umberto
2008-11-01
An application in cultural heritage is introduced. Wavelet decomposition and neural networks acting as virtual sensors are jointly used to simulate physical and chemical measurements at specific locations of a monument. Virtual sensors, suitably trained and tested, can substitute for real sensors in monitoring the quality of the monument surface, whereas the real ones would have to be installed for a long time and at high cost. Applying the wavelet decomposition to the environmental data series makes it possible to treat the underlying low-frequency temporal structure separately. Consequently, suitable Elman neural networks can be trained separately for the high- and low-frequency components, thus improving network convergence in learning time and measurement accuracy in working time.
A 3D character animation engine for multimodal interaction on mobile devices
NASA Astrophysics Data System (ADS)
Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo
2005-03-01
Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing a smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).
NASA Astrophysics Data System (ADS)
Wong, John-Michael; Stojadinovic, Bozidar
2005-05-01
A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.
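The abstract does not give the framework's class definitions; the following is a minimal, hypothetical Java sketch of what a remote data-access interface for such monitoring data might look like, using plain RMI for transport. The actual framework uses Enterprise JavaBeans and NEESgrid repository services, and all names and fields below are assumptions.

    import java.io.Serializable;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.util.List;

    // Hypothetical remote interface for pulling accelerometer samples by channel and time window.
    public interface MonitoringDataService extends Remote {
      List<SensorSample> query(String channelId, long fromMillis, long toMillis)
          throws RemoteException;
    }

    // Hypothetical value object; units and channel descriptions would come from the metadata component.
    class SensorSample implements Serializable {
      public final String channelId;
      public final long timestampMillis;
      public final double acceleration;
      SensorSample(String channelId, long timestampMillis, double acceleration) {
        this.channelId = channelId;
        this.timestampMillis = timestampMillis;
        this.acceleration = acceleration;
      }
    }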
After an Earthquake: Accessing Near Real-Time Data in the Classroom
NASA Astrophysics Data System (ADS)
Bravo, T. K.; Coleman, B.; Hubenthal, M.; Owens, T. J.; Taber, J.; Welti, R.; Weertman, B. R.
2010-12-01
One of the best ways to engage students in scientific content is to give them opportunities to work with real scientific instruments and data and enable them to experience the discovery of scientific information. In addition, newsworthy earthquakes can capture the attention and imagination of students. IRIS and collaborating partners provide a range of options to leverage that attention through access to near-real-time earthquake location and waveform data stored in the IRIS Data Management System and elsewhere via a number of web-based tools and a new Java-based application. The broadest audience is reached by the Seismic Monitor, a simple Web-based tool for observing near-real-time seismicity. The IRIS Earthquake Browser (IEB) allows users to explore recent and cataloged earthquakes and aftershock patterns online with more flexibility, and K-12 classroom activities for understanding plate tectonics and estimating seismic hazards have been designed around its use. Waveforms are easily viewed and explored on the web using the Rapid Earthquake Viewer (REV), developed by the University of South Carolina in collaboration with IRIS E&O. Data from recent well-known earthquakes available via REV are used in exercises to determine Earth’s internal structure and to locate earthquakes. Three component data is presented to the students, allowing a much more realistic analysis of the data than is presented in most textbooks. The Seismographs in Schools program uses real-time data in the classroom to interest and engage students about recent earthquakes. Through the IRIS website, schools can share event data and 24-hr images. Additionally, data is available in real-time via the API. This API allows anyone to extract data, re-purpose it, and display it however they need to, as is being done by the British Geological Survey Seismographs in Schools program. Over 350 schools throughout the US and internationally are currently registered with the IRIS Seismographs in Schools database. IRIS E&O is collaborating with Moravian College on a Java-based software application to replace the current educational seismometer software. This software facilitates the study of seismological concepts in middle school through introductory undergraduate classrooms. Users can view a graphical representation of seismic data in real time and can process this data to determine characteristics of seismograms such as time of occurrence, distance from the epicenter to the station, magnitude, and location (via triangulation). The software interface makes these tasks easy to accomplish and also provides interactive assistance to users. Data can be collected and viewed from a suite of instruments as well as streaming data in true real time. This allows multiple classrooms within a school to display data from their seismograph and for schools without an instrument to display data from another school.
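As a hedged example of pulling near-real-time earthquake information into a Java program, the snippet below queries an FDSN-style event web service for recent large events. The endpoint and parameters follow the publicly documented FDSN web-service conventions and are assumptions here, not necessarily the exact API used by the Seismographs in Schools program.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Sketch: print a text listing of recent large earthquakes from an FDSN-style event service.
    public class RecentQuakes {
      public static void main(String[] args) throws Exception {
        String query = "http://service.iris.edu/fdsnws/event/1/query"   // assumed endpoint
            + "?format=text&minmagnitude=6.0&orderby=time&limit=10";    // assumed parameters
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
            new URL(query).openStream(), StandardCharsets.UTF_8))) {
          String line;
          while ((line = in.readLine()) != null) {
            System.out.println(line); // one pipe-separated event per line
          }
        }
      }
    }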
A Standalone Vision Impairments Simulator for Java Swing Applications
NASA Astrophysics Data System (ADS)
Oikonomou, Theofanis; Votis, Konstantinos; Korn, Peter; Tzovaras, Dimitrios; Likothanasis, Spriridon
A lot of work has been done lately in an attempt to assess accessibility. For the case of web rich-client applications several tools exist that simulate how a vision impaired or colour-blind person would perceive this content. In this work we propose a simulation tool for non-web JavaTM Swing applications. Developers and designers face a real challenge when creating software that has to cope with a lot of interaction situations, as well as specific directives for ensuring an accessible interaction. The proposed standalone tool will assist them to explore user-centered design and important accessibility issues for their JavaTM Swing implementations.
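A minimal sketch of one possible simulation step for a Swing UI is shown below: the component is rendered off-screen and desaturated to approximate a complete loss of colour perception. The real simulator models specific vision impairments; this filter and its class name are illustrative assumptions only.

    import java.awt.Graphics2D;
    import java.awt.color.ColorSpace;
    import java.awt.image.BufferedImage;
    import java.awt.image.ColorConvertOp;
    import javax.swing.JComponent;

    // Renders a (laid-out, visible) Swing component into an off-screen buffer and
    // converts it to grayscale as a crude stand-in for a colour-vision-deficiency filter.
    public class GrayscalePreview {
      public static BufferedImage simulate(JComponent component) {
        BufferedImage snapshot = new BufferedImage(
            component.getWidth(), component.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = snapshot.createGraphics();
        component.paint(g);   // draw the live UI into the buffer
        g.dispose();
        ColorConvertOp toGray = new ColorConvertOp(
            ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
        return toGray.filter(snapshot, null); // what a user without colour perception might see
      }
    }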
Jeagle: a JAVA Runtime Verification Tool
NASA Technical Reports Server (NTRS)
DAmorim, Marcelo; Havelund, Klaus
2005-01-01
We introduce the temporal logic Jeagle and its supporting tool for runtime verification of Java programs. A monitor for a Jeagle formula checks if a finite trace of program events satisfies the formula. Jeagle is a programming-oriented extension of the powerful rule-based Eagle logic that has been shown to be capable of defining and implementing a range of finite trace monitoring logics, including future and past time temporal logic, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace. Jeagle extends Eagle with constructs for capturing parameterized program events such as method calls and method returns. Parameters can be the objects that methods are called upon, arguments to methods, and return values. Jeagle allows one to refer to these in formulas. The tool performs automated program instrumentation using AspectJ. We show the transformational semantics of Jeagle.
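To make the instrumentation step concrete, the sketch below shows the kind of @AspectJ aspect that can turn method calls and returns into monitor events carrying the receiver, arguments, and return value. The monitored package, the Monitor sink, and the event format are assumptions, not Jeagle's actual generated code.

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.AfterReturning;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;

    // Emits an event for every call to (and return from) a hypothetical monitored class.
    @Aspect
    public class EventEmittingAspect {

      @Before("call(* banking.Account.*(..))")   // hypothetical monitored class
      public void onCall(JoinPoint jp) {
        Monitor.event("call", jp.getSignature().toShortString(), jp.getTarget(),
            java.util.Arrays.toString(jp.getArgs()));
      }

      @AfterReturning(pointcut = "call(* banking.Account.*(..))", returning = "result")
      public void onReturn(JoinPoint jp, Object result) {
        Monitor.event("return", jp.getSignature().toShortString(), jp.getTarget(), result);
      }
    }

    // Hypothetical sink that would hand events to the formula monitors.
    class Monitor {
      static void event(String kind, String signature, Object target, Object detail) {
        System.out.println(kind + " " + signature + " on " + target + " : " + detail);
      }
    }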
Development of a web application for water resources based on open source software
NASA Astrophysics Data System (ADS)
Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri P.
2014-01-01
This article presents research and development of a prototype web application for water resources using the latest advancements in Information and Communication Technologies (ICT), open source software and web GIS. The web application has three web services for: (1) managing, presenting and storing geospatial data, (2) support of water resources modeling and (3) water resources optimization. The web application is developed using several programming languages (PHP, Ajax, JavaScript, Java), libraries (OpenLayers, jQuery) and open source software components (GeoServer, PostgreSQL, PostGIS). The presented web application has several main advantages: it is available all the time, it is accessible from everywhere, it creates a real-time multi-user collaboration platform, the programming language code and components are interoperable and designed to work in a distributed computer environment, it is flexible for adding additional components and services, and it is scalable depending on the workload. The application was successfully tested on a case study with concurrent multi-user access.
Taglieri, Catherine A; Crosby, Steven J; Zimmerman, Kristin; Schneider, Tulip; Patel, Dhiren K
2017-06-01
Objective. To assess the effect of incorporating virtual patient activities in a pharmacy skills lab on student competence and confidence when conducting real-time comprehensive clinic visits with mock patients. Methods. Students were randomly assigned to a control or intervention group. The control group completed the clinic visit prior to completing virtual patient activities. The intervention group completed the virtual patient activities prior to the clinic visit. Student proficiency was evaluated in the mock lab. All students completed additional exercises with the virtual patient and were subsequently assessed. Student impressions were assessed via a pre- and post-experience survey. Results. Student performance conducting clinic visits was higher in the intervention group compared to the control group. Overall student performance continued to improve in the subsequent module. There was no change in student confidence from pre- to post-experience. Student rating of the ease of use and realistic simulation of the virtual patient increased; however, student rating of the helpfulness of the virtual patient decreased. Despite student rating of the helpfulness of the virtual patient program, student performance improved. Conclusion. Virtual patient activities enhanced student performance during mock clinic visits. Students felt the virtual patient realistically simulated a real patient. Virtual patients may provide additional learning opportunities for students.
Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.
Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn
2016-12-21
The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. This tool is developed based on Java/Java3D/JOGL and provides a standalone application compatible to all relevant operating systems. However, it requires Java and the local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios including networks mapped to 3D cell components by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies – Three.js, D3.js and PHP – as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.
Seeing an Embodied Virtual Hand is Analgesic Contingent on Colocation.
Nierula, Birgit; Martini, Matteo; Matamala-Gomez, Marta; Slater, Mel; Sanchez-Vives, Maria V
2017-06-01
Seeing one's own body has been reported to have analgesic properties. Analgesia has also been described when seeing an embodied virtual body colocated with the real one. However, there is controversy regarding whether this effect holds true when seeing an illusory-owned body part, such as during the rubber-hand illusion. A critical difference between these paradigms is the distance between the real and surrogate body part. Colocation of the real and surrogate arm is possible in an immersive virtual environment, but not during illusory ownership of a rubber arm. The present study aimed at testing whether the distance between a real and a virtual arm can explain such differences in terms of pain modulation. Using a paradigm of embodiment of a virtual body allowed us to evaluate heat pain thresholds at colocation and at a 30-cm distance between the real and the virtual arm. We observed a significantly higher heat pain threshold at colocation than at a 30-cm distance. The analgesic effects of seeing a virtual colocated arm were eliminated by increasing the distance between the real and the virtual arm, which explains why seeing an illusorily owned rubber arm does not consistently result in analgesia. These findings are relevant for the use of virtual reality in pain management. Looking at a virtual body has analgesic properties similar to looking at one's real body. We identify the importance of colocation between a real and a surrogate body for this to occur and thereby resolve a scientific controversy. This information is useful for exploiting immersive virtual reality in pain management. Copyright © 2017. Published by Elsevier Inc.
A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.
Rutledge, Robert G
2011-03-02
Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.
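A minimal Java sketch of the first step of such an analysis is given below: raw fluorescence readings are converted into per-cycle efficiencies, and efficiency is regressed against fluorescence, whose intercept estimates maximal efficiency (Emax). The readings and the choice of cycles included are made up for illustration; the LRE Analyzer selects the central region automatically and goes on to derive target quantity from the fitted profile.

    // Sketch only: per-cycle efficiency followed by ordinary least squares.
    public class LreSketch {
      public static void main(String[] args) {
        double[] fluorescence = {1.0, 1.9, 3.6, 6.5, 10.8, 15.9, 20.6, 23.8}; // made-up readings
        int n = fluorescence.length - 1;
        double[] f = new double[n], e = new double[n];
        for (int c = 1; c < fluorescence.length; c++) {
          f[c - 1] = fluorescence[c];
          e[c - 1] = fluorescence[c] / fluorescence[c - 1] - 1.0; // cycle efficiency
        }
        // Fit: efficiency = Emax + slope * fluorescence
        double mf = 0, me = 0;
        for (int i = 0; i < n; i++) { mf += f[i]; me += e[i]; }
        mf /= n; me /= n;
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
          num += (f[i] - mf) * (e[i] - me);
          den += (f[i] - mf) * (f[i] - mf);
        }
        double slope = num / den;
        double emax = me - slope * mf; // maximal efficiency, the y-intercept of the regression
        System.out.printf("Emax = %.3f, slope = %.5f%n", emax, slope);
      }
    }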
ChemScreener: A Distributed Computing Tool for Scaffold based Virtual Screening.
Karthikeyan, Muthukumarasamy; Pandit, Deepak; Vyas, Renu
2015-01-01
In this work we present ChemScreener, a Java-based application to perform virtual library generation combined with virtual screening in a platform-independent distributed computing environment. ChemScreener comprises a scaffold identifier, a distinct scaffold extractor, an interactive virtual library generator as well as a virtual screening module for subsequently selecting putative bioactive molecules. The virtual libraries are annotated with chemophore-, pharmacophore- and toxicophore-based information for compound prioritization. The hits selected can then be further processed using QSAR, docking and other in silico approaches which can all be interfaced within the ChemScreener framework. As a sample application, in this work scaffold selectivity, diversity, connectivity and promiscuity towards six important therapeutic classes have been studied. In order to illustrate the computational power of the application, 55 scaffolds extracted from 161 anti-psychotic compounds were enumerated to produce a virtual library comprising 118 million compounds (17 GB) and annotated with chemophore, pharmacophore and toxicophore based features in a single step which would be non-trivial to perform with many standard software tools today on libraries of this size.
Evaluation of Wearable Haptic Systems for the Fingers in Augmented Reality Applications.
Maisto, Maurizio; Pacchierotti, Claudio; Chinello, Francesco; Salvietti, Gionata; De Luca, Alessandro; Prattichizzo, Domenico
2017-01-01
Although Augmented Reality (AR) has been around for almost five decades, only recently we have witnessed AR systems and applications entering in our everyday life. Representative examples of this technological revolution are the smartphone games "Pokémon GO" and "Ingress" or the Google Translate real-time sign interpretation app. Even if AR applications are already quite compelling and widespread, users are still not able to physically interact with the computer-generated reality. In this respect, wearable haptics can provide the compelling illusion of touching the superimposed virtual objects without constraining the motion or the workspace of the user. In this paper, we present the experimental evaluation of two wearable haptic interfaces for the fingers in three AR scenarios, enrolling 38 participants. In the first experiment, subjects were requested to write on a virtual board using a real chalk. The haptic devices provided the interaction forces between the chalk and the board. In the second experiment, subjects were asked to pick and place virtual and real objects. The haptic devices provided the interaction forces due to the weight of the virtual objects. In the third experiment, subjects were asked to balance a virtual sphere on a real cardboard. The haptic devices provided the interaction forces due to the weight of the virtual sphere rolling on the cardboard. Providing haptic feedback through the considered wearable device significantly improved the performance of all the considered tasks. Moreover, subjects significantly preferred conditions providing wearable haptic feedback.
Real-time path planning in dynamic virtual environments using multiagent navigation graphs.
Sud, Avneesh; Andersen, Erik; Curtis, Sean; Lin, Ming C; Manocha, Dinesh
2008-01-01
We present a novel approach for efficient path planning and navigation of multiple virtual agents in complex dynamic scenes. We introduce a new data structure, Multi-agent Navigation Graph (MaNG), which is constructed using first- and second-order Voronoi diagrams. The MaNG is used to perform route planning and proximity computations for each agent in real time. Moreover, we use the path information and proximity relationships for local dynamics computation of each agent by extending a social force model [Helbing05]. We compute the MaNG using graphics hardware and present culling techniques to accelerate the computation. We also address undersampling issues and present techniques to improve the accuracy of our algorithm. Our algorithm is used for real-time multi-agent planning in pursuit-evasion, terrain exploration and crowd simulation scenarios consisting of hundreds of moving agents, each with a distinct goal.
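For context, the sketch below illustrates the Helbing-style social force idea that the paper extends: each agent is driven toward its desired velocity and repelled by nearby agents. The constants, the 2D layout, and the assumption that neighbors come from the MaNG proximity queries are illustrative only.

    // Per-agent force for one simulation step (2D); all parameters are assumed values.
    public class SocialForceSketch {
      static double[] force(double[] pos, double[] vel, double[] goalDir,
                            double[][] neighbors, double desiredSpeed) {
        double tau = 0.5, A = 2.0, B = 0.3, radius = 0.4;        // assumed model parameters
        double fx = (desiredSpeed * goalDir[0] - vel[0]) / tau;  // driving term toward the goal
        double fy = (desiredSpeed * goalDir[1] - vel[1]) / tau;
        for (double[] other : neighbors) {                       // repulsion from nearby agents
          double dx = pos[0] - other[0], dy = pos[1] - other[1];
          double dist = Math.max(1e-6, Math.hypot(dx, dy));
          double mag = A * Math.exp((2 * radius - dist) / B);
          fx += mag * dx / dist;
          fy += mag * dy / dist;
        }
        return new double[] {fx, fy};
      }
    }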
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; Fleming, Gary A.
2007-01-01
Virtual Diagnostics Interface technology, or ViDI, is a suite of techniques utilizing image processing, data handling and three-dimensional computer graphics. These techniques aid in the design, implementation, and analysis of complex aerospace experiments. LiveView3D is a software application component of ViDI used to display experimental wind tunnel data in real-time within an interactive, three-dimensional virtual environment. The LiveView3D software application was under development at NASA Langley Research Center (LaRC) for nearly three years. LiveView3D was recently upgraded to perform real-time (as well as post-test) comparisons of experimental data with pre-computed Computational Fluid Dynamics (CFD) predictions. This capability was utilized to compare experimental measurements with CFD predictions of the surface pressure distribution of the NASA Ares I Crew Launch Vehicle (CLV)-like vehicle when tested in the NASA LaRC Unitary Plan Wind Tunnel (UPWT) in the December 2006 - January 2007 timeframe. The wind tunnel tests were conducted to develop a database of experimentally-measured aerodynamic performance of the CLV-like configuration for validation of CFD predictive codes.
A Real-Time Executive for Multiple-Computer Clusters.
1984-12-01
in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules, real...of which there are two kinds: multicast group address - virtually any number of node groups can be assigned a group address so they are all able...interface loopback, internal loopback, clear loopback, go offline, go online, onboard diagnostic
Exploring JavaScript and ROOT technologies to create Web-based ATLAS analysis and monitoring tools
NASA Astrophysics Data System (ADS)
Sánchez Pineda, A.
2015-12-01
We explore the potential of current web applications to create online interfaces that allow the visualization, interaction and real cut-based physics analysis and monitoring of processes through a web browser. The project consists of the initial development of web-based and cloud computing services to allow students and researchers to perform fast and very useful cut-based analysis on a browser, reading and using real data and official Monte Carlo simulations stored in ATLAS computing facilities. Several tools are considered: ROOT, JavaScript and HTML. Our case study is the current cut-based H → ZZ → llqq analysis of the ATLAS experiment. Preliminary but satisfactory results have been obtained online.
NASA Astrophysics Data System (ADS)
Beckhaus, Steffi
Virtual Reality aims at creating an artificial environment that can be perceived as a substitute for a real setting. Much effort in research and development goes into the creation of virtual environments, most of which are perceivable only by the eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory believable virtual environment to a user, we could make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality - denoting the goal, not the technology - shifts from a core virtual reality to an “enriched” reality, technologically encompassing both the computer-generated and the real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and are later formed into an experience by the user's mind.
Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization
2017-08-01
visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing...scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual...and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user
NASA Astrophysics Data System (ADS)
Sun, Yun-Ping; Ju, Jiun-Yan; Liang, Yen-Chu
2008-12-01
Since unmanned aerial vehicles (UAVs) bring forth many innovative applications in scientific, civilian, and military fields, the development of UAVs is rapidly growing every year. The on-board autopilot that reliably performs attitude and guidance control is a vital part for out-of-sight flights. However, the control law in the autopilot is designed according to a simplified plant model in which the dynamics of real hardware are usually not taken into consideration. It is therefore necessary to develop a test-bed that includes real servos for real-time control experiments on prototype autopilots, so-called hardware-in-the-loop (HIL) simulation. In this paper, on the basis of the graphical application software LabVIEW, the real-time HIL simulation system is realized efficiently by the virtual instrumentation approach. The proportional-integral-derivative (PID) controller in the autopilot's pitch angle control loop is experimentally determined by the classical Ziegler-Nichols tuning rule and exhibits good transient and steady-state response in real-time HIL simulation. From the results the differences between numerical simulation and real-time HIL simulation are also clearly presented. The effectiveness of HIL simulation for UAV autopilot design is definitely confirmed.
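As a worked illustration of the tuning step, the Java sketch below implements a discrete PID update with gains set by the classical Ziegler-Nichols ultimate-gain rule (Kp = 0.6 Ku, Ti = 0.5 Tu, Td = 0.125 Tu). The sample period and any specific gain values are placeholders, since the abstract does not report the numbers used on the test-bed.

    // Discrete PID loop for a pitch-angle controller, tuned from the ultimate gain Ku
    // and ultimate period Tu measured in a sustained-oscillation experiment.
    public class PitchPid {
      private final double kp, ki, kd, dt;
      private double integral, previousError;

      PitchPid(double ku, double tu, double dt) {
        this.kp = 0.6 * ku;
        this.ki = kp / (0.5 * tu);   // Ki = Kp / Ti
        this.kd = kp * 0.125 * tu;   // Kd = Kp * Td
        this.dt = dt;
      }

      double update(double commandedPitch, double measuredPitch) {
        double error = commandedPitch - measuredPitch;
        integral += error * dt;
        double derivative = (error - previousError) / dt;
        previousError = error;
        return kp * error + ki * integral + kd * derivative; // unscaled actuator command
      }
    }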
Context-sensitive trace inlining for Java.
Häubl, Christian; Wimmer, Christian; Mössenböck, Hanspeter
2013-12-01
Method inlining is one of the most important optimizations in method-based just-in-time (JIT) compilers. It widens the compilation scope and therefore allows optimizing multiple methods as a whole, which increases performance. However, if method inlining is used too frequently, the compilation time increases and too much machine code is generated. This has negative effects on performance. Trace-based JIT compilers only compile frequently executed paths, so-called traces, instead of whole methods. This may result in faster compilation, less generated machine code, and better optimized machine code. In previous work, we implemented a trace recording infrastructure and a trace-based compiler for Java by modifying the Java HotSpot VM. Based on this work, we evaluate the effect of trace inlining on the performance and the amount of generated machine code. Trace inlining has several major advantages when compared to method inlining. First, trace inlining is more selective than method inlining, because only frequently executed paths are inlined. Second, the recorded traces may capture information about virtual calls, which simplifies inlining. A third advantage is that trace information is context sensitive so that different method parts can be inlined depending on the specific call site. These advantages allow more aggressive inlining while the amount of generated machine code is still reasonable. We evaluate several inlining heuristics on the benchmark suites DaCapo 9.12 Bach, SPECjbb2005, and SPECjvm2008 and show that our trace-based compiler achieves an up to 51% higher peak performance than the method-based Java HotSpot client compiler. Furthermore, we show that the large compilation scope of our trace-based compiler has a positive effect on other compiler optimizations such as constant folding or null check elimination.
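To illustrate why context sensitivity matters, consider the small Java example below (not taken from the paper): render() contains a virtual call that looks megamorphic when viewed per method, but each hot trace records only one receiver type, so a trace-based compiler can inline a different draw() implementation at each call site.

    // Example code only; the JIT behavior described in the comments is the general idea,
    // not a claim about the modified HotSpot VM's exact decisions.
    interface Shape { double draw(); }
    class Circle implements Shape { public double draw() { return 3.14159; } }
    class Square implements Shape { public double draw() { return 4.0; } }

    public class InliningExample {
      static double render(Shape s) { return s.draw(); } // virtual call recorded in each trace

      public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += render(new Circle()); // hot trace A sees Circle
        for (int i = 0; i < 1_000_000; i++) sum += render(new Square()); // hot trace B sees Square
        System.out.println(sum);
      }
    }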
Virtual endoscopy using spherical QuickTime-VR panorama views.
Tiede, Ulf; von Sternberg-Gospos, Norman; Steiner, Paul; Höhne, Karl Heinz
2002-01-01
Virtual endoscopy needs some precomputation of the data (segmentation, path finding) before the diagnostic process can take place. We propose a method that precomputes multinode spherical panorama movies using QuickTime VR. This technique allows almost the same navigation and visualization capabilities as a real endoscopic procedure, achieves a significant reduction of interaction input, and produces a movie that serves as a document of the procedure.
Real-Time Collaboration of Virtual Laboratories through the Internet
ERIC Educational Resources Information Center
Jara, Carlos A.; Candelas, Francisco A.; Torres, Fernando; Dormido, Sebastian; Esquembre, Francisco; Reinoso, Oscar
2009-01-01
Web-based learning environments are becoming increasingly popular in higher education. One of the most important web-learning resources is the virtual laboratory (VL), which gives students an easy way for training and learning through the Internet. Moreover, on-line collaborative communication represents a practical method to transmit the…
The Uncertainty Principle, Virtual Particles and Real Forces
ERIC Educational Resources Information Center
Jones, Goronwy Tudor
2002-01-01
This article provides a simple practical introduction to wave-particle duality, including the energy-time version of the Heisenberg Uncertainty Principle. It has been successful in leading students to an intuitive appreciation of "virtual particles" and the role they play in describing the way ordinary particles, like electrons and protons, exert…
NASA Astrophysics Data System (ADS)
Ferriere, D.; Rucinski, A.; Jankowski, T.
2007-04-01
Establishing a Virtual Sea Border by performing a real-time, satellite-accessible Internet-based bio-metric supported threat assessment of arriving foreign-flagged cargo ships, their management and ownership, their arrival terminal operator and owner, and rewarding proven legitimate operators with an economic incentive for their transparency will simultaneously improve port security and maritime transportation efficiencies.
Development of an Environmental Virtual Field Laboratory
ERIC Educational Resources Information Center
Ramasundaram, V.; Grunwald, S.; Mangeot, A.; Comerford, N. B.; Bliss, C. M.
2005-01-01
Laboratory exercises, field observations and field trips are a fundamental part of many earth science and environmental science courses. Field observations and field trips can be constrained because of distance, time, expense, scale, safety, or complexity of real-world environments. Our objectives were to develop an environmental virtual field…
EduMOOs: Virtual Learning Centers.
ERIC Educational Resources Information Center
Woods, Judy C.
1998-01-01
Multi-user Object Oriented Internet activities (MOOs) permit real time interaction in a text-based virtual reality via the Internet. This article explains EduMOOs (educational MOOs) and provides brief descriptions, World Wide Web addresses, and telnet addresses for selected EduMOOs. Instructions for connecting to a MOO and a list of related Web…
Real Time Computer Graphics From Body Motion
NASA Astrophysics Data System (ADS)
Fisher, Scott; Marion, Ann
1983-10-01
This paper focuses on the recent emergence and development of real-time, computer-aided body tracking technologies and their use in combination with various computer graphics imaging techniques. The convergence of these technologies in our research results in an interactive display environment in which multiple representations of a given body motion can be displayed in real time. Specific reference to entertainment applications is described in the development of a real-time, interactive stage set in which dancers can 'draw' with their bodies as they move through the space of the stage or manipulate virtual elements of the set with their gestures.
Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
NASA Astrophysics Data System (ADS)
Portalés, Cristina; Lerma, José Luis; Navarro, Santiago
2010-01-01
Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment that is presented herein requires for visualization a see-through video head mounted display (HMD), whereas user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper will deal with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and complex issues, which are discussed in the paper.
Virtual School, Real Experience: Simulations Replicate the World of Practice for Aspiring Principals
ERIC Educational Resources Information Center
Mann, Dale; Shakeshaft, Charol
2013-01-01
A web-enabled computer simulation program presents real-world opportunities, problems, and challenges for aspiring principals. The simulation challenges areas that are not always covered in lectures, textbooks, or workshops. For example, using the simulation requires dealing--on-screen and in real time--with demanding parents, observing…
Satou, Shouichi; Aoki, Taku; Kaneko, Junichi; Sakamoto, Yoshihiro; Hasegawa, Kiyoshi; Sugawara, Yasuhiko; Arai, Osamu; Mitake, Tsuyoshi; Miura, Koui; Kokudo, Norihiro
2014-02-01
Real-time virtual sonography is an innovative imaging technology that detects the spatial position of an ultrasound probe and immediately reconstructs a section of computed tomography (CT) and/or magnetic resonance imaging in accordance with the ultrasound image, thereby allowing a real-time comparison of those modalities. A novel intraoperative navigation system for liver resection using real-time virtual sonography has been devised for the detection of tumors and navigation of the resection plane. Sixteen patients with hepatic malignancies (26 tumors in total) were involved in this study, and the system was used intraoperatively. Tumor size ranged from 2 to 140 mm (median, 23 mm). With the navigation system, operators could refer to the intraoperative ultrasound image displayed on the television monitor side-by-side with the corresponding images of CT and/or magnetic resonance imaging. In addition, the system overlaid the preoperative simulation on the CT image and highlighted the extent of resection so as to guide the resection plane. Because the system used electromagnetic power in the operating room, its feasibility and safety were investigated as well as its validity. The system could be used uneventfully in each operation. All of the 26 tumors scheduled to be resected were detected by the navigation system. The weight of the resected specimen correlated with the preoperatively simulated volume (R = 0.995, P < .0001). The feasibility and safety of the navigation system were confirmed. The system should be helpful for intraoperative tumor detection and navigation of liver resection.
Utilization of virtual reality for endotracheal intubation training.
Mayrose, James; Kesavadas, T; Chugh, Kevin; Joshi, Dhananjay; Ellis, David G
2003-10-01
Tracheal intubation is performed for urgent airway control in injured patients. Current methods of training include working on cadavers and manikins, which lack the realism of a living human being. Work in this field has been limited due to the complex nature of simulating, in real time, the interactive forces and deformations that occur during an actual patient intubation. This study addressed the issue of intubation training in an attempt to bridge the gap between actual and virtual patient scenarios. The haptic device, along with the real-time performance of the simulator, gives it both visual and physical realism. The three-dimensional viewing and interaction available through virtual reality make it possible for physicians, pre-hospital personnel and students to practice many endotracheal intubations without ever touching a patient. The ability for a medical professional to practice a procedure multiple times prior to performing it on a patient will enhance the skill of the individual while reducing the risk to the patient.
Sounds of silence: How to animate virtual worlds with sound
NASA Technical Reports Server (NTRS)
Astheimer, Peter
1993-01-01
Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.
MyCoach: In Situ User Evaluation of a Virtual and Physical Coach for Running
NASA Astrophysics Data System (ADS)
Biemans, Margit; Haaker, Timber; Szwajcer, Ellen
Running is an enjoyable exercise for many people today. Trainers help people to reach running goals. However, today's busy and nomadic people are not always able to attend running classes, so a combination of a virtual and a physical coach could be useful. A virtual coach (MyCoach) was designed to provide this support. MyCoach consists of a real-time mobile phone application and a web application, with a focus on improving health and well-being. A randomised controlled trial was performed to evaluate MyCoach. The results indicate that the runners value the tangible aspects of monitoring and capturing their exercise and analysing progress. The system could be improved by incorporating running schedules provided by the physical trainer and by improving its usability. Extensions of the system should focus on the real-time aspects of information sharing and "physical" coaching at a distance.
Hybrid Reality Lab Capabilities - Video 2
NASA Technical Reports Server (NTRS)
Delgado, Francisco J.; Noyes, Matthew
2016-01-01
Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, used during teleoperation of remote assets (arms, rovers, robots, etc.) and other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane) then the information used to create the virtual scenes can be old (i.e. visualized long after physical objects moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g. camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, the individuals begin to interact with all the objects in the scene as if they were real-life objects. The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in the virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video accompanying this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond just standard Virtual Reality or Augmented Reality.
Virtual Learning is the Real Thing
ERIC Educational Resources Information Center
Tekaat-Davey, Diana
2006-01-01
In this article, the author discusses how in California, high school students are learning about real business through a virtual world. Virtual enterprise programs are helping students learn about the real business world. Learning about the business world has become about as real as it can in California high schools. Enrollment in the programs…
NASA Astrophysics Data System (ADS)
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not previously exist. Such an interface has numerous potential applications, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study made initial footsteps guiding the design of virtual auditory environments that support spatial configuration recall.
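To make the HRTF-based rendering described above concrete, the sketch below shows the core operation in its simplest form: a mono source convolved with left- and right-ear head-related impulse responses (HRIRs) to produce a binaural signal. This is a minimal illustration in Java, not the study's pipeline; the sample HRIR values are hypothetical placeholders, and in practice a per-listener HRTF selected from a measured database would be used.

```java
/**
 * Minimal sketch (not the study's code): spatialize a mono signal by
 * convolving it with left/right head-related impulse responses (HRIRs).
 * The HRIR arrays here are hypothetical placeholders.
 */
public class HrtfSpatializer {

    /** Direct-form FIR convolution of a signal with an impulse response. */
    static double[] convolve(double[] signal, double[] ir) {
        double[] out = new double[signal.length + ir.length - 1];
        for (int n = 0; n < signal.length; n++) {
            for (int k = 0; k < ir.length; k++) {
                out[n + k] += signal[n] * ir[k];
            }
        }
        return out;
    }

    /** Returns {left, right} ear signals for one virtual source. */
    static double[][] spatialize(double[] mono, double[] hrirLeft, double[] hrirRight) {
        return new double[][] { convolve(mono, hrirLeft), convolve(mono, hrirRight) };
    }

    public static void main(String[] args) {
        double[] click = { 1.0, 0.5, 0.25 };   // toy mono source
        double[] hrirL = { 0.9, 0.3, 0.1 };    // hypothetical left-ear HRIR
        double[] hrirR = { 0.2, 0.6, 0.4 };    // hypothetical right-ear HRIR
        double[][] binaural = spatialize(click, hrirL, hrirR);
        System.out.println("Left channel:  " + java.util.Arrays.toString(binaural[0]));
        System.out.println("Right channel: " + java.util.Arrays.toString(binaural[1]));
    }
}
```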
The experiment editor: supporting inquiry-based learning with virtual labs
NASA Astrophysics Data System (ADS)
Galan, D.; Heradio, R.; de la Torre, L.; Dormido, S.; Esquembre, F.
2017-05-01
Inquiry-based learning is a pedagogical approach where students are motivated to pose their own questions when facing problems or scenarios. In physics learning, students are turned into scientists who carry out experiments, collect and analyze data, formulate and evaluate hypotheses, and so on. Lab experimentation is essential for inquiry-based learning, yet there is a drawback with traditional hands-on labs in the high costs associated with equipment, space, and maintenance staff. Virtual laboratories are helpful to reduce these costs. This paper enriches the virtual lab ecosystem by providing an integrated environment to automate experimentation tasks. In particular, our environment supports: (i) scripting and running experiments on virtual labs, and (ii) collecting and analyzing data from the experiments. The current implementation of our environment supports virtual labs created with the authoring tool Easy Java/Javascript Simulations. Since there are public repositories with hundreds of freely available labs created with this tool, the potential applicability to our environment is considerable.
The VTIE telescope resource management system
NASA Astrophysics Data System (ADS)
Busschots, B.; Keating, J. G.
2005-06-01
The VTIE Telescope Resource Management System (TRMS) provides a framework for managing a distributed group of internet telescopes as a single "Virtual Observatory". The TRMS provides hooks which allow it to be connected to any Java-based web portal and allow a Java-based scheduler to be added to it. The TRMS represents each telescope and observatory in the system with a software agent and then allows the scheduler and web portal to communicate with these distributed resources in a simple, transparent way, hence allowing the scheduler and portal designers to concentrate only on what they wish to do with these resources rather than on how to communicate with them. This paper outlines the structure and implementation of this framework.
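The agent-based resource abstraction described above can be pictured with a short sketch: the scheduler talks only to a telescope-agent interface and never to the underlying transport. The interface and class names below are invented for illustration and are not the actual VTIE TRMS API.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch (not the real TRMS API): a scheduler dispatches work
 *  through a resource-agent abstraction, ignoring the network details. */
interface TelescopeAgent {
    String name();
    boolean isAvailable();
    void submitObservation(String target, double exposureSeconds);
}

/** A stand-in agent; a real one would forward the request to a remote telescope. */
class LoggingTelescopeAgent implements TelescopeAgent {
    private final String name;
    LoggingTelescopeAgent(String name) { this.name = name; }
    public String name() { return name; }
    public boolean isAvailable() { return true; }
    public void submitObservation(String target, double exposureSeconds) {
        System.out.printf("[%s] observing %s for %.0f s%n", name, target, exposureSeconds);
    }
}

public class SimpleScheduler {
    public static void main(String[] args) {
        List<TelescopeAgent> observatory = new ArrayList<>();
        observatory.add(new LoggingTelescopeAgent("scope-A"));
        observatory.add(new LoggingTelescopeAgent("scope-B"));

        // Dispatch a request to the first available telescope in the virtual observatory.
        for (TelescopeAgent agent : observatory) {
            if (agent.isAvailable()) {
                agent.submitObservation("M31", 120);
                break;
            }
        }
    }
}
```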
The Beginner's Guide to Wind Tunnels with TunnelSim and TunnelSys
NASA Technical Reports Server (NTRS)
Benson, Thomas J.; Galica, Carol A.; Vila, Anthony J.
2010-01-01
The Beginner's Guide to Wind Tunnels is a Web-based, on-line textbook that explains and demonstrates the history, physics, and mathematics involved with wind tunnels and wind tunnel testing. The Web site contains several interactive computer programs to demonstrate scientific principles. TunnelSim is an interactive, educational computer program that demonstrates basic wind tunnel design and operation. TunnelSim is a Java (Sun Microsystems Inc.) applet that solves the continuity and Bernoulli equations to determine the velocity and pressure throughout a tunnel design. TunnelSys is a group of Java applications that mimic wind tunnel testing techniques. Using TunnelSys, a team of students designs, tests, and post-processes the data for a virtual, low-speed aircraft wing.
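A minimal sketch of the two relations TunnelSim is described as solving, incompressible continuity (A1·v1 = A2·v2) and Bernoulli's equation, is shown below. The areas, density and inlet conditions are example numbers, not values or code from TunnelSim itself.

```java
/** Minimal sketch of the continuity and Bernoulli relations mentioned above;
 *  not TunnelSim's actual code. */
public class TunnelFlow {

    /** Continuity: A1*v1 = A2*v2  =>  v2 = v1 * A1 / A2. */
    static double velocityFromContinuity(double v1, double a1, double a2) {
        return v1 * a1 / a2;
    }

    /** Bernoulli (no height change): p2 = p1 + 0.5*rho*(v1^2 - v2^2). */
    static double pressureFromBernoulli(double p1, double rho, double v1, double v2) {
        return p1 + 0.5 * rho * (v1 * v1 - v2 * v2);
    }

    public static void main(String[] args) {
        double rho = 1.225;        // air density, kg/m^3
        double a1 = 2.0, a2 = 0.5; // settling chamber and test section areas, m^2 (example values)
        double v1 = 5.0;           // inlet velocity, m/s
        double p1 = 101325.0;      // inlet static pressure, Pa

        double v2 = velocityFromContinuity(v1, a1, a2);
        double p2 = pressureFromBernoulli(p1, rho, v1, v2);
        System.out.printf("Test-section velocity: %.1f m/s, static pressure: %.0f Pa%n", v2, p2);
    }
}
```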
NASA Astrophysics Data System (ADS)
Phat Luu, Trieu; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L.
2016-06-01
Objective. The control of human bipedal locomotion is of great interest to the field of lower-body brain-computer interfaces (BCIs) for gait rehabilitation. While the feasibility of closed-loop BCI systems for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a BCI virtual reality (BCI-VR) environment has yet to be demonstrated. BCI-VR systems provide valuable alternatives for movement rehabilitation when wearable robots are not desirable due to medical conditions, cost, accessibility, usability, or patient preferences. Approach. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control a walking avatar in a virtual environment. Fluctuations in the amplitude of slow cortical potentials of EEG in the delta band (0.1-3 Hz) were used for prediction; thus, the EEG features correspond to time-domain amplitude-modulated potentials in the delta band. Virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Main results. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. The average decoding accuracies (Pearson's r values) in real-time BCI across all subjects increased from (Hip: 0.18 ± 0.31, Knee: 0.23 ± 0.33, Ankle: 0.14 ± 0.22) on Day 1 to (Hip: 0.40 ± 0.24, Knee: 0.55 ± 0.20, Ankle: 0.29 ± 0.22) on Day 8. Significance. These findings have implications for the development of a real-time closed-loop EEG-based BCI-VR system for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI-VR system.
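The decoding accuracies above are Pearson's r values between decoded and measured joint-angle traces. The sketch below shows that metric computed for a single joint using toy data; it is illustrative only and not the study's analysis code.

```java
/** Sketch of the accuracy metric reported above: Pearson's r between decoded
 *  and measured joint-angle time series (toy data only). */
public class PearsonR {

    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - mx, dy = y[i] - my;
            sxy += dx * dy; sxx += dx * dx; syy += dy * dy;
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    public static void main(String[] args) {
        double[] measuredKnee = { 5, 20, 45, 60, 45, 20, 5 };  // degrees, toy gait cycle
        double[] decodedKnee  = { 8, 18, 40, 58, 50, 25, 10 }; // decoder output, toy values
        System.out.printf("Decoding accuracy (Pearson's r): %.2f%n",
                pearson(measuredKnee, decodedKnee));
    }
}
```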
Using a 3D Virtual Supermarket to Measure Food Purchase Behavior: A Validation Study
Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona
2015-01-01
Background There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. Objective The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of “presence” (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Methods Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. Results A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real: 12.6%), and meat and fish (virtual: 16.5%; real: 16.8%). Significant differences in proportional expenditures were observed for 6 food groups, with largest differences (virtual – real) for dairy (in expenditure 6.5%, P<.001; in items 2.2%, P=.04) and fresh fruit and vegetables (in expenditure: –3.1%, P=.04; in items: 5.9%, P=.002). There was no trend of overspending in the Virtual Supermarket and participants experienced a medium-to-high presence (88%, 73/83 scored medium; 8%, 7/83 scored high). Conclusions Shopping patterns in the Virtual Supermarket were comparable to those in real life. Overall, the Virtual Supermarket is a valid tool to measure food purchasing behavior. Nevertheless, it is important to improve the functionality of some food categories, in particular fruit and vegetables and dairy. The results of this validation will assist in making further improvements to the software and with optimization of the internal and external validity of this innovative methodology. PMID:25921185
Using a 3D virtual supermarket to measure food purchase behavior: a validation study.
Waterlander, Wilma Elzeline; Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona
2015-04-28
There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of "presence" (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real: 12.6%), and meat and fish (virtual: 16.5%; real: 16.8%). Significant differences in proportional expenditures were observed for 6 food groups, with largest differences (virtual - real) for dairy (in expenditure 6.5%, P<.001; in items 2.2%, P=.04) and fresh fruit and vegetables (in expenditure: -3.1%, P=.04; in items: 5.9%, P=.002). There was no trend of overspending in the Virtual Supermarket and participants experienced a medium-to-high presence (88%, 73/83 scored medium; 8%, 7/83 scored high). Shopping patterns in the Virtual Supermarket were comparable to those in real life. Overall, the Virtual Supermarket is a valid tool to measure food purchasing behavior. Nevertheless, it is important to improve the functionality of some food categories, in particular fruit and vegetables and dairy. The results of this validation will assist in making further improvements to the software and with optimization of the internal and external validity of this innovative methodology.
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscles lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.
Multilevel Coordination Mechanisms for Real-Time Autonomous Agents
2004-02-01
for example, (Paolucci, 2000) or www.sun.com/jini) can allow agents to find each other by describing the kinds of services that they need or provide... In this regard, an important output from DARPA's CoABS program is the CoABS Grid, a middleware layer based on Java/Jini technology that provides... Figure 1. Map of Binni showing firestorm deception. Misinformation from Gao is intended to displace the firestorm to the west, allowing Gao and
Online Operation Guidance of Computer System Used in Real-Time Distance Education Environment
ERIC Educational Resources Information Center
He, Aiguo
2011-01-01
Computer systems are useful for improving real-time and interactive distance education activities, especially when a large number of students participate in one distance lecture together and every student uses their own computer to share teaching materials or control discussions in the virtual classrooms. The problem is that within…
YaQ: an architecture for real-time navigation and rendering of varied crowds.
Maïm, Jonathan; Yersin, Barbara; Thalmann, Daniel
2009-01-01
The YaQ software platform is a complete system dedicated to real-time crowd simulation and rendering. Fitting multiple application domains, such as video games and VR, YaQ aims to provide efficient algorithms to generate crowds comprising up to thousands of varied virtual humans navigating in large-scale, global environments.
NASA Astrophysics Data System (ADS)
Wee, Loo Kang; Tiang Ning, Hwee
2014-09-01
This paper presents the customization of Easy Java Simulation models, used with actual laboratory instruments, to create active experiential learning for measurements. The laboratory instruments are the vernier caliper and the micrometer. Three computer model design ideas that complement real equipment are discussed. These ideas involve (1) a simple two-dimensional view for learning from pen and paper questions and the real world; (2) hints, answers, different scale options and the inclusion of zero error; (3) assessment for learning feedback. The initial positive feedback from Singaporean students and educators indicates that these tools could be successfully shared and implemented in learning communities. Educators are encouraged to change the source code for these computer models to suit their own purposes; they have creative commons attribution licenses for the benefit of all.
Virtual action and real action have different impacts on comprehension of concrete verbs
Repetto, Claudia; Cipresso, Pietro; Riva, Giuseppe
2015-01-01
In the last decade, many results have been reported supporting the hypothesis that language has an embodied nature. According to this theory, the sensorimotor system is involved in linguistic processes such as semantic comprehension. One of the cognitive processes emerging from the interplay between action and language is motor simulation. The aim of the present study is to deepen the knowledge about the simulation of action verbs during comprehension in a virtual reality setting. We compared two experimental conditions with different motor tasks: one in which the participants ran in a virtual world by moving the joypad knob with their left hand (virtual action performed with their feet plus real action performed with the hand) and one in which they only watched a video of runners and executed an attentional task by moving the joypad knob with their left hand (no virtual action plus real action performed with the hand). In both conditions, participants had to perform a concomitant go/no-go semantic task, in which they were asked to press a button (with their right hand) when presented with a sentence containing a concrete verb, and to refrain from providing a response when the verb was abstract. Action verbs described actions performed with hand, foot, or mouth. We recorded electromyography (EMG) latencies to measure reaction times of the linguistic task. We wanted to test if the simulation occurs, whether it is triggered by the virtual or the real action, and which effect it produces (facilitation or interference). Results underlined that those who virtually ran in the environment were faster in understanding foot-action verbs; no simulation effect was found for the real action. The present findings are discussed in the light of the embodied language framework, and a hypothesis is provided that integrates our results with those in literature. PMID:25759678
Augmented reality-guided artery-first pancreatico-duodenectomy.
Marzano, Ettore; Piardi, Tullio; Soler, Luc; Diana, Michele; Mutter, Didier; Marescaux, Jacques; Pessaux, Patrick
2013-11-01
Augmented Reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative work-up and real-time patient images with the aim to visualize unapparent anatomical details. The potential of AR navigation as a tool to improve safety of the surgical dissection is presented in a case of pancreatico-duodenectomy (PD). A 77-year-old male patient underwent an AR-assisted PD. The 3D virtual anatomical model was obtained from thoraco-abdominal CT scan using customary software (VR-RENDER®, IRCAD). The virtual model was superimposed to the operative field using an Exoscope (VITOM®, Karl Storz, Tüttlingen, Germany) as well as different visible landmarks (inferior vena cava, left renal vein, aorta, superior mesenteric vein, inferior margin of the pancreas). A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Dissection of the superior mesenteric artery and the hanging maneuver were performed under AR guidance along the hanging plane. AR allowed for precise and safe recognition of all the important vascular structures. Operative time was 360 min. AR display and fine registration was performed within 6 min. The postoperative course was uneventful. The pathology was positive for ampullary adenocarcinoma; the final stage was pT1N0 (0/43 retrieved lymph nodes) with clear surgical margins. AR is a valuable navigation tool that can enhance the ability to achieve a safe surgical resection during PD.
The pH ruler: a Java applet for developing interactive exercises on acids and bases.
Barrette-Ng, Isabelle H
2011-07-01
In introductory biochemistry courses, it is often a struggle to teach the basic concepts of acid-base chemistry in a manner that is relevant to biological systems. To help students gain a more intuitive and visual understanding of abstract acid-base concepts, a simple graphical construct called the pH ruler Java applet was developed. The applet allows students to visualize the abundance of different protonation states of diprotic and triprotic amino acids at different pH values. Using the applet, the student can drag a widget on a slider bar to change the pH and observe in real time changes in the abundance of different ionization states of this amino acid. This tool provides a means for developing more complex inquiry-based, active-learning exercises to teach more advanced topics of biochemistry, such as protein purification, protein structure and enzyme mechanism.
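The behaviour the applet visualizes follows directly from the acid-base equilibria: for a diprotic acid H2A, the fractional abundance of H2A, HA- and A2- at a given pH is fixed by the two pKa values. The sketch below computes those fractions; the pKa values are example numbers (glycine-like), and the code is an illustration rather than the applet's implementation.

```java
/** Sketch (not the applet's code) of the equilibria behind a "pH ruler":
 *  fractional abundance of the protonation states of a diprotic acid H2A
 *  as a function of pH, computed from its two pKa values. */
public class ProtonationFractions {

    /** Returns {f(H2A), f(HA-), f(A2-)} at the given pH. */
    static double[] fractions(double pH, double pKa1, double pKa2) {
        double h = Math.pow(10, -pH);
        double ka1 = Math.pow(10, -pKa1);
        double ka2 = Math.pow(10, -pKa2);
        double t1 = ka1 / h;             // [HA-]/[H2A]
        double t2 = ka1 * ka2 / (h * h); // [A2-]/[H2A]
        double d = 1 + t1 + t2;
        return new double[] { 1 / d, t1 / d, t2 / d };
    }

    public static void main(String[] args) {
        double pKa1 = 2.3, pKa2 = 9.6; // example values for a simple amino acid (glycine-like)
        for (double pH = 1; pH <= 12; pH += 1) {
            double[] f = fractions(pH, pKa1, pKa2);
            System.out.printf("pH %4.1f : H2A %.2f  HA- %.2f  A2- %.2f%n", pH, f[0], f[1], f[2]);
        }
    }
}
```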
ERIC Educational Resources Information Center
Grotzer, Tina A.; Powell, Megan M.; Derbiszewska, Katarzyna M.; Courter, Caroline J.; Kamarainen, Amy M.; Metcalf, Shari J.; Dede, Christopher J.
2015-01-01
Reasoning about ecosystems includes consideration of causality over temporal and spatial distances; yet learners typically focus on immediate time frames and local contexts. Teaching students to reason beyond these boundaries has met with some success based upon tests that cue students to the types of reasoning required. Virtual worlds offer an…
ERIC Educational Resources Information Center
Harrington, M. C. R.
2011-01-01
Over the past 20 years, there has been a debate on the effectiveness of virtual reality used for learning with young children, producing many ideas but little empirical proof. This empirical study compared learning activity in situ of a real environment (Real) and a desktop virtual reality (Virtual) environment, built with video game technology,…
Theft of Virtual Property — Towards Security Requirements for Virtual Worlds
NASA Astrophysics Data System (ADS)
Beyer, Anja
The article introduces the topic of information technology security for Virtual Worlds to a security experts' audience. Virtual Worlds are Web 2.0 applications where users cruise through the world with their individually shaped avatars to find amusement, challenges or the next best business deal. People invest a lot of time in these worlds and, beyond that, they invest real-world money in buying virtual assets such as fantasy witcheries, weapons, armour, houses, clothes, etc. Although these assets are called "virtual" (which is often equated with "not existent"), there is real value behind them. In November 2007, Dutch police arrested a seventeen-year-old teenager who was suspected of having stolen virtual items in a Virtual World called Habbo Hotel [Reuters07]. In order to successfully provide security mechanisms for Virtual Worlds it is necessary to fully understand the domain for which the security mechanisms are defined. As Virtual Worlds must be classified into the domain of Social Software, the article starts with an overview of how to understand Web 2.0 and gives a short introduction to Virtual Worlds. The article then provides a consideration of the assets of Virtual Worlds participants, describes how these assets can be threatened, gives an overview of appropriate security requirements and concludes with an outlook on possible countermeasures.
Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.
Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques
2015-04-01
Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for a preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with a customary software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed to the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomy V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.
Simulation Of Assembly Processes With Technical Of Virtual Reality
NASA Astrophysics Data System (ADS)
García García, Manuel; Arenas Reina, José Manuel; Lite, Alberto Sánchez; Sebastián Pérez, Miguel Ángel
2009-11-01
The use of virtual reality techniques in industrial processes provides a realistic approach to the product life cycle. For manual assembly of components, the use of virtual surroundings facilitates simultaneous engineering in which variables such as human factors and productivity play a real part. Moreover, the current phase of industrial competition requires rapid adjustment to client needs and to the market situation. In this work, the assembly of the front components of a vehicle is analyzed using virtual reality tools, following a product-process design methodology that includes every stage of the service life. The study is based on workstation design, taking into account productivity and human factors from the ergonomic point of view and implementing a postural study of every assembly operation, leaving the rest of the stages for a later study. The design is optimized by applying this methodology together with virtual reality tools, achieving a 15% reduction in assembly time and a 90% reduction in musculoskeletal disorders across assembly operations.
ERIC Educational Resources Information Center
Chen, Chwen Jen; Fauzy Wan Ismail, Wan Mohd
2008-01-01
The real-time interactive nature of three-dimensional virtual environments (VEs) makes this technology very appropriate for exploratory learning purposes. However, many studies have shown that the exploration process may cause cognitive overload that affects the learning of domain knowledge. This article reports a quasi-experimental study that…
ERIC Educational Resources Information Center
Kiegaldie, Debra; White, Geoff
2006-01-01
The Virtual Patient, an interactive multimedia learning resource using a critical care clinical scenario for postgraduate nursing students, was developed to enhance flexible access to learning experiences and improve learning outcomes in the management of critically ill patients. Using real-time physiological animations, authentic content design…
Software architecture standard for simulation virtual machine, version 2.0
NASA Technical Reports Server (NTRS)
Sturtevant, Robert; Wessale, William
1994-01-01
The Simulation Virtual Machine (SVM) is an Ada architecture which eases the effort involved in real-time software maintenance and sustaining engineering. The Software Architecture Standard defines the infrastructure from which all the simulation models are built. SVM was developed for and used in the Space Station Verification and Training Facility.
Educational Community: Among the Real and Virtual Civic Initiative
ERIC Educational Resources Information Center
Arsenijevic, Jasmina; Andevski, Milica
2016-01-01
The new media enable numerous advantages in the strengthening of civic engagement, through removing barriers in space and time and through networking of individuals of the same social, civic or political interests at the global level. Different forms of civic engagement and civic responsibility in the virtual space are ever more present, and…
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using the GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
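The idea of fusing bottom-up saliency with top-down context can be pictured as a weighted scoring of candidate objects, as in the sketch below. The weights and scores are made-up examples and this is not the paper's algorithm; the actual framework infers the top-down term from the user's spatial and temporal behaviors and runs on the GPU.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch (not the paper's algorithm): pick the most plausibly
 *  attended object by combining bottom-up saliency with a top-down context
 *  score; the weights and scores below are made-up examples. */
public class AttentionTracker {

    static String mostAttended(Map<String, double[]> candidates, double wBottomUp, double wTopDown) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, double[]> e : candidates.entrySet()) {
            double score = wBottomUp * e.getValue()[0] + wTopDown * e.getValue()[1];
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        // {bottom-up saliency, top-down context score} per candidate object
        Map<String, double[]> candidates = new LinkedHashMap<>();
        candidates.put("door",  new double[] { 0.7, 0.2 });
        candidates.put("torch", new double[] { 0.5, 0.9 }); // user is heading toward it
        candidates.put("chair", new double[] { 0.3, 0.1 });

        System.out.println("Most plausibly attended object: "
                + mostAttended(candidates, 0.5, 0.5));
    }
}
```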
NASA Astrophysics Data System (ADS)
Lindholm, D. M.; Wilson, A.
2012-12-01
The steps many scientific data users go through to use data (after discovering it) can be rather tedious, even when dealing with datasets within their own discipline. Accessing data across domains often seems intractable. We present here, LaTiS, an Open Source brokering solution that bridges the gap between the source data and the user's code by defining a unified data model plus a plugin framework for "adapters" to read data from their native source, "filters" to perform server side data processing, and "writers" to output any number of desired formats or streaming protocols. A great deal of work is being done in the informatics community to promote multi-disciplinary science with a focus on search and discovery based on metadata - information about the data. The goal of LaTiS is to go that last step to provide a uniform interface to read the dataset into computer programs and other applications once it has been identified. The LaTiS solution for integrating a wide variety of data models is to return to mathematical fundamentals. The LaTiS data model emphasizes functional relationships between variables. For example, a time series of temperature measurements can be thought of as a function that maps a time to a temperature. With just three constructs: "Scalar" for a single variable, "Tuple" for a collection of variables, and "Function" to represent a set of independent and dependent variables, the LaTiS data model can represent most scientific datasets at a low level that enables uniform data access. Higher level abstractions can be built on top of the basic model to add more meaningful semantics for specific user communities. LaTiS defines its data model in terms of the Unified Modeling Language (UML). It also defines a very thin Java Interface that can be implemented by numerous existing data interfaces (e.g. NetCDF-Java) such that client code can access any dataset via the Java API, independent of the underlying data access mechanism. LaTiS also provides a reference implementation of the data model and server framework (with a RESTful service interface) in the Scala programming language. Scala can be thought of as the next generation of Java. It runs on the Java Virtual Machine and can directly use Java code. Scala improves upon Java's object-oriented capabilities and adds support for functional programming paradigms which are particularly well suited for scientific data analysis. The Scala implementation of LaTiS can be thought of as a Domain Specific Language (DSL) which presents an API that better matches the semantics of the problems scientific data users are trying to solve. Instead of working with bytes, ints, or arrays, the data user can directly work with data as "time series" or "spectra". LaTiS provides many layers of abstraction with which users can interact to support a wide variety of data access and analysis needs.
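The three LaTiS constructs can be illustrated with a toy example: a time series expressed as a Function from a time Scalar to a Tuple of dependent Scalars. The classes below are a sketch for exposition in Java (the reference implementation described above is written in Scala) and are not the actual LaTiS interfaces.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy illustration of the three LaTiS constructs described above; these
 *  classes are a sketch for exposition, not the actual LaTiS interfaces. */
class Scalar {
    final String name; final double value;
    Scalar(String name, double value) { this.name = name; this.value = value; }
    public String toString() { return name + "=" + value; }
}

class Tuple {
    final Scalar[] elements;
    Tuple(Scalar... elements) { this.elements = elements; }
    public String toString() { return java.util.Arrays.toString(elements); }
}

/** A Function maps samples of an independent variable to dependent values. */
class Function {
    final Map<Scalar, Tuple> samples = new LinkedHashMap<>();
    void addSample(Scalar domain, Tuple range) { samples.put(domain, range); }
}

public class TimeSeriesExample {
    public static void main(String[] args) {
        // A time series: time -> (temperature, pressure)
        Function series = new Function();
        series.addSample(new Scalar("time", 0.0),
                new Tuple(new Scalar("temperature", 21.5), new Scalar("pressure", 1013.0)));
        series.addSample(new Scalar("time", 1.0),
                new Tuple(new Scalar("temperature", 22.1), new Scalar("pressure", 1012.6)));

        series.samples.forEach((t, values) -> System.out.println(t + " -> " + values));
    }
}
```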
Wang, Junhua; Sun, Shuaiyi; Fang, Shouen; Fu, Ting; Stipancic, Joshua
2017-02-01
This paper aims both to identify the factors affecting driver drowsiness and to develop a real-time drowsy driving probability model based on virtual Location-Based Services (LBS) data obtained using a driving simulator. A driving simulation experiment was designed and conducted using 32 participant drivers. Collected data included the continuous driving time before detection of drowsiness and virtual LBS data related to temperature, time of day, lane width, average travel speed, driving time in heavy traffic, and driving time on different roadway types. Demographic information, such as nap habit, age, gender, and driving experience, was also collected through questionnaires distributed to the participants. An Accelerated Failure Time (AFT) model was developed to estimate the driving time before detection of drowsiness. The results of the AFT model showed driving time before drowsiness was longer during the day than at night, and was longer at lower temperatures. Additionally, drivers who identified as having a nap habit were more vulnerable to drowsiness. Generally, higher average travel speeds were correlated to a higher risk of drowsy driving, as were longer periods of low-speed driving in traffic jam conditions. Considering different road types, drivers felt drowsy more quickly on freeways compared to other facilities. The proposed model provides a better understanding of how driver drowsiness is influenced by different environmental and demographic factors. The model can be used to provide real-time data for the LBS-based drowsy driving warning system, improving on past methods based only on a fixed driving time. Copyright © 2016 Elsevier Ltd. All rights reserved.
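An Accelerated Failure Time model is log-linear in the covariates, log(T) = b0 + b1*x1 + ... + bk*xk plus an error term. The sketch below shows a point prediction of time-to-drowsiness from that form; the coefficients and covariate coding are invented for illustration (only their signs follow the directions reported above) and are not the paper's fitted values.

```java
/** Sketch of the log-linear form used by Accelerated Failure Time models:
 *    log(T) = b0 + b1*x1 + ... + bk*xk  (error term omitted for a point estimate).
 *  The coefficients below are invented for illustration; they are NOT the
 *  values estimated in the paper. */
public class AftPointEstimate {

    static double predictedMinutesToDrowsiness(boolean daytime, boolean napHabit,
                                               double avgSpeedKmh) {
        double b0 = 4.0;        // hypothetical intercept (log-minutes)
        double bDay = 0.3;      // hypothetical: longer before drowsiness in daytime
        double bNap = -0.2;     // hypothetical: nap-habit drivers tire sooner
        double bSpeed = -0.004; // hypothetical: higher speed, earlier drowsiness
        double logT = b0 + bDay * (daytime ? 1 : 0)
                         + bNap * (napHabit ? 1 : 0)
                         + bSpeed * avgSpeedKmh;
        return Math.exp(logT);
    }

    public static void main(String[] args) {
        System.out.printf("Daytime, no nap habit, 90 km/h: %.0f min%n",
                predictedMinutesToDrowsiness(true, false, 90));
        System.out.printf("Night, nap habit, 110 km/h:     %.0f min%n",
                predictedMinutesToDrowsiness(false, true, 110));
    }
}
```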
Interactions with Virtual People: Do Avatars Dream of Digital Sheep?. Chapter 6
NASA Technical Reports Server (NTRS)
Slater, Mel; Sanchez-Vives, Maria V.
2007-01-01
This paper explores another form of artificial entity, one without physical embodiment. We use the term virtual characters for a type of interactive object that has become familiar in computer games and within virtual reality applications. We refer to these as avatars: three-dimensional graphical objects in more-or-less human form which can interact with humans. Sometimes such avatars will be representations of real humans who are interacting together within a shared networked virtual environment; other times the representations will be of entirely computer-generated characters. Unlike other authors, who reserve the term agent for entirely computer-generated characters and avatar for virtual embodiments of real people, we use the same term here for both. This is because avatars and agents are on a continuum. The question is where their behaviour originates. At the extremes the behaviour is either completely computer generated or comes only from tracking of a real person. However, not every aspect of a real person can be tracked (every eyebrow move, every blink, every breath); rather, real tracking data would be supplemented by inferred behaviours which are programmed based on the available information as to what the real human is doing and her/his underlying emotional and psychological state. Hence there is always some programmed behaviour; it is only a matter of how much. In any case the same underlying problem remains: how can the human character be portrayed in such a manner that its actions are believable and have an impact on the real people with whom it interacts? This paper has three main parts. In the first part we review some evidence that suggests that humans react with appropriate affect in their interactions with virtual human characters, or with other humans who are represented as avatars. This is so in spite of the fact that the representational fidelity is relatively low. Our evidence is from the realm of psychotherapy, where virtual social situations are created that do test whether people react appropriately within these situations. We also consider some experiments on face-to-face virtual communications between people in the same shared virtual environments. The second part tries to give some clues about why this might happen, taking into account modern theories of perception from neuroscience. The third part includes some speculations about the future development of the relationship between people and virtual people. We suggest that a more likely scenario than the world becoming populated by physically embodied virtual people (robots, androids) is that in the relatively near future we will interact more and more in our everyday lives with virtual people: bank managers, shop assistants, instructors, and so on. What is happening in the movies with computer-graphic-generated individuals and entire crowds may move into the space of everyday life.
The Virtual Telescope Project: Enjoy the Universe from your desktop
NASA Astrophysics Data System (ADS)
Masi, G.
2008-06-01
The Virtual Telescope is a new robotic facility that makes it possible for people worldwide to participate in real-time observations of the sky. Complete scientific instruments are made available, matching the needs of researchers, students and amateur astronomers. Instruments are controlled live and in real time by the remote user, while qualified assistance from a professional astronomer is available to guide the observing experience. The project consists of several remotely controlled and independent telescopes, including solar scopes for daytime observations. Their diameters range from 40 to 360 mm. The project and the technology involved are presented here, as well as the particular benefits for students and other users.
Synthesis of Virtual Environments for Aircraft Community Noise Impact Studies
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Sullivan, Brenda M.
2005-01-01
A new capability has been developed for the creation of virtual environments for the study of aircraft community noise. It is applicable for use with both recorded and synthesized aircraft noise. When using synthesized noise, a three-stage process is adopted involving non-real-time prediction and synthesis stages followed by a real-time rendering stage. Included in the prediction-based source noise synthesis are temporal variations associated with changes in operational state, and low frequency fluctuations that are present under all operating conditions. Included in the rendering stage are the effects of spreading loss, absolute delay, atmospheric absorption, ground reflections, and binaural filtering. Results of prediction, synthesis and rendering stages are presented.
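Two of the rendering-stage effects listed above, spherical spreading loss and absolute propagation delay, reduce to simple formulas (20·log10(r/r0) dB and r/c), and atmospheric absorption can be approximated with a per-distance coefficient. The sketch below illustrates these under those simplifying assumptions; it is not the paper's implementation, and the absorption coefficient is a hypothetical value.

```java
/** Sketch of two of the rendering-stage effects listed above, under simple
 *  assumptions (spherical spreading, a single broadband absorption
 *  coefficient). This is not the paper's implementation. */
public class PropagationEffects {

    static final double SPEED_OF_SOUND = 343.0; // m/s, near 20 C

    /** Spherical spreading loss in dB relative to a reference distance. */
    static double spreadingLossDb(double distanceM, double referenceM) {
        return 20.0 * Math.log10(distanceM / referenceM);
    }

    /** Absolute propagation delay in seconds. */
    static double delaySeconds(double distanceM) {
        return distanceM / SPEED_OF_SOUND;
    }

    /** Atmospheric absorption with a single coefficient in dB per metre (hypothetical value). */
    static double absorptionDb(double distanceM, double dbPerMetre) {
        return dbPerMetre * distanceM;
    }

    public static void main(String[] args) {
        double r = 500.0; // aircraft-to-listener distance, m
        double totalLossDb = spreadingLossDb(r, 1.0) + absorptionDb(r, 0.005);
        System.out.printf("Delay: %.2f s, total attenuation: %.1f dB%n",
                delaySeconds(r), totalLossDb);
    }
}
```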
Self-motion perception: assessment by real-time computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Phillips, J. O.
2001-01-01
We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.
Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.
Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J
1998-01-01
Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.
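As an illustration of the kind of computation such a Web tool performs, the sketch below estimates T2 from region-of-interest signals at two echo times, assuming the standard mono-exponential model S(TE) = S0·exp(-TE/T2). The numbers are examples and the code is not the actual applet.

```java
/** Minimal illustration (not the Web tool's code) of estimating T2 from two
 *  echo times, assuming the standard mono-exponential model
 *    S(TE) = S0 * exp(-TE / T2). */
public class T2Estimator {

    /** Two-point T2 estimate, in the same time units as the echo times. */
    static double estimateT2(double te1, double s1, double te2, double s2) {
        return (te2 - te1) / Math.log(s1 / s2);
    }

    public static void main(String[] args) {
        // Example region-of-interest signal intensities at two echo times (ms)
        double te1 = 20.0, s1 = 820.0;
        double te2 = 80.0, s2 = 410.0;
        System.out.printf("Estimated T2: %.1f ms%n", estimateT2(te1, s1, te2, s2));
    }
}
```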
ERIC Educational Resources Information Center
Gil, Arturo; Peidró, Adrián; Reinoso, Óscar; Marín, José María
2017-01-01
This paper presents a tool, LABEL, oriented to the teaching of parallel robotics. The application, organized as a set of tools developed using Easy Java Simulations, enables the study of the kinematics of parallel robotics. A set of classical parallel structures was implemented such that LABEL can solve the inverse and direct kinematic problem of…
Offenders become the victim in virtual reality: impact of changing perspective in domestic violence.
Seinfeld, S; Arroyo-Palacios, J; Iruretagoyena, G; Hortensius, R; Zapata, L E; Borland, D; de Gelder, B; Slater, M; Sanchez-Vives, M V
2018-02-09
The role of empathy and perspective-taking in preventing aggressive behaviors has been highlighted in several theoretical models. In this study, we used immersive virtual reality to induce a full body ownership illusion that allows offenders to be in the body of a victim of domestic abuse. A group of male domestic violence offenders and a control group without a history of violence experienced a virtual scene of abuse in first-person perspective. During the virtual encounter, the participants' real bodies were replaced with a life-sized virtual female body that moved synchronously with their own real movements. Participants' emotion recognition skills were assessed before and after the virtual experience. Our results revealed that offenders have a significantly lower ability to recognize fear in female faces compared to controls, with a bias towards classifying fearful faces as happy. After being embodied in a female victim, offenders improved their ability to recognize fearful female faces and reduced their bias towards recognizing fearful faces as happy. For the first time, we demonstrate that changing the perspective of an aggressive population through immersive virtual reality can modify socio-perceptual processes such as emotion recognition, thought to underlie this specific form of aggressive behaviors.
Riva, Giuseppe; Raspelli, Simona; Algeri, Davide; Pallavicini, Federica; Gorini, Alessandra; Wiederhold, Brenda K; Gaggioli, Andrea
2010-02-01
The use of new technologies, particularly virtual reality, is not new in the treatment of posttraumatic stress disorders (PTSD): VR is used to facilitate the activation of the traumatic event during exposure therapy. However, during the therapy, VR is a new and distinct realm, separate from the emotions and behaviors experienced by the patient in the real world: the behavior of the patient in VR has no direct effects on the real-life experience; the emotions and problems experienced by the patient in the real world are not directly addressed in the VR exposure. In this article, we suggest that the use of a new technological paradigm, Interreality, may improve the clinical outcome of PTSD. The main feature of Interreality is a twofold link between the virtual and real worlds: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through 3D shared virtual worlds; biosensors and activity sensors (from the real to the virtual world); and personal digital assistants and/or mobile phones (from the virtual world to the real one). We describe different technologies that are involved in the Interreality vision and its clinical rationale. To illustrate the concept of Interreality in practice, a clinical scenario is also presented and discussed: Rosa, a 55-year-old nurse, involved in a major car accident.
Dynamic publication model for neurophysiology databases.
Gardner, D; Abato, M; Knuth, K H; DeBellis, R; Erde, S M
2001-08-29
We have implemented a pair of database projects, one serving cortical electrophysiology and the other invertebrate neurones and recordings. The design for each combines aspects of two proven schemes for information interchange. The journal article metaphor determined the type, scope, organization and quantity of data to comprise each submission. Sequence databases encouraged intuitive tools for data viewing, capture, and direct submission by authors. Neurophysiology required transcending these models with new datatypes. Time-series, histogram and bivariate datatypes, including illustration-like wrappers, were selected by their utility to the community of investigators. As interpretation of neurophysiological recordings depends on context supplied by metadata attributes, searches are via visual interfaces to sets of controlled-vocabulary metadata trees. Neurones, for example, can be specified by metadata describing functional and anatomical characteristics. Permanence is advanced by data model and data formats largely independent of contemporary technology or implementation, including Java and the XML standard. All user tools, including dynamic data viewers that serve as a virtual oscilloscope, are Java-based, free, multiplatform, and distributed by our application servers to any contemporary networked computer. Copyright is retained by submitters; viewer displays are dynamic and do not violate copyright of related journal figures. Panels of neurophysiologists view and test schemas and tools, enhancing community support.
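A time-series datatype with controlled-vocabulary metadata, as described above, can be pictured with the toy record below. The class and attribute names are invented for illustration and are not the project's actual schema.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy sketch of a time-series datatype with metadata attributes, in the
 *  spirit of the abstract; the class and field names are invented for
 *  illustration and are not the project's actual schema. */
public class TimeSeriesRecord {
    final double samplingRateHz;
    final double[] samples; // e.g. membrane potential in mV
    final Map<String, String> metadata = new LinkedHashMap<>();

    TimeSeriesRecord(double samplingRateHz, double[] samples) {
        this.samplingRateHz = samplingRateHz;
        this.samples = samples;
    }

    public static void main(String[] args) {
        TimeSeriesRecord rec = new TimeSeriesRecord(10000.0,
                new double[] { -65.0, -64.2, -30.1, 20.5, -70.3 });
        // Controlled-vocabulary metadata supplies the context needed to interpret the trace.
        rec.metadata.put("neuron.anatomy", "pyramidal cell");
        rec.metadata.put("neuron.function", "motor cortex");
        rec.metadata.put("units", "mV");

        System.out.println(rec.samples.length + " samples at "
                + rec.samplingRateHz + " Hz; metadata: " + rec.metadata);
    }
}
```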
SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bednarz, B; Culberson, W; Bassetti, M
Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during the treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real time from an ultrasound acquisition. Results: The completion of this work will result in several innovations, including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.
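The abstract does not specify the matching algorithm that pairs the live ultrasound frame with the pre-treatment training set; the sketch below illustrates one plausible approach, selecting the training ultrasound frame with the highest normalized cross-correlation and using its index to look up the paired MR frame. Class and method names are hypothetical.

```java
/** Minimal sketch: pick the pre-treatment ultrasound frame most similar to the
 *  live frame (normalized cross-correlation) and return its index, which also
 *  indexes the paired MR volume acquired during the calibration session. */
public final class UsMrMatcher {

    /** Normalized cross-correlation between two equally sized images (flattened to 1D). */
    static double ncc(double[] a, double[] b) {
        double meanA = 0, meanB = 0;
        for (int i = 0; i < a.length; i++) { meanA += a[i]; meanB += b[i]; }
        meanA /= a.length; meanB /= b.length;
        double num = 0, varA = 0, varB = 0;
        for (int i = 0; i < a.length; i++) {
            double da = a[i] - meanA, db = b[i] - meanB;
            num += da * db; varA += da * da; varB += db * db;
        }
        return num / Math.sqrt(varA * varB + 1e-12);
    }

    /** Index of the training ultrasound frame best matching the live frame. */
    static int bestMatch(double[] liveUsFrame, double[][] trainingUsFrames) {
        int best = 0; double bestScore = Double.NEGATIVE_INFINITY;
        for (int k = 0; k < trainingUsFrames.length; k++) {
            double s = ncc(liveUsFrame, trainingUsFrames[k]);
            if (s > bestScore) { bestScore = s; best = k; }
        }
        return best; // display the paired MR frame with this index as the "virtual MR" image
    }
}
```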
Real-time, interactive, visually updated simulator system for telepresence
NASA Technical Reports Server (NTRS)
Schebor, Frederick S.; Turney, Jerry L.; Marzwell, Neville I.
1991-01-01
Time delays and limited sensory feedback of remote telerobotic systems tend to disorient teleoperators and dramatically decrease the operator's performance. To remove the effects of time delays, key components of a prototype forward simulation subsystem, the Global-Local Environment Telerobotic Simulator (GLETS), were designed and developed to buffer the operator from the remote task. GLETS totally immerses an operator in a real-time, interactive, simulated, visually updated artificial environment of the remote telerobotic site. Using GLETS, the operator will, in effect, enter into a telerobotic virtual reality and can easily form a gestalt of the virtual 'local site' that matches the operator's normal interactions with the remote site. In addition to use in space-based telerobotics, GLETS, due to its extendable architecture, can also be used in other teleoperational environments such as toxic material handling, construction, and undersea exploration.
Koenig, Alexander; Novak, Domen; Omlin, Ximena; Pulfer, Michael; Perreault, Eric; Zimmerli, Lukas; Mihelj, Matjaz; Riener, Robert
2011-08-01
Cognitively challenging training sessions during robot-assisted gait training after stroke were shown to be key requirements for the success of rehabilitation. Despite a broad variability of cognitive impairments amongst the stroke population, current rehabilitation environments do not adapt to the cognitive capabilities of the patient, as cognitive load cannot be objectively assessed in real-time. We provided healthy subjects and stroke patients with a virtual task during robot-assisted gait training, which allowed modulating cognitive load by adapting the difficulty level of the task. We quantified the cognitive load of stroke patients by using psychophysiological measurements and performance data. In open-loop experiments with healthy subjects and stroke patients, we obtained training data for a linear, adaptive classifier that estimated the current cognitive load of patients in real-time. We verified our classification results via questionnaires and obtained 88% correct classification in healthy subjects and 75% in patients. Using the pre-trained, adaptive classifier, we closed the cognitive control loop around healthy subjects and stroke patients by automatically adapting the difficulty level of the virtual task in real-time such that patients were neither cognitively overloaded nor under-challenged. © 2011 IEEE
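The exact features and update rule of the linear, adaptive classifier are not given in the abstract; the following is a minimal sketch of the closed loop it describes, assuming a hypothetical psychophysiological feature vector, a perceptron-style weight update, and a simple rule that lowers task difficulty when the estimated load is high and raises it when load is low. Thresholds and gains are illustrative.

```java
/** Sketch of a closed cognitive-load loop: a linear classifier scores the current
 *  psychophysiological feature vector, and the virtual task difficulty is nudged
 *  in the opposite direction of the estimated load. All parameters are assumptions. */
public final class CognitiveLoadLoop {
    private final double[] w;       // classifier weights, adapted online
    private double bias = 0.0;
    private int difficulty = 5;     // current difficulty level of the virtual task, 1..10

    CognitiveLoadLoop(int numFeatures) { w = new double[numFeatures]; }

    /** Linear score; positive values are interpreted as cognitive overload. */
    double score(double[] features) {
        double s = bias;
        for (int i = 0; i < w.length; i++) s += w[i] * features[i];
        return s;
    }

    /** One control step: estimate load, adapt weights when a supervised label is
     *  available (e.g. from questionnaires), then adjust the difficulty level. */
    int step(double[] features, Double supervisedLoad, double learningRate) {
        double load = score(features);
        if (supervisedLoad != null) {                  // online adaptation
            double err = supervisedLoad - load;
            for (int i = 0; i < w.length; i++) w[i] += learningRate * err * features[i];
            bias += learningRate * err;
        }
        if (load > 0.5 && difficulty > 1) difficulty--;        // overloaded: make task easier
        else if (load < -0.5 && difficulty < 10) difficulty++; // under-challenged: make it harder
        return difficulty;
    }
}
```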
The development of a collaborative virtual environment for finite element simulation
NASA Astrophysics Data System (ADS)
Abdul-Jalil, Mohamad Kasim
Communication between geographically distributed designers has been a major hurdle in traditional engineering design. Conventional methods of communication, such as video conferencing, telephone, and email, are less efficient, especially when dealing with complex design models. Complex shapes, intricate features, and hidden parts are often difficult to describe verbally or even with traditional 2-D or 3-D visual representations. Virtual Reality (VR) and Internet technologies have substantial potential to bridge this communication barrier. VR technology allows designers to immerse themselves in a virtual environment and to view and manipulate a model just as in real life, and fast Internet connectivity enables rapid data transfer between remote locations. Although various collaborative virtual environment (CVE) systems have been developed in the past decade, they rely on high-end technology that is not accessible to typical designers. The objective of this dissertation is to develop a new approach to increase the efficiency of the design process, particularly for large-scale applications in which participants are geographically distributed. A multi-platform, easily accessible collaborative virtual environment (CVRoom) is developed to accomplish this objective. Geographically dispersed designers can meet in a single shared virtual environment to discuss issues pertaining to the engineering design process and to make trade-off decisions more quickly than before, thereby speeding the entire process. This faster design process is achieved through capabilities that better support multidisciplinary collaboration and the modeling of the trade-off decisions that are so critical before launching into a formal detailed design. The features of the environment developed in this research include the ability to view design models, use voice interaction, and link engineering analysis modules (such as the Finite Element Analysis module demonstrated in this work). A major issue in developing a CVE system for engineering design is obtaining pertinent simulation results in real time, so that designers can make decisions based on these results quickly. For example, in a finite element analysis, if a design model is changed or perturbed, the analysis results must be obtained in real time or near real time to make the virtual meeting environment realistic. In this research, a finite difference-based Design Sensitivity Analysis (DSA) approach is employed to approximate structural responses (i.e., stress, displacement, etc.), demonstrating the applicability of CVRoom for engineering design trade-offs. The DSA approach provides fast approximation and is well-suited to a virtual meeting environment where fast response times are required. The DSA-based approach is tested on several example problems to show its applicability and limitations. This dissertation demonstrates that an increase in efficiency and a reduction in the time required for a complex design process can be accomplished using the approach developed in this research. Several uses of CVRoom by students working on common design tasks were investigated; all participants preferred the collaborative virtual environment developed in this work (CVRoom) over other modes of interaction.
It is proposed here that CVRoom is representative of the type of collaborative virtual environment that will be used by most designers in the future to reduce the time required in a design cycle and thereby reduce the associated cost.
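The dissertation's fast approximation rests on a first-order, finite-difference design sensitivity: two expensive finite-element evaluations yield a sensitivity dR/dx that is then reused for cheap linear extrapolation while designers perturb the model in the virtual meeting. The sketch below illustrates that idea only; the solver interface and names are hypothetical.

```java
import java.util.function.DoubleUnaryOperator;

/** First-order finite-difference design sensitivity approximation:
 *  R(x) ~ R(x0) + dR/dx * (x - x0), with dR/dx estimated once by a forward
 *  difference of two (expensive) finite-element evaluations. */
public final class FiniteDifferenceDsa {
    private final double x0, r0, sensitivity;

    /** feaResponse is the expensive FE solver returning, e.g., maximum stress
     *  for a given value of one design variable. */
    FiniteDifferenceDsa(DoubleUnaryOperator feaResponse, double x0, double perturbation) {
        this.x0 = x0;
        this.r0 = feaResponse.applyAsDouble(x0);
        double r1 = feaResponse.applyAsDouble(x0 + perturbation);
        this.sensitivity = (r1 - r0) / perturbation;   // forward-difference dR/dx
    }

    /** Cheap real-time estimate used while designers perturb the model in the CVE. */
    double approximate(double x) { return r0 + sensitivity * (x - x0); }
}
```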
The Design and Implementation of Virtual Roaming in Yunnan Diqing Tibetan traditional Villages
NASA Astrophysics Data System (ADS)
Cao, Lucheng; Xu, Wu; Li, Ke; Jin, Chunjie; Su, Ying; He, Jin
2018-06-01
Traditional residences are a continuation of intangible cultural heritage and the soil in which it develops. At present, the protection and inheritance of traditional villages have been impacted by modernization, and the phenomenon of assimilation is very serious. This article takes these problems as its starting point, analyzes why and how virtual reality technology can help address them, and uses the traditional Tibetan dwellings of Diqing, Yunnan as a concrete example. First, using VR technology with real images and sound, we simulate a near-real virtual world. Second, we collect a large amount of real image information and build a visualization model of the buildings using the 3DMAX software platform, UV mapping, and rendering optimization. Finally, the Vizard virtual reality development platform is used to build the roaming system and realize virtual interaction. The roaming system was published online, overcoming the disadvantages of unintuitive presentation and limited interaction, and these ideas can give a whole new meaning to protection projects for cultural relic buildings. At the same time, visitors can enjoy the "Dian-style" architectural style and cultural connotation of the dwellings of Diqing, Yunnan.
A Novel Integrating Virtual Reality Approach for the Assessment of the Attachment Behavioral System.
Chicchi Giglioli, Irene Alice; Pravettoni, Gabriella; Sutil Martín, Dolores Lucia; Parra, Elena; Raya, Mariano A
2017-01-01
Virtual reality (VR) technology represents a novel and powerful tool for behavioral research in psychological assessment. VR provides simulated experiences able to create the sensation of undergoing real situations. Users become active participants in the virtual environment, seeing, hearing, feeling, and acting as if they were in the real world. Currently, most psychological VR applications concern the treatment of various mental disorders rather than assessment, which is still mainly based on paper-and-pencil tests. The observation of behaviors is costly and labor-intensive, and it is hard to create social situations in laboratory settings, even though the observation of actual behaviors could be particularly informative. In this framework, stressful social experiences can activate attachment behaviors toward a significant person who can help to control and soothe the individual, promoting well-being. Social support seeking, physical proximity, and positive and negative behaviors represent the main attachment behaviors that people can carry out during experiences of distress. We propose VR as a novel integrating approach to measure real attachment behaviors. The first studies of the attachment behavioral system using VR showed the potential of this approach. To improve assessment during the VR experience, we propose virtual stealth assessment (VSA) as a new method. VSA could represent a valid and novel technique to measure various psychological attributes in real time during the virtual experience. This method could be used in psychology to generate a more complete, exhaustive, and accurate psychological evaluation of the individual.
Tuning self-motion perception in virtual reality with visual illusions.
Bruder, Gerd; Steinicke, Frank; Wieland, Phil; Lappe, Markus
2012-07-01
Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed to scale the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal, i.e., real-world movements could be mapped with a larger gain to the VE in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular, due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs, but omit a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. Therefore, we introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
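The baseline technique this paper contrasts with, scaling the mapped virtual camera motion relative to the tracked real-world movement, reduces to a single translation gain. A minimal sketch of that mapping follows; the vector representation and the gain value are illustrative assumptions, not the optic-flow manipulations introduced in the paper itself.

```java
/** Sketch of gain-based travel mapping: the virtual camera moves gain times the
 *  tracked real-world displacement, compensating for distance underestimation.
 *  A gain of 1.0 is a one-to-one mapping; values above 1.0 amplify virtual travel. */
public final class TravelGainMapping {
    private final double gain;   // illustrative value, e.g. chosen so real and virtual motion feel equal

    TravelGainMapping(double gain) { this.gain = gain; }

    /** Apply the gain to one tracked head displacement (metres, per frame). */
    double[] mapDisplacement(double[] realDelta) {
        return new double[] { gain * realDelta[0], gain * realDelta[1], gain * realDelta[2] };
    }
}
```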
Exploring Non-Traditional Learning Methods in Virtual and Real-World Environments
ERIC Educational Resources Information Center
Lukman, Rebeka; Krajnc, Majda
2012-01-01
This paper identifies the commonalities and differences within non-traditional learning methods regarding virtual and real-world environments. The non-traditional learning methods in real-world have been introduced within the following courses: Process Balances, Process Calculation, and Process Synthesis, and within the virtual environment through…
NASA Astrophysics Data System (ADS)
Tadokoro, Satoshi; Kitano, Hiroaki; Takahashi, Tomoichi; Noda, Itsuki; Matsubara, Hitoshi; Shinjoh, Atsushi; Koto, Tetsuo; Takeuchi, Ikuo; Takahashi, Hironao; Matsuno, Fumitoshi; Hatayama, Mitsunori; Nobe, Jun; Shimada, Susumu
2000-07-01
This paper introduces the RoboCup-Rescue Simulation Project, a contribution to the disaster mitigation, search, and rescue problem. A comprehensive urban disaster simulator is constructed on distributed computers. Heterogeneous intelligent agents such as fire fighters, victims, and volunteers conduct search and rescue activities in this virtual disaster world. A real-world interface integrates various sensor systems and infrastructure controllers in real cities with the virtual world. Real-time simulation is synchronized with actual disasters, computing the complex relationships between various damage factors and agent behaviors. A mission-critical man-machine interface provides portability and robustness for disaster mitigation centers, and augmented-reality interfaces for rescue in real disasters. It also provides a virtual-reality training function for the public. This diverse spectrum of RoboCup-Rescue contributes to the creation of a safer social system.
Validation of virtual reality as a tool to understand and prevent child pedestrian injury.
Schwebel, David C; Gaines, Joanna; Severson, Joan
2008-07-01
In recent years, virtual reality has emerged as an innovative tool for health-related education and training. Among the many benefits of virtual reality is the opportunity for novice users to engage unsupervised in a safe environment when the real environment might be dangerous. Virtual environments are only useful for health-related research, however, if behavior in the virtual world validly matches behavior in the real world. This study was designed to test the validity of an immersive, interactive virtual pedestrian environment. A sample of 102 children and 74 adults was recruited to complete simulated road-crossings in both the virtual environment and the identical real environment. In both the child and adult samples, construct validity was demonstrated via significant correlations between behavior in the virtual and real worlds. Results also indicate construct validity through developmental differences in behavior; convergent validity by showing correlations between parent-reported child temperament and behavior in the virtual world; internal reliability of various measures of pedestrian safety in the virtual world; and face validity, as measured by users' self-reported perception of realism in the virtual world. We discuss issues of generalizability to other virtual environments, and the implications for application of virtual reality to understanding and preventing pediatric pedestrian injuries.
NASA Astrophysics Data System (ADS)
Soler, Luc; Marescaux, Jacques
2006-04-01
Technological innovations of the 20th century provided medicine and surgery with new tools, among which virtual reality and robotics are some of the most revolutionary. Our work aims at setting up new techniques for detection, 3D delineation, and 4D time follow-up of small abdominal lesions from standard medical images (CT scan, MRI). It also aims at developing innovative systems making tumor resection or treatment easier through the use of augmented reality and robotized systems, increasing gesture precision. It also permits a real-time long-distance connection between practitioners so they can share the same 3D reconstructed patient and interact with the same patient, virtually before the intervention and for real during the surgical procedure thanks to a telesurgical robot. In preclinical studies, our first results obtained from a micro-CT scanner show that these technologies provide efficient and precise 3D modeling of anatomical and pathological structures of rats and mice. In clinical studies, our first results show the possibility of improving the therapeutic choice thanks to better detection and representation of the patient before performing the surgical gesture. They also show the efficiency of augmented reality, which provides virtual transparency of the patient in real time during the operative procedure. In the near future, through the exploitation of these systems, surgeons will program and check on the virtual patient clone an optimal procedure without errors, which will then be replayed on the real patient by the robot under surgeon control. This medical dream is today about to become reality.
Web-Based Virtual Environments for Facilitating Assessment of L2 Oral Communication Ability
ERIC Educational Resources Information Center
Ockey, Gary J.; Gu, Lin; Keehner, Madeleine
2017-01-01
A growing number of stakeholders argue for the use of second language (L2) speaking assessments that measure the ability to orally communicate in real time. A Web-based virtual environment (VE) that allows live voice communication among individuals may have potential for aiding in delivering such assessments. While off-the-shelf voice…
ERIC Educational Resources Information Center
Blikstein, Paulo; Fuhrmann, Tamar; Salehi, Shima
2016-01-01
In this paper, we investigate an approach to supporting students' learning in science through a combination of physical experimentation and virtual modeling. We present a study that utilizes a scientific inquiry framework, which we call "bifocal modeling," to link student-designed experiments and computer models in real time. In this…
JavaGenes and Condor: Cycle-Scavenging Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Langhirt, Eric; Livny, Miron; Ramamurthy, Ravishankar; Soloman, Marvin; Traugott, Steve
2000-01-01
A genetic algorithm code, JavaGenes, was written in Java and used to evolve pharmaceutical drug molecules and digital circuits. JavaGenes was run under the Condor cycle-scavenging batch system managing 100-170 desktop SGI workstations. Genetic algorithms mimic biological evolution by evolving solutions to problems using crossover and mutation. While most genetic algorithms evolve strings or trees, JavaGenes evolves graphs representing (currently) molecules and circuits. Java was chosen as the implementation language because the genetic algorithm requires random splitting and recombining of graphs, a complex data structure manipulation with ample opportunities for memory leaks, loose pointers, out-of-bound indices, and other hard to find bugs. Java garbage-collection memory management, lack of pointer arithmetic, and array-bounds index checking prevents these bugs from occurring, substantially reducing development time. While a run-time performance penalty must be paid, the only unacceptable performance we encountered was using standard Java serialization to checkpoint and restart the code. This was fixed by a two-day implementation of custom checkpointing. JavaGenes is minimally integrated with Condor; in other words, JavaGenes must do its own checkpointing and I/O redirection. A prototype Java-aware version of Condor was developed using standard Java serialization for checkpointing. For the prototype to be useful, standard Java serialization must be significantly optimized. JavaGenes is approximately 8700 lines of code and a few thousand JavaGenes jobs have been run. Most jobs ran for a few days. Results include proof that genetic algorithms can evolve directed and undirected graphs, development of a novel crossover operator for graphs, a paper in the journal Nanotechnology, and another paper in preparation.
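JavaGenes itself is not reproduced here; the following is a generic sketch of the evolutionary loop the abstract describes (evaluate, select, cross over, mutate, and periodically checkpoint, since JavaGenes does its own checkpointing under Condor). The Individual interface, the elitist selection scheme, and all names are placeholders, not the JavaGenes implementation.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

/** Candidate solution; in JavaGenes this would be a graph representing a molecule or circuit. */
interface Individual {
    double fitness();
    Individual crossover(Individual other, Random rng);
    Individual mutate(Random rng);
}

/** Generic genetic-algorithm loop of the kind described in the abstract. Assumes population size >= 2. */
final class GaLoop {
    static List<Individual> evolve(List<Individual> population, int generations,
                                   int checkpointEvery, Random rng) {
        List<Individual> pop = new ArrayList<>(population);
        for (int g = 0; g < generations; g++) {
            pop.sort(Comparator.comparingDouble(Individual::fitness).reversed());
            // Keep the fitter half, then refill by crossover + mutation of random elites.
            List<Individual> next = new ArrayList<>(pop.subList(0, pop.size() / 2));
            while (next.size() < pop.size()) {
                Individual a = next.get(rng.nextInt(pop.size() / 2));
                Individual b = next.get(rng.nextInt(pop.size() / 2));
                next.add(a.crossover(b, rng).mutate(rng));
            }
            pop = next;
            if (g % checkpointEvery == 0) checkpoint(pop, g);
        }
        return pop;
    }

    private static void checkpoint(List<Individual> pop, int generation) {
        // JavaGenes replaced slow standard Java serialization with custom checkpointing;
        // this stub only marks where such a write would occur.
    }
}
```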
A geostationary Earth orbit satellite model using Easy Java Simulation
NASA Astrophysics Data System (ADS)
Wee, Loo Kang; Hwee Goh, Giam
2013-01-01
We develop an Easy Java Simulation (EJS) model for students to visualize geostationary orbits near Earth, modelled using a Java 3D implementation of the EJS 3D library. The simplified physics model is described and simulated using a simple constant angular velocity equation. We discuss four computer model design ideas: (1) a simple and realistic 3D view and associated learning in the real world; (2) comparative visualization of permanent geostationary satellites; (3) examples of non-geostationary orbits of different rotation senses, periods and planes; and (4) an incorrect physics model for conceptual discourse. General feedback from the students has been relatively positive, and we hope teachers will find the computer model useful in their own classes.
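The simplified constant-angular-velocity physics the abstract mentions can be written down compactly: the geostationary radius follows from Kepler's third law with the period equal to one sidereal day, and the satellite position is advanced with omega = 2*pi/T. The sketch below uses standard constant values; it is an illustration of the physics, not the EJS model's code.

```java
/** Simplified geostationary satellite model: radius from Kepler's third law with
 *  T equal to one sidereal day, position advanced at constant angular velocity
 *  in the equatorial plane. */
public final class GeostationaryModel {
    static final double GM = 3.986004418e14;     // Earth's gravitational parameter, m^3/s^2
    static final double SIDEREAL_DAY = 86164.1;  // seconds
    static final double OMEGA = 2 * Math.PI / SIDEREAL_DAY;
    static final double RADIUS = Math.cbrt(GM * SIDEREAL_DAY * SIDEREAL_DAY
                                           / (4 * Math.PI * Math.PI)); // about 4.216e7 m

    /** Equatorial-plane position (x, y, 0) of the satellite at time t seconds. */
    static double[] position(double t) {
        return new double[] { RADIUS * Math.cos(OMEGA * t), RADIUS * Math.sin(OMEGA * t), 0.0 };
    }
}
```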
Ligand.Info small-molecule Meta-Database.
von Grotthuss, Marcin; Koczyk, Grzegorz; Pas, Jakub; Wyrwicz, Lucjan S; Rychlewski, Leszek
2004-12-01
Ligand.Info is a compilation of various publicly available databases of small molecules. The total size of the Meta-Database is over 1 million entries. The compound records contain calculated three-dimensional coordinates and sometimes information about biological activity. Some molecules have information about FDA drug approval status or about anti-HIV activity. The Meta-Database can be downloaded from the http://Ligand.Info web page. The database can also be screened using a Java-based tool. The tool can interactively cluster sets of molecules on the user side and automatically download similar molecules from the server. The application requires the Java Runtime Environment 1.4 or higher, which can be automatically downloaded from Sun Microsystems or Apple Computer and installed during the first use of Ligand.Info on desktop systems that support Java (MS Windows, Mac OS, Solaris, and Linux). The Ligand.Info Meta-Database can be used for virtual high-throughput screening of new potential drugs. The presented examples show that, using a known antiviral drug as a query, the system was able to find other antiviral drugs and inhibitors.
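The abstract does not say which similarity measure the tool uses; a common choice for this kind of query-by-example screening is the Tanimoto coefficient over binary molecular fingerprints, and the sketch below assumes exactly that. Names and the threshold are illustrative, not Ligand.Info's implementation.

```java
import java.util.BitSet;
import java.util.stream.IntStream;

/** Sketch of fingerprint-based similarity screening: rank library molecules by
 *  Tanimoto similarity to a query fingerprint and return the indices above a threshold. */
public final class SimilarityScreen {

    /** Tanimoto coefficient |A and B| / |A or B| between two bit fingerprints. */
    static double tanimoto(BitSet a, BitSet b) {
        BitSet and = (BitSet) a.clone(); and.and(b);
        BitSet or  = (BitSet) a.clone(); or.or(b);
        return or.cardinality() == 0 ? 0.0 : (double) and.cardinality() / or.cardinality();
    }

    /** Indices of library molecules with similarity to the query above the threshold. */
    static int[] hits(BitSet query, BitSet[] library, double threshold) {
        return IntStream.range(0, library.length)
                .filter(i -> tanimoto(query, library[i]) >= threshold)
                .toArray();
    }
}
```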
Owgis 2.0: Open Source Java Application that Builds Web GIS Interfaces for Desktop Andmobile Devices
NASA Astrophysics Data System (ADS)
Zavala Romero, O.; Chassignet, E.; Zavala-Hidalgo, J.; Pandav, H.; Velissariou, P.; Meyer-Baese, A.
2016-12-01
OWGIS is an open source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. The current version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites can be modified using COMPASS, a well-known CSS authoring framework. In addition, OWGIS uses several Open Geospatial Consortium standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers, allowing the websites to display 4D data from NetCDF files. The application is configured by XML files that define which layers (geographic datasets) are displayed on the Web GIS sites. Among other features, OWGIS allows for animations; streamlines from vector data; virtual globe display; vertical profiles and vertical transects; different color palettes; the ability to download data; and display of text in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology, and climate fields.
NASA Technical Reports Server (NTRS)
Pearson, Don; Hamm, Dustin; Kubena, Brian; Weaver, Jonathan K.
2010-01-01
An updated version of the Platform Independent Software Components for the Exploration of Space (PISCES) software library is available. A previous version was reported in Library for Developing Spacecraft-Mission-Planning Software (MSC-22983), NASA Tech Briefs, Vol. 25, No. 7 (July 2001), page 52. To recapitulate: This software provides for Web-based, collaborative development of computer programs for planning trajectories and trajectory- related aspects of spacecraft-mission design. The library was built using state-of-the-art object-oriented concepts and software-development methodologies. The components of PISCES include Java-language application programs arranged in a hierarchy of classes that facilitates the reuse of the components. As its full name suggests, the PISCES library affords platform-independence: The Java language makes it possible to use the classes and application programs with a Java virtual machine, which is available in most Web-browser programs. Another advantage is expandability: Object orientation facilitates expansion of the library through creation of a new class. Improvements in the library since the previous version include development of orbital-maneuver- planning and rendezvous-launch-window application programs, enhancement of capabilities for propagation of orbits, and development of a desktop user interface.
Avola, Danilo; Spezialetti, Matteo; Placidi, Giuseppe
2013-06-01
Rehabilitation is often required after stroke, surgery, or degenerative diseases. It has to be specific to each patient and can be easily calibrated if assisted by human-computer interfaces and virtual reality. Recognition and tracking of different human body landmarks represent the basic features for the design of the next generation of human-computer interfaces. The most advanced systems for capturing human gestures focus on vision-based techniques which, on the one hand, may require compromises in real-time performance and spatial precision and, on the other hand, ensure a natural interaction experience. The integration of vision-based interfaces with thematic virtual environments encourages the development of novel applications and services for rehabilitation activities. The algorithmic processes involved in gesture recognition, as well as the characteristics of the virtual environments, can be developed with different levels of accuracy. This paper describes the architectural aspects of a framework supporting real-time vision-based gesture recognition and virtual environments for fast prototyping of customized exercises for rehabilitation purposes. The goal is to provide the therapist with a tool for fast implementation and modification of specific rehabilitation exercises for specific patients during functional recovery. Pilot examples of designed applications and a preliminary system evaluation are reported and discussed. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
The Integrated Virtual Environment Rehabilitation Treadmill System
Feasel, Jeff; Whitton, Mary C.; Kassler, Laura; Brooks, Frederick P.; Lewek, Michael D.
2015-01-01
Slow gait speed and interlimb asymmetry are prevalent in a variety of disorders. Current approaches to locomotor retraining emphasize the need for appropriate feedback during intensive, task-specific practice. This paper describes the design and feasibility testing of the integrated virtual environment rehabilitation treadmill (IVERT) system intended to provide real-time, intuitive feedback regarding gait speed and asymmetry during training. The IVERT system integrates an instrumented, split-belt treadmill with a front-projection, immersive virtual environment. The novel adaptive control system uses only ground reaction force data from the treadmill to continuously update the speeds of the two treadmill belts independently, as well as to control the speed and heading in the virtual environment in real time. Feedback regarding gait asymmetry is presented 1) visually as walking a curved trajectory through the virtual environment and 2) proprioceptively in the form of different belt speeds on the split-belt treadmill. A feasibility study involving five individuals with asymmetric gait found that these individuals could effectively control the speed of locomotion and perceive gait asymmetry during the training session. Although minimal changes in overground gait symmetry were observed immediately following a single training session, further studies should be done to determine the IVERT’s potential as a tool for rehabilitation of asymmetric gait by providing patients with congruent visual and proprioceptive feedback. PMID:21652279
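The abstract states that ground reaction force data alone drive both belt speeds and the virtual heading, but the control law is not given. The sketch below is one minimal interpretation under that assumption: per-stride vertical force impulses from the left and right belts yield a symmetry index that adjusts each belt speed and curves the virtual walking path. Gains, directions of adjustment, and the specific symmetry measure are illustrative assumptions.

```java
/** Sketch of an IVERT-style controller: left/right vertical ground reaction force
 *  impulses are turned into a symmetry index that (a) adjusts each belt speed and
 *  (b) steers the heading in the virtual environment. */
public final class AsymmetryController {
    private double leftSpeed, rightSpeed;      // belt speeds, m/s

    AsymmetryController(double initialSpeed) { leftSpeed = rightSpeed = initialSpeed; }

    /** Symmetry index in [-1, 1]; negative means the left side is under-loaded. */
    static double symmetryIndex(double leftImpulse, double rightImpulse) {
        return (leftImpulse - rightImpulse) / (leftImpulse + rightImpulse + 1e-9);
    }

    /** One update per stride: returns {leftBeltSpeed, rightBeltSpeed, headingRadians}. */
    double[] update(double leftImpulse, double rightImpulse, double speedGain, double headingGain) {
        double s = symmetryIndex(leftImpulse, rightImpulse);
        leftSpeed  += speedGain * -s;          // nudge belts toward more symmetric loading
        rightSpeed += speedGain *  s;
        double heading = headingGain * s;      // asymmetry rendered as a curved trajectory
        return new double[] { leftSpeed, rightSpeed, heading };
    }
}
```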
ERIC Educational Resources Information Center
Childers, Gina; Jones, M. Gail
2015-01-01
Remote access technologies enable students to investigate science by utilizing scientific tools and communicating in real-time with scientists and researchers with only a computer and an Internet connection. Very little is known about student perceptions of how real remote investigations are and how immersed the students are in the experience.…
Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram
2006-04-01
The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures-VBS). Medical schools are combining these virtual training systems and classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resulting virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user-selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, that make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.
Development and Use of a Virtual NMR Facility
NASA Astrophysics Data System (ADS)
Keating, Kelly A.; Myers, James D.; Pelton, Jeffrey G.; Bair, Raymond A.; Wemmer, David E.; Ellis, Paul D.
2000-03-01
We have developed a "virtual NMR facility" (VNMRF) to enhance access to the NMR spectrometers in Pacific Northwest National Laboratory's Environmental Molecular Sciences Laboratory (EMSL). We use the term virtual facility to describe a real NMR facility made accessible via the Internet. The VNMRF combines secure remote operation of the EMSL's NMR spectrometers over the Internet with real-time videoconferencing, remotely controlled laboratory cameras, real-time computer display sharing, a Web-based electronic laboratory notebook, and other capabilities. Remote VNMRF users can see and converse with EMSL researchers, directly and securely control the EMSL spectrometers, and collaboratively analyze results. A customized Electronic Laboratory Notebook allows interactive Web-based access to group notes, experimental parameters, proposed molecular structures, and other aspects of a research project. This paper describes our experience developing a VNMRF and details the specific capabilities available through the EMSL VNMRF. We show how the VNMRF has evolved during a test project and present an evaluation of its impact in the EMSL and its potential as a model for other scientific facilities. All Collaboratory software used in the VNMRF is freely available from http://www.emsl.pnl.gov:2080/docs/collab.
Solving the Software Legacy Problem with RISA
NASA Astrophysics Data System (ADS)
Ibarra, A.; Gabriel, C.
2012-09-01
Nowadays hardware and system infrastructure evolve on time scales much shorter than the typical duration of space astronomy missions. Data processing software capabilities have to evolve to preserve the scientific return during the entire experiment lifetime. Software preservation is a key issue that has to be tackled before the end of the project to keep the data usable over many years. We present RISA (Remote Interface to Science Analysis) as a solution to decouple the data processing software and infrastructure life-cycles, using Java applications and web-service wrappers around existing software. This architecture employs embedded SAS in virtual machines, assuring a homogeneous job execution environment. We will also present the first studies to reactivate the data processing software of the EXOSAT mission, the first ESA X-ray astronomy mission, launched in 1983, using the generic RISA approach.
Interreality: A New Paradigm for E-health.
Riva, Giuseppe
2009-01-01
"Interreality" is a personalized immersive e-therapy whose main novelty is a hybrid, closed-loop empowering experience bridging physical and virtual worlds. The main feature of interreality is a twofold link between the virtual and the real world: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through: (1) 3D Shared Virtual Worlds: role-playing experiences in which one or more users interact with one another within a 3D world; (2) Bio and Activity Sensors (From the Real to the Virtual World): They are used to track the emotional/health/activity status of the user and to influence his/her experience in the virtual world (aspect, activity and access); (3) Mobile Internet Appliances (From the Virtual to the Real One): In interreality, the social and individual user activity in the virtual world has a direct link with the users' life through a mobile phone/digital assistant. The different technologies that are involved in the interreality vision and its clinical rationale are addressed and discussed.
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
Astroblaster--A Fascinating Game of Multi-Ball Collisions
ERIC Educational Resources Information Center
Kires, Marian
2009-01-01
Multi-ball collisions inside the Astroblaster toy are explained from the conservation of momentum point of view. The important role of the coefficient of restitution is demonstrated in ideal and real cases. Real experimental results with the simple toy can be compared with a computer model represented by an interactive Java applet. (Contains 1…
ERIC Educational Resources Information Center
Akdemir, Ömür; Vural, Ömer F.; Çolakoglu, Özgür M.
2015-01-01
Individuals act differently in virtual environments than in real life. The primary purpose of this study is to investigate prospective teachers' likelihood of performing unethical behaviors in real and virtual environments. Prospective teachers were surveyed online and their perceptions collected for various scenarios. Findings revealed…
ERIC Educational Resources Information Center
Gilford, J.; Falconer, R. E.; Wade, R.; Scott-Brown, K. C.
2014-01-01
Interactive Virtual Environments (VEs) have the potential to increase student interest in soil science. Accordingly a bespoke "soil atlas" was created using Java3D as an interactive 3D VE, to show soil information in the context of (and as affected by) the over-lying landscape. To display the below-ground soil characteristics, four sets…
2014-03-27
0.8.0. The virtual machine's network adapter was set to internal network only to keep any outside traffic from interfering. A MySQL-based query... primary output of Fullstats is the ARFF file format, intended for use with the WEKA Java-based data mining software developed at the University of Waikato
Study of Tools for Network Discovery and Network Mapping
2003-11-01
connected to the switch. iv. Accessibility of historical data and event data: In general, network discovery tools keep a history of the collected... has the following software dependencies: Java Virtual Machine, Perl modules, RRD Tool, Tomcat, PostgreSQL. STRENGTHS AND... systems: provide a simple view of the current network status; generate alarms on status change; generate a history of status changes. VISUAL MAP
Validity of assessing child feeding with virtual reality.
Persky, Susan; Goldring, Megan R; Turner, Sara A; Cohen, Rachel W; Kistler, William D
2018-04-01
Assessment of parents' child feeding behavior is challenging, and there is need for additional methodological approaches. Virtual reality technology allows for the creation of behavioral measures, and its implementation overcomes several limitations of existing methods. This report evaluates the validity and usability of the Virtual Reality (VR) Buffet among a sample of 52 parents of children aged 3-7. Participants served a meal of pasta and apple juice in both a virtual setting and real-world setting (counterbalanced and separated by a distractor task). They then created another meal for their child, this time choosing from the full set of food options in the VR Buffet. Finally, participants completed a food estimation task followed by a questionnaire, which assessed their perceptions of the VR Buffet. Results revealed that the amount of virtual pasta served by parents correlated significantly with the amount of real pasta they served, r_s = 0.613, p < .0001, as did served amounts of virtual and real apple juice, r_s = 0.822, p < .0001. Furthermore, parents' perception of the calorie content of chosen foods was significantly correlated with observed calorie content (r_s = 0.438, p = .002), and parents agreed that they would feed the meal they created to their child (M = 4.43, SD = 0.82 on a 1-5 scale). The data presented here demonstrate that parent behavior in the VR Buffet is highly related to real-world behavior, and that the tool is well-rated by parents. Given the data presented and the potential benefits of the abundant behavioral data the VR Buffet can provide, we conclude that it is a valid and needed addition to the array of tools for assessing feeding behavior. Published by Elsevier Ltd.
Aharon, S; Robb, R A
1997-01-01
Virtual reality environments provide highly interactive, natural control of the visualization process, significantly enhancing the scientific value of the data produced by medical imaging systems. Due to the computational and real time display update requirements of virtual reality interfaces, however, the complexity of organ and tissue surfaces which can be displayed is limited. In this paper, we present a new algorithm for the production of a polygonal surface containing a pre-specified number of polygons from patient or subject specific volumetric image data. The advantage of this new algorithm is that it effectively tiles complex structures with a specified number of polygons selected to optimize the trade-off between surface detail and real-time display rates.
Real-Time View Correction for Mobile Devices.
Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc
2017-11-01
We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
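The paper's temporally consistent inpainting is beyond a short sketch, but the core of rendering a novel view from RGB-D data is a reprojection step: lift each source pixel with its depth into 3D, transform it by the relative pose, and project it into the virtual camera. The sketch below shows that step under a pinhole model with shared intrinsics; names are hypothetical and occlusion handling and inpainting are omitted.

```java
/** Core of depth-image-based rendering: back-project a source pixel (u, v, depth),
 *  apply the target-from-source rigid transform, and project into the virtual view. */
public final class ViewReprojection {

    /** Returns {u', v', z'} in the target view, or null if the point falls behind the camera. */
    static double[] reproject(double u, double v, double depth,
                              double fx, double fy, double cx, double cy,   // shared pinhole intrinsics
                              double[][] R, double[] t) {                   // target-from-source pose
        // Back-project to a 3D point in the source camera frame.
        double x = (u - cx) / fx * depth;
        double y = (v - cy) / fy * depth;
        double z = depth;
        // Rigid transform into the target camera frame.
        double xt = R[0][0]*x + R[0][1]*y + R[0][2]*z + t[0];
        double yt = R[1][0]*x + R[1][1]*y + R[1][2]*z + t[1];
        double zt = R[2][0]*x + R[2][1]*y + R[2][2]*z + t[2];
        if (zt <= 0) return null;
        // Project with the same pinhole intrinsics.
        return new double[] { fx * xt / zt + cx, fy * yt / zt + cy, zt };
    }
}
```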
Virtual probing system for medical volume data
NASA Astrophysics Data System (ADS)
Xiao, Yongfei; Fu, Yili; Wang, Shuguo
2007-12-01
The huge computational cost of 3D medical data visualization makes interactive exploration of the interior of a dataset a long-standing problem. In this paper, we present a novel approach to exploring 3D medical datasets in real time by utilizing a 3D widget to manipulate the scanning plane. With the help of the 3D texture capability of modern graphics cards, a virtual scanning probe is used to explore oblique clipping planes of medical volume data in real time. A 3D model of the medical dataset is also rendered to illustrate the relationship between the scanning-plane image and the other tissues in the medical data. This will be a valuable tool in anatomy education and in understanding medical images in medical research.
Virtual Service, Real Data: Results of a Pilot Study.
ERIC Educational Resources Information Center
Kibbee, Jo; Ward, David; Ma, Wei
2002-01-01
Describes a pilot project at the University of Illinois at Urbana-Champaign reference and undergraduate libraries to test the feasibility of offering real-time online reference service via their Web site. Discusses software selection, policies and procedures, promotion and marketing, user interface, training and staffing, data collection, and…
Interactive visualization of vegetation dynamics
Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, James
2001-01-01
Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
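The abstract does not define how onset, peak, and end of season are derived from the NDVI time series; a common threshold-on-amplitude rule is sketched below as one illustration. The 20% fraction and this particular rule are assumptions, not DEVA's actual metric definitions.

```java
/** Sketch of deriving simple phenology metrics from one year of NDVI composites:
 *  onset = first composite rising above baseline + fraction * amplitude,
 *  peak = maximum NDVI, end = first post-peak composite falling back below that level. */
public final class NdviSeasonMetrics {

    /** Returns {onsetIndex, peakIndex, endIndex}; -1 where a metric is undefined.
     *  A typical fraction would be around 0.2 (illustrative). */
    static int[] seasonMetrics(double[] ndvi, double fraction) {
        int peak = 0;
        double min = ndvi[0], max = ndvi[0];
        for (int i = 1; i < ndvi.length; i++) {
            if (ndvi[i] > max) { max = ndvi[i]; peak = i; }
            if (ndvi[i] < min) min = ndvi[i];
        }
        double threshold = min + fraction * (max - min);
        int onset = -1, end = -1;
        for (int i = 0; i <= peak; i++) if (ndvi[i] >= threshold) { onset = i; break; }
        for (int i = peak; i < ndvi.length; i++) if (ndvi[i] < threshold) { end = i; break; }
        return new int[] { onset, peak, end };
    }
}
```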
Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework
NASA Astrophysics Data System (ADS)
Cui, Yaoyuan; Mousas, Christos
2018-03-01
This paper presents Master of Puppets (MOP), an animation-by-demonstration framework that allows users to control the motion of virtual characters (puppets) in real time. In the first step, the user is asked to perform the actions that correspond to the character's motions. The user's actions are recorded, and a hidden Markov model is used to learn the temporal profile of the actions. During the runtime of the framework, the user controls the motions of the virtual character by performing the specified activities. The advantage of the MOP framework is that it recognizes and follows the progress of the user's actions in real time. Based on the forward algorithm, the method predicts the evolution of the user's actions, which corresponds to the evolution of the character's motion. The method treats characters as puppets that can perform only one motion at a time; combinations of motion segments (motion synthesis) and interpolation of individual motion sequences are not provided as functionalities. By implementing the framework and presenting several computer puppetry scenarios, we demonstrate its efficiency and flexibility in animating virtual characters.
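The forward algorithm the paper relies on maintains a running state posterior (the forward variable) that is updated with each new observation; the most probable hidden state then indicates how far along the demonstrated action the user is. The sketch below shows that recursion in isolation; the emission model and all names are placeholders rather than the MOP implementation.

```java
/** Sketch of the HMM forward recursion used to follow action progress: alpha is
 *  updated with each observation, and the most probable state drives playback of
 *  the corresponding frame of the character's motion clip. */
public final class HmmForwardTracker {
    private final double[][] A;   // transition probabilities, A[i][j] = P(state j | state i)
    private double[] alpha;       // running (normalized) forward variable

    HmmForwardTracker(double[][] transitions, double[] initialDistribution) {
        A = transitions;
        alpha = initialDistribution.clone();
    }

    /** One forward step given emission likelihoods b[j] = P(observation | state j).
     *  Returns the index of the most probable current state. */
    int step(double[] b) {
        int n = alpha.length;
        double[] next = new double[n];
        double norm = 0;
        for (int j = 0; j < n; j++) {
            double s = 0;
            for (int i = 0; i < n; i++) s += alpha[i] * A[i][j];
            next[j] = s * b[j];
            norm += next[j];
        }
        int best = 0;
        for (int j = 0; j < n; j++) {
            next[j] = norm > 0 ? next[j] / norm : 1.0 / n;  // normalize to avoid underflow
            if (next[j] > next[best]) best = j;
        }
        alpha = next;
        return best;
    }
}
```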
Virtual Collaborative Simulation Environment for Integrated Product and Process Development
NASA Technical Reports Server (NTRS)
Gulli, Michael A.
1997-01-01
Deneb Robotics is a leader in the development of commercially available, leading-edge three-dimensional simulation software tools for virtual prototyping, simulation-based design, manufacturing process simulation, and factory floor simulation and training applications. Deneb has developed and commercially released a preliminary Virtual Collaborative Engineering (VCE) capability for Integrated Product and Process Development (IPPD). This capability allows distributed, real-time visualization and evaluation of design concepts, manufacturing processes, and total factories and enterprises in one seamless simulation environment.
The Role of Virtual Rehabilitation in Total Knee and Hip Arthroplasty.
Chughtai, Morad; Newman, Jared M; Sultan, Assem A; Khlopas, Anton; Navarro, Sergio M; Bhave, Anil; Mont, Michael A
2018-06-01
Virtual rehabilitation therapies have been developed to focus on improving care for those suffering from various musculoskeletal disorders. There has been evidence suggesting that real-time virtual rehabilitation may be equivalent to conventional methods for adherence, improvement of function, and relief of pain seen in these conditions. This study specifically evaluated the use of a virtual physical therapy/rehabilitation platform for use during the postoperative period after total hip arthroplasty (THA) and total knee arthroplasty (TKA). The use of this technology has the potential benefits that allow for patient adherence, cost reductions, and coordination of care.
Accountable Information Flow for Java-Based Web Applications
2010-01-01
Figure 2: The Swift architecture (Web browser, HTTP Web server, Java servlet framework, Swift server runtime, runtime library). ...introduced an open-ended... On the server, the Java application code links against Swift's server-side run-time library, which in turn sits on top of the standard Java servlet...
NASA Astrophysics Data System (ADS)
Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander
2017-04-01
For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. It is now generally accepted that client applications forming part of such an infrastructure should be built on modern web and GIS technologies. The paper describes a Web GIS for complex processing and visualization of geospatial datasets (mainly in NetCDF and PostGIS formats) as an integral part of a dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of economic, political, and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: (1) a server-side part comprising PHP applications of the SDI geoportal, which implements the interaction with the computational core backend and the WMS/WFS/WPS cartographical services and exposes an open API for browser-based client software through a standard HTTP interface; and (2) a front-end part, a Web GIS client developed as a "single page application" based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs), and GeoExt (http://geoext.org/). The client implements the application business logic and provides an intuitive user interface similar to that of popular desktop GIS applications such as uDig and QuantumGIS. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development. In accordance with general INSPIRE requirements for data visualization, the Web GIS provides standard functionality such as data overview, image navigation, scrolling, scaling, graphical overlay, and display of map legends and corresponding metadata. The specialized Web GIS client contains three basic tiers: a tier of NetCDF metadata in JSON format; a middleware tier of JavaScript objects implementing methods to work with the NetCDF metadata, the XML file describing the selected calculation configuration (XML task), and the WMS/WFS/WPS cartographical services; and a graphical user interface tier of JavaScript objects realizing the general application business logic. The Web GIS allows launching of computational processing services to support tasks in environmental monitoring, and presents calculation results as WMS/WFS cartographical layers in raster (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and binary (NetCDF) formats. It has shown its effectiveness in solving real climate change research problems and disseminating results in cartographical formats. The work is supported by the Russian Science Foundation grant No 16-19-10257.
Gain and phase of perceived virtual rotation evoked by electrical vestibular stimuli
Peters, Ryan M.; Rasman, Brandon G.; Inglis, J. Timothy
2015-01-01
Galvanic vestibular stimulation (GVS) evokes a perception of rotation; however, very few quantitative data exist on the matter. We performed psychophysical experiments on virtual rotations experienced when binaural bipolar electrical stimulation is applied over the mastoids. We also performed analogous real whole body yaw rotation experiments, allowing us to compare the frequency response of vestibular perception with (real) and without (virtual) natural mechanical stimulation of the semicircular canals. To estimate the gain of vestibular perception, we measured direction discrimination thresholds for virtual and real rotations. Real direction discrimination thresholds decreased at higher frequencies, confirming multiple previous studies. Conversely, virtual direction discrimination thresholds increased at higher frequencies, implying low-pass filtering of the virtual perception process occurring potentially anywhere between afferent transduction and cortical responses. To estimate the phase of vestibular perception, participants manually tracked their perceived position during sinusoidal virtual and real kinetic stimulation. For real rotations, perceived velocity was approximately in phase with actual velocity across all frequencies. Perceived virtual velocity was in phase with the GVS waveform at low frequencies (0.05 and 0.1 Hz). As frequency was increased to 1 Hz, the phase of perceived velocity advanced relative to the GVS waveform. Therefore, at low frequencies GVS is interpreted as an angular velocity signal and at higher frequencies GVS becomes interpreted increasingly as an angular position signal. These estimated gain and phase spectra for vestibular perception are a first step toward generating well-controlled virtual vestibular percepts, an endeavor that may reveal the usefulness of GVS in the areas of clinical assessment, neuroprosthetics, and virtual reality. PMID:25925318
Advanced Technology for Portable Personal Visualization.
1992-06-01
Fragmentary progress-report excerpt (January-June 1992) covering virtual-environment ultrasound and planned work: extending the system with support for textures, model partitioning, more complex radiosity emitters, and the replacement of model parts with objects from the project's model libraries; adding real-time, interactive radiosity to the display program on Pixel-Planes 5; and moving the real-time model mesh-generation to …
NASA Astrophysics Data System (ADS)
Li, Baishou; Huang, Yu; Lan, Guangquan; Li, Tingting; Lu, Ting; Yao, Mingxing; Luo, Yuandan; Li, Boxiang; Qian, Yongyou; Gao, Yujiu
2015-12-01
This paper designs and implements a security monitoring system for tourists within a scenic spot. Scenic spot staff can automatically perceive and monitor visitors in real time, and visitors can also determine their own location within the scenic area and obtain real-time 3D imagery of it. Early warning can implement a "parent-child relation" to prevent elderly people and children from getting lost or wandering. The results provide a theoretical basis and practical reference for further developing virtual reality as an effective security early-warning platform.
Real Students and Virtual Field Trips
NASA Astrophysics Data System (ADS)
de Paor, D. G.; Whitmeyer, S. J.; Bailey, J. E.; Schott, R. C.; Treves, R.; Scientific Team Of Www. Digitalplanet. Org
2010-12-01
Field trips have always been one of the major attractions of geoscience education, distinguishing courses in geology, geography, oceanography, etc., from laboratory-bound sciences such as nuclear physics or biochemistry. However, traditional field trips have been limited to regions with educationally useful exposures and to student populations with the necessary free time and financial resources. Two-year or commuter colleges serving worker-students cannot realistically insist on completion of field assignments, and even well-endowed universities cannot take students to more than a handful of the best available field localities. Many instructors have attempted to bring the field into the classroom with the aid of technology. So-called Virtual Field Trips (VFTs) cannot replace the real experience for those who experience it, but they are much better than nothing at all. We have been working to create transformative improvements in VFTs using four concepts: (i) self-drive virtual vehicles that students use to navigate the virtual globe under their own control; (ii) GigaPan outcrops that reveal successively more detailed views of key locations; (iii) virtual specimens scanned from real rocks, minerals, and fossils; and (iv) embedded assessment via logging of student actions. Students are represented by avatars of their own choosing and travel either together in a virtual field vehicle, or separately. When they approach virtual outcrops, virtual specimens become collectable and can be examined using Javascript controls that change magnification and orientation. These instructional resources are being made available via a new server under the domain name www.DigitalPlanet.org. The server will log student progress and provide immediate feedback. We aim to disseminate these resources widely and welcome feedback from instructors and students.
Planning and Management of Real-Time Geospatial UAS Missions Within a Virtual Globe Environment
NASA Astrophysics Data System (ADS)
Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M.
2011-09-01
This paper presents the design and development of a hardware and software framework supporting all phases of typical monitoring and mapping missions with mini and micro UAVs (unmanned aerial vehicles). The developed solution combines state-of-the-art collaborative virtual globe technologies with advanced geospatial imaging techniques and wireless data link technologies supporting the combined and highly reliable transmission of digital video, high-resolution still imagery and mission control data over extended operational ranges. The framework enables the planning, simulation, control and real-time monitoring of UAS missions in application areas such as monitoring of forest fires, agronomical research, border patrol or pipeline inspection. The geospatial components of the project are based on the Virtual Globe Technology i3D OpenWebGlobe of the Institute of Geomatics Engineering at the University of Applied Sciences Northwestern Switzerland (FHNW). i3D OpenWebGlobe is a high-performance 3D geovisualisation engine supporting the web-based streaming of very large amounts of terrain and POI data.
A 3D virtual reality simulator for training of minimally invasive surgery.
Mi, Shao-Hua; Hou, Zeng-Gunag; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin
2014-01-01
For the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skills. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides a real-time computation of force and a force feedback module for surgical simulation. Instruments, such as a catheter or guide wire, are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views are developed. Moreover, the simulator is also provided with a human-machine interaction module that gives doctors the sense of touch during surgery training and enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
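The abstract models the catheter or guide wire as a multi-body mass-spring system. As a rough illustration of that idea (not the authors' implementation), the following Java sketch advances a one-dimensional chain of point masses connected by springs and dampers with semi-implicit Euler integration; all parameter values are assumptions.

```java
// Minimal sketch of a multi-body mass-spring chain (e.g., a guide wire),
// integrated with semi-implicit Euler; parameters are illustrative only.
public class MassSpringWire {
    public static void main(String[] args) {
        int n = 20;                       // number of point masses along the wire
        double rest = 0.01;               // rest length between neighbours [m]
        double k = 500.0, c = 0.5;        // spring stiffness and damping
        double mass = 0.002, dt = 1e-3;   // node mass [kg], time step [s]
        double[] x = new double[n], v = new double[n];
        for (int i = 0; i < n; i++) x[i] = i * rest;   // start at rest configuration

        x[n - 1] += 0.005;                // displace the tip (e.g., pushed by the user)
        for (int step = 0; step < 1000; step++) {
            double[] f = new double[n];
            for (int i = 0; i < n - 1; i++) {          // spring + damper between neighbours
                double stretch = (x[i + 1] - x[i]) - rest;
                double relVel = v[i + 1] - v[i];
                double force = k * stretch + c * relVel;
                f[i] += force;
                f[i + 1] -= force;
            }
            for (int i = 1; i < n; i++) {              // node 0 is clamped at the insertion point
                v[i] += dt * f[i] / mass;
                x[i] += dt * v[i];
            }
        }
        System.out.printf("tip position after 1 s: %.4f m%n", x[n - 1]);
    }
}
```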
Coupled auralization and virtual video for immersive multimedia displays
NASA Astrophysics Data System (ADS)
Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian
2003-04-01
The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
NASA Astrophysics Data System (ADS)
Caprio, M.; Cua, G. B.; Wiemer, S.; Fischer, M.; Heaton, T. H.; CISN EEW Team
2011-12-01
The Virtual Seismologist (VS) earthquake early warning (EEW) algorithm is one of 3 EEW approaches being incorporated into the California Integrated Seismic Network (CISN) ShakeAlert system, a prototype EEW system being tested in real-time in California. The VS algorithm, implemented by the Swiss Seismological Service at ETH Zurich, is a Bayesian approach to EEW, wherein the most probable source estimate at any given time is a combination of contributions from a likelihood function that evolves in response to incoming data from the on-going earthquake, and selected prior information, which can include factors such as network topology, the Gutenberg-Richter relationship or previously observed seismicity. The VS codes have been running in real-time at the Southern California Seismic Network (SCSN) since July 2008, and at the Northern California Seismic Network (NCSN) since February 2009. With the aim of improving the convergence of real-time VS magnitude estimates to network magnitudes, we evaluate various empirical and Vs30-based approaches to accounting for site amplification. Empirical station corrections for SCSN stations are derived from M>3.0 events from 2005 through 2009. We evaluate the performance of the various approaches using an independent 2010 dataset. In addition, we analyze real-time VS performance from 2008 to the present to quantify the time and spatial dependence of VS uncertainty estimates. We also summarize real-time VS performance for significant 2011 events in California. Improved magnitude and uncertainty estimates potentially increase the utility of EEW information for end-users, particularly those intending to automate damage-mitigating actions based on real-time information.
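As a toy illustration of the Bayesian combination described above (not the actual VS implementation), the Java sketch below multiplies a Gutenberg-Richter-style prior over magnitude by a Gaussian likelihood centred on a data-derived magnitude and reports the maximum a posteriori estimate; the b-value, observed magnitude, and uncertainty are invented.

```java
// Illustrative sketch of the Bayesian idea behind VS-style estimation:
// posterior ~ prior(magnitude) * likelihood(data | magnitude). Numbers are made up.
public class BayesMagnitudeDemo {
    public static void main(String[] args) {
        double b = 1.0;                          // assumed Gutenberg-Richter b-value
        double observedM = 5.2, sigma = 0.4;     // magnitude suggested by early waveform data
        double bestM = 0, bestPost = -1;
        for (double m = 2.0; m <= 8.0; m += 0.01) {
            double prior = Math.pow(10, -b * m);                // relative frequency of magnitude m
            double z = (m - observedM) / sigma;
            double likelihood = Math.exp(-0.5 * z * z);         // Gaussian likelihood (unnormalised)
            double posterior = prior * likelihood;
            if (posterior > bestPost) { bestPost = posterior; bestM = m; }
        }
        // The prior pulls the estimate slightly below the raw data-derived value.
        System.out.printf("MAP magnitude estimate: %.2f%n", bestM);
    }
}
```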
Fire training in a virtual-reality environment
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Jurgen; Bucken, Arno
2005-03-01
Although fire is very common in our daily environment - as a source of energy at home or as a tool in industry - most people cannot estimate the danger of a conflagration. Therefore it is important to train people in combating fire. Besides training with propane simulators or real fires and real extinguishers, fire training can be performed in virtual reality, which means a pollution-free and fast way of training. In this paper we describe how to enhance a virtual-reality environment with a real-time fire simulation and visualisation in order to establish a realistic emergency-training system. The presented approach supports extinguishing of the virtual fire including recordable performance data as needed in teletraining environments. We will show how to get realistic impressions of fire using advanced particle-simulation and how to use the advantages of particles to trigger states in a modified cellular automata used for the simulation of fire-behaviour. Using particle systems that interact with cellular automata it is possible to simulate a developing, spreading fire and its reaction to different extinguishing agents like water, CO2 or oxygen. The methods proposed in this paper have been implemented and successfully tested on Cosimir, a commercial robot and VR simulation system.
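To make the particle/cellular-automaton coupling concrete, here is a minimal Java sketch of a probabilistic cellular automaton for fire spread in which an ignition point (standing in for a particle) triggers state changes; the grid size, spread probability, and 4-neighbourhood rule are illustrative assumptions, not the rules used in the Cosimir implementation.

```java
// Minimal probabilistic cellular automaton for fire spread; an ignition
// "particle" seeds the BURNING state, which then propagates stochastically.
import java.util.Random;

public class FireCellularAutomaton {
    static final int UNBURNT = 0, BURNING = 1, BURNT = 2;

    public static void main(String[] args) {
        int size = 32;
        int[][] grid = new int[size][size];
        grid[size / 2][size / 2] = BURNING;          // ignition particle lands here
        Random rng = new Random(42);
        double spreadProb = 0.35;                    // chance of igniting an unburnt neighbour

        for (int step = 0; step < 50; step++) {
            int[][] next = new int[size][size];
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++) {
                    if (grid[y][x] == BURNING) { next[y][x] = BURNT; continue; }
                    if (grid[y][x] == BURNT)   { next[y][x] = BURNT; continue; }
                    boolean ignited = false;         // unburnt cell: check 4-neighbourhood
                    int[][] nb = {{0, 1}, {0, -1}, {1, 0}, {-1, 0}};
                    for (int[] d : nb) {
                        int ny = y + d[0], nx = x + d[1];
                        if (ny >= 0 && ny < size && nx >= 0 && nx < size
                                && grid[ny][nx] == BURNING && rng.nextDouble() < spreadProb)
                            ignited = true;
                    }
                    next[y][x] = ignited ? BURNING : UNBURNT;
                }
            grid = next;
        }
        int burnt = 0;
        for (int[] row : grid) for (int c : row) if (c == BURNT) burnt++;
        System.out.println("cells burnt after 50 steps: " + burnt);
    }
}
```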
NASA Astrophysics Data System (ADS)
Da Silva, A.; Sánchez Prieto, S.; Polo, O.; Parra Espada, P.
2013-05-01
Because of the tough robustness requirements in space software development, it is imperative to carry out verification tasks at a very early development stage to ensure that the implemented exception mechanisms work properly. All this should be done long before the real hardware is available. But even if real hardware is available, the verification of software fault tolerance mechanisms can be difficult, since real faulty situations must be systematically and artificially brought about, which can be impossible on real hardware. To solve this problem the Alcala Space Research Group (SRG) has developed a LEON2 virtual platform (Leon2ViP) with fault injection capabilities. This way it is possible to run the exact same target binary software as runs on the physical system in a more controlled and deterministic environment, allowing stricter requirements verification. Leon2ViP enables unmanned and tightly focused fault injection campaigns, not possible otherwise, in order to expose and diagnose flaws in the software implementation early. Furthermore, the use of a virtual hardware-in-the-loop approach makes it possible to carry out preliminary integration tests with the spacecraft emulator or the sensors. The use of Leon2ViP has meant a significant improvement, in both time and cost, in the development and verification processes of the Instrument Control Unit boot software on board Solar Orbiter's Energetic Particle Detector.
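The following Java sketch illustrates the general single-event-upset style of fault injection described above: flipping one bit of a chosen word in a toy model of target memory. It does not use the Leon2ViP API; the memory array, address, and bit position are placeholders.

```java
// Hedged sketch of bit-flip fault injection into a simulated address space.
// A campaign would run the target binary afterwards and check whether its
// fault-tolerance mechanisms detect and recover from the upset.
import java.util.Random;

public class BitFlipInjector {
    public static void main(String[] args) {
        int[] memory = new int[1024];                // toy model of target RAM
        memory[100] = 0x0000_00FF;                   // some value the software depends on

        Random rng = new Random();
        int addr = 100;                              // injection campaign picks an address...
        int bit = rng.nextInt(32);                   // ...and a bit position
        memory[addr] ^= (1 << bit);                  // single-event upset: flip the bit

        System.out.printf("word at %d after injection: 0x%08X (bit %d flipped)%n",
                addr, memory[addr], bit);
    }
}
```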
Chen, Xiaojun; Xu, Lu; Wang, Yiping; Wang, Huixiang; Wang, Fang; Zeng, Xiangsen; Wang, Qiugen; Egger, Jan
2015-06-01
The surgical navigation system has experienced tremendous development over the past decades for minimizing the risks and improving the precision of the surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In the AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of the surgery. With the use of this system, including the calibration of instruments, registration, and the calibration of HMD, the 3D virtual critical anatomical structures in the head-mounted display are aligned with the actual structures of patient in real-world scenario during the intra-operative motion tracking process. The accuracy verification experiment demonstrated that the mean distance and angular errors were respectively 0.809±0.05mm and 1.038°±0.05°, which was sufficient to meet the clinical requirements. Copyright © 2015 Elsevier Inc. All rights reserved.
Optimizing Aspect-Oriented Mechanisms for Embedded Applications
NASA Astrophysics Data System (ADS)
Hundt, Christine; Stöhr, Daniel; Glesner, Sabine
As applications for small embedded mobile devices are getting larger and more complex, it becomes inevitable to adopt more advanced software engineering methods from the field of desktop application development. Aspect-oriented programming (AOP) is a promising approach due to its advanced modularization capabilities. However, existing AOP languages tend to add a substantial overhead in both execution time and code size which restricts their practicality for small devices with limited resources. In this paper, we present optimizations for aspect-oriented mechanisms at the level of the virtual machine. Our experiments show that these optimizations yield a considerable performance gain along with a reduction of the code size. Thus, our optimizations establish the base for using advanced aspect-oriented modularization techniques for developing Java applications on small embedded devices.
The RoboCup Mixed Reality League - A Case Study
NASA Astrophysics Data System (ADS)
Gerndt, Reinhard; Bohnen, Matthias; da Silva Guerra, Rodrigo; Asada, Minoru
In typical mixed reality systems there is only a one-way interaction from real to virtual. A human user or the physics of a real object may influence the behavior of virtual objects, but real objects usually cannot be influenced by the virtual world. By introducing real robots into the mixed reality system, we allow a true two-way interaction between virtual and real worlds. Our system has been used since 2007 to implement the RoboCup mixed reality soccer games and other applications for research and edutainment. Our framework system is freely programmable to generate any virtual environment, which may then be further supplemented with virtual and real objects. The system allows for control of any real object based on differential drive robots. The robots may be adapted for different applications, e.g., with markers for identification or with covers to change shape and appearance. They may also be “equipped” with virtual tools. In this chapter we present the hardware and software architecture of our system and some applications. The authors believe this can be seen as a first implementation of Ivan Sutherland’s 1965 idea of the ultimate display: “The ultimate display would, of course, be a room within which the computer can control the existence of matter …” (Sutherland, 1965, Proceedings of IFIPS Congress 2:506-508).
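Since the framework controls real objects based on differential-drive robots, a minimal kinematic sketch may help. The Java example below integrates the standard differential-drive equations (forward speed from the mean wheel speed, turn rate from the wheel-speed difference over the wheel base); the wheel base, speeds, and loop rate are assumed values, not those of the RoboCup robots.

```java
// Minimal differential-drive kinematics: wheel speeds -> pose integration.
public class DifferentialDrive {
    public static void main(String[] args) {
        double wheelBase = 0.08;                   // distance between wheels [m]
        double x = 0, y = 0, theta = 0;            // robot pose
        double vLeft = 0.10, vRight = 0.14;        // wheel surface speeds [m/s]
        double dt = 0.02;                          // 50 Hz control loop

        for (int step = 0; step < 250; step++) {   // simulate 5 s
            double v = 0.5 * (vLeft + vRight);             // forward speed
            double omega = (vRight - vLeft) / wheelBase;   // turn rate
            x += v * Math.cos(theta) * dt;
            y += v * Math.sin(theta) * dt;
            theta += omega * dt;
        }
        System.out.printf("pose after 5 s: x=%.3f m, y=%.3f m, theta=%.2f rad%n", x, y, theta);
    }
}
```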
Pataky, T C; Lamb, P F
2018-06-01
External randomness exists in all sports but is perhaps most obvious in golf putting where robotic putters sink only 80% of 5 m putts due to unpredictable ball-green dynamics. The purpose of this study was to test whether physical randomness training can improve putting performance in novices. A virtual random-physics golf-putting game was developed based on controlled ball-roll data. Thirty-two subjects were assigned a unique randomness gain (RG) ranging from 0.1 to 2.0-times real-world randomness. Putter face kinematics were measured in 5 m laboratory putts before and after five days of virtual training. Performance was quantified using putt success rate and "miss-adjustment correlation" (MAC), the correlation between left-right miss magnitude and subsequent right-left kinematic adjustments. Results showed no RG-success correlation (r = -0.066, p = 0.719) but mildly stronger correlations with MAC for face angle (r = -0.168, p = 0.358) and clubhead path (r = -0.302, p = 0.093). The strongest RG-MAC correlation was observed during virtual training (r = -0.692, p < 0.001). These results suggest that subjects quickly adapt to physical randomness in virtual training, and also that this learning may weakly transfer to real golf putting kinematics. Adaptation to external physical randomness during virtual training may therefore help golfers adapt to external randomness in real-world environments.
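The "miss-adjustment correlation" is essentially a Pearson correlation between the signed miss on one putt and the kinematic adjustment made on the next. The Java sketch below computes it on invented data; the sample values and variable names are not from the study.

```java
// Sketch of the miss-adjustment correlation (MAC): correlate the signed
// left-right miss on putt i with the face-angle change on putt i+1.
public class MissAdjustmentCorrelation {
    static double pearson(double[] a, double[] b) {
        int n = a.length;
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < n; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va += (a[i] - ma) * (a[i] - ma);
            vb += (b[i] - mb) * (b[i] - mb);
        }
        return cov / Math.sqrt(va * vb);
    }

    public static void main(String[] args) {
        double[] missCm  = { 6.0, -3.0, 1.5, -8.0, 4.0, 0.5 };   // signed miss on putt i (invented)
        double[] faceAdj = {-1.2, 0.7, -0.3, 1.9, -0.8, -0.1 };  // face-angle change on putt i+1 (invented)
        System.out.printf("MAC (face angle) = %.3f%n", pearson(missCm, faceAdj));
    }
}
```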
A novel augmented reality system of image projection for image-guided neurosurgery.
Mahvash, Mehran; Besharati Tabrizi, Leila
2013-05-01
Augmented reality systems combine virtual images with a real environment. The aim was to design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image to the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected to the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was performed using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image to the patient's head, skull, or brain surface in real time is an augmented reality system that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.
Foloppe, Déborah A; Richard, Paul; Yamaguchi, Takehiko; Etcharry-Bouyx, Frédérique; Allain, Philippe
2018-07-01
Impairments in performing activities of daily living occur early in the course of Alzheimer's disease (AD). There is a great need to develop non-pharmacological therapeutic interventions likely to reduce dependency in everyday activities in AD patients. This study investigated whether it was possible to increase autonomy in these patients in cooking activities using interventions based on errorless learning, vanishing-cue, and virtual reality techniques. We recruited a 79-year-old woman who met NINCDS-ADRDA criteria for probable AD. She was trained in four cooking tasks for four days per task, one hour per day, in virtual and in real conditions. Outcome measures included subjective data concerning the therapeutic intervention and the experience of virtual reality, repeated assessments of training activities, neuropsychological scores, and self-esteem and quality of life measures. The results indicated that our patient could relearn some cooking activities using virtual reality techniques. Transfer to real life was also observed. Improvement of the task performance remained stable over time. This case report supports the value of a non-immersive virtual kitchen to help people with AD to relearn cooking activities.
East Java Maritime Connectivity and Its Regional Development Support
NASA Astrophysics Data System (ADS)
Purboyo, H.; Ibad, M. Z.
2017-07-01
The study presents the evolution of a maritime connectivity index for East Java, which is associated with accessibility and mobility indices of the regions in East Java. The findings show that East Java's connectivity more than tripled from 1996 to 2011. Initially, East Java was a net importer but later became a net exporter to the rest of the national territory. For accessibility, the inland regions of East Java generally score higher than the coastal areas. For mobility, the inland regions initially had a small index, but in subsequent years their index exceeded that of the coastal areas.
NASA Astrophysics Data System (ADS)
Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu
2003-01-01
This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
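Step (3), real-time view-dependent perspective image generation, can be illustrated with the standard equirectangular-to-perspective mapping. The Java sketch below builds a lookup table from output pixels to panorama coordinates for a given viewing direction; the resolutions, field of view, and yaw-only rotation are simplifying assumptions, and the per-frame pixel copy is only indicated in a comment.

```java
// Minimal sketch: map each perspective-view pixel to a ray, rotate it by the
// user's yaw, and look it up in an equirectangular omnidirectional frame.
public class OmniToPerspective {
    public static void main(String[] args) {
        int panoW = 2048, panoH = 1024;              // equirectangular frame size
        int outW = 320, outH = 240;                  // perspective view size
        double fovX = Math.toRadians(60);            // horizontal field of view
        double yaw = Math.toRadians(45);             // user's current viewing direction
        double focal = (outW / 2.0) / Math.tan(fovX / 2.0);

        int[][] srcX = new int[outH][outW], srcY = new int[outH][outW];
        for (int v = 0; v < outH; v++) {
            for (int u = 0; u < outW; u++) {
                double dx = u - outW / 2.0;          // ray through pixel (u, v)
                double dy = v - outH / 2.0;
                double dz = focal;
                double lon = Math.atan2(dx, dz) + yaw;                 // longitude after yaw rotation
                double lat = Math.atan2(-dy, Math.hypot(dx, dz));      // latitude
                srcX[v][u] = (int) (((lon / (2 * Math.PI)) + 0.5) * panoW) % panoW;
                srcY[v][u] = (int) ((0.5 - lat / Math.PI) * panoH);
            }
        }
        // In a real system the (srcX, srcY) table is reused each frame to copy
        // pixels from the incoming omnidirectional video into the display buffer.
        System.out.println("centre pixel samples panorama at: ("
                + srcX[outH / 2][outW / 2] + ", " + srcY[outH / 2][outW / 2] + ")");
    }
}
```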
A video-based real-time adaptive vehicle-counting system for urban roads.
Liu, Fei; Zeng, Zhiyuan; Jiang, Rong
2017-01-01
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
A video-based real-time adaptive vehicle-counting system for urban roads
2017-01-01
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984
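A minimal sketch of the two ideas named in the abstract, a running-average background update and a virtual detection line monitored for foreground, is given below in Java. The "frames" are synthetic one-dimensional intensity profiles along the line; the thresholds, learning rate, and traffic pattern are assumptions, not the paper's algorithm.

```java
// Running-average background update along a virtual detection line; a rising
// edge of line occupancy is counted as one vehicle crossing.
public class DetectionLineCounter {
    public static void main(String[] args) {
        int lineLen = 50;
        double[] background = new double[lineLen];      // adaptive background along the line
        double alpha = 0.05;                            // background learning rate
        double threshold = 40.0;
        int count = 0;
        boolean occupiedLastFrame = false;

        for (int frame = 0; frame < 200; frame++) {
            // Synthetic profile: road intensity ~90, a bright vehicle (~200)
            // crosses the line for 8 frames out of every 50.
            boolean vehiclePresent = (frame % 50) >= 20 && (frame % 50) < 28;
            double[] line = new double[lineLen];
            for (int i = 0; i < lineLen; i++)
                line[i] = (vehiclePresent && i > 15 && i < 35) ? 200.0 : 90.0;

            if (frame == 0) {                           // bootstrap background from an empty road
                System.arraycopy(line, 0, background, 0, lineLen);
                continue;
            }
            int foregroundPixels = 0;
            for (int i = 0; i < lineLen; i++) {
                if (Math.abs(line[i] - background[i]) > threshold) foregroundPixels++;
                else background[i] = (1 - alpha) * background[i] + alpha * line[i]; // update static pixels
            }
            boolean occupied = foregroundPixels > lineLen / 4;
            if (occupied && !occupiedLastFrame) count++;     // rising edge = one crossing
            occupiedLastFrame = occupied;
        }
        System.out.println("vehicles counted: " + count);    // expected: 4
    }
}
```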
ERIC Educational Resources Information Center
Schonbrodt, Felix D.; Asendorpf, Jens B.
2011-01-01
Computer games are advocated as a promising tool bridging the gap between the controllability of a lab experiment and the mundane realism of a field experiment. At the same time, many authors stress the importance of observing real behavior instead of asking participants about possible or intended behaviors. In this article, the authors introduce…
Results of a massive experiment on virtual currency endowments and money demand.
Živić, Nenad; Andjelković, Igor; Özden, Tolga; Dekić, Milovan; Castronova, Edward
2017-01-01
We use a 575,000-subject, 28-day experiment to investigate monetary policy in a virtual setting. The experiment tests the effect of virtual currency endowments on player retention and virtual currency demand. An increase in endowments of a virtual currency should lower the demand for the currency in the short run. However, in the long run, we would expect money demand to rise in response to inflation in the virtual world. We test for this behavior in a virtual field experiment in the football management game Top11. 575,000 players were selected at random and allocated to different "shards" or versions of the world. The shards differed only in terms of the initial money endowment offered to new players. Money demand was observed for 28 days as players used real money to purchase additional virtual currency. The results indicate that player money purchases were significantly higher in the shards where higher endowments were given. This suggests that a positive change in the money supply in a virtual context leads to inflation and increased money demand, and does so much more quickly than in real-world economies. Differences between virtual and real currency behavior will become more interesting as virtual currency becomes a bigger part of the real economy.
Results of a massive experiment on virtual currency endowments and money demand
Živić, Nenad; Andjelković, Igor; Özden, Tolga; Dekić, Milovan
2017-01-01
We use a 575,000-subject, 28-day experiment to investigate monetary policy in a virtual setting. The experiment tests the effect of virtual currency endowments on player retention and virtual currency demand. An increase in endowments of a virtual currency should lower the demand for the currency in the short run. However, in the long run, we would expect money demand to rise in response to inflation in the virtual world. We test for this behavior in a virtual field experiment in the football management game Top11. 575,000 players were selected at random and allocated to different “shards” or versions of the world. The shards differed only in terms of the initial money endowment offered to new players. Money demand was observed for 28 days as players used real money to purchase additional virtual currency. The results indicate that player money purchases were significantly higher in the shards where higher endowments were given. This suggests that a positive change in the money supply in a virtual context leads to inflation and increased money demand, and does so much more quickly than in real-world economies. Differences between virtual and real currency behavior will become more interesting as virtual currency becomes a bigger part of the real economy. PMID:29045494
Short Term Motor-Skill Acquisition Improves with Size of Self-Controlled Virtual Hands
Ossmy, Ori; Mukamel, Roy
2017-01-01
Visual feedback in general, and from the body in particular, is known to influence the performance of motor skills in humans. However, it is unclear how the acquisition of motor skills depends on specific visual feedback parameters such as the size of performing effector. Here, 21 healthy subjects physically trained to perform sequences of finger movements with their right hand. Through the use of 3D Virtual Reality devices, visual feedback during training consisted of virtual hands presented on the screen, tracking subject’s hand movements in real time. Importantly, the setup allowed us to manipulate the size of the displayed virtual hands across experimental conditions. We found that performance gains increase with the size of virtual hands. In contrast, when subjects trained by mere observation (i.e., in the absence of physical movement), manipulating the size of the virtual hand did not significantly affect subsequent performance gains. These results demonstrate that when it comes to short-term motor skill learning, the size of visual feedback matters. Furthermore, these results suggest that highest performance gains in individual subjects are achieved when the size of the virtual hand matches their real hand size. These results may have implications for optimizing motor training schemes. PMID:28056023
Luu, Trieu Phat; He, Yongtian; Brown, Samuel; Nakagame, Sho; Contreras-Vidal, Jose L.
2017-01-01
Objective: The control of human bipedal locomotion is of great interest to the field of lower-body brain computer interfaces (BCIs) for gait rehabilitation. While the feasibility of closed-loop BCI systems for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a BCI virtual reality (BCI-VR) environment has yet to be demonstrated. BCI-VR systems provide valuable alternatives for movement rehabilitation when wearable robots are not desirable due to medical conditions, cost, accessibility, usability, or patient preferences. Approach: In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control a walking avatar in a virtual environment. Fluctuations in the amplitude of slow cortical potentials of EEG in the delta band (0.1 – 3 Hz) were used for prediction; thus, the EEG features correspond to time-domain amplitude modulated (AM) potentials in the delta band. Virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Main results: Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. The average decoding accuracies (Pearson’s r values) in real-time BCI across all subjects increased from (Hip: 0.18 ± 0.31; Knee: 0.23 ± 0.33; Ankle: 0.14 ± 0.22) on Day 1 to (Hip: 0.40 ± 0.24; Knee: 0.55 ± 0.20; Ankle: 0.29 ± 0.22) on Day 8. Significance: These findings have implications for the development of a real-time closed-loop EEG-based BCI-VR system for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI-VR system. PMID:27064824
Luu, Trieu Phat; He, Yongtian; Brown, Samuel; Nakagame, Sho; Contreras-Vidal, Jose L
2016-06-01
The control of human bipedal locomotion is of great interest to the field of lower-body brain-computer interfaces (BCIs) for gait rehabilitation. While the feasibility of closed-loop BCI systems for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a BCI virtual reality (BCI-VR) environment has yet to be demonstrated. BCI-VR systems provide valuable alternatives for movement rehabilitation when wearable robots are not desirable due to medical conditions, cost, accessibility, usability, or patient preferences. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control a walking avatar in a virtual environment. Fluctuations in the amplitude of slow cortical potentials of EEG in the delta band (0.1-3 Hz) were used for prediction; thus, the EEG features correspond to time-domain amplitude modulated potentials in the delta band. Virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. The average decoding accuracies (Pearson's r values) in real-time BCI across all subjects increased from (Hip: 0.18 ± 0.31; Knee: 0.23 ± 0.33; Ankle: 0.14 ± 0.22) on Day 1 to (Hip: 0.40 ± 0.24; Knee: 0.55 ± 0.20; Ankle: 0.29 ± 0.22) on Day 8. These findings have implications for the development of a real-time closed-loop EEG-based BCI-VR system for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI-VR system.
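As an illustration of the decoding step (not the authors' decoder), the Java sketch below fits a least-squares linear mapping from a slow amplitude feature to a hip joint angle and scores it with Pearson's r, the accuracy measure quoted above; the signals are synthetic.

```java
// Fit hipAngle ~= w * feature + b by least squares, then score with Pearson's r.
public class LinearGaitDecoder {
    public static void main(String[] args) {
        int n = 500;
        double[] feature = new double[n], hipAngle = new double[n];
        for (int i = 0; i < n; i++) {                       // synthetic gait-locked signals
            double phase = 2 * Math.PI * i / 100.0;         // ~1 Hz gait cycle at 100 Hz
            feature[i] = Math.sin(phase) + 0.3 * Math.random();  // "delta-band amplitude" feature
            hipAngle[i] = 20 * Math.sin(phase) + 5;         // hip angle in degrees
        }
        double mf = 0, ma = 0;
        for (int i = 0; i < n; i++) { mf += feature[i]; ma += hipAngle[i]; }
        mf /= n; ma /= n;
        double cov = 0, var = 0;
        for (int i = 0; i < n; i++) {
            cov += (feature[i] - mf) * (hipAngle[i] - ma);
            var += (feature[i] - mf) * (feature[i] - mf);
        }
        double w = cov / var, b = ma - w * mf;              // least-squares slope and intercept

        double[] predicted = new double[n];
        for (int i = 0; i < n; i++) predicted[i] = w * feature[i] + b;
        System.out.printf("decoder: angle = %.2f * feature + %.2f, r = %.3f%n",
                w, b, pearson(predicted, hipAngle));
    }

    static double pearson(double[] a, double[] b) {
        int n = a.length;
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n; mb /= n;
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < n; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va += (a[i] - ma) * (a[i] - ma);
            vb += (b[i] - mb) * (b[i] - mb);
        }
        return cov / Math.sqrt(va * vb);
    }
}
```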
Using EMG to anticipate head motion for virtual-environment applications
NASA Technical Reports Server (NTRS)
Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion
2005-01-01
In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of being stable in space which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced under about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, thereby limb or head acceleration. We thus suggest utilizing neck-muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
Using EMG to anticipate head motion for virtual-environment applications.
Barniv, Yair; Aguilar, Mario; Hasanbelliu, Erion
2005-06-01
In virtual environment (VE) applications, where virtual objects are presented in a see-through head-mounted display, virtual images must be continuously stabilized in space in response to user's head motion. Time delays in head-motion compensation cause virtual objects to "swim" around instead of being stable in space which results in misalignment errors when overlaying virtual and real objects. Visual update delays are a critical technical obstacle for implementing head-mounted displays in applications such as battlefield simulation/training, telerobotics, and telemedicine. Head motion is currently measurable by a head-mounted 6-degrees-of-freedom inertial measurement unit. However, even given this information, overall VE-system latencies cannot be reduced under about 25 ms. We present a novel approach to eliminating latencies, which is premised on the fact that myoelectric signals from a muscle precede its exertion of force, thereby limb or head acceleration. We thus suggest utilizing neck-muscles' myoelectric signals to anticipate head motion. We trained a neural network to map such signals onto equivalent time-advanced inertial outputs. The resulting network can achieve time advances of up to 70 ms.
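The key data-preparation step implied above is pairing each EMG sample with the inertial sample a fixed lead time later, so that a trained mapping yields time-advanced motion estimates. The Java sketch below builds such pairs for a 70 ms lead (the figure quoted in the abstract) on synthetic signals; the sampling rate and waveforms are assumptions.

```java
// Build (EMG, future head-velocity) training pairs with a fixed lead time,
// the input/target arrangement a network would be trained on.
public class TimeAdvancedPairs {
    public static void main(String[] args) {
        double fs = 1000.0;                               // 1 kHz sampling (assumed)
        int leadSamples = (int) Math.round(0.070 * fs);   // 70 ms look-ahead
        int n = 5000;
        double[] emg = new double[n], headVel = new double[n];
        for (int i = 0; i < n; i++) {                     // synthetic: EMG leads velocity by ~70 ms
            emg[i] = Math.max(0, Math.sin(2 * Math.PI * 1.0 * i / fs));
            headVel[i] = i >= leadSamples ? emg[i - leadSamples] * 50.0 : 0.0;
        }
        int pairs = n - leadSamples;                      // training pairs: emg[i] -> headVel[i + lead]
        double[][] inputs = new double[pairs][1];
        double[] targets = new double[pairs];
        for (int i = 0; i < pairs; i++) {
            inputs[i][0] = emg[i];
            targets[i] = headVel[i + leadSamples];
        }
        System.out.println("training pairs built: " + pairs
                + " (each target is the inertial output " + leadSamples + " samples ahead)");
    }
}
```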
Lahav, Orly; Gedalevitz, Hadas; Battersby, Steven; Brown, David; Evett, Lindsay; Merritt, Patrick
2018-05-01
This paper examines the ability of people who are blind to construct a mental map and perform orientation tasks in real space by using Nintendo Wii technologies to explore virtual environments. The participant explores new spaces through haptic and auditory feedback triggered by pointing or walking in the virtual environments and later constructs a mental map, which can be used to navigate in real space. The study included 10 participants who were congenitally or adventitiously blind, divided into experimental and control groups. The research was implemented using virtual environment exploration and orientation tasks in real spaces, using both qualitative and quantitative methods in its methodology. The results show that the mode of exploration afforded to the experimental group is radically new in orientation and mobility training; as a result 60% of the experimental participants constructed mental maps that were based on the map model, compared with only 30% of the control group participants. Using technology that enabled them to explore and to collect spatial information in a way that does not exist in real space influenced the ability of the experimental group to construct a mental map based on the map model. Implications for rehabilitation: The virtual cane system for the first time enables people who are blind to explore and collect spatial information via the look-around mode in addition to the walk-around mode. People who are blind prefer to use look-around mode to explore new spaces, as opposed to the walking mode. Although the look-around mode requires users to establish a complex collecting and processing procedure for the spatial data, people who are blind using this mode are able to construct a mental map as a map model. For people who are blind (as for the sighted) construction of a mental map based on the map model offers more flexibility in choosing a walking path in a real space, accounting for changes that occur in the space.
Height effects in real and virtual environments.
Simeonov, Peter I; Hsiao, Hongwei; Dotson, Brian W; Ammons, Douglas E
2005-01-01
The study compared human perceptions of height, danger, and anxiety, as well as skin conductance and heart rate responses and postural instability effects, in real and virtual height environments. The 24 participants (12 men, 12 women), whose average age was 23.6 years, performed "lean-over-the-railing" and standing tasks on real and comparable virtual balconies, using a surround-screen virtual reality (SSVR) system. The results indicate that the virtual display of elevation provided realistic perceptual experience and induced some physiological responses and postural instability effects comparable to those found in a real environment. It appears that a simulation of elevated work environment in a SSVR system, although with reduced visual fidelity, is a valid tool for safety research. Potential applications of this study include the design of virtual environments that will help in safe evaluation of human performance at elevation, identification of risk factors leading to fall incidents, and assessment of new fall prevention strategies.
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A.; Przekwas, Andrzej; Francis, Joseph T.; Lytton, William W.
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics. PMID:26635598
A Novel Integrating Virtual Reality Approach for the Assessment of the Attachment Behavioral System
Chicchi Giglioli, Irene Alice; Pravettoni, Gabriella; Sutil Martín, Dolores Lucia; Parra, Elena; Raya, Mariano A.
2017-01-01
Virtual reality (VR) technology represents a novel and powerful tool for behavioral research in psychological assessment. VR provides simulated experiences able to create the sensation of undergoing real situations. Users become active participants in the virtual environment, seeing, hearing, feeling, and acting as if they were in the real world. Currently, most psychological VR applications concern the treatment of various mental disorders rather than assessment, which is still mainly based on paper-and-pencil tests. Observing behavior is costly and labor-intensive, and it is hard to create social situations in laboratory settings, even though the observation of actual behaviors can be particularly informative. In this framework, stressful social experiences can activate various attachment behaviors toward a significant person who can help to regulate and soothe them, promoting the individual's well-being. Social support seeking, physical proximity, and positive and negative behaviors represent the main attachment behaviors that people can carry out during experiences of distress. We proposed VR as a novel integrating approach to measure real attachment behaviors. The first studies of the attachment behavioral system using VR showed the potential of this approach. To improve assessment during the VR experience, we proposed virtual stealth assessment (VSA) as a new method. VSA could represent a valid and novel technique to measure various psychological attributes in real time during the virtual experience. The possible use of this method in psychology could be to generate a more complete, exhaustive, and accurate psychological evaluation of the individual. PMID:28701967
Virtualizing access to scientific applications with the Application Hosting Environment
NASA Astrophysics Data System (ADS)
Zasada, S. J.; Coveney, P. V.
2009-12-01
The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summaryProgram title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (Client) Computer: x86 Operating system: Linux (Server), Linux/Windows/MacOS (Client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox ( http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable References:J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.
NASA Astrophysics Data System (ADS)
Zhang, Zhuoying; Yang, Hong; Shi, Minjun
2016-04-01
The North China Plain is the most water-scarce region in China. Its water security is closely tied to interregional water movement, which can be realized through real water transfers and/or virtual water transfers. This study investigates the roles of virtual water trade and real water transfer using an Interregional Input-Output model. The results show that the region receives 19.4 billion m3/year of virtual water through interregional trade, while exporting 16.4 billion m3/year of virtual water through international trade. On balance, the region has a net virtual water gain of 3 billion m3/year from outside. Its virtual water inflow is dominated by agricultural products from other provinces, totalling 16.6 billion m3/year, whilst its virtual water export is dominated by manufacturing goods sold to other countries, totalling 11.7 billion m3/year. Both virtual water imports and real water transfers from the South-to-North Water Diversion Project are important water supplements for the region. The results of this study provide useful scientific references for establishing strategies to combat water scarcity in the future.
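In input-output accounting, virtual water embodied in trade is typically computed by propagating direct water intensities through the Leontief inverse and applying them to traded final demand. The two-sector Java sketch below illustrates that calculation with invented coefficients; it is not the study's model or data.

```java
// Virtual water embodied in exports: w^T * (I - A)^{-1} * f, on a toy 2-sector economy.
public class VirtualWaterIO {
    public static void main(String[] args) {
        // Technical coefficients A (sector 0 = agriculture, sector 1 = manufacturing)
        double[][] A = { {0.20, 0.10},
                         {0.15, 0.25} };
        double[] waterPerOutput = {0.80, 0.05};   // m3 of water per unit of output (invented)
        double[] exportDemand   = {10.0, 40.0};   // final demand exported to other regions (invented)

        // Leontief inverse L = (I - A)^{-1}, written out for the 2x2 case
        double a = 1 - A[0][0], b = -A[0][1], c = -A[1][0], d = 1 - A[1][1];
        double det = a * d - b * c;
        double[][] L = { { d / det, -b / det},
                         {-c / det,  a / det} };

        double virtualWater = 0;                   // w^T * L * f
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                virtualWater += waterPerOutput[i] * L[i][j] * exportDemand[j];
        System.out.printf("virtual water embodied in exports: %.1f m3%n", virtualWater);
    }
}
```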
Sensing and Virtual Worlds - A Survey of Research Opportunities
NASA Technical Reports Server (NTRS)
Moore, Dana
2012-01-01
Virtual Worlds (VWs) have been used effectively in live and constructive military training. An area that remains fertile ground for exploration and a new vision involves integrating various traditional and now non-traditional sensors into virtual worlds. In this paper, we will assert that the benefits of this integration are several. First, we maintain that virtual worlds offer improved sensor deployment planning through improved visualization and stimulation of the model, using geo-specific terrain and structure. Secondly, we assert that VWs enhance the mission rehearsal process, and that using a mix of live avatars, non-player characters, and live sensor feeds (e.g. real time meteorology) can help visualization of the area of operations. Finally, tactical operations are improved via better collaboration and integration of real world sensing capabilities, and in most situations, 3D VWs improve the state of the art over current "dots on a map" 2D geospatial visualization. However, several capability gaps preclude a fuller realization of this vision. In this paper, we identify many of these gaps and suggest research directions.
Virtual Shaker Testing: Simulation Technology Improves Vibration Test Performance
NASA Technical Reports Server (NTRS)
Ricci, Stefano; Peeters, Bart; Fetter, Rebecca; Boland, Doug; Debille, Jan
2008-01-01
In the field of vibration testing, the interaction between the structure being tested and the instrumentation hardware used to perform the test is a critical issue. This is particularly true when testing massive structures (e.g. satellites), because due to physical design and manufacturing limits, the dynamics of the testing facility often couple with those of the test specimen in the frequency range of interest. A further issue in this field is the standard use of a closed-loop real-time vibration control scheme, which could potentially shift poles and change damping of the aforementioned coupled system. Virtual shaker testing is a novel approach to deal with these issues. It means performing a simulation which closely represents the real vibration test on the specific facility by taking into account all parameters which might impact the dynamic behavior of the specimen. In this paper, such a virtual shaker testing approach is developed. It consists of the following components: (1) Either a physics-based or an equation-based coupled electro-mechanical lumped-parameter shaker model is created. The model parameters are obtained from manufacturer's specifications or by carrying out some dedicated experiments; (2) Existing real-time vibration control algorithms are ported to the virtual simulation environment; and (3) A structural model of the test object is created and, after defining proper interface conditions, structural modes are computed by means of the well-established Craig-Bampton CMS technique. At this stage, a virtual shaker test can be run by coupling the three described models (shaker, control loop, structure) in a co-simulation routine. Numerical results were eventually correlated with experimental ones in order to assess the robustness of the proposed methodology.
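As a rough illustration of an equation-based lumped-parameter shaker model (component (1) above, greatly simplified), the Java sketch below drives a single moving mass on a suspension with a coil force F = Bl·i and integrates it with semi-implicit Euler; all parameter values are invented, and no control loop or specimen model is included.

```java
// Toy electrodynamic shaker: moving armature mass on a suspension, driven by
// the coil force F = Bl * i, integrated with semi-implicit Euler.
public class LumpedShakerModel {
    public static void main(String[] args) {
        double m = 25.0;        // moving mass: armature + test specimen [kg] (assumed)
        double k = 4.0e4;       // suspension stiffness [N/m]
        double c = 300.0;       // suspension damping [N*s/m]
        double Bl = 120.0;      // force factor [N/A]
        double dt = 1e-5;       // integration step [s]

        double x = 0, v = 0;    // armature displacement and velocity
        double freq = 50.0;     // 50 Hz sine drive current
        double peakAccel = 0;
        for (int n = 0; n < 200_000; n++) {        // simulate 2 s
            double t = n * dt;
            double current = 2.0 * Math.sin(2 * Math.PI * freq * t);  // drive current [A]
            double a = (Bl * current - c * v - k * x) / m;            // Newton's second law
            v += a * dt;
            x += v * dt;
            if (t > 1.0) peakAccel = Math.max(peakAccel, Math.abs(a)); // steady-state peak
        }
        System.out.printf("steady-state peak table acceleration: %.2f m/s^2%n", peakAccel);
    }
}
```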
Ecological validity of virtual environments to assess human navigation ability
van der Ham, Ineke J. M.; Faber, Annemarie M. E.; Venselaar, Matthijs; van Kreveld, Marc J.; Löffler, Maarten
2015-01-01
Route memory is frequently assessed in virtual environments. These environments can be presented in a fully controlled manner and are easy to use. Yet they lack the physical involvement that participants have when navigating real environments. For some aspects of route memory this may result in reduced performance in virtual environments. We assessed route memory performance in four different environments: real, virtual, virtual with directional information (compass), and hybrid. In the hybrid environment, participants walked the route outside on an open field, while all route information (i.e., path, landmarks) was shown simultaneously on a handheld tablet computer. Results indicate that performance in the real life environment was better than in the virtual conditions for tasks relying on survey knowledge, like pointing to start and end point, and map drawing. Performance in the hybrid condition however, hardly differed from real life performance. Performance in the virtual environment did not benefit from directional information. Given these findings, the hybrid condition may offer the best of both worlds: the performance level is comparable to that of real life for route memory, yet it offers full control of visual input during route learning. PMID:26074831
Remote console for virtual telerehabilitation.
Lewis, Jeffrey A; Boian, Rares F; Burdea, Grigore; Deutsch, Judith E
2005-01-01
The Remote Console (ReCon) telerehabilitation system provides a platform for therapists to guide rehabilitation sessions from a remote location. The ReCon system integrates real-time graphics, audio/video communication, private therapist chat, post-test data graphs, extendable patient and exercise performance monitoring, exercise pre-configuration and modification under a single application. These tools give therapists the ability to conduct training, monitoring/assessment, and therapeutic intervention remotely and in real-time.
Acquiring Software Project Specifications in a Virtual World
ERIC Educational Resources Information Center
Ng, Vincent; Tang, Zoe
2012-01-01
In teaching software engineering, it is often interesting to introduce real life scenarios for students to experience and to learn how to collect information from respective clients. The ideal arrangement is to have some real clients willing to spend time to provide their ideas of a target system through interviews. However, this arrangement…
NASA Astrophysics Data System (ADS)
Wang, P.; Becker, A. A.; Jones, I. A.; Glover, A. T.; Benford, S. D.; Vloeberghs, M.
2009-08-01
A virtual-reality real-time simulation of surgical operations that incorporates a hard tumour is presented. The software is based on the Boundary Element (BE) technique. A review of the BE formulation for real-time analysis of two-domain deformable objects, using the pre-solution technique, is presented. The two-domain BE software is incorporated into a surgical simulation system called VIRS to simulate the initiation of a cut on the surface of the soft tissue and its extension deeper until the tumour is reached.
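The pre-solution idea amounts to doing the expensive system solve once offline, so that each real-time frame reduces to a matrix-vector multiplication. The Java sketch below shows that split on a toy 3x3 system; the matrix and loads are invented and do not represent a real boundary-element model.

```java
// Pre-solution sketch: invert the system matrix once offline, then each frame
// only multiplies the stored inverse by the current boundary loads.
public class PreSolutionDemo {
    public static void main(String[] args) {
        double[][] K = { {4, 1, 0}, {1, 3, 1}, {0, 1, 2} };   // toy assembled system matrix
        double[][] Kinv = invert3x3(K);                        // OFFLINE: pre-solution step

        double[] load = {0.0, 0.5, 1.0};                       // per-frame boundary loads (e.g. tool force)
        double[] displacement = new double[3];                 // REAL-TIME: one multiply per frame
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                displacement[i] += Kinv[i][j] * load[j];

        System.out.printf("displacements: %.3f %.3f %.3f%n",
                displacement[0], displacement[1], displacement[2]);
    }

    static double[][] invert3x3(double[][] m) {
        double det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                   - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                   + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
        double[][] inv = new double[3][3];
        inv[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) / det;
        inv[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) / det;
        inv[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) / det;
        inv[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) / det;
        inv[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) / det;
        inv[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) / det;
        inv[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) / det;
        inv[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) / det;
        inv[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) / det;
        return inv;
    }
}
```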
Driving performance in a power wheelchair simulator.
Archambault, Philippe S; Tremblay, Stéphanie; Cachecho, Sarah; Routhier, François; Boissy, Patrick
2012-05-01
A power wheelchair simulator can allow users to safely experience various driving tasks. For such training to be efficient, it is important that driving performance be equivalent to that in a real wheelchair. This study aimed at comparing driving performance in a real and in a simulated environment. Two groups of healthy young adults performed different driving tasks, either in a real power wheelchair or in a simulator. Smoothness of joystick control as well as the time necessary to complete each task were recorded and compared between the two groups. Driving strategies were analysed from video recordings. The sense of presence, of really being in the virtual environment, was assessed through a questionnaire. Smoothness of joystick control was the same in the real and virtual groups. Task completion time was higher in the simulator for the more difficult tasks. Both groups showed similar strategies and difficulties. The simulator generated a good sense of presence, which is important for motivation. Performance was very similar for power wheelchair driving in the simulator or in real life. Thus, the simulator could potentially be used to complement training of individuals who require a power wheelchair and use a regular joystick.
Szlavecz, Akos; Chiew, Yeong Shiong; Redmond, Daniel; Beatson, Alex; Glassenbury, Daniel; Corbett, Simon; Major, Vincent; Pretty, Christopher; Shaw, Geoffrey M; Benyo, Balazs; Desaive, Thomas; Chase, J Geoffrey
2014-09-30
Real-time estimation of patient respiratory mechanics can be used to guide mechanical ventilation settings, particularly positive end-expiratory pressure (PEEP). This work presents a software application, Clinical Utilisation of Respiratory Elastance (CURE Soft), which uses a time-varying respiratory elastance model to offer this ability and aid mechanical ventilation treatment. CURE Soft is a desktop application developed in Java. It has two modes of operation: 1) online, for real-time monitoring and decision support, and 2) offline, for user education, auditing, or reviewing patient care. CURE Soft has been tested in mechanically ventilated patients with respiratory failure. The clinical protocol, software testing and use of the data were approved by the New Zealand Southern Regional Ethics Committee. Using CURE Soft, the patient's respiratory mechanics response to treatment and to the clinical protocol was monitored. Results showed that the patient's respiratory elastance (stiffness) changed with the use of muscle relaxants and responded differently to ventilator settings. This information can be used to guide mechanical ventilation therapy and titrate optimal ventilator PEEP. CURE Soft enables real-time calculation of model-based respiratory mechanics for mechanically ventilated patients. Results showed that the system is able to provide detailed, previously unavailable information on patient-specific respiratory mechanics and response to therapy in real time. The additional insight available to clinicians provides the potential for improved decision-making, and thus improved patient care and outcomes.
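Respiratory elastance estimation of this kind is commonly based on the single-compartment equation Paw = E·V + R·Q + P0. As an illustration (not the CURE Soft algorithm), the Java sketch below recovers E and R by least squares from one synthetic breath, with PEEP assumed known.

```java
// Least-squares fit of elastance E and resistance R from the single-compartment
// model Paw = E*V + R*Q + P0, with P0 (PEEP) treated as known.
public class ElastanceFit {
    public static void main(String[] args) {
        int n = 100;
        double trueE = 30.0, trueR = 10.0, peep = 5.0;   // cmH2O/L, cmH2O*s/L, cmH2O (assumed)
        double[] V = new double[n], Q = new double[n], P = new double[n];
        for (int i = 0; i < n; i++) {                    // synthetic inspiration, 1 s at 100 Hz
            double t = i / 100.0;
            Q[i] = 0.5;                                  // constant flow 0.5 L/s
            V[i] = 0.5 * t;                              // volume ramps to 0.5 L
            P[i] = trueE * V[i] + trueR * Q[i] + peep;   // ideal airway pressure
        }
        // Normal equations for [E, R] minimising sum((P - peep - E*V - R*Q)^2)
        double svv = 0, sqq = 0, svq = 0, svp = 0, sqp = 0;
        for (int i = 0; i < n; i++) {
            double y = P[i] - peep;
            svv += V[i] * V[i]; sqq += Q[i] * Q[i]; svq += V[i] * Q[i];
            svp += V[i] * y;    sqp += Q[i] * y;
        }
        double det = svv * sqq - svq * svq;
        double E = (svp * sqq - sqp * svq) / det;
        double R = (sqp * svv - svp * svq) / det;
        System.out.printf("estimated E = %.1f cmH2O/L, R = %.1f cmH2O.s/L%n", E, R);
    }
}
```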
Whitwell, Robert L.; Ganel, Tzvi; Byrne, Caitlin M.; Goodale, Melvyn A.
2015-01-01
Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. “Natural” prehensile acts are directed at the goal object and are executed using real-time vision. Typically, they also entail the use of tactile, proprioceptive, and kinesthetic sources of haptic feedback about the object (“haptics-based object information”) once contact with the object has been made. Natural and simulated (pantomimed) forms of prehension are thought to recruit different cortical structures: patient DF, who has visual form agnosia following bilateral damage to her temporal-occipital cortex, loses her ability to scale her grasp aperture to the size of targets (“grip scaling”) when her prehensile movements are based on a memory of a target previewed 2 s before the cue to respond or when her grasps are directed towards a visible virtual target but she is denied haptics-based information about the target. In the first of two experiments, we show that when DF performs real-time pantomimed grasps towards a 7.5 cm displaced imagined copy of a visible object such that her fingers make contact with the surface of the table, her grip scaling is in fact quite normal. This finding suggests that real-time vision and terminal tactile feedback are sufficient to preserve DF’s grip scaling slopes. In the second experiment, we examined an “unnatural” grasping task variant in which a tangible target (along with any proxy such as the surface of the table) is denied (i.e., no terminal tactile feedback). To do this, we used a mirror-apparatus to present virtual targets with and without a spatially coincident copy for the participants to grasp. We compared the grasp kinematics from trials with and without terminal tactile feedback to a real-time-pantomimed grasping task (one without tactile feedback) in which participants visualized a copy of the visible target as instructed in our laboratory in the past. Compared to natural grasps, removing tactile feedback increased RT, slowed the velocity of the reach, reduced in-flight grip aperture, increased the slopes relating grip aperture to target width, and reduced the final grip aperture (FGA). All of these effects were also observed in the real time-pantomime grasping task. These effects seem to be independent of those that arise from using the mirror in general as we also compared grasps directed towards virtual targets to those directed at real ones viewed directly through a pane of glass. These comparisons showed that the grasps directed at virtual targets increased grip aperture, slowed the velocity of the reach, and reduced the slopes relating grip aperture to the widths of the target. Thus, using the mirror has real consequences on grasp kinematics, reflecting the importance of task-relevant sources of online visual information for the programming and updating of natural prehensile movements. Taken together, these results provide compelling support for the view that removing terminal tactile feedback, even when the grasps are target-directed, induces a switch from real-time visual control towards one that depends more on visual perception and cognitive supervision. Providing terminal tactile feedback and real-time visual information can evidently keep the dorsal visuomotor system operating normally for prehensile acts. PMID:25999834
Virtual Environments Using Video Capture for Social Phobia with Psychosis
White, Richard; Clarke, Timothy; Turner, Ruth; Fowler, David
2013-01-01
A novel virtual environment (VE) system was developed and used as an adjunct to cognitive behavior therapy (CBT) with six socially anxious patients recovering from psychosis. The novel aspect of the VE system is that it uses video capture so the patients can see a life-size projection of themselves interacting with a specially scripted and digitally edited filmed environment played in real time on a screen in front of them. Within-session process outcomes (subjective units of distress and belief ratings on individual behavioral experiments), as well as patient feedback, generated the hypothesis that this type of virtual environment can potentially add value to CBT by helping patients understand the role of avoidance and safety behaviors in the maintenance of social anxiety and paranoia and by boosting their confidence to carry out “real-life” behavioral experiments. PMID:23659722
A Study on Breaking Through Applet Security Restrictions in an Internet-Based Visualization System
NASA Astrophysics Data System (ADS)
Chen, Jie; Huang, Yan
In realizing an Internet-based visualization system for protein molecules, the system needs to allow users to observe molecular structures stored on the local computer; that is, users can generate three-dimensional graphics from a PDB file on the client machine. This requires the Applet to access local files, which raises the question of Applet security restrictions. This paper covers two implementation methods: 1. Use the signature, key-management and Policy Editor tools provided by the JDK to digitally sign and authenticate the Java Applet, breaking through certain security restrictions in the browser. 2. Use a Servlet agent to access the data indirectly, breaking through the restriction that the traditional Java Virtual Machine sandbox model places on Applet capabilities. Both approaches can break through the Applet's security restrictions, and each has its own strengths.
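A minimal Java sketch of the second (Servlet agent) approach is given below; the class name, URL parameter and server-side PDB directory are illustrative assumptions. The unsigned Applet simply requests the data over HTTP from its originating host, so it never has to read local files itself (for the first approach, the JDK's keytool and jarsigner utilities are the usual signing tools).

// Hypothetical sketch of the "Servlet agent" approach: the applet asks a servlet on
// the originating host to fetch the PDB data and stream it back. Names are illustrative.
import java.io.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

public class PdbProxyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");            // e.g. "1crn"
        File pdb = new File("/data/pdb", id + ".pdb"); // server-side repository (assumed)
        resp.setContentType("chemical/x-pdb");
        try (InputStream in = new FileInputStream(pdb);
             OutputStream out = resp.getOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);                  // relay the file to the applet
            }
        }
    }
}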
Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm
Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W
2015-01-01
Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
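The abstract does not give the packet format, so the following Java sketch only illustrates the general pattern of such a UDP link: pack the current joint angles into a datagram each control tick and send it to the arm controller. Host, port and payload layout are assumptions, not the authors' protocol.

// Illustrative UDP sender for a two-joint arm; all constants are placeholders.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class JointAngleSender {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress arm = InetAddress.getByName("192.168.1.50"); // hypothetical controller
            double[] joints = { 0.10, -0.35 };                        // joint angles in radians
            ByteBuffer buf = ByteBuffer.allocate(8 * joints.length);
            for (double q : joints) {
                buf.putDouble(q);                                     // pack each angle as a double
            }
            DatagramPacket packet =
                    new DatagramPacket(buf.array(), buf.position(), arm, 9000);
            socket.send(packet);                                      // fire-and-forget update
        }
    }
}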
A training platform for many-dimensional prosthetic devices using a virtual reality environment
Putrino, David; Wong, Yan T.; Weiss, Adam; Pesaran, Bijan
2014-01-01
Brain machine interfaces (BMIs) have the potential to assist in the rehabilitation of millions of patients worldwide. Despite recent advancements in BMI technology for the restoration of lost motor function, a training environment to restore full control of the anatomical segments of an upper limb extremity has not yet been presented. Here, we develop a virtual upper limb prosthesis with 27 independent dimensions, the anatomical dimensions of the human arm and hand, and deploy the virtual prosthesis as an avatar in a virtual reality environment (VRE) that can be controlled in real-time. The prosthesis avatar accepts kinematic control inputs that can be captured from movements of the arm and hand as well as neural control inputs derived from processed neural signals. We characterize the system performance under kinematic control using a commercially available motion capture system. We also present the performance under kinematic control achieved by two non-human primates (Macaca Mulatta) trained to use the prosthetic avatar to perform reaching and grasping tasks. This is the first virtual prosthetic device that is capable of emulating all the anatomical movements of a healthy upper limb in real-time. Since the system accepts both neural and kinematic inputs for a variety of many-dimensional skeletons, we propose it provides a customizable training platform for the acquisition of many-dimensional neural prosthetic control. PMID:24726625
Virtual Economies: Threats and Risks
NASA Astrophysics Data System (ADS)
Thorpe, Christopher; Hammer, Jessica; Camp, Jean; Callas, Jon; Bond, Mike
In virtual economies, human and computer players produce goods and services, hold assets, and trade them with other in-game entities, in the same way that people and corporations participate in "real-world" economies. As the border between virtual worlds and the real world grows more and more permeable, privacy and security in virtual worlds matter more and more.
The Virtual Earth-Solar Observatory of the SCiESMEX
NASA Astrophysics Data System (ADS)
De la Luz, V.; Gonzalez-Esparza, A.; Cifuentes-Nava, G.
2015-12-01
The Mexican Space Weather Service (SCiESMEX, http://www.sciesmex.unam.mx) started operations in October 2014. The project includes the Virtual Earth-Solar Observatory (VESO, http://www.veso.unam.mx). The VESO is an upgraded project whose objective is to integrate the space weather instrumentation network of the National Autonomous University of Mexico (UNAM). The network includes the Mexican Array Radiotelescope (MEXART), the Callisto receiver (MEXART), a Neutron Telescope, a Cosmic Ray Telescope, the Schumann Antenna, the National Magnetic Service, and the Mexican GPS network (TlalocNet). The VESO facility is located at the Geophysics Institute campus in Michoacan (UNAM). We offer data storage, real-time data, and quasi-real-time data services. The VESO hardware includes a High Performance Computer (HPC) dedicated especially to big data storage.
Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-time Haptic Feedback
Alaraj, Ali; Luciano, Cristian J.; Bailey, Daniel P.; Elsenousi, Abdussalam; Roitberg, Ben Z.; Bernardo, Antonio; Banerjee, P. Pat; Charbel, Fady T.
2014-01-01
Background With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. Objective To develop and evaluate the usefulness of a new haptic-based virtual reality (VR) simulator in the training of neurosurgical residents. Methods A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the Immersive Touch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomography angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-D immersive VR environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from three residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Results Residents felt that the simulation would be useful in preparing for real-life surgery. About two thirds of the residents felt that the 3-D immersive anatomical details provided a very close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They believed the simulation is useful for preoperative surgical rehearsal and neurosurgical training. One third of the residents felt that the technology in its current form provided very realistic haptic feedback for aneurysm surgery. Conclusion Neurosurgical residents felt that the novel immersive VR simulator is helpful in their training especially since they do not get a chance to perform aneurysm clippings until very late in their residency programs. PMID:25599200
Monitoring Programs Using Rewriting
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Lan, Sonie (Technical Monitor)
2001-01-01
We present a rewriting algorithm for efficiently testing future-time Linear Temporal Logic (LTL) formulae on finite execution traces. The standard models of LTL are infinite traces, reflecting the behavior of reactive and concurrent systems, which conceptually may be continuously alive. In most past applications of LTL, theorem provers and model checkers have been used to formally prove that down-scaled models satisfy such LTL specifications. Our goal is instead to use LTL for up-scaled testing of real software applications, corresponding to analyzing the conformance of finite traces against LTL formulae. We first describe what it means for a finite trace to satisfy an LTL property and then suggest an optimized algorithm based on transforming LTL formulae. We use the Maude rewriting logic, which turns out to provide a good notation and is supported by an efficient rewriting engine for performing these experiments. The work constitutes part of the Java PathExplorer (JPAX) project, the purpose of which is to develop a flexible tool for monitoring Java program executions.
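In outline, the formula-transforming idea consumes one event at a time: observing an event e rewrites a formula into the obligation that must hold on the remainder of the trace. The rules below paraphrase the usual presentation of this technique in simplified ASCII notation, where p{e} denotes the formula obtained after observing e:

    ([]p){e}    = p{e} /\ []p
    (<>p){e}    = p{e} \/ <>p
    (p U q){e}  = q{e} \/ (p{e} /\ (p U q))
    (o p){e}    = p
    a{e}        = true if the atomic proposition a holds in e, false otherwise

When the trace ends, the residual formula is evaluated under a finite-trace convention (for example, remaining "always" obligations are taken to hold and remaining "eventually" obligations to fail), yielding a verdict for the whole trace.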
Virtual Reality as a Distraction Technique in Chronic Pain Patients
Gao, Kenneth; Sulea, Camelia; Wiederhold, Mark D.
2014-01-01
We explored the use of virtual reality distraction techniques as adjunctive therapy to treat chronic pain. Virtual environments were specifically created to provide pleasant and engaging experiences in which patients navigated on their own through rich and varied simulated worlds. Real-time physiological monitoring was used as a guide to determine the effectiveness and sustainability of this intervention. Human factors studies showed that virtual navigation is a safe and effective method for use with chronic pain patients. Chronic pain patients demonstrated significant relief in subjective ratings of pain that corresponded to objective measurements in peripheral, noninvasive physiological measures. PMID:24892196
New virtual laboratories presenting advanced motion control concepts
NASA Astrophysics Data System (ADS)
Goubej, Martin; Krejčí, Alois; Reitinger, Jan
2015-11-01
The paper deals with the development of a software framework for rapid generation of remote virtual laboratories. A client-server architecture is chosen in order to employ a real-time simulation core running on a dedicated server. An ordinary web browser is used as the final renderer to achieve a hardware-independent solution which can run on different target platforms, including laptops, tablets and mobile phones. The provided toolchain allows automatic generation of the virtual laboratory source code from a configuration file created in the open-source Inkscape graphic editor. Three virtual laboratories presenting advanced motion control algorithms have been developed, showing the applicability of the proposed approach.
Sawai, Takashi; Uzuki, Miwa; Miura, Yasuhiro; Kamataki, Akihisa; Matsumura, Tsubasa; Saito, Kenji; Kurose, Akira; Osamura, Yoshiyuki R.; Yoshimi, Naoki; Kanno, Hiroyuki; Moriya, Takuya; Ishida, Yoji; Satoh, Yohichi; Nakao, Masahiro; Ogawa, Emiko; Matsuo, Satoshi; Kasai, Hiroyuki; Kumagai, Kazuhiro; Motoda, Toshihiro; Hopson, Nathan
2013-01-01
Background: Recent advances in information technology have allowed the development of a telepathology system involving high-speed transfer of high-volume histological figures via fiber optic landlines. However, at present there are geographical limits to landlines. The Japan Aerospace Exploration Agency (JAXA) has developed the “Kizuna” ultra-high speed internet satellite and has pursued its various applications. In this study we experimented with telepathology in collaboration with JAXA using Kizuna. To measure the functionality of the Wideband InterNetworking engineering test and Demonstration Satellite (WINDS) ultra-high speed internet satellite in remote pathological diagnosis and consultation, we examined whether the data transfer speed and stability were adequate to conduct telepathology (both diagnosis and conferencing) with functionality and ease similar or equal to telepathology using fiber-optic landlines. Materials and Methods: We performed experiments for 2 years. In year 1, we tested the usability of the WINDS for telepathology with real-time video and virtual slide systems. These are state-of-the-art technologies requiring massive volumes of data transfer. In year 2, we tested the usability of the WINDS for three-way teleconferencing with virtual slides. Facilities in Iwate (northern Japan), Tokyo, and Okinawa were connected via the WINDS and held voice conferences while remotely examining and manipulating virtual slides. Results: Network function parameters measured using ping and Iperf were within acceptable limits. However, stage movement, zoom, and conversation suffered a lag of approximately 0.8 s when using real-time video, and a delay of 60-90 s was experienced when accessing the first virtual slide in a session. No significant lag or inconvenience was experienced during diagnosis and conferencing, and the results were satisfactory. Our hypothesis was confirmed for both remote diagnosis using real-time video and virtual slide systems, and also for teleconferencing using virtual slide systems with voice functionality. Conclusions: Our results demonstrate the feasibility of ultra-high-speed internet satellite networks for use in telepathology. Because communications satellites have fewer geographical and infrastructural requirements than landlines, ultra-high-speed internet satellite telepathology represents a major step toward alleviating regional disparity in the quality of medical care. PMID:24244882
Localized intraoperative virtual endoscopy (LIVE) for surgical guidance in 16 skull base patients.
Haerle, Stephan K; Daly, Michael J; Chan, Harley; Vescan, Allan; Witterick, Ian; Gentili, Fred; Zadeh, Gelareh; Kucharczyk, Walter; Irish, Jonathan C
2015-01-01
Previous preclinical studies of localized intraoperative virtual endoscopy-image-guided surgery (LIVE-IGS) for skull base surgery suggest a potential clinical benefit. The first aim was to evaluate the registration accuracy of virtual endoscopy based on high-resolution magnetic resonance imaging under clinical conditions. The second aim was to implement and assess real-time proximity alerts for critical structures during skull base drilling. Patients consecutively referred for sinus and skull base surgery were enrolled in this prospective case series. Five patients were used to check registration accuracy and feasibility, with the subsequent 11 patients being treated under LIVE-IGS conditions with presentation to the operating surgeon (phase 2). Sixteen skull base patients were endoscopically operated on by using image-based navigation while LIVE-IGS was tested in a clinical setting. Workload was quantitatively assessed using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Real-time localization of the surgical drill was accurate to ~1 to 2 mm in all cases. The use of 3-mm proximity alert zones around the carotid arteries and optic nerve found regular clinical use, as the median minimum distance between the tracked drill and these structures was 1 mm (0.2-3.1 mm) and 0.6 mm (0.2-2.5 mm), respectively. No statistical differences were found in the NASA-TLX indicators for this experienced surgical cohort. Real-time proximity alerts with virtual endoscopic guidance were sufficiently accurate under clinical conditions. Further clinical evaluation is required to evaluate the potential surgical benefits, particularly for less experienced surgeons or for teaching purposes. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
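As an editorial illustration of what such a proximity alert involves (not the LIVE-IGS implementation), the check reduces to comparing the tracked drill-tip position against sample points of a segmented critical structure and warning when the minimum distance falls below the alert radius:

// Illustrative proximity alert; structure points and units (mm) are assumptions.
public final class ProximityAlert {
    public static final double ALERT_RADIUS_MM = 3.0;

    /** points: segmented structure as {x,y,z} samples in mm; tip: tracked drill tip {x,y,z}. */
    public static boolean isTooClose(double[][] points, double[] tip) {
        double min = Double.POSITIVE_INFINITY;
        for (double[] p : points) {
            double dx = p[0] - tip[0], dy = p[1] - tip[1], dz = p[2] - tip[2];
            min = Math.min(min, Math.sqrt(dx * dx + dy * dy + dz * dz));
        }
        return min < ALERT_RADIUS_MM;   // e.g. trigger an audible warning in the UI
    }
}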
Landes, Constantin A; Weichert, Frank; Geis, Philipp; Helga, Fritsch; Wagner, Mathias
2006-03-01
Cleft lip and palate reconstructive surgery requires thorough knowledge of normal and pathological labial, palatal, and velopharyngeal anatomy. This study compared two software algorithms and their 3D virtual anatomical reconstruction because exact 3D micromorphological reconstruction may improve learning, reveal spatial relationships, and provide data for mathematical modeling. Transverse and frontal serial sections of the midface of 18 fetal specimens (11th to 32nd gestational week) were used for two manual segmentation approaches. The first manual segmentation approach used bitmap images and either Windows-based or Mac-based SURFdriver commercial software that allowed manual contour matching, surface generation with average slice thickness, 3D triangulation, and real-time interactive virtual 3D reconstruction viewing. The second manual segmentation approach used tagged image format and platform-independent prototypical SeViSe software developed by one of the authors (F.W.). Distended or compressed structures were dynamically transformed. Registration was automatic but allowed manual correction, such as individual section thickness, surface generation, and interactive virtual 3D real-time viewing. SURFdriver permitted intuitive segmentation, easy manual offset correction, and the reconstruction showed complex spatial relationships in real time. However, frequent software crashes and erroneous landmarks appearing "out of the blue," requiring manual correction, were tedious. Individual section thickness, defined smoothing, and unlimited structure number could not be integrated. The reconstruction remained underdimensioned and not sufficiently accurate for this study's reconstruction problem. SeViSe permitted unlimited structure number, late addition of extra sections, and quantified smoothing and individual slice thickness; however, SeViSe required more elaborate work-up compared to SURFdriver, yet detailed and exact 3D reconstructions were created.
Virtual Guidance Ultrasound: A Tool to Obtain Diagnostic Ultrasound for Remote Environments
NASA Technical Reports Server (NTRS)
Caine,Timothy L.; Martin David S.; Matz, Timothy; Lee, Stuart M. C.; Stenger, Michael B.; Platts, Steven H.
2012-01-01
Astronauts currently acquire ultrasound images on the International Space Station with the assistance of real-time remote guidance from an ultrasound expert in Mission Control. Remote guidance will not be feasible when significant communication delays exist during exploration missions beyond low-Earth orbit. For example, there may be as much as a 20-minute delay in communications between the Earth and Mars. Virtual guidance, a pre-recorded audio-visual tutorial viewed in real-time, is a viable modality for minimally trained scanners to obtain diagnostically-adequate images of clinically relevant anatomical structures in an autonomous manner. METHODS: Inexperienced ultrasound operators were recruited to perform carotid artery (n = 10) and ophthalmic (n = 9) ultrasound examinations using virtual guidance as their only instructional tool. In the carotid group, each untrained operator acquired two-dimensional, pulsed, and color Doppler of the carotid artery. In the ophthalmic group, operators acquired representative images of the anterior chamber of the eye, retina, optic nerve, and nerve sheath. Ultrasound image quality was evaluated by independent imaging experts. RESULTS: Eight of the 10 carotid studies were judged to be diagnostically adequate. With one exception, the quality of the ophthalmic images was adequate to excellent. CONCLUSION: Diagnostically-adequate carotid and ophthalmic ultrasound examinations can be obtained by untrained operators with instruction only from an audio/video tutorial viewed in real time while scanning. This form of quick-response guidance, which can be developed for other ultrasound examinations, represents an opportunity to acquire important medical and scientific information for NASA flight surgeons and researchers when trained medical personnel are not present. Further, virtual guidance will allow untrained personnel to autonomously obtain important medical information in remote locations on Earth where communication is difficult or absent.
Virtual guidance as a tool to obtain diagnostic ultrasound for spaceflight and remote environments.
Martin, David S; Caine, Timothy L; Matz, Timothy; Lee, Stuart M C; Stenger, Michael B; Sargsyan, Ashot E; Platts, Steven H
2012-10-01
With missions planned to travel greater distances from Earth at ranges that make real-time two-way communication impractical, astronauts will be required to perform autonomous medical diagnostic procedures during future exploration missions. Virtual guidance is a form of just-in-time training developed to allow novice ultrasound operators to acquire diagnostically-adequate images of clinically relevant anatomical structures using a prerecorded audio/visual tutorial viewed in real-time. Individuals without previous experience in ultrasound were recruited to perform carotid artery (N = 10) and ophthalmic (N = 9) ultrasound examinations using virtual guidance as their only training tool. In the carotid group, each untrained operator acquired two-dimensional, pulsed and color Doppler of the carotid artery. In the ophthalmic group, operators acquired representative images of the anterior chamber of the eye, retina, optic nerve, and nerve sheath. Ultrasound image quality was evaluated by independent imaging experts. Of the studies, 8 of the 10 carotid and 17 of 18 of the ophthalmic images (2 images collected per study) were judged to be diagnostically adequate. The quality of all but one of the ophthalmic images ranged from adequate to excellent. Diagnostically-adequate carotid and ophthalmic ultrasound examinations can be obtained by previously untrained operators with assistance from only an audio/video tutorial viewed in real time while scanning. This form of just-in-time training, which can be applied to other examinations, represents an opportunity to acquire important information for NASA flight surgeons and researchers when trained medical personnel are not available or when remote guidance is impractical.
Design and Implementation of Campus Application APP Based on Android
NASA Astrophysics Data System (ADS)
dongxu, Zhu; yabin, liu; xian lei, PI; weixiang, Zhou; meng, Huang
2017-07-01
In this paper, "Internet + campus" as the entrance of the Android technology based on the application of campus design and implementation of Application program. Based on GIS(Geographic Information System) spatial database, GIS spatial analysis technology, Java development technology and Android development technology, this system server adopts the Model View Controller architectue to realize the efficient use of campus information and provide real-time information of all kinds of learning and life for campus student at the same time. "Fingertips on the Institute of Disaster Prevention Science and Technology" release for the campus students of all grades of life, learning, entertainment provides a convenient.
Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.
2009-01-01
The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.
Xu, Lang; Lu, Yuhua; Liu, Qian
2018-02-01
We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.
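The abstract does not spell out the modified MSD terms, so the Java sketch below is only a generic illustration of the ingredients it names: a nonlinear (here cubic) elastic force plus viscous damping, advanced for one node through Newton's second law with a semi-implicit Euler step. All parameter values are assumptions.

// Generic illustration only; not the paper's actual force model or integrator settings.
public final class NonlinearSpringDemo {
    public static void main(String[] args) {
        double k1 = 50.0, k3 = 400.0, c = 0.5;    // linear/cubic stiffness, damping (assumed)
        double rest = 1.0, mass = 0.01, dt = 1e-3;
        double x = 1.2, v = 0.0;                   // current spring length and node velocity

        for (int step = 0; step < 1000; step++) {
            double s = x - rest;                                  // spring elongation
            double force = -(k1 * s + k3 * s * s * s) - c * v;    // nonlinear elastic + viscous force
            v += dt * force / mass;                               // a = F/m (Newton's second law)
            x += dt * v;                                          // semi-implicit Euler position update
        }
        System.out.printf("length after 1 s: %.4f%n", x);
    }
}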
Live Aircraft Encounter Visualization at FutureFlight Central
NASA Technical Reports Server (NTRS)
Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John
2018-01-01
Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real-time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real-time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters from all angles to be observed. Visualizing the encounters in real-time provides a safe and effective method for observation of live flight testing and a strong alternative to travel to the remote test range.
Visualization of multi-INT fusion data using Java Viewer (JVIEW)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Aved, Alex; Nagy, James; Scott, Stephen
2014-05-01
Visualization is important for multi-intelligence fusion and we demonstrate issues for presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve sensor measurements that are objective, while human-derived (e.g., text) typically involve language processing. Both results can be geographically displayed for user-machine fusion. Attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users - be they operators or analysts. Operators require near-real time solutions while analysts have the opportunities of non-real time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept that has been applied to piloting, space situation awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big data solution for multi-intelligence fusion application for context-enhanced information fusion.
Guiding brine shrimp through mazes by solving reaction diffusion equations
NASA Astrophysics Data System (ADS)
Singal, Krishma; Fenton, Flavio
Excitable systems driven by reaction-diffusion equations have been shown not only to find solutions to mazes but also to find the shortest path between the beginning and the end of the maze. In this talk we describe how we can use the FitzHugh-Nagumo model, a generic model for excitable media, to solve a maze by varying the basin of attraction of its two fixed points. We demonstrate how two-dimensional mazes are solved numerically using a Java Applet and then accelerated to run in real time using graphics processors (GPUs). An application of this work is shown by guiding phototactic brine shrimp through a maze solved by the algorithm. Once the path is obtained, an Arduino directs the shrimp through the maze using lights from LEDs placed on the floor of the maze. This method, running in real time, could eventually be used for guiding robots and cars through traffic.
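For illustration, one explicit finite-difference update of the FitzHugh-Nagumo model on a 2-D grid might look like the Java sketch below; the parameter values and the treatment of maze walls are assumptions, not those used in the talk.

// One explicit time step of du/dt = D*lap(u) + u - u^3/3 - v, dv/dt = eps*(u + a - b*v).
public final class FhnStep {
    static final double D = 1.0, EPS = 0.01, A = 0.7, B = 0.8, DT = 0.05, DX = 1.0;

    /** u, v: state fields; wall[i][j] == true marks a maze wall (excluded from the medium). */
    static void step(double[][] u, double[][] v, boolean[][] wall) {
        int nx = u.length, ny = u[0].length;
        double[][] uNew = new double[nx][ny];
        for (int i = 1; i < nx - 1; i++) {
            for (int j = 1; j < ny - 1; j++) {
                if (wall[i][j]) { uNew[i][j] = u[i][j]; continue; }   // walls stay unchanged
                double lap = (u[i + 1][j] + u[i - 1][j] + u[i][j + 1] + u[i][j - 1]
                        - 4.0 * u[i][j]) / (DX * DX);                 // 5-point Laplacian
                double du = D * lap + u[i][j] - Math.pow(u[i][j], 3) / 3.0 - v[i][j];
                double dv = EPS * (u[i][j] + A - B * v[i][j]);
                uNew[i][j] = u[i][j] + DT * du;
                v[i][j] += DT * dv;
            }
        }
        for (int i = 1; i < nx - 1; i++) {
            System.arraycopy(uNew[i], 1, u[i], 1, ny - 2);            // write back interior cells
        }
    }
}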
Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living
Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.
2015-01-01
A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612
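The abstract does not identify which speed and smoothness metrics VOTA computes; as one commonly used example only, a jerk-based (dimensionless) smoothness score for a sampled sub-task segment could be computed as in the sketch below. The metric choice and all names are assumptions, not the authors' method.

// Illustrative normalized-jerk smoothness of a 1-D position trace sampled at interval dt.
// Lower values indicate smoother movement; this is one common metric, not necessarily VOTA's.
public final class SmoothnessMetric {
    public static double normalizedJerk(double[] pos, double dt) {
        int n = pos.length;
        double jerkSq = 0.0;
        for (int i = 0; i + 3 < n; i++) {
            // third-order finite difference approximates jerk
            double j = (pos[i + 3] - 3 * pos[i + 2] + 3 * pos[i + 1] - pos[i]) / (dt * dt * dt);
            jerkSq += j * j * dt;                                // integrate squared jerk over time
        }
        double duration = (n - 1) * dt;
        double amplitude = Math.abs(pos[n - 1] - pos[0]);        // crude movement amplitude
        if (amplitude == 0) return Double.NaN;                   // degenerate segment
        // dimensionless normalization by duration^5 / amplitude^2
        return Math.sqrt(0.5 * jerkSq * Math.pow(duration, 5) / (amplitude * amplitude));
    }
}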
V-Man Generation for 3-D Real Time Animation. Chapter 5
NASA Technical Reports Server (NTRS)
Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang
2007-01-01
The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real-time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms coming from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique sets of skills created during character creation. The key to the system is the automated creation of realistic V-Men, not requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.
A threat to a virtual hand elicits motor cortex activation.
González-Franco, Mar; Peck, Tabitha C; Rodríguez-Fornells, Antoni; Slater, Mel
2014-03-01
We report an experiment where participants observed an attack on their virtual body as experienced in an immersive virtual reality (IVR) system. Participants sat by a table with their right hand resting upon it. In IVR, they saw a virtual table that was registered with the real one, and they had a virtual body that substituted their real body seen from a first person perspective. The virtual right hand was collocated with their real right hand. Event-related brain potentials were recorded in two conditions, one where the participant's virtual hand was attacked with a knife and a control condition where the knife only struck the virtual table. Significantly greater P450 potentials were obtained in the attack condition confirming our expectations that participants had a strong illusion of the virtual hand being their own, which was also strongly supported by questionnaire responses. Higher levels of subjective virtual hand ownership correlated with larger P450 amplitudes. Mu-rhythm event-related desynchronization in the motor cortex and readiness potential (C3-C4) negativity were clearly observed when the virtual hand was threatened-as would be expected, if the real hand was threatened and the participant tried to avoid harm. Our results support the idea that event-related potentials may provide a promising non-subjective measure of virtual embodiment. They also support previous experiments on pain observation and are placed into context of similar experiments and studies of body perception and body ownership within cognitive neuroscience.
NASA Astrophysics Data System (ADS)
Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas
2013-04-01
The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system it has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase association capabilities it greatly simplifies the potential installation of VS at networks in particular those already running SeisComp3. We present the architecture of the new SeisComp3 based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.
Kataoka, Satoshi; Ohe, Kazuhiko; Mochizuki, Mayumi; Ueda, Shiro
2002-01-01
We have developed an adverse drug reaction (ADR) reporting system integrated with the Hospital Information System (HIS) of the University of Tokyo Hospital. Since the system is written in Java, it is portable, without recompilation, to any operating system on which a Java virtual machine runs. In this system, we implemented an automatic data-filling function using XML-based (eXtensible Markup Language) files generated by the HIS. This new feature should decrease the time physicians and pharmacists need to fill in spontaneous ADR reports. By clicking a button, the report is sent to the text database through Simple Mail Transfer Protocol (SMTP) electronic mail. The destination of the report mail can be changed arbitrarily by administrators, which gives the system more flexibility for practical operation. Although we tried our best to use the SGML-based (Standard Generalized Markup Language) ICH M2 guideline to follow the global standard for case reports, we eventually adopted XML as the output report format, because we found problems in handling two-byte characters with the ICH guideline and because XML has many useful features. According to our pilot survey conducted at the University of Tokyo Hospital, many physicians answered that our idea of integrating the ADR reporting system into the HIS would increase the number of ADR reports.
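A hedged Java sketch of the "report by e-mail" step, using the JavaMail API, is shown below; the SMTP host, addresses and payload handling are placeholders rather than the hospital's actual configuration.

// Illustrative only: send an XML ADR report over SMTP via JavaMail.
import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class AdrReportMailer {
    public static void send(String xmlReport) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.hospital.example");   // destination is configurable
        Session session = Session.getInstance(props);

        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("his@hospital.example"));
        msg.setRecipient(Message.RecipientType.TO,
                new InternetAddress("adr-office@hospital.example"));
        msg.setSubject("ADR spontaneous report");
        msg.setText(xmlReport, "UTF-8");                         // XML report as the message body
        Transport.send(msg);                                     // hand over to the SMTP server
    }
}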
Living Color Frame System: PC graphics tool for data visualization
NASA Technical Reports Server (NTRS)
Truong, Long V.
1993-01-01
Living Color Frame System (LCFS) is a personal computer software tool for generating real-time graphics applications. It is highly applicable for a wide range of data visualization in virtual environment applications. Engineers often use computer graphics to enhance the interpretation of data under observation. These graphics become more complicated when 'run time' animations are required, such as found in many typical modern artificial intelligence and expert systems. Living Color Frame System solves many of these real-time graphics problems.
Plot of virtual surgery based on CT medical images
NASA Astrophysics Data System (ADS)
Song, Limei; Zhang, Chunbo
2009-10-01
Although the CT device can give the doctors a series of 2D medical images, it is difficult for doctors to form a vivid view of the diseased part from them. In order to help doctors plan the surgery, a virtual surgery system was developed based on three-dimensional visualization techniques. After the diseased part of the patient is scanned by the CT device, a complete 3D view is built by the system's 3D reconstruction module. Cutting away a part is a function commonly used by doctors in real surgery. A cut curve is created in 3D space, and points can be added to the curve automatically or manually. The positions of the points change the shape of the cut curve, so the curve can be adjusted by controlling the points. If the result of the cut is not satisfactory, the whole operation can be cancelled and restarted. This flexible virtual surgery makes the real surgery more convenient. In contrast to existing medical image processing systems, a virtual surgery module is added, and the virtual surgery can be planned repeatedly until the doctors are confident enough to start the real surgery. Because the virtual surgery system gives more 3D information about the diseased part, difficult surgeries can be discussed by expert doctors in different cities via the Internet. It is a useful function for understanding the character of the diseased part and thus decreasing surgical risk.
Human responses to augmented virtual scaffolding models.
Hsiao, Hongwei; Simeonov, Peter; Dotson, Brian; Ammons, Douglas; Kau, Tsui-Ying; Chiou, Sharon
2005-08-15
This study investigated the effect of adding real planks, in virtual scaffolding models of elevation, on human performance in a surround-screen virtual reality (SSVR) system. Twenty-four construction workers and 24 inexperienced controls performed walking tasks on real and virtual planks at three virtual heights (0, 6 m, 12 m) and two scaffolding-platform-width conditions (30, 60 cm). Gait patterns, walking instability measurements and cardiovascular reactivity were assessed. The results showed differences in human responses to real vs. virtual planks in walking patterns, instability score and heart-rate inter-beat intervals; it appeared that adding real planks in the SSVR virtual scaffolding model enhanced the quality of SSVR as a human-environment interface research tool. In addition, there were significant differences in performance between construction workers and the control group. The inexperienced participants were more unstable as compared to construction workers. Both groups increased their stride length with repetitions of the task, indicating a possibly confidence- or habit-related learning effect. The practical implications of this study are in the adoption of augmented virtual models of elevated construction environments for injury prevention research, and the development of a programme for balance-control training to reduce the risk of falls at elevation before workers enter a construction job.
Besnard, Jeremy; Richard, Paul; Banville, Frederic; Nolin, Pierre; Aubin, Ghislaine; Le Gall, Didier; Richard, Isabelle; Allain, Phillippe
2016-01-01
Traumatic brain injury (TBI) causes impairments affecting instrumental activities of daily living (IADL). However, few studies have considered virtual reality as an ecologically valid tool for the assessment of IADL in patients who have sustained a TBI. The main objective of the present study was to examine the use of the Nonimmersive Virtual Coffee Task (NI-VCT) for IADL assessment in patients with TBI. We analyzed the performance of 19 adults suffering from TBI and 19 healthy controls (HCs) in the real and virtual tasks of making coffee with a coffee machine, as well as in global IQ and executive functions. Patients performed worse than HCs on both real and virtual tasks and on all tests of executive functions. Correlation analyses revealed that NI-VCT scores were related to scores on the real task. Moreover, regression analyses demonstrated that performance on NI-VCT matched real-task performance. Our results support the idea that the virtual kitchen is a valid tool for IADL assessment in patients who have sustained a TBI.
ERIC Educational Resources Information Center
Stinson, Michael; Eisenberg, Sandy; Horn, Christy; Larson, Judy; Levitt, Harry; Stuckless, Ross
This report describes and discusses several applications of new computer-based technologies which enable postsecondary students with deafness or hearing impairments to read the text of the language being spoken by the instructor and fellow students virtually in real time. Two current speech-to-text options are described: (1) steno-based systems in…
[Development of a virtual model of fibro-bronchoscopy].
Solar, Mauricio; Ducoing, Eugenio
2011-09-01
A virtual model of fibro-bronchoscopy is reported. The virtual model represents the trachea and the bronchi in 3D, creating a virtual world of the bronchial tree. The bronchoscope is modeled to navigate the bronchial tree, imitating the displacement and rotation of the real bronchoscope. The parameters of the virtual model were gradually adjusted according to expert opinion and allowed the training of specialists with a virtual bronchoscope of great realism. The virtual bronchial tree provides realistic cues regarding the movement of the bronchoscope, creating the illusion that the virtual instrument behaves like the real one, with all the cost benefits that this implies.
Distributed virtual environment for emergency medical training
NASA Astrophysics Data System (ADS)
Stytz, Martin R.; Banks, Sheila B.; Garcia, Brian W.; Godsell-Stytz, Gayl M.
1997-07-01
In many professions where individuals must work in a team in a high stress environment to accomplish a time-critical task, individual and team performance can benefit from joint training using distributed virtual environments (DVEs). One professional field that lacks but needs a high-fidelity team training environment is the field of emergency medicine. Currently, emergency department (ED) medical personnel train by using words to create a mental picture of a situation for the physician and staff, who then cooperate to solve the problems portrayed by the word picture. The need in emergency medicine for realistic virtual team training is critical because ED staff typically encounter rarely occurring but life-threatening situations only once in their careers and because ED teams currently have no realistic environment in which to practice their team skills. The resulting lack of experience and teamwork makes diagnosis and treatment more difficult. Virtual environment-based training has the potential to redress these shortfalls. The objective of our research is to develop a state-of-the-art virtual environment for emergency medicine team training. The virtual emergency room (VER) allows ED physicians and medical staff to realistically prepare for emergency medical situations by performing triage, diagnosis, and treatment on virtual patients within an environment that provides them with the tools they require and the team environment they need to realistically perform these three tasks. There are several issues that must be addressed before this vision is realized. The key issues deal with distribution of computations; the doctor and staff interface to the virtual patient and ED equipment; the accurate simulation of individual patient organs' response to injury, medication, and treatment; and an accurate modeling of the symptoms and appearance of the patient while maintaining a real-time interaction capability. Our ongoing work addresses all of these issues. In this paper we report on our prototype VER system and its distributed system architecture for an emergency department distributed virtual environment for emergency medical staff training. The virtual environment enables emergency department physicians and staff to develop their diagnostic and treatment skills using the virtual tools they need to perform diagnostic and treatment tasks. Virtual human imagery and real-time virtual human response are used to create the virtual patient and present a scenario. Patient vital signs are available to the emergency department team as they manage the virtual case. The work reported here consists of the system architectures we developed for the distributed components of the virtual emergency room. The architectures we describe consist of the network level architecture as well as the software architecture for each actor within the virtual emergency room. We describe the role of distributed interactive simulation and other enabling technologies within the virtual emergency room project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, M; Kida, S; Masutani, Y
2014-06-01
Purpose: In a previous study, we developed a time-ordered four-dimensional (4D) cone-beam CT (CBCT) technique to visualize non-periodic organ motion, such as the peristaltic motion of gastrointestinal organs and adjacent areas, using a half-scan reconstruction method. One important obstacle was that projection truncation was caused by the asymmetric location of the flat-panel detector (FPD), which is needed to cover the whole abdomen or pelvis in one rotation. In this study, we propose image mosaicing to extend the projection data and make it possible to reconstruct a full field-of-view (FOV) image using half-scan reconstruction. Methods: The projections of prostate cancer patients were acquired using the X-ray Volume Imaging system (XVI, version 4.5) on a Synergy linear accelerator system (Elekta, UK). The XVI system has three FOV options, S, M and L; the M FOV was chosen for pelvic CBCT acquisition, with the FPD panel offset by 11.5 cm. The method to produce extended projections consists of three main steps: first, a normal three-dimensional (3D) reconstruction containing the whole pelvis was performed using the real projections; second, virtual projections were produced by reprojecting the reconstructed 3D image; third, the real and virtual projections at each angle were combined into one extended mosaic projection. Then, 4D CBCT images were reconstructed using our in-house reconstruction software based on the Feldkamp, Davis and Kress algorithm. The angular range of each reconstruction phase in the 4D reconstruction was 180 degrees, and the range moved as time progressed. Results: Projection data were successfully extended without a discontinuous boundary between the real and virtual projections. Using mosaic projections, 4D CBCT image sets were reconstructed without artifacts caused by the truncation, and thus the whole pelvis was clearly visible. Conclusion: The present method provides extended projections that contain the whole pelvis. It also enables time-ordered 4D CBCT reconstruction of organs with non-periodic motion over the full FOV without projection-truncation artifacts. This work was partly supported by the JSPS Core-to-Core Program (No. 23003) and by JSPS KAKENHI 24234567.
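No code accompanies the abstract; the following is a minimal sketch of the mosaicing step it describes, assuming the truncated region of each measured projection is marked with NaN and filled from the virtual reprojection. The array layout and the NaN convention are illustrative assumptions, not details from the work.

    public class MosaicProjection {

        /**
         * Combine a measured (real) projection with a virtual reprojection into
         * one extended mosaic projection covering the full field of view.
         *
         * @param real    measured projection; NaN marks pixels outside the offset detector
         * @param virtual reprojected projection of the prior 3D reconstruction, full FOV
         */
        static double[][] extend(double[][] real, double[][] virtual) {
            int rows = real.length, cols = real[0].length;
            double[][] out = new double[rows][cols];
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    // keep measured data where the detector saw it, fill the
                    // truncated side from the virtual reprojection
                    out[r][c] = Double.isNaN(real[r][c]) ? virtual[r][c] : real[r][c];
                }
            }
            // In practice the seam between the two regions would also be feathered
            // to avoid a visible boundary; that step is omitted here for brevity.
            return out;
        }
    }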
Freeman, Daniel; Bradley, Jonathan; Antley, Angus; Bourke, Emilie; DeWeever, Natalie; Evans, Nicole; Černis, Emma; Sheaves, Bryony; Waite, Felicity; Dunn, Graham; Slater, Mel; Clark, David M
2016-07-01
Persecutory delusions may be unfounded threat beliefs maintained by safety-seeking behaviours that prevent disconfirmatory evidence being successfully processed. Use of virtual reality could facilitate new learning. To test the hypothesis that enabling patients to test the threat predictions of persecutory delusions in virtual reality social environments with the dropping of safety-seeking behaviours (virtual reality cognitive therapy) would lead to greater delusion reduction than exposure alone (virtual reality exposure). Conviction in delusions and distress in a real-world situation were assessed in 30 patients with persecutory delusions. Patients were then randomised to virtual reality cognitive therapy or virtual reality exposure, both with 30 min in graded virtual reality social environments. Delusion conviction and real-world distress were then reassessed. In comparison with exposure, virtual reality cognitive therapy led to large reductions in delusional conviction (reduction 22.0%, P = 0.024, Cohen's d = 1.3) and real-world distress (reduction 19.6%, P = 0.020, Cohen's d = 0.8). Cognitive therapy using virtual reality could prove highly effective in treating delusions. © The Royal College of Psychiatrists 2016.
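The trial reports its treatment effects as Cohen's d values. As a reminder of what that statistic measures, the sketch below computes Cohen's d from two group means and a pooled standard deviation; it is illustrative only and uses no data from the study.

    public class EffectSize {
        // Cohen's d: difference in group means divided by the pooled standard
        // deviation. Values around 0.8 or above are conventionally "large".
        static double cohensD(double mean1, double sd1, int n1,
                              double mean2, double sd2, int n2) {
            double pooledVar = ((n1 - 1) * sd1 * sd1 + (n2 - 1) * sd2 * sd2)
                               / (double) (n1 + n2 - 2);
            return (mean1 - mean2) / Math.sqrt(pooledVar);
        }
    }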
Addressing Dynamic Issues of Program Model Checking
NASA Technical Reports Server (NTRS)
Lerda, Flavio; Visser, Willem
2001-01-01
Model checking real programs has recently become an active research area. Programs, however, exhibit two characteristics that make model checking difficult: the complexity of their state and their dynamic nature. Here we address both these issues within the context of the Java PathFinder (JPF) model checker. Firstly, we show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next, we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved and, furthermore, how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.
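The abstract does not include an example program; the self-contained Java fragment below illustrates the kind of dynamic behaviour a checker such as JPF explores: two unsynchronized threads update a shared counter, and only an exhaustive exploration of thread schedules reliably exposes the interleavings that violate the final assertion. It is an illustrative test case, not code from the paper.

    // Run with assertions enabled; a model checker exploring all schedules
    // will find interleavings where the assertion fails.
    public class RacyCounter {
        static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable increment = () -> {
                int tmp = counter;   // read
                counter = tmp + 1;   // write: lost update if interleaved
            };
            Thread t1 = new Thread(increment);
            Thread t2 = new Thread(increment);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Holds on most concrete runs, but some thread schedules
            // leave counter == 1.
            assert counter == 2 : "lost update: counter == " + counter;
        }
    }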
Tal, Aner; Wansink, Brian
2011-01-01
Virtual reality (VR) provides a potentially powerful tool for researchers seeking to investigate eating and physical activity. Some unique conditions are necessary to ensure that the psychological processes that influence real eating behavior also influence behavior in VR environments. Accounting for these conditions is critical if VR-assisted research is to accurately reflect real-world situations. The current work discusses key considerations VR researchers must take into account to ensure similar psychological functioning in virtual and actual reality and does so by focusing on the process of spontaneous mental simulation. Spontaneous mental simulation is prevalent under real-world conditions but may be absent under VR conditions, potentially leading to differences in judgment and behavior between virtual and actual reality. For simulation to occur, the virtual environment must be perceived as being available for action. A useful chart is supplied as a reference to help researchers to investigate eating and physical activity more effectively. PMID:21527088
Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.
Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A
2013-01-01
Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.
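The paper's first step, color-based segmentation of the hand, could in its simplest form look like the sketch below, which classifies pixels by a skin-tone window in HSB space. The thresholds are assumptions chosen for illustration; the actual segmentation in the paper is more elaborate.

    public class HandSegmentation {
        /** Returns a mask where true marks pixels classified as hand. */
        static boolean[] segment(int[] argbPixels) {
            boolean[] mask = new boolean[argbPixels.length];
            float[] hsb = new float[3];
            for (int i = 0; i < argbPixels.length; i++) {
                int p = argbPixels[i];
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                java.awt.Color.RGBtoHSB(r, g, b, hsb);
                float hue = hsb[0] * 360f, sat = hsb[1];
                // crude skin-tone window (assumed): reddish hue, moderate saturation
                mask[i] = (hue < 35f || hue > 340f) && sat > 0.2f && sat < 0.7f;
            }
            return mask;
        }
    }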
Pacheco, Thaiana Barbosa Ferreira; Oliveira Rego, Isabelle Ananda; Campos, Tania Fernandes; Cavalcanti, Fabrícia Azevedo da Costa
2017-01-01
Virtual reality (VR) has been contributing to neurological rehabilitation because of its interactive and multisensory nature, offering the potential for brain reorganization. Given the availability of mobile EEG devices, it is possible to investigate how the virtual therapeutic environment influences brain activity. The aim was to compare theta, alpha, beta and gamma power in healthy young adults during a lower-limb motor task in a virtual and a real environment. Ten healthy adults underwent an EEG assessment while performing a one-minute task consisting of going up and down a step in a virtual environment - the Nintendo Wii virtual game "Basic step" - and in a real environment. The real environment caused an increase in theta and alpha power, with small to large effect sizes mainly in the frontal region. VR caused a greater increase in beta and gamma power, with small or negligible effects across a variety of regions for the beta band, and medium to very large effects in the frontal and occipital regions for the gamma band. Theta, alpha, beta and gamma activity during the execution of a motor task thus differs according to the environment to which the individual is exposed - real or virtual - and the effect sizes vary with the brain area and frequency band considered.
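For readers unfamiliar with how band power is summarised from EEG, the sketch below sums spectral power over the bins inside each frequency band. The band limits follow common conventions (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz, gamma 30-45 Hz) and may differ from the study's exact definitions.

    public class BandPower {
        /**
         * @param powerSpectrum power at each frequency bin (e.g. from an FFT)
         * @param binHz         frequency resolution in Hz per bin
         */
        static double bandPower(double[] powerSpectrum, double binHz,
                                double lowHz, double highHz) {
            double sum = 0.0;
            for (int k = 0; k < powerSpectrum.length; k++) {
                double f = k * binHz;
                if (f >= lowHz && f < highHz) sum += powerSpectrum[k];
            }
            return sum;
        }

        public static void main(String[] args) {
            double[] spectrum = new double[128];   // placeholder spectrum
            java.util.Arrays.fill(spectrum, 1.0);
            double binHz = 0.5;
            System.out.printf("theta: %.1f  alpha: %.1f%n",
                    bandPower(spectrum, binHz, 4, 8),
                    bandPower(spectrum, binHz, 8, 13));
        }
    }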
Teleoperation with virtual force feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.J.
1993-08-01
In this paper we describe an algorithm for generating virtual forces in a bilateral teleoperator system. The virtual forces are generated from a world model and are used to provide real-time obstacle avoidance and guidance capabilities. The algorithm requires that the slave's tool and every object in the environment be decomposed into convex polyhedral primitives. Intrusion distances and extraction vectors are then derived at every time step by applying Gilbert's polyhedral distance algorithm, which has been adapted for the task. This information is then used to determine the compression and location of nonlinear virtual spring-dampers, whose total force is summed and applied to the manipulator/teleoperator system. Experimental results validate the approach, showing that it is possible to compute the algorithm and generate realistic, useful pseudo-forces for a bilateral teleoperator system using standard VME bus hardware.
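As a rough illustration of the spring-damper idea (not the paper's implementation), the sketch below turns an intrusion depth and extraction direction, such as might be reported by a polyhedral distance query, into a push-out force with a nonlinear spring term and damping along the extraction direction. The gains and exponent are assumed values.

    public class VirtualSpringDamper {
        static final double K = 800.0;   // spring gain (assumed)
        static final double B = 5.0;     // damping gain (assumed)
        static final double EXP = 1.5;   // nonlinear spring exponent (assumed)

        /**
         * @param depth      intrusion depth, > 0 when penetrating
         * @param extraction unit vector pointing out of the obstacle {x, y, z}
         * @param velocity   current tool velocity {x, y, z}
         * @return virtual force to add to the teleoperator command {x, y, z}
         */
        static double[] virtualForce(double depth, double[] extraction, double[] velocity) {
            if (depth <= 0) return new double[] {0, 0, 0};
            // velocity component along the extraction direction (negative when
            // the tool is still moving deeper into the obstacle)
            double vOut = velocity[0] * extraction[0]
                        + velocity[1] * extraction[1]
                        + velocity[2] * extraction[2];
            double magnitude = Math.max(0.0, K * Math.pow(depth, EXP) - B * vOut);
            double[] force = new double[3];
            for (int i = 0; i < 3; i++) force[i] = magnitude * extraction[i];
            return force;
        }
    }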
Stanton, D; Foreman, N; Wilson, P N
1998-01-01
In this chapter we review some of the ways in which the skills learned in virtual environments (VEs) transfer to real situations, and in particular how information about the spatial layouts of virtual buildings acquired from the exploration of three-dimensional computer-simulations transfers to their real equivalents. Four experiments are briefly described which examined VR use by disabled children. We conclude that spatial information of the kind required for navigation transfers effectively from virtual to real situations. Spatial skills in disabled children showed progressive improvement with repeated exploration of virtual environments. The results are discussed in relation to the potential future benefits of VR in special needs education and training.
Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments.
Rincon, J A; Poza-Lujan, Jose-Luis; Julian, V; Posadas-Yagüe, Juan-Luis; Carrascosa, C
2016-01-01
This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a coupled real and virtual world, so that the virtual world not only reflects the real one but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices that access the real physical world, and a human-in-the-loop agent, which places a human in the system, have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system in which agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that using real-environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and real systems.
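The abstract does not show how a smart resource artifact is exposed to agents; the hypothetical Java sketch below illustrates the general idea for the light control example, with the agent reading a light level through one interface whether the value comes from the simulated world or a stubbed real device. The class and method names are illustrative and are not the JaCalIVE API.

    interface LightSensor {
        double readLux();
    }

    class SimulatedLightSensor implements LightSensor {
        private final double simulatedLux;
        SimulatedLightSensor(double lux) { this.simulatedLux = lux; }
        public double readLux() { return simulatedLux; }
    }

    class SmartResourceLightSensor implements LightSensor {
        // In a real deployment this would query the physical device
        // (e.g. over a network connection); here it is a stub.
        public double readLux() { return 410.0; /* placeholder reading */ }
    }

    public class LightControlAgent {
        public static void main(String[] args) {
            LightSensor sensor = new SimulatedLightSensor(120.0); // or new SmartResourceLightSensor()
            // Simple control rule: turn the lights on below a threshold.
            boolean lightsOn = sensor.readLux() < 300.0;
            System.out.println("lights on: " + lightsOn);
        }
    }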
Future Evolution of Virtual Worlds as Communication Environments
NASA Astrophysics Data System (ADS)
Prisco, Giulio
Extensive experience creating locations and activities inside virtual worlds provides the basis for contemplating their future. Users of virtual worlds are diverse in their goals for these online environments; for example, immersionists want them to be alternative realities disconnected from real life, whereas augmentationists want them to be communication media supporting real-life activities. As the technology improves, the diversity of virtual worlds will increase along with their significance. Many will incorporate more advanced virtual reality, or serve as major media for long-distance collaboration, or become the venues for futurist social movements. Key issues are how people can create their own virtual worlds, travel across worlds, and experience a variety of multimedia immersive environments. This chapter concludes by noting the view among some computer scientists that future technologies will permit uploading human personalities to artificial intelligence avatars, thereby enhancing human beings and rendering the virtual worlds entirely real.