Virtual Sensor Test Instrumentation
NASA Technical Reports Server (NTRS)
Wang, Roy
2011-01-01
Virtual Sensor Test Instrumentation is based on the concept of smart sensor technology for testing, with the intelligence needed to perform self-diagnosis of health and to participate in a hierarchy of health determination at the sensor, process, and system levels. A virtual sensor test instrument consists of five elements: (1) a common sensor interface, (2) a microprocessor, (3) a wireless interface, (4) signal conditioning and ADC/DAC (analog-to-digital/digital-to-analog conversion), and (5) onboard EEPROM (electrically erasable programmable read-only memory) for metadata storage and executable software, creating powerful, scalable, reconfigurable, and reliable embedded and distributed test instruments. To maximize efficient data conversion through the smart sensor node, plug-and-play functionality is required to interface with traditional sensors and enhance their identity and capabilities for data processing and communications. Virtual sensor test instrumentation can be accessed wirelessly via a Network Capable Application Processor (NCAP) or a Smart Transducer Interface Module (STIM) that may be managed under real-time rule engines for mission-critical applications. The transducer senses the physical quantity being measured and converts it into an electrical signal. The signal is fed to an A/D converter and is then ready for the processor to execute functional transformations based on the sensor characteristics stored in a Transducer Electronic Data Sheet (TEDS). Virtual sensor test instrumentation is built upon an open-system architecture with standardized protocol modules/stacks to interface with industry standards and commonly used software.
One major benefit of deploying the virtual sensor test instrumentation is the ability, through a plug-and-play common interface, to convert a raw sensor, with either analog or digital output, into an IEEE 1451 standard-based smart sensor, which has instructions to program sensors for a wide variety of functions. The sensor data is processed in a distributed fashion across the network, providing a large pool of resources in real time to meet stringent latency requirements.
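The TEDS-driven conversion step described above can be sketched as follows. This is a minimal illustration only: the record fields, scale, and offset values are hypothetical simplifications, not the actual IEEE 1451 binary TEDS format.

```python
from dataclasses import dataclass

# Hypothetical, simplified TEDS record; real IEEE 1451 TEDS are binary
# templates with many more fields (manufacturer ID, calibration data, ...).
@dataclass
class Teds:
    name: str
    units: str
    scale: float   # engineering units per ADC count (assumed linear sensor)
    offset: float  # engineering units at zero counts

class SmartSensorNode:
    """Wraps a raw ADC channel with TEDS-driven conversion (sketch)."""
    def __init__(self, teds: Teds):
        self.teds = teds

    def convert(self, raw_counts: int) -> float:
        # Functional transformation based on stored sensor characteristics.
        return self.teds.offset + self.teds.scale * raw_counts

node = SmartSensorNode(Teds("thermocouple_1", "degC", 0.05, -40.0))
print(node.convert(1000))  # -40 + 0.05 * 1000 = 10.0 degC
```

A real node would also expose the TEDS metadata over the NCAP/STIM interface so that the host can identify the sensor at plug-in time.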
Virtual pyramid wavefront sensor for phase unwrapping.
Akondi, Vyas; Vohnsen, Brian; Marcos, Susana
2016-10-10
Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.
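The wrapping/unwrapping problem the virtual pyramid sensor addresses can be sketched minimally with NumPy's classical 1-D unwrapper on a noise-free phase ramp; the paper's method targets the noisy case where this classical approach breaks down.

```python
import numpy as np

# A linear phase ramp exceeding +/- pi, as a tilted wavefront would produce.
true_phase = np.linspace(0, 6 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))  # wrap into (-pi, pi]

# Classical 1-D unwrapping: adds multiples of 2*pi whenever successive
# samples jump by more than pi. Noise pushing jumps past pi defeats it,
# which is the regime the virtual pyramid wavefront sensor is designed for.
unwrapped = np.unwrap(wrapped)

print(np.allclose(unwrapped, true_phase))  # True in this noise-free case
```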
Virtual Sensors for Designing Irrigation Controllers in Greenhouses
Sánchez, Jorge Antonio; Rodríguez, Francisco; Guzmán, José Luis; Arahal, Manuel R
2012-01-01
Monitoring greenhouse transpiration for control purposes is currently a difficult task. The absence of affordable sensors that provide continuous transpiration measurements motivates the use of estimators. In the case of tomato crops, the availability of estimators allows the design of automatic fertirrigation (irrigation + fertilization) schemes in greenhouses, minimizing the dispensed water while fulfilling crop needs. This paper shows how system identification techniques can be applied to obtain nonlinear virtual sensors for estimating transpiration. The greenhouse used for this study is equipped with a microlysimeter, which allows one to continuously sample transpiration values. While the microlysimeter is an advantageous piece of equipment for research, it is also expensive and requires maintenance. This paper presents the design and development of a virtual sensor to model crop transpiration, hence avoiding the use of this kind of expensive sensor. The resulting virtual sensor is obtained by dynamical system identification techniques based on regressors taken from variables typically found in a greenhouse, such as global radiation and vapor pressure deficit. The virtual sensor is thus based on empirical data. In this paper, some effort has been made to eliminate problems associated with grey-box models: the advance phenomenon and overestimation. The results are tested with real data and compared with other approaches. Better results are obtained with the use of nonlinear black-box virtual sensors. This sensor is based on global radiation and vapor pressure deficit (VPD) measurements. Predictive results for the three models are presented for comparative purposes.
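A minimal sketch of a black-box virtual transpiration sensor fitted to the two regressors named above. The data and coefficients are synthetic placeholders, and the paper's models are nonlinear and dynamic (regressor-based), not this static linear fit; the sketch only illustrates identification from empirical data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic greenhouse data (illustrative only): global radiation Rg [W/m2]
# and vapour pressure deficit VPD [kPa] driving transpiration E.
n = 500
Rg = rng.uniform(0, 800, n)
VPD = rng.uniform(0.2, 2.5, n)
E = 0.004 * Rg + 0.9 * VPD + rng.normal(0, 0.05, n)  # hypothetical ground truth

# Fit a linear black-box virtual sensor by least squares on the regressors.
X = np.column_stack([Rg, VPD, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, E, rcond=None)

E_hat = X @ coef                      # virtual sensor estimate
rmse = np.sqrt(np.mean((E_hat - E) ** 2))
print(round(rmse, 3))                 # close to the injected noise level
```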
Ranky, Richard G; Sivak, Mark L; Lewis, Jeffrey A; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos
2014-06-05
Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges with implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirement for efficacious cycling; specifically recruitment of both extremities and exercising at a high intensity. In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially-available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition processes. The complete bicycle system includes: a) handle bars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off the shelf electronics to monitor heart rate and d) customized software for rehabilitation. Bench testing for the handle and pedal systems is presented for calibration of the sensors detecting force and angle. The modular mechatronic kit for exercise bicycles was tested in bench testing and human tests. Bench tests performed on the sensorized handle bars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the rider's lower extremities. 
The VRACK system, a virtual reality mechatronic bicycle rehabilitation modular system, was designed to convert most bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system successfully demonstrated that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders.
NASA Technical Reports Server (NTRS)
Doggett, William; Vazquez, Sixto
2000-01-01
A visualization system is being developed out of the need to monitor, interpret, and make decisions based on the information from several thousand sensors during experimental testing, to facilitate development and validation of structural health monitoring algorithms. As an added benefit, the system will enable complete real-time sensor assessment of complex test specimens. Complex structural specimens are routinely tested that have hundreds or thousands of sensors. During a test, it is impossible for a single researcher to effectively monitor all the sensors, and consequently interesting phenomena occur that are not recognized until post-test analysis. The ability to detect and alert the researcher to these unexpected phenomena as the test progresses will significantly enhance the understanding and utilization of complex test articles. Utilization is increased by the ability to halt a test when the health monitoring algorithm response is not satisfactory or when an unexpected phenomenon occurs, enabling focused investigation, potentially through the installation of additional sensors. Often, if the test continues, structural changes make it impossible to reproduce the conditions that exhibited the phenomena. The prohibitive time and costs associated with fabrication, instrumentation, and subsequent testing of additional test articles generally make it impossible to further investigate the phenomena. A scalable architecture is described to address the complex computational demands of structural health monitoring algorithm development and laboratory experimental test monitoring. The researcher monitors the test using a photographic-quality 3D graphical model with actual sensor locations identified. In addition, researchers can quickly activate plots displaying time or load versus selected sensor response, along with the expected values and predefined limits. The architecture has several key features.
First, distributed dissimilar computers may be seamlessly integrated into the information flow. Second, virtual sensors may be defined that are complex functions of existing sensors or other virtual sensors. Virtual sensors represent a calculated value not directly measured by a particular physical instrument. They can be used, for example, to represent the maximum difference in a range of sensors or the calculated buckling load based on the current strains. Third, the architecture enables autonomous response to preconceived events, whereby the system can be configured to suspend or abort a test if a failure is detected in the load introduction system. Fourth, the architecture is designed to allow cooperative monitoring and control of the test progression from multiple stations both remote and local to the test system. To illustrate the architecture, a preliminary implementation is described monitoring the Stitched Composite Wing recently tested at LaRC.
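The virtual-sensor idea above, a calculated value derived from other sensors, can be sketched minimally; the sensor names and readings below are hypothetical.

```python
# Virtual sensors as functions of physical (or other virtual) sensors.
def max_difference(readings):
    """Virtual sensor: maximum spread across a range of strain gauges."""
    return max(readings) - min(readings)

# Hypothetical physical sensor readings (microstrain).
physical = {"strain_01": 410.0, "strain_02": 455.0, "strain_03": 398.0}

virtual = {"strain_spread": max_difference(physical.values())}
print(virtual["strain_spread"])  # 455.0 - 398.0 = 57.0
```

A monitoring system would re-evaluate such functions on every data frame and could trigger a test abort when a virtual value crosses a predefined limit.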
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-02-01
This paper proposes a novel method for the fault diagnosis of complex structures based on an optimized virtual beam-like structure approach. A complex structure can be regarded as a combination of numerous virtual beam-like structures, considering the vibration transmission path from vibration sources to each sensor. The structural 'virtual beam' consists of a sensor chain automatically obtained by an Improved Bacterial Optimization Algorithm (IBOA). This biologically inspired optimization method is proposed for solving the discrete optimization problem associated with the selection of the optimal virtual beam for fault diagnosis. The virtual beam-like-structure approach requires little prior knowledge: it needs no stationary response data, it is not confined to a specific structure design, and it is easy to implement within a sensor network attached to the monitored structure. The proposed fault diagnosis method has been tested on the detection of loosened screws located at varying positions in a real satellite-like model. Compared with empirical methods, the proposed virtual beam-like structure method has proved to be effective and more reliable for fault localization.
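The discrete selection of an optimal sensor chain can be illustrated with exhaustive search over a toy network. The paper's IBOA and its fault-sensitive fitness function are not reproduced here; the fitness below is a made-up placeholder standing in for a measure of fault-localization quality.

```python
from itertools import combinations

# Hypothetical fitness of a sensor chain (a 'virtual beam'); a real one
# would score how well the chain exposes the fault signature. Here it
# simply favors sensors near sensor #3.
def fitness(chain):
    return -sum(abs(s - 3) for s in chain)

sensors = range(8)  # toy sensor network

# Exhaustive search over all 3-sensor chains; IBOA replaces this scan
# when the network is too large to enumerate.
best = max(combinations(sensors, 3), key=fitness)
print(best)  # (2, 3, 4)
```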
Evaluation of Sensor Configurations for Robotic Surgical Instruments
Gómez-de-Gabriel, Jesús M.; Harwin, William
2015-01-01
Designing surgical instruments for robotic-assisted minimally-invasive surgery (RAMIS) is challenging due to constraints on the number and type of sensors imposed by considerations such as space or the need for sterilization. A new method for evaluating the usability of virtual teleoperated surgical instruments based on virtual sensors is presented. This method uses virtual prototyping of the surgical instrument with a dual physical interaction, which allows testing of different sensor configurations in a real environment. Moreover, the proposed approach has been applied to the evaluation of prototypes of a two-finger grasper for lump detection by remote pinching. In this example, the usability of a set of five different sensor configurations, with a different number of force sensors, is evaluated in terms of quantitative and qualitative measures in clinical experiments with 23 volunteers. As a result, the smallest number of force sensors needed in the surgical instrument that ensures the usability of the device can be determined. The details of the experimental setup are also included.
Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis
Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel
2013-01-01
This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, a detector of the active mode is presented for the case in which the commutation sequences of the linear subsystems are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow its output to be estimated. A methodology for testing the observability of discrete-time piecewise linear systems is proposed. An academic example is presented to illustrate the results.
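The discrete-time observability test mentioned above can be sketched per linear subsystem with the standard rank condition on the observability matrix; the system matrices below are hypothetical, and the paper's mode-detection machinery is not reproduced.

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; CA^2; ...; CA^(n-1)] for a discrete-time pair (A, C)."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Hypothetical linear subsystem of a piecewise linear (switched) system:
# position/velocity dynamics with only position measured.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
C = np.array([[1.0, 0.0]])

O = observability_matrix(A, C)
observable = np.linalg.matrix_rank(O) == A.shape[0]
print(observable)  # True: velocity is recoverable from successive outputs
```

Running the same test for each subsystem, under each admissible commutation path, tells which paths permit state estimation by a virtual sensor.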
VERDEX: A virtual environment demonstrator for remote driving applications
NASA Technical Reports Server (NTRS)
Stone, Robert J.
1991-01-01
One of the key areas of the National Advanced Robotics Centre's enabling technologies research program is that of the human system interface, phase 1 of which started in July 1989 and is currently addressing the potential of virtual environments to permit intuitive and natural interactions between a human operator and a remote robotic vehicle. The aim of the first 12 months of this program (to September, 1990) is to develop a virtual human-interface demonstrator for use later as a test bed for human factors experimentation. This presentation will describe the current state of development of the test bed, and will outline some human factors issues and problems for more general discussion. In brief, the virtual telepresence system for remote driving has been designed to take the following form. The human operator will be provided with a helmet-mounted stereo display assembly, facilities for speech recognition and synthesis (using the Marconi Macrospeak system), and a VPL DataGlove Model 2 unit. The vehicle to be used for the purposes of remote driving is a Cybermotion Navmaster K2A system, which will be equipped with a stereo camera and microphone pair, mounted on a motorized high-speed pan-and-tilt head incorporating a closed-loop laser ranging sensor for camera convergence control (currently under contractual development). It will be possible to relay information to and from the vehicle and sensory system via an umbilical or RF link. The aim is to develop an interactive audio-visual display system capable of presenting combined stereo TV pictures and virtual graphics windows, the latter featuring control representations appropriate for vehicle driving and interaction using a graphical 'hand,' slaved to the flex and tracking sensors of the DataGlove and an additional helmet-mounted Polhemus IsoTrack sensor. 
Developments planned for the virtual environment test bed include transfer of operator control between remote driving and remote manipulation, dexterous end effector integration, virtual force and tactile sensing (also the focus of a current ARRL contract, initially employing a 14-pneumatic bladder glove attachment), and sensor-driven world modeling for total virtual environment generation and operator-assistance in remote scene interrogation.
Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots
Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles
2017-01-01
The control of flexible link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make the estimation of the Tool Centre Point (TCP) position a challenging one. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of high computational cost integration methods. This work presents a novel approach based on a virtual sensor which can not only precisely estimate the deformation of the flexible links in control applications (less than 2% error), but also its derivatives (less than 6% error in velocity and 13% error in acceleration) according to simulation results. The validity of the proposed Virtual Sensor is tested in a Delta Robot, where the position of the TCP is estimated based on the Virtual Sensor measurements with less than 0.03% error in comparison with the flexible approach developed in ADAMS Multibody Software.
Virtual DRI dataset development
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Teaney, Brian P.; May, Christopher; Maurer, Tana; Nelson, Michael B.; Pham, Justin R.
2017-05-01
The U.S. Army RDECOM CERDEC NVESD MSD target acquisition models have been used for many years by the military analysis community for sensor design, trade studies, and field performance prediction. This paper analyzes the results of perception tests performed to compare a field DRI (detection, recognition, and identification) test performed in 2009 with current Soldier performance viewing the same imagery in a laboratory environment and viewing simulated imagery of the same data set. The purpose of the experiment is to build a robust data set for use in the virtual prototyping of infrared sensors. This data set will provide a strong foundation relating model predictions, field DRI results, and simulated imagery.
Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki
2016-01-01
Many researchers are devoting attention to the so-called “Internet of Things” (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user’s demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology.
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-04-01
The work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each modal virtual sensing error was determined by the modal observability levels of both the structural sensing and the target acoustic virtual sensing; and (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.
Virtual sensor models for real-time applications
NASA Astrophysics Data System (ADS)
Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin
2016-09-01
Increased complexity and severity of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions the perception of the sensors is crucial; therefore, the sensors also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space-based method is capable of modeling various types of behavior. The modeling of the position estimation of an automotive radar system, including autocorrelations, is presented. To achieve real-time capability, an efficient implementation is presented.
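One simple way to render autocorrelated position errors of the kind described above is an AR(1) error process; this is a sketch only, with illustrative parameters rather than values fitted to recorded radar data, and it stands in for the paper's richer state-space model.

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) sensor-error model: successive radar position errors are correlated.
# phi (correlation) and sigma (innovation std) are hypothetical; in practice
# they would be identified from recorded sensor data.
phi, sigma = 0.9, 0.1

def simulate_error(n):
    e = np.zeros(n)
    for k in range(1, n):
        e[k] = phi * e[k - 1] + rng.normal(0, sigma)
    return e

truth = np.linspace(0, 50, 2000)          # true target position [m]
measured = truth + simulate_error(2000)   # virtual radar output

err = measured - truth
lag1 = np.corrcoef(err[:-1], err[1:])[0, 1]
print(round(lag1, 2))  # close to phi, i.e. strongly autocorrelated
```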
Virtual sensors for robust on-line monitoring (OLM) and Diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep
Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration, thus allowing recalibration to be safely deferred to a later time. The virtual sensor model uses a Gaussian process model to process input data from redundant and other nearby sensors. Predicted data include uncertainty bounds covering spatial association uncertainty as well as measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model has predicted correct sensor measurements and the associated error corresponding to a faulty sensor.
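A minimal Gaussian-process virtual sensor in the spirit described above, predicting a target sensor from one correlated nearby sensor on synthetic data. The flow-loop model, its spatial-association terms, and its hyperparameter fitting are not reproduced; the kernel and noise level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def rbf(x1, x2, length=1.0):
    """Squared-exponential covariance between two sets of scalar inputs."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

# Hypothetical training data: readings of a nearby sensor (x) paired with
# the target sensor (y); the sine relation is a synthetic stand-in.
x_train = rng.uniform(0, 10, 30)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 30)
x_test = np.array([2.0, 5.0])  # nearby-sensor readings while target is faulty

noise = 0.05 ** 2
K = rbf(x_train, x_train) + noise * np.eye(30)
k_star = rbf(x_test, x_train)

alpha = np.linalg.solve(K, y_train)
mean = k_star @ alpha                          # virtual sensor prediction
var = rbf(x_test, x_test).diagonal() - np.einsum(
    "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))  # uncertainty bounds

print(np.round(mean, 2), bool(np.all(var > 0)))
```

The posterior variance `var` gives exactly the kind of uncertainty bound the abstract describes alongside the predicted measurement.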
Wavelets and Elman Neural Networks for monitoring environmental variables
NASA Astrophysics Data System (ADS)
Ciarlini, Patrizia; Maniscalco, Umberto
2008-11-01
An application in cultural heritage is introduced. Wavelet decomposition and Elman Neural Networks acting as virtual sensors are jointly used to simulate physical and chemical measurements at specific locations of a monument. Virtual sensors, suitably trained and tested, can substitute for real sensors in monitoring the quality of the monument surface, whereas the real ones would have to be installed for a long time and at high cost. Applying the wavelet decomposition to the environmental data series allows the underlying low-frequency temporal structure to be treated separately. Consequently, suitable Elman Neural Networks can be trained separately on the high- and low-frequency components, thus improving network convergence in learning time and measurement accuracy in working time.
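The low/high-frequency separation step can be sketched with one level of a Haar decomposition, used here as a simple stand-in since the abstract does not specify the wavelet family; each band would then feed its own Elman network.

```python
import numpy as np

def haar_step(x):
    """One level of a Haar wavelet decomposition (x must have even length)."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-frequency trend
    detail = (even - odd) / np.sqrt(2)   # high-frequency fluctuations
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the series from the two bands."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

# A synthetic 'environmental' series: slow trend plus fast fluctuation.
t = np.arange(64)
series = 0.1 * t + np.sin(2 * np.pi * t / 4)

approx, detail = haar_step(series)
reconstructed = haar_inverse(approx, detail)
print(np.allclose(reconstructed, series))  # True: lossless split
```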
Halim, Dunant; Cheng, Li; Su, Zhongqing
2011-03-01
The work aimed to develop a robust virtual sensing design methodology for sensing and active control applications in vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic models of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the component dominated by cavity-controlled modes, using a structural sensor. With such a virtual sensing technique, effective active noise control performance was obtained even for the worst-case dynamics.
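A single scalar Kalman filter, the building block of the proposed bank of sub-filters, can be sketched as follows. The convex minimax combination itself is not reproduced, and all system and noise parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical scalar system: state x (stand-in for an acoustic quantity)
# observed through a noisy structural sensor y.
a, c = 0.95, 1.0    # state transition and measurement gains
q, r = 0.01, 0.25   # process and measurement noise variances

x_true, x_hat, p = 0.0, 0.0, 1.0
errors = []
for _ in range(500):
    x_true = a * x_true + rng.normal(0, np.sqrt(q))
    y = c * x_true + rng.normal(0, np.sqrt(r))    # structural sensor output
    # Predict
    x_hat, p = a * x_hat, a * a * p + q
    # Update with the measurement y
    k = p * c / (c * c * p + r)
    x_hat, p = x_hat + k * (y - c * x_hat), (1 - k * c) * p
    errors.append((x_hat - x_true) ** 2)

rmse = np.sqrt(np.mean(errors))
print(rmse < np.sqrt(r))  # filtered estimate beats the raw sensor
```

The paper's design runs several such filters, each tuned to a perturbed plant model, and blends their outputs with convex weights chosen by minimax optimization.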
A Survey on Virtualization of Wireless Sensor Networks
Islam, Md. Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam
2012-01-01
Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications such as in smart home automation, health-care and industrial automation. In these applications multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over the specific WSN domains, communication barriers, conflicting goals and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility, cost effective solutions, promote diversity, ensure security and increase manageability. This paper surveys the novel approach of using the large scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges and opportunities of research in the field of sensor network virtualization as well as to illustrate a current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization.
Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.
Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung
2018-03-05
In order to properly control rehabilitation robotic devices, measuring the interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires the use of accurate sensors, which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has similar performance to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can be adopted to replace accurate but normally high-priced sensors, which are fundamental components for advanced control of rehabilitation robotic devices.
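The idea behind such a virtual force sensor can be sketched with a deliberately simplified spring-damper model standing in for the UHP's full mathematical model; the stiffness, damping, and trajectory values below are invented for illustration and are not the authors' implementation:

```python
# Sketch of a model-based virtual force sensor: the interaction force at the
# contact point is reconstructed from low-cost position readings through a
# simplified spring-damper model of the device. The stiffness (k), damping
# (b), and sampled trajectory are invented; the UHP paper uses the robot's
# full mathematical model instead.

def virtual_force(positions, dt, k=120.0, b=4.0, x_rest=0.0):
    """Estimate contact force from a sampled position trajectory."""
    forces = []
    for i in range(1, len(positions)):
        x = positions[i]
        v = (positions[i] - positions[i - 1]) / dt  # finite-difference velocity
        forces.append(k * (x - x_rest) + b * v)     # spring + damper terms
    return forces

traj = [0.00, 0.01, 0.025, 0.04, 0.05]  # metres, sampled at 100 Hz
print(virtual_force(traj, dt=0.01))
```

The finite-difference velocity makes the estimate sensitive to position-sensor noise, which is one reason the paper validates the virtual sensor against direct force measurement.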
Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos
2016-01-01
This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform’s mathematical model, taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument’s working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and variety of reference distances can be created without the need to use a physical gauge, thereby optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform.
NASA Astrophysics Data System (ADS)
Mahajan, Ajay; Chitikeshi, Sanjeevi; Utterbach, Lucas; Bandhil, Pavan; Figueroa, Fernando
2006-05-01
This paper describes the application of intelligent sensors in Integrated Systems Health Monitoring (ISHM) as applied to a rocket test stand. The development of intelligent sensors is attempted as an integrated system approach, i.e., one treats the sensors as a complete system with its own physical transducer, A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements associated with the rocket test stands. These smart elements can be sensors, actuators or other devices. Though the immediate application is the monitoring of the rocket test stands, the technology should be generally applicable to the ISHM vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
Speller, Nicholas C; Siraj, Noureen; Regmi, Bishnu P; Marzoughi, Hassan; Neal, Courtney; Warner, Isiah M
2015-01-01
Herein, we demonstrate an alternative strategy for creating QCM-based sensor arrays by use of a single sensor to provide multiple responses per analyte. The sensor, which simulates a virtual sensor array (VSA), was developed by depositing a thin film of ionic liquid, either 1-octyl-3-methylimidazolium bromide ([OMIm][Br]) or 1-octyl-3-methylimidazolium thiocyanate ([OMIm][SCN]), onto the surface of a QCM-D transducer. The sensor was exposed to 18 different organic vapors (alcohols, hydrocarbons, chlorohydrocarbons, nitriles) belonging to the same or different homologous series. The resulting frequency shifts (Δf) were measured at multiple harmonics and evaluated using principal component analysis (PCA) and discriminant analysis (DA), which revealed that analytes can be classified with extremely high accuracy. In almost all cases, the accuracy for identification of a member of the same class, that is, intraclass discrimination, was 100% as determined by use of quadratic discriminant analysis (QDA). Impressively, some VSAs allowed classification of all 18 analytes tested with nearly 100% accuracy. Such results underscore the importance of utilizing lesser-exploited properties that influence signal transduction. Overall, these results demonstrate the excellent potential of the virtual sensor array strategy for detection and discrimination of vapor-phase analytes utilizing the QCM. To the best of our knowledge, this is the first report on QCM VSAs, as well as an experimental sensor array, that is based primarily on viscoelasticity, film thickness, and harmonics.
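The classification step can be illustrated with a minimal sketch in which the frequency shifts measured at several harmonics form the feature vector for a single coated crystal; nearest-centroid matching stands in here for the paper's PCA and quadratic discriminant analysis, and all shift values are invented:

```python
# Sketch: a single coated QCM crystal acts as a "virtual sensor array" by
# contributing one frequency-shift value per harmonic; the resulting vector
# is matched against per-analyte reference responses. Nearest-centroid
# classification is a simplified stand-in for the PCA + QDA pipeline in the
# paper, and every number below is invented for illustration.

def centroid(rows):
    return [sum(c) / len(c) for c in zip(*rows)]

def classify(library, shifts):
    """library: {analyte: [[df_harm1, df_harm3, df_harm5], ...]}"""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cents = {name: centroid(rows) for name, rows in library.items()}
    return min(cents, key=lambda name: dist2(cents[name], shifts))

library = {
    "ethanol":    [[-42.0, -118.0, -190.0], [-40.5, -115.0, -186.0]],
    "hexane":     [[-12.0, -30.0, -52.0],   [-11.0, -29.0, -50.0]],
    "chloroform": [[-65.0, -180.0, -300.0], [-63.0, -176.0, -296.0]],
}
print(classify(library, [-41.0, -116.0, -188.0]))  # -> ethanol
```

The essential point carried over from the abstract is that the harmonics supply multiple, partially independent responses from one physical device, which is what makes a single sensor behave like an array.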
Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.
Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian
2015-10-05
The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability in fringe spacing and in achieving PS without the need for mechanically moving parts. However, a small amount of detector or scatter noise could affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. Rewrapping the Fourier-reconstructed wavefronts resulted in phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method over the Fourier unwrapping method is shown in the presence of noise. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.
Augmented reality visualization of deformable tubular structures for surgical simulation.
Ferrari, Vincenzo; Viglialoro, Rosanna Maria; Nicoli, Paola; Cutolo, Fabrizio; Condino, Sara; Carbone, Marina; Siesto, Mentore; Ferrari, Mauro
2016-06-01
Surgical simulation based on augmented reality (AR), mixing the benefits of physical and virtual simulation, represents a step forward in surgical training. However, available systems are unable to update the virtual anatomy following deformations impressed on actual anatomy. A proof-of-concept solution is described providing AR visualization of hidden deformable tubular structures using nitinol tubes sensorized with electromagnetic sensors. This system was tested in vitro on a setup comprised of sensorized cystic, left and right hepatic, and proper hepatic arteries. In the trial session, the surgeon deformed the tubular structures with surgical forceps in 10 positions. The mean, standard deviation, and maximum misalignment between virtual and real arteries were 0.35, 0.22, and 0.99 mm, respectively. The alignment accuracy obtained demonstrates the feasibility of the approach, which can be adopted in advanced AR simulations, in particular as an aid to the identification and isolation of tubular structures. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.
2016-05-01
The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.
Virtualization of event sources in wireless sensor networks for the internet of things.
Lucas Martínez, Néstor; Martínez, José-Fernán; Hernández Díaz, Vicente
2014-12-01
Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways that become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur, instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model.
Physical environment virtualization for human activities recognition
NASA Astrophysics Data System (ADS)
Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen
2015-05-01
Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.
A lightweight sensor network management system design
Yuan, F.; Song, W.-Z.; Peterson, N.; Peng, Y.; Wang, L.; Shirazi, B.; LaHusen, R.
2008-01-01
In this paper, we propose a lightweight and transparent management framework for TinyOS sensor networks, called L-SNMS, which minimizes the overhead of management functions, including memory usage overhead, network traffic overhead, and integration overhead. We accomplish this by making L-SNMS virtually transparent to other applications, hence requiring minimal integration. The proposed L-SNMS framework has been successfully tested on various sensor node platforms, including TelosB, MICAz and IMote2. © 2008 IEEE.
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
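A one-dimensional sketch conveys how a simplicial piecewise-affine estimator of this kind is evaluated: the measured input range is gridded, estimator outputs are stored at the vertices, and a query is answered by affine interpolation within the containing simplex (an interval in 1-D). The grid and vertex values below are invented, and the real design problem of fitting the vertex values to input/output data is omitted:

```python
# Sketch of evaluating a simplicial piecewise-affine (PWA) map, the function
# class used by the direct virtual sensor. In 1-D the simplices are just
# intervals, so evaluation reduces to locating the interval containing the
# query and interpolating between its two vertex values. Grid and vertex
# values are invented for illustration.

def pwa_eval(grid, values, x):
    """Piecewise-affine interpolation of (grid[i], values[i]) pairs."""
    if x <= grid[0]:
        return values[0]
    if x >= grid[-1]:
        return values[-1]
    for i in range(len(grid) - 1):
        if grid[i] <= x <= grid[i + 1]:
            t = (x - grid[i]) / (grid[i + 1] - grid[i])
            return (1 - t) * values[i] + t * values[i + 1]

grid = [0.0, 1.0, 2.0, 3.0]    # partition of the measured input range
values = [0.0, 0.8, 1.0, 0.5]  # estimator outputs stored at the vertices

print(pwa_eval(grid, values, 1.5))  # -> 0.9
```

Because evaluation is only a simplex lookup plus one affine combination, functions of this form map naturally onto the FPGA implementation the paper describes; the "curse of dimensionality" the authors mitigate refers to the vertex count growing with input dimension.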
A Virtual Sensor for Online Fault Detection of Multitooth-Tools
Bustillo, Andres; Correa, Maritza; Reñones, Anibal
2011-01-01
The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with a k-fold cross validation, averaged 0.957 for true positives and 0.986 for true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases.
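A minimal Gaussian naive Bayes classifier over the paper's two features (feed-drive power consumption and machining time per workpiece) illustrates the kind of Bayesian fault detector described; the training values below are invented, and this simplified model is not the authors' implementation:

```python
import math

# Minimal Gaussian naive Bayes fault detector -- a sketch of the kind of
# Bayesian classifier the paper describes, using its two features
# (feed-drive power consumption, machining time per workpiece).
# All data values are invented for illustration.

def fit(samples):
    """samples: {label: [(power, time), ...]} -> per-class feature stats."""
    model = {}
    for label, rows in samples.items():
        stats = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((x - mu) ** 2 for x in col) / len(col) or 1e-9
            stats.append((mu, var))
        model[label] = (len(rows), stats)
    return model

def predict(model, x):
    """Pick the class with the highest log-posterior for feature vector x."""
    total = sum(n for n, _ in model.values())
    best, best_lp = None, float("-inf")
    for label, (n, stats) in model.items():
        lp = math.log(n / total)  # class prior
        for xi, (mu, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

training = {
    "healthy": [(4.1, 60.2), (4.0, 59.8), (4.2, 60.5)],
    "worn":    [(5.3, 66.0), (5.5, 67.1), (5.2, 65.4)],
}
m = fit(training)
print(predict(m, (5.4, 66.5)))  # high power, slow cycle -> "worn"
```

In practice the reported 0.957/0.986 figures come from k-fold cross-validation over real production data, which this toy example does not attempt to reproduce.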
An Integrated FDD System for HVAC&R Based on Virtual Sensors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun
According to the U.S. Department of Energy, space heating, ventilation, and air conditioning systems account for 40% of residential primary energy use and for 30% of primary energy use in commercial buildings. A study released by the Energy Information Administration indicated that packaged air conditioners are used in 46% of all commercial buildings in the U.S., and that the annual cooling energy consumption related to packaged air conditioners is about 160 trillion Btu. Therefore, an automated FDD system that can automatically detect and diagnose faults and evaluate fault impacts has the potential for improving energy efficiency along with reducing service costs and comfort complaints. The primary bottleneck to diagnostic implementation in the field is the high initial cost of additional sensors. To overcome this limitation, virtual sensors with low-cost measurements and simple models are developed to estimate quantities that would be expensive or difficult to measure directly. The use of virtual sensors can reduce costs compared to the use of real sensors and provide additional information for economic assessment. The virtual sensor can be embedded in a permanently installed control or monitoring system, and continuous monitoring potentially leads to early detection of faults. The virtual sensors of individual equipment components can be integrated to estimate overall diagnostic information using the output of each virtual sensor.
ROBUST ONLINE MONITORING FOR CALIBRATION ASSESSMENT OF TRANSMITTERS AND INSTRUMENTATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Tipireddy, Ramakrishna; Lerchen, Megan E.
Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. Specifically, the next generation of OLM technology is expected to include newly developed advanced algorithms that improve monitoring of sensor/system performance and enable the use of plant data to derive information that currently cannot be measured. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of the sensors used to monitor key parameters. In this paper, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program for the development of OLM algorithms that use sensor outputs and, in combination with other available information, (1) determine whether one or more sensors are out of calibration or failing and (2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions:
• Signal validation – fault detection and selection of acceptance criteria
• Virtual sensing – signal value prediction and acceptance criteria
• Response-time assessment – fault detection and acceptance criteria selection
A Gaussian process (GP)-based uncertainty quantification (UQ) method previously developed for UQ in OLM was adapted for use in sensor-fault detection and virtual sensing. For signal validation, the various components of the OLM residual (which is computed using an auto-associative kernel regression (AAKR) model) were explicitly defined and modeled using a GP. Evaluation was conducted using flow-loop data from multiple sources. Results using experimental data from laboratory-scale flow loops indicate that the approach, while capable of detecting sensor drift, may be incapable of discriminating between sensor drift and model inadequacy. This may be due to a simplification applied in the initial modeling, where the sensor degradation is assumed to be stationary. In the case of virtual sensors, the GP model was used in a predictive mode to estimate the correct sensor reading for sensors that may have failed. Results have indicated the viability of using this approach for virtual sensing. However, the GP model has proven to be computationally expensive, and so alternative algorithms for virtual sensing are being evaluated. Finally, automated approaches to performing noise analysis for extracting sensor response time were developed. Evaluation of this technique using laboratory-scale data indicates that it compares well with the manual techniques previously used for noise analysis. Moreover, the automated and manual approaches for noise analysis also compare well with the current “gold standard”, hydraulic ramp testing, for response-time monitoring. Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.
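The AAKR residual mentioned above can be sketched as follows: the current multi-sensor reading is compared with a kernel-weighted combination of fault-free historical readings, and a large residual on one channel flags a possibly drifting sensor. The memory data and kernel bandwidth below are invented for illustration:

```python
import math

# Sketch of an auto-associative kernel regression (AAKR) residual of the
# kind used for OLM signal validation: the query vector is reconstructed as
# a Gaussian-kernel-weighted average of fault-free "memory" vectors, and
# the per-channel residual (query - estimate) indicates which sensor is
# inconsistent with its correlated peers. Data and bandwidth h are invented.

def aakr_estimate(memory, query, h=1.0):
    weights = []
    for row in memory:
        d2 = sum((q - m) ** 2 for q, m in zip(query, row))
        weights.append(math.exp(-d2 / (2 * h * h)))
    s = sum(weights) or 1e-12
    return [sum(w * row[i] for w, row in zip(weights, memory)) / s
            for i in range(len(query))]

# Fault-free memory: two correlated flow sensors.
memory = [[10.0, 10.1], [12.0, 12.2], [14.0, 14.1], [16.0, 16.2]]

query = [12.0, 13.5]  # sensor 2 reads high relative to sensor 1
estimate = aakr_estimate(memory, query)
residual = [q - e for q, e in zip(query, estimate)]
print(residual)  # channel 2 residual dominates -> suspect sensor 2 drift
```

Replacing the kernel average with a GP posterior mean, as the paper does, additionally yields predictive uncertainty for setting acceptance criteria, at higher computational cost.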
Ultrasonic imaging of material flaws exploiting multipath information
NASA Astrophysics Data System (ADS)
Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.
2011-05-01
In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.
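The virtual-sensor construction used in multipath imaging can be illustrated with the standard mirror-image argument: a signal that bounces off a planar boundary travels the same total distance as a straight line to the receiver's reflection across that boundary, so the bounce behaves like a direct observation by a virtual sensor at the mirrored position. The coordinates below are invented for illustration:

```python
# Sketch of the mirror-image construction behind "virtual sensors" in
# multipath ultrasonic imaging. For a planar reflector at y = 0, the
# single-bounce path from a flaw to a physical receiver has the same length
# as the straight-line path to the receiver's reflection across y = 0,
# giving an extra observation from a different aspect angle.

def reflect_across_y0(point):
    x, y = point
    return (x, -y)

def path_length(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

flaw = (0.0, 3.0)
receiver = (4.0, 3.0)
virtual_receiver = reflect_across_y0(receiver)  # acts as an extra sensor

direct = path_length(flaw, receiver)           # direct-path length
bounced = path_length(flaw, virtual_receiver)  # single-bounce path length
print(direct, bounced)
```

Fusing the direct and bounced returns is what extends the effective aperture beyond the physical array, which is the aperture-extension claim in the abstract.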
Automatic 3D virtual scenes modeling for multisensors simulation
NASA Astrophysics Data System (ADS)
Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu
2006-05-01
SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate and make interoperable 3D mock-ups in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept to physical multi-sensor research simulation, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that automatically creates 3D databases directly usable by the physical sensor simulation renderers of CHORALE. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensors' physical extensions, fitted to the ray-tracing rendering of CHORALE for the infrared, electromagnetic and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is both more efficient (rapid database generation) and more reliable (the database can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for the automatic generation of the inner parts of buildings.
Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars
Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho
2015-01-01
In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor.
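The dynamic time-warping matching step mentioned in the abstract can be sketched on one-dimensional traces: two renditions of the same gesture performed at different speeds align with low cost, while a different gesture does not. The traces below are invented stand-ins for real inertial-sensor sequences:

```python
# Sketch of dynamic time-warping (DTW) for gesture matching: the classic
# O(n*m) dynamic program aligns two sequences of different lengths and
# returns the minimum cumulative alignment cost. The 1-D traces here are
# invented stand-ins for real multi-axis inertial-sensor streams.

def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

template = [0.0, 0.5, 1.0, 0.5, 0.0]                   # stored gesture
same_slow = [0.0, 0.2, 0.5, 0.8, 1.0, 0.6, 0.2, 0.0]   # same shape, slower
other = [1.0, 0.8, 0.2, 0.1, 1.0]                      # different gesture

print(dtw(template, same_slow) < dtw(template, other))  # True
```

Classifying a segmented gesture then amounts to picking the stored template with the lowest DTW cost, which is why the paper pairs DTW with automatic real-time gesture segmentation.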
Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-02-21
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578
NASA Astrophysics Data System (ADS)
Heavner, M. J.; Fatland, D. R.; Moeller, H.; Hood, E.; Schultz, M.
2007-12-01
The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). From power systems and instrumentation through data management, visualization, education, and public outreach, SEAMONSTER is designed with modularity in mind. We are utilizing virtual earth infrastructures to enhance both sensor web management and data access. We will describe how the design philosophy of using open, modular components contributes to the exploration of different virtual earth environments. We will also describe the sensor web physical implementation and how the many components have corresponding virtual earth representations. This presentation will provide an example of the integration of sensor webs into a virtual earth. We suggest that IPY sensor networks and sensor webs may integrate into virtual earth systems and provide an IPY legacy easily accessible to both scientists and the public. SEAMONSTER utilizes geobrowsers for education and public outreach, sensor web management, data dissemination, and enabling collaboration. We generate near-real-time auto-updating geobrowser files of the data. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers have made this project possible.
A Survey of Middleware for Sensor and Network Virtualization
Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd.
2014-01-01
Wireless Sensor Networks (WSNs) are leading to a new paradigm of the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources, serving diverse applications. WSN virtualization by middleware is an emerging concept that enables aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, enhancing application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as efficient decoupling of networks, devices and software. This paper presents an overview of previous and current middleware designs for WSN virtualization: the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper also presents a proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization. PMID:25615737
Experimental Characterization of Microfabricated Virtual Impactor Efficiency
The Air-Microfluidics Group is developing a microelectromechanical systems-based direct reading particulate matter (PM) mass sensor. The sensor consists of two main components: a microfabricated virtual impactor (VI) and a PM mass sensor. The VI leverages particle inertia to sepa...
Virtual IED sensor at an rf-biased electrode in low-pressure plasma
NASA Astrophysics Data System (ADS)
Bogdanova, Maria; Lopaev, Dmitry; Zyryanov, Sergey; Rakhimov, Alexander
2016-09-01
The majority of present-day technologies resort to ion-assisted processes in rf low-pressure plasma. In order to control the process precisely, the energy distribution of ions (IED) bombarding the sample placed on the rf-biased electrode should be tracked. In this work the "virtual IED sensor" concept is considered. The idea is to obtain the IED "virtually" from the plasma sheath model including a set of externally measurable discharge parameters. The applicability of the "virtual IED sensor" concept was studied for dual-frequency asymmetric ICP and CCP discharges. The IED measurements were carried out in Ar and H2 plasmas in a wide range of conditions. The calculated IEDs were compared to those measured by the Retarded Field Energy Analyzer. To calibrate the "virtual IED sensor," the ion flux was measured by the pulsed self-bias method and then compared to plasma density measurements by Langmuir and hairpin probes. It is shown that if there is a reliable calibration procedure, the "virtual IED sensor" can be successfully realized on the basis of analytical and semianalytical plasma sheath models including measurable discharge parameters. This research is supported by Russian Science Foundation (RSF) Grant 14-12-01012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tricaud, Christophe; Ernst, Timothy C.; Zigan, James A.
The disclosure provides a waste heat recovery system with a system and method for calculation of the net output torque from the waste heat recovery system. The calculation uses inputs from existing pressure and speed sensors to create a virtual pump torque sensor and a virtual expander torque sensor, and uses these sensors to provide an accurate net torque output from the WHR system.
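As an illustration only: for a positive-displacement pump, shaft torque can be estimated from the measured pressure rise via the textbook relation T = ΔP·V_d/(2π·η). The patent's actual model built from the existing pressure and speed sensors is not spelled out here, so the displacement and efficiency parameters below are hypothetical.

```python
import math

def virtual_pump_torque(delta_p_pa, displacement_m3_per_rev, mech_efficiency=0.9):
    """Estimated pump shaft torque in N·m from a measured pressure
    rise (Pa), using T = dP * V_d / (2*pi * eta_mech)."""
    return delta_p_pa * displacement_m3_per_rev / (2 * math.pi * mech_efficiency)

def virtual_net_torque(expander_torque, pump_torque):
    """Net WHR output torque: expander contribution minus pump draw."""
    return expander_torque - pump_torque
```

The same pattern, an algebraic model over readings from sensors already on the machine, is what makes these "virtual" torque sensors cheap to add.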
NASA Technical Reports Server (NTRS)
Matthews, Bryan L.; Srivastava, Ashok N.
2010-01-01
Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
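The ensemble idea, many models trained on resampled data with the spread of their predictions serving as the uncertainty estimate, can be sketched as below. The flight system used a more sophisticated learner; bootstrap-resampled linear least squares is only a stand-in.

```python
import numpy as np

def fit_ensemble(X, y, n_models=20, rng=None):
    """Train an ensemble of affine models on bootstrap resamples."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xb = np.column_stack([X, np.ones(len(y))])       # add a bias column
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), size=len(y))   # bootstrap draw
        w, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        models.append(w)
    return models

def predict_with_uncertainty(models, X):
    """Ensemble mean = virtual-sensor estimate; standard deviation
    across members = uncertainty in the estimate."""
    Xb = np.column_stack([X, np.ones(len(X))])
    preds = np.array([Xb @ w for w in models])
    return preds.mean(axis=0), preds.std(axis=0)
```

A fault flag can then be raised when the measured pressure departs from the ensemble mean by more than a few multiples of the reported standard deviation.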
Head-mounted active noise control system with virtual sensing technique
NASA Astrophysics Data System (ADS)
Miyazaki, Nobuhiro; Kajikawa, Yoshinobu
2015-03-01
In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-07-01
This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
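The adaptive-threshold detection step can be illustrated with a moving mean-plus-k·sigma test on a residual signal from a sensor chain. The window length and multiplier k are assumptions; the paper's specific statistical tests and virtual-beam construction are not reproduced here.

```python
import numpy as np

def adaptive_threshold_flags(residuals, window=50, k=3.0):
    """Flag samples whose residual exceeds mean + k*std of the
    preceding window, so the threshold adapts to local conditions
    instead of relying on prior knowledge of normal operation."""
    residuals = np.asarray(residuals, dtype=float)
    flags = np.zeros(len(residuals), dtype=bool)
    for i in range(window, len(residuals)):
        ref = residuals[i - window:i]
        flags[i] = residuals[i] > ref.mean() + k * ref.std()
    return flags
```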
Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang
2017-12-12
Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment, comparing to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868
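The transmissibility function used as prior knowledge relates the spectra of two response measurements. A minimal Welch-style estimate (the segment length, window, and averaging choices are assumptions, not the paper's settings) might look like:

```python
import numpy as np

def transmissibility(x_ref, x_target, fs, nperseg=256):
    """Estimate T(f) = S_xy(f) / S_xx(f) between a reference response
    x_ref and a target response x_target by averaging windowed
    periodograms over non-overlapping segments."""
    win = np.hanning(nperseg)
    Sxx = np.zeros(nperseg // 2 + 1)
    Sxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for i in range(len(x_ref) // nperseg):
        a = np.fft.rfft(win * x_ref[i*nperseg:(i+1)*nperseg])
        b = np.fft.rfft(win * x_target[i*nperseg:(i+1)*nperseg])
        Sxx += (a.conj() * a).real   # auto-spectrum of the reference
        Sxy += a.conj() * b          # cross-spectrum reference -> target
    return Sxy / Sxx, np.fft.rfftfreq(nperseg, 1.0 / fs)
```

A response at an unmeasured location can then be approximated in the frequency domain by multiplying a measured response spectrum by the corresponding transmissibility.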
Evaluation of glucose controllers in virtual environment: methodology and sample application.
Chassin, Ludovic J; Wilinska, Malgorzata E; Hovorka, Roman
2004-11-01
Adaptive systems to deliver medical treatment in humans are safety-critical systems and require particular care in both the testing and the evaluation phase, which are time-consuming, costly, and confounded by ethical issues. The objective of the present work is to develop a methodology to test glucose controllers of an artificial pancreas in a simulated (virtual) environment. A virtual environment comprising a model of the carbohydrate metabolism and models of the insulin pump and the glucose sensor is employed to simulate individual glucose excursions in subjects with type 1 diabetes. The performance of the control algorithm within the virtual environment is evaluated by considering treatment and operational scenarios. The developed methodology includes two dimensions: testing in relation to specific life style conditions, i.e. fasting, post-prandial, and life style (metabolic) disturbances; and testing in relation to various operating conditions, i.e. expected operating conditions, adverse operating conditions, and system failure. We define safety and efficacy criteria and describe the measures to be taken prior to clinical testing. The use of the methodology is exemplified by tuning and evaluating a model predictive glucose controller being developed for a wearable artificial pancreas focused on fasting conditions. Our methodology to test glucose controllers in a virtual environment is instrumental in anticipating the results of real clinical tests for different physiological conditions and for different operating conditions. The thorough testing in the virtual environment reduces costs and speeds up the development process.
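The evaluation loop, a candidate controller driven by a virtual patient, can be sketched with a deliberately toy one-compartment glucose model. The paper uses the far richer Hovorka carbohydrate-metabolism model plus pump and sensor models; every coefficient below is invented for illustration.

```python
def simulate_closed_loop(controller, duration_min=600, dt=5, meal=(180, 40.0)):
    """Run a controller against a toy virtual patient: one glucose pool
    (mmol/L), linear insulin action, and a single meal disturbance."""
    g = 9.0
    trace = []
    for t in range(0, duration_min, dt):
        u = controller(g)                    # insulin rate from sensor reading
        dg = 0.02 * (5.5 - g) - 0.1 * u      # endogenous balance + insulin effect
        if meal and t == meal[0]:
            dg += 0.01 * meal[1]             # carbohydrate appearance
        g = max(1.0, g + dg * dt)
        trace.append(g)
    return trace

def proportional_controller(g, target=5.5, gain=0.5):
    """Minimal stand-in for the model predictive controller under test."""
    return max(0.0, gain * (g - target))
```

Safety and efficacy criteria (e.g. time below a hypoglycemia threshold, post-prandial peak) can then be scored on `trace` across many scenario and failure variants before any clinical testing.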
Human-computer interface glove using flexible piezoelectric sensors
NASA Astrophysics Data System (ADS)
Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min
2017-05-01
In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.
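A common signal model for PVDF strips is that the output voltage is proportional to the bending rate, so a joint-angle trajectory can be recovered by integration. Whether the glove uses exactly this model is not stated in the abstract, and `sensitivity` (volts per degree/second) is a hypothetical calibration constant.

```python
import numpy as np

def angle_from_piezo(voltages, dt, sensitivity):
    """Integrate a PVDF output (assumed proportional to bending rate)
    into a joint-angle trajectory in degrees, by the rectangle rule."""
    rates = np.asarray(voltages, dtype=float) / sensitivity  # deg/s
    return np.cumsum(rates) * dt
```

In practice such open-loop integration drifts, which is one reason the processed angles are validated against a camera recording.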
Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)
2017-01-01
created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools...
Ant-Based Cyber Defense (also known as ABCD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glenn Fink, PNNL
2015-09-29
ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.
Stennis personnel participate in test program
NASA Technical Reports Server (NTRS)
2008-01-01
Fernando Figueroa (left), an aerospace technologist at Stennis, and John Schmatzel (center), a professor on loan from Rowan University in Glassboro, N.J., joined Ray Wang, president of Mobitrum Corp., in Silver Spring, Md., to test a virtual sensor instrument in development. The test was performed as part of NASA's Facilitated Access to the Space Environment for Technology Development and Training program.
A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality
Kim, Mingyu; Jeon, Changyu; Kim, Jinmo
2017-01-01
This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile senses at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then sends tactile senses (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed with the purpose of testing the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison to the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in the virtual reality. PMID:28513545
New virtual sonar and wireless sensor system concepts
NASA Astrophysics Data System (ADS)
Houston, B. H.; Bucaro, J. A.; Romano, A. J.
2004-05-01
Recently, exciting new sensor array concepts have been proposed which, if realized, could revolutionize how we approach surface mounted acoustic sensor systems for underwater vehicles. Two such schemes are so-called "virtual sonar," which is formulated around Helmholtz integral processing, and "wireless" systems, which transfer sensor information through radiated RF signals. The "virtual sonar" concept provides an interesting framework through which to combat the deleterious effects of the structure on surface mounted sensor systems, including structure-borne vibration and variations in structure-backing impedance. The "wireless" concept would eliminate the necessity of a complex wiring or fiber-optic external network while minimizing vehicle penetrations. Such systems, however, would require a number of advances in sensor and RF waveguide technologies. In this presentation, we will discuss those sensor and sensor-related developments which are desired or required in order to make such new sensor system concepts practical, and we will present several underwater applications from the perspective of exploiting these new sonar concepts. [Work supported by ONR.]
VLSI Design of Trusted Virtual Sensors.
Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada
2018-01-25
This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time). PMID:29370141
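For illustration, a PWAR model splits the input domain into a grid of hyper-rectangles, each cell holding affine coefficients. A software sketch of the evaluation (the paper realizes this in programmable hardware, and the cell-lookup details here are assumptions) is:

```python
import numpy as np

def pwar_eval(x, edges, coeffs):
    """Evaluate a PieceWise-Affine hyper-Rectangular model at point x.
    `edges`: one sorted array of break points per input dimension;
    `coeffs`: per-cell affine coefficients [w_1, ..., w_d, b]."""
    # Locate the hyper-rectangle containing x along each dimension.
    cell = tuple(int(np.searchsorted(e, xi, side='right')) - 1
                 for e, xi in zip(edges, x))
    w = coeffs[cell]
    return float(np.dot(w[:-1], x) + w[-1])
```

Because each evaluation is one cell lookup plus one affine form, the model maps naturally onto the small, fast silicon footprint the paper reports.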
Intelligent Sensors: Strategies for an Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
This paper proposes the development of intelligent sensors as an integrated systems approach, i.e. one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, in which the fill factor is known. However, the fill factor is kept an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained; these are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each image and for each camera. PMID:28335459
The Virtual Tablet: Virtual Reality as a Control System
NASA Technical Reports Server (NTRS)
Chronister, Andrew
2016-01-01
In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. 
My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru
2016-01-01
This paper attempts to construct a resilient sensor network model with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors or to data missing due to the satellite operation. In the proposed model, the missing data of a sensor are interpolated from other, associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues a diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor outages caused by hardware failures or other technical reasons. PMID:27092508
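The interpolation idea, estimating a failed sensor's reading from an associated sensor's reading, can be sketched with a simple least-squares virtual sensor. The single-predictor linear model below is an illustrative simplification, not the paper's dynamic relational network.

```python
def fit_virtual_sensor(history_x, history_y):
    # Least-squares fit y ~ a*x + b from past co-observations of an
    # associated sensor x and the target sensor y.
    n = len(history_x)
    mx = sum(history_x) / n
    my = sum(history_y) / n
    sxx = sum((x - mx) ** 2 for x in history_x)
    sxy = sum((x - mx) * (y - my) for x, y in zip(history_x, history_y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def interpolate_missing(a, b, x_now):
    # The virtual sensor substitutes the failed sensor's reading using the
    # current reading of the associated sensor.
    return a * x_now + b
```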
Virtual odors to transmit emotions in virtual agents
NASA Astrophysics Data System (ADS)
Delgado-Mata, Carlos; Aylett, Ruth
2003-04-01
In this paper we describe an emotional-behavioral architecture in which the emotion engine sits at a higher layer than the behavior system and can alter behavior patterns. The engine is designed to simulate emotionally intelligent agents in a virtual environment, where each agent senses its own emotions and other creatures' emotions through a virtual smell sensor, senses obstacles and other moving creatures in the environment, and reacts to them. The architecture consists of an emotion engine, a behavior synthesis system, a motor layer, and a library of sensors.
Tailoring gas sensor arrays via the design of short peptides sequences as binding elements.
Mascini, Marcello; Pizzoni, Daniel; Perez, German; Chiarappa, Emilio; Di Natale, Corrado; Pittia, Paola; Compagnone, Dario
2017-07-15
A semi-combinatorial virtual approach was used to prepare peptide-based gas sensors with binding properties towards five different chemical classes (alcohols, aldehydes, esters, hydrocarbons and ketones). Molecular docking simulations were conducted for a complete tripeptide library (8000 elements) versus 58 volatile compounds belonging to those five chemical classes. By maximizing the differences between chemical classes, a subset of 120 tripeptides was extracted and used as scaffolds for generating a combinatorial library of 7912 tetrapeptides. This library was processed in the same way as the former. Five tetrapeptides (IHRI, KSDS, LGFD, TGKF and WHVS) were chosen for the experimental step on the basis of their virtual affinity and cross-reactivity. The five peptides were covalently bound to gold nanoparticles by adding a terminal cysteine to each tetrapeptide, and deposited onto 20 MHz quartz crystal microbalances to construct the gas sensors. The behavior of the peptides after this chemical modification was simulated over the pH range used in the immobilization step. ΔF signals analyzed by principal component analysis matched the virtually screened data. The array was able to clearly discriminate the 13 volatile compounds tested on the basis of their hydrophobicity or hydrophilicity as well as their molecular weight.
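The combinatorial library steps can be sketched as follows. The expansion rule shown (extending each selected tripeptide scaffold by one residue at either terminus, with duplicates removed) is an assumption for illustration; the paper's exact expansion scheme is not reproduced here.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

def tripeptide_library():
    # Full combinatorial tripeptide space: 20**3 = 8000 sequences.
    return [a + b + c
            for a in AMINO_ACIDS
            for b in AMINO_ACIDS
            for c in AMINO_ACIDS]

def expand_to_tetrapeptides(scaffolds):
    # Illustrative expansion: add one residue at the N- or C-terminus of
    # each scaffold; a set removes duplicate sequences.
    out = set()
    for s in scaffolds:
        for aa in AMINO_ACIDS:
            out.add(aa + s)
            out.add(s + aa)
    return sorted(out)
```

Each tetrapeptide candidate would then be re-scored by docking against the volatile-compound panel, as in the tripeptide round.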
Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.
de Moura, Karina de O A; Balbinot, Alexandre
2018-05-01
A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, which are typically related to degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered between 4% and 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model.
This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification accuracy of the clean signal, that is, the signal without the contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization need further development to increase the clinical applicability of myoelectric prostheses, but the system already presents results robust enough to enable research with virtual sensors on biological signals with stochastic behavior.
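The replacement strategy, swapping a degraded channel for its virtual-sensor estimate before classification, can be sketched as follows. The RMS-threshold contamination detector is an illustrative stand-in for the paper's detection method, and the virtual estimates would come from the TVARMA or TVK model.

```python
def rms(signal):
    # Root-mean-square amplitude of one sEMG channel window.
    return (sum(s * s for s in signal) / len(signal)) ** 0.5

def replace_degraded_channels(channels, virtual_estimates, rms_limit):
    # Channels whose RMS exceeds a plausible physiological bound (e.g. due
    # to saturation or power-line interference) are swapped for the
    # virtual sensor's estimate before feature extraction/classification.
    cleaned = []
    for ch, est in zip(channels, virtual_estimates):
        cleaned.append(est if rms(ch) > rms_limit else ch)
    return cleaned
```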
Reactor protection system with automatic self-testing and diagnostic
Gaubatz, Donald C.
1996-01-01
A reactor protection system having four divisions, with quad-redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection of and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross-communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminate the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic.
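The two-level voting logic can be sketched as follows. Reading "2/3 with spare" as "at least two trips among the sensors not flagged as failed" is one plausible interpretation for illustration, not the patent's exact logic.

```python
def divisional_vote(sensor_trips, sensor_valid):
    # 2/3-with-spare reading: ignore sensors flagged as failed, then trip
    # the division if at least two of the remaining sensors vote to trip.
    votes = [t for t, ok in zip(sensor_trips, sensor_valid) if ok]
    return sum(votes) >= 2

def system_scram(division_trips):
    # Hardware logic panel: a 2-out-of-4 divisional vote initiates a scram.
    return sum(division_trips) >= 2
```

Note how a single spurious sensor or a single tripped division cannot by itself cause a scram, which is what reduces false scrams while preserving availability.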
Reactor protection system with automatic self-testing and diagnostic
Gaubatz, D.C.
1996-12-17
A reactor protection system is disclosed having four divisions, with quad-redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection of and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross-communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminate the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic. 16 figs.
Visualizing vascular structures in virtual environments
NASA Astrophysics Data System (ADS)
Wischgoll, Thomas
2013-01-01
In order to learn more about the causes of coronary heart disease and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. These can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108-inch passive projection screen with a Kinect sensor for user tracking.
Scientific Workflows and the Sensor Web for Virtual Environmental Observatories
NASA Astrophysics Data System (ADS)
Simonis, I.; Vahed, A.
2008-12-01
Virtual observatories have matured beyond their original domain and are becoming common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community, where virtual observatories provide universal access to the available astronomical data archives of space and ground-based observatories. Furthermore, as those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside of the astronomical research community. Like the original observatory, which consists of a number of telescopes, each observing a specific part of the wave spectrum, together with a collection of astronomical instruments, the Sensor Web provides a multi-eye perspective on the current, past, and future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, and natural disasters on local, regional, and global scales. The Sensor Web concept has been well established with ongoing global research and deployment of Sensor Web middleware and standards and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS).
The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet at standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically follows in exploration, discovery and, ultimately, transformation of raw data into publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, sharing and integrating, to requesting additional observations. Multiple sites and organizations can participate in single platforms, and scientists from different countries and organizations can interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction free scientists from dealing with underlying data, processing, or storage peculiarities. The vision is automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn identifies potentially related resources, schedules processing tasks, and assembles all parts into workflows that may satisfy the query.
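As a concrete illustration of the standardized interfaces involved, a key-value-pair OGC Sensor Observation Service (SOS) GetObservation request can be assembled as below. The endpoint and identifiers are placeholders, not real services.

```python
from urllib.parse import urlencode

def get_observation_url(endpoint, offering, observed_property):
    # Key-value-pair form of an OGC SOS 1.0.0 GetObservation request.
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "responseFormat": 'text/xml;subtype="om/1.0.0"',
    }
    return endpoint + "?" + urlencode(params)
```

A scientific-workflow engine would issue such requests as one step among many, feeding the returned Observations & Measurements documents into downstream processing tasks.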
Virtual Deformation Control of the X-56A Model with Simulated Fiber Optic Sensors
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2014-01-01
A robust control law design methodology is presented to stabilize the X-56A model and command its wing shape. The X-56A was purposely designed to experience flutter modes in its flight envelope. The methodology introduces three phases: the controller design phase, the modal filter design phase, and the reference signal design phase. A mu-optimal controller is designed and made robust to speed and parameter variations. A conversion technique is presented for generating sensor strain modes from sensor deformation mode shapes. The sensor modes are utilized for modal filtering and simulating fiber optic sensors for feedback to the controller. To generate appropriate virtual deformation reference signals, rigid-body corrections are introduced to the deformation mode shapes. After successful completion of the phases, virtual deformation control is demonstrated. The wing is deformed and it is shown that angle-of-attack changes occur which could potentially be used to an advantage. The X-56A program must demonstrate active flutter suppression. It is shown that the virtual deformation controller can achieve active flutter suppression on the X-56A simulation model.
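The modal-filtering step, recovering modal coordinates from discrete strain measurements via the sensor strain modes, can be sketched as a least-squares projection. The two-mode Cramer's-rule solve below is a simplification for illustration; a flight implementation would retain more modes and use a numerically robust solver.

```python
def modal_filter(Phi, strains):
    # Least-squares modal coordinates q from sensor strains s ~ Phi q.
    # Phi is m x 2: one row per strain sensor, one column per retained mode.
    m = len(Phi)
    n = len(Phi[0])          # fixed at 2 modes in this sketch
    # Normal equations A q = b with A = Phi^T Phi, b = Phi^T s.
    A = [[sum(Phi[k][i] * Phi[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(Phi[k][i] * strains[k] for k in range(m)) for i in range(n)]
    # 2x2 solve by Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    q0 = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    q1 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return [q0, q1]
```

The recovered modal coordinates are what the controller compares against the virtual deformation reference signals.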
Wang, Xue; Wang, Sheng; Ma, Jun-Jie
2007-01-01
The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes will hinder the movement of mobile sensor nodes. Particle swarm optimization (PSO) is introduced as another dynamic deployment algorithm, but in this case the required computation time is a major bottleneck. This paper proposes a dynamic deployment algorithm named "virtual force directed co-evolutionary particle swarm optimization" (VFCPSO), which combines co-evolutionary particle swarm optimization (CPSO) with the VF algorithm: the CPSO uses multiple swarms to optimize different components of the solution vectors for dynamic deployment cooperatively, and the velocity of each particle is updated according to not only the historical local and global optimal solutions but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.
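The velocity update described above can be sketched as follows. The coefficient values and the exact way the virtual-force term enters are illustrative assumptions, not taken from the paper; the random factors are passed in explicitly so the update is reproducible.

```python
def vfcpso_velocity(v, x, pbest, gbest, force,
                    w=0.7, c1=1.4, c2=1.4, c3=0.5, r1=0.5, r2=0.5):
    # Per-component velocity update: inertia + cognitive (personal best) +
    # social (global best) terms, plus a virtual-force term that biases
    # mobile nodes away from covered regions.
    return [w * vi
            + c1 * r1 * (p - xi)
            + c2 * r2 * (g - xi)
            + c3 * fi
            for vi, xi, p, g, fi in zip(v, x, pbest, gbest, force)]
```

In the co-evolutionary setting, each swarm would apply this update only to its own components of the deployment vector.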
Sensor Webs and Virtual Globes: Enabling Understanding of Changes in a partially Glaciated Watershed
NASA Astrophysics Data System (ADS)
Heavner, M.; Fatland, D. R.; Habermann, M.; Berner, L.; Hood, E.; Connor, C.; Galbraith, J.; Knuth, E.; O'Brien, W.
2008-12-01
The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). SEAMONSTER operates in the partially glaciated Mendenhall and Lemon Creek watersheds in the Juneau area, on the margins of the Juneau Icefield. These watersheds are studied both for (1) long-term monitoring of changes and (2) detection and analysis of transient events (such as glacier lake outburst floods). The heterogeneous sensors (meteorological, dual-frequency GPS, water quality, lake level, etc.), power and bandwidth constraints, and competing time scales of interest require autonomous reactivity of the sensor web. They also present challenges for operational management of the sensor web, and the harsh conditions on the glaciers impose additional operating constraints. The tight integration of sensor web and virtual globe technology enhances the project in multiple ways. We are utilizing virtual globe infrastructures to enhance both sensor web management and data access. SEAMONSTER utilizes virtual globes for education and public outreach, sensor web management, data dissemination, and enabling collaboration. Using a PostgreSQL database with GIS extensions coupled to the Open Geospatial Consortium (OGC) GeoServer, we generate near-real-time, auto-updating geobrowser files of the data in multiple OGC standard formats (e.g., KML, WCS). Additionally, embedding wiki pages in this database allows the development of a geospatially aware wiki describing the projects for better public outreach and education. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers and virtual globes have made this project possible.
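The near-real-time geobrowser files mentioned above boil down to emitting OGC KML from database rows. A minimal placemark generator might look like this; the station name and coordinates are placeholders, and a real deployment would emit such fragments from the PostGIS database via GeoServer rather than by hand.

```python
def station_placemark(name, lon, lat, latest_value, units):
    # One KML Placemark for a sensor station, with the most recent reading
    # in the description balloon. KML coordinates are lon,lat,altitude.
    return (
        "<Placemark>"
        f"<name>{name}</name>"
        f"<description>latest: {latest_value} {units}</description>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
    )
```

Wrapping a list of such placemarks in a `<Document>` element and serving the file with periodic refresh is what makes the geobrowser view auto-updating.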
Molecular Rift: Virtual Reality for Drug Designers.
Norrby, Magnus; Grebner, Christoph; Eriksson, Joakim; Boström, Jonas
2015-11-23
Recent advances in interaction design have created new ways to use computers. One example is the ability to create enhanced 3D environments that simulate physical presence in the real world, that is, a virtual reality. This is relevant to drug discovery since molecular models are frequently used to obtain a deeper understanding of, say, ligand-protein complexes. We have developed a tool (Molecular Rift) which creates a virtual reality environment steered with hand movements. Oculus Rift, a head-mounted display, is used to create the virtual settings. The program is controlled by gesture recognition, using the gaming sensor MS Kinect v2, eliminating the need for standard input devices. The Open Babel toolkit was integrated to provide access to powerful cheminformatics functions. Molecular Rift was developed with a focus on usability, including iterative test-group evaluations. We conclude with reflections on virtual reality's future capabilities in chemistry and education. Molecular Rift is open source and can be downloaded from GitHub.
The Language of Glove: Wireless gesture decoder with low-power and stretchable hybrid electronics.
O'Connor, Timothy F; Fach, Matthew E; Miller, Rachel; Root, Samuel E; Mercier, Patrick P; Lipomi, Darren J
2017-01-01
This communication describes a glove capable of wirelessly translating the American Sign Language (ASL) alphabet into text displayable on a computer or smartphone. The key components of the device are strain sensors comprising a piezoresistive composite of carbon particles embedded in a fluoroelastomer. These sensors are integrated with a wearable electronic module consisting of digitizers, a microcontroller, and a Bluetooth radio. Finite-element analysis predicts a peak strain on the sensors of 5% when the knuckles are fully bent. Fatigue studies suggest that the sensors successfully detect the articulation of the knuckles even when bent to their maximal degree 1,000 times. In concert with an accelerometer and pressure sensors, the glove is able to translate all 26 letters of the ASL alphabet. Lastly, data taken from the glove are used to control a virtual hand; this application suggests new ways in which stretchable and wearable electronics can enable humans to interface with virtual environments. Critically, this system was constructed of components costing less than $100 and did not require chemical synthesis or access to a cleanroom. It can thus be used as a test bed for materials scientists to evaluate the performance of new materials and flexible and stretchable hybrid electronics.
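A template-matching decoder of the kind that could map the glove's strain and pressure readings to letters can be sketched as follows. The paper does not specify its decoding algorithm here, so nearest-neighbour matching against per-letter template vectors is an illustrative choice, and the template values are invented.

```python
def classify_letter(reading, templates):
    # Nearest-neighbour match of a sensor reading (one value per strain/
    # pressure channel) against per-letter template vectors.
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(templates, key=lambda letter: dist2(reading, templates[letter]))
```

On a microcontroller, the same comparison can run over quantized integer readings, keeping the decoder within the power budget of the Bluetooth module.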
NASA Astrophysics Data System (ADS)
McMullen, Sonya A. H.; Henderson, Troy; Ison, David
2017-05-01
The miniaturization of unmanned systems and spacecraft, as well as computing and sensor technologies, has opened new opportunities in the areas of remote sensing and multi-sensor data fusion for a variety of applications. Remote sensing and data fusion have historically been the purview of large government organizations, such as the Department of Defense (DoD), National Aeronautics and Space Administration (NASA), and National Geospatial-Intelligence Agency (NGA), due to the high cost and complexity of developing, fielding, and operating such systems. However, miniaturized computers with high-capacity processing capabilities, small and affordable sensors, and emerging, commercially available platforms such as UAS and CubeSats to carry such sensors have allowed for a vast range of novel applications. In order to leverage these developments, Embry-Riddle Aeronautical University (ERAU) has developed an advanced sensor and data fusion laboratory to research component capabilities and their employment on a wide range of autonomous, robotic, and transportation systems. This lab is unique in several ways; for example, it provides a traditional campus laboratory for students and faculty to model and test sensors in a range of scenarios, process multi-sensor data sets (both simulated and experimental), and analyze results. Moreover, this allows for "virtual" modeling, testing, and teaching capability reaching beyond the physical confines of the facility for use among ERAU Worldwide students and faculty located around the globe. Although other institutions such as Georgia Institute of Technology, Lockheed Martin, University of Dayton, and University of Central Florida have optical sensor laboratories, the ERAU virtual concept is the first such lab to expand to multispectral sensors and data fusion, while focusing on data collection and data products rather than on the manufacturing aspect.
Further, the initiative is a unique effort among Embry-Riddle faculty to develop multi-disciplinary, cross-campus research to facilitate faculty- and student-driven research. Specifically, the ERAU Worldwide Campus, with locations across the globe and delivering curricula online, will be leveraged to provide novel approaches to remote sensor experimentation and simulation. The purpose of this paper and presentation is to present this new laboratory, research, education, and collaboration process.
Experimental Robot Position Sensor Fault Tolerance Using Accelerometers and Joint Torque Sensors
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.; Juang, Jer-Nan
1997-01-01
Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. The proposed method uses joint torque sensors found in most existing advanced robot designs along with easily locatable, lightweight accelerometers to provide a joint position sensor fault recovery mode. This mode uses the torque sensors along with a virtual passive control law for stability and accelerometers for joint position information. Two methods for conversion from Cartesian acceleration to joint position based on robot kinematics, not integration, are presented. The fault tolerant control method was tested on several joints of a laboratory robot. The controllers performed well with noisy, biased data and a model with uncertain parameters.
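One way to obtain joint position from acceleration without integration, consistent in spirit with the kinematics-based conversion described above, is to read a link's inclination directly from the gravity vector measured by a link-mounted accelerometer. The axis convention below is an assumption for illustration, and this static-case sketch ignores the dynamic accelerations a full implementation must account for.

```python
import math

def joint_angle_from_accel(ax, az):
    # For a quasi-static link, the components of gravity along the
    # accelerometer's x and z axes give the link inclination directly;
    # no integration is performed, so there is no drift to accumulate.
    return math.atan2(ax, az)
```

This drift-free angle estimate is what would be fed, together with the torque sensors and the virtual passive control law, into the fault-recovery mode when a joint position sensor fails.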
NASA Astrophysics Data System (ADS)
Gregorio, Massimo De
In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantage of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs makes this approach both very light from a computational and hardware point of view and rather robust in performance. The system works on different scenarios and in difficult lighting conditions.
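The virtual neural sensors above are described as WiSARD-like weightless networks. A minimal sketch of the generic WiSARD n-tuple scheme (a textbook form, not the paper's implementation; the class name and parameters are illustrative): random n-tuples of input bits address RAM nodes, training records seen addresses, and the score is the fraction of nodes that recognize their address.

```python
import random

class Discriminator:
    """Minimal WiSARD-style discriminator: random n-tuples address RAM nodes."""

    def __init__(self, input_bits, tuple_size, seed=0):
        rng = random.Random(seed)
        order = list(range(input_bits))
        rng.shuffle(order)
        # Partition the shuffled input-bit indices into fixed n-tuples.
        self.tuples = [order[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # each RAM stores seen addresses

    def _addresses(self, bits):
        for idx, t in enumerate(self.tuples):
            yield idx, tuple(bits[i] for i in t)

    def train(self, bits):
        for idx, addr in self._addresses(bits):
            self.rams[idx].add(addr)

    def score(self, bits):
        # Fraction of RAM nodes that recognize their address.
        hits = sum(addr in self.rams[idx] for idx, addr in self._addresses(bits))
        return hits / len(self.tuples)
```

In a full system one discriminator is trained per class and an input is assigned to the class with the highest score, which keeps both training and classification to simple memory lookups.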
Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael
2011-01-01
This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation.
Wearable Virtual White Cane Network for navigating people with visual impairment.
Gao, Yabiao; Chandrawanshi, Rahul; Nau, Amy C; Tse, Zion Tsz Ho
2015-09-01
Navigating the world with visual impairments presents inconveniences and safety concerns. Although a traditional white cane is the most commonly used mobility aid due to its low cost and acceptable functionality, electronic traveling aids can provide more functionality as well as additional benefits. The Wearable Virtual Cane Network is an electronic traveling aid that utilizes ultrasound sonar technology to scan the surrounding environment for spatial information. The Wearable Virtual Cane Network is composed of four sensing nodes: one on each of the user's wrists, one on the waist, and one on the ankle. The Wearable Virtual Cane Network employs vibration and sound to communicate object proximity to the user. While conventional navigation devices are typically hand-held and bulky, the hands-free design of our prototype allows the user to perform other tasks while using the Wearable Virtual Cane Network. When the Wearable Virtual Cane Network prototype was tested for distance resolution and range detection limits at various displacements and compared with a traditional white cane, all participants performed significantly above the control bar (p < 4.3 × 10^-5, standard t-test) in distance estimation. Each sensor unit can detect an object with a surface area as small as 1 cm^2 (1 cm × 1 cm) located 70 cm away. Our results showed that the walking speed for an obstacle course was increased by 23% on average when subjects used the Wearable Virtual Cane Network rather than the white cane. The obstacle course experiment also shows that the use of the white cane in combination with the Wearable Virtual Cane Network can significantly improve navigation over using either the white cane or the Wearable Virtual Cane Network alone (p < 0.05, paired t-test). © IMechE 2015.
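The sensing nodes above use ultrasound sonar, so ranging reduces to time-of-flight arithmetic. A minimal sketch of the underlying computation (the function names, the 70 cm range bound, and the linear vibration mapping are assumptions for illustration, not the published design):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def echo_distance_m(round_trip_s):
    """Object distance from an ultrasonic round-trip time (out and back)."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def feedback_level(distance_m, max_range_m=0.70):
    """Map proximity to a 0..1 vibration intensity; 0 beyond sensing range."""
    if distance_m >= max_range_m:
        return 0.0
    return 1.0 - distance_m / max_range_m
```

The division by two accounts for the pulse traveling to the object and back; the feedback mapping simply makes nearer obstacles vibrate harder.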
Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng
2016-01-01
Effective feedback control requires all state variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM with an unlimited number of sensors. Taking into account the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time scale virtual sensor, which includes a speed observer and a vibration observer, is designed to estimate the vibration signals and their time derivatives for the TFM. The speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time scale virtual sensor are optimized with the aim of minimizing the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time scale virtual sensor. PMID:27801840
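The two observers above reconstruct unmeasured state from available outputs. As a much-simplified, generic sketch of the observer idea (a discrete-time Luenberger observer on a toy double-integrator; the matrices are illustrative values, not the TFM model or the paper's game-theoretic gains):

```python
def luenberger_step(x_hat, u, y, A, B, C, L):
    """One discrete-time Luenberger observer update for a 2-state system
    with scalar input and output:

        x_hat+ = A x_hat + B u + L (y - C x_hat)

    The innovation (y - C x_hat) corrects the model prediction toward
    the measurement; the gain L sets how fast estimation error decays.
    """
    y_hat = C[0] * x_hat[0] + C[1] * x_hat[1]
    innov = y - y_hat
    return [
        A[0][0] * x_hat[0] + A[0][1] * x_hat[1] + B[0] * u + L[0] * innov,
        A[1][0] * x_hat[0] + A[1][1] * x_hat[1] + B[1] * u + L[1] * innov,
    ]
```

With L chosen so that the eigenvalues of (A − LC) lie inside the unit circle, the estimation error converges regardless of the plant's own trajectory, which is the property any gain-design method (including the paper's differential-game optimization) must preserve.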
Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living
Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.
2015-01-01
A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612
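The VOTA metrics are compared against WMFT scores using Spearman's rank correlation. A self-contained sketch of that statistic (no tie handling, for brevity; this is the standard formula, not VOTA-specific code):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))
    formula. Assumes no ties among the values of either sequence."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Because it operates on ranks, the statistic captures any monotone relation between the kinematic metrics and the time-based WMFT scores, not just a linear one.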
Gietzelt, Matthias; Schnabel, Stephan; Wolf, Klaus-Hendrik; Büsching, Felix; Song, Bianying; Rust, Stefan; Marschollek, Michael
2012-05-01
One of the key problems in accelerometry-based gait analysis is that it may not be possible to attach an accelerometer to the lower trunk so that its axes are perfectly aligned with the axes of the subject. In this paper we present an algorithm designed to virtually align the axes of the accelerometer to the axes of the subject during walking sections. The algorithm is based on a physically reasonable approach and built for measurements in unsupervised settings, where the test persons apply the sensors themselves. For evaluation purposes we conducted a study with 6 healthy subjects and measured their gait with a manually aligned and a skewed accelerometer attached to the subject's lower trunk. After applying the algorithm, the intra-axis correlation of both sensors was on average 0.89±0.1 with a mean absolute error of 0.05 g. We concluded that the algorithm was able to virtually adjust the skewed sensor node to the coordinate system of the subject. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
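One simple way to realize such a virtual alignment, assuming the mean acceleration over a walking or standing section approximates gravity, is to rotate all samples so that the mean vector maps onto the vertical axis (Rodrigues' rotation formula). This captures tilt only, not heading, and is offered as an illustrative assumption rather than the authors' published algorithm:

```python
import math

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(a):
    return math.sqrt(_dot(a, a))

def align_to_vertical(samples):
    """Rotate accelerometer samples so the mean (gravity) vector maps to +Z."""
    n = len(samples)
    g = [sum(s[i] for s in samples) / n for i in range(3)]
    g = [x / _norm(g) for x in g]          # unit gravity estimate
    z = [0.0, 0.0, 1.0]
    v = _cross(g, z)                       # rotation axis (unnormalized)
    c = _dot(g, z)                         # cosine of rotation angle
    s = _norm(v)                           # sine of rotation angle
    if s < 1e-12:                          # already aligned (or exactly inverted)
        return [list(p) for p in samples]
    k = [x / s for x in v]                 # unit rotation axis
    out = []
    for p in samples:
        kxp = _cross(k, p)
        kdp = _dot(k, p)
        # Rodrigues: p' = p cosθ + (k×p) sinθ + k (k·p)(1 − cosθ)
        out.append([p[i]*c + kxp[i]*s + k[i]*kdp*(1 - c) for i in range(3)])
    return out
```

A rotation about the axis perpendicular to both the gravity estimate and the vertical is the smallest correction that makes the sensor's "down" agree with the subject's, which matches the paper's physically motivated framing.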
Eglin virtual range database for hardware-in-the-loop testing
NASA Astrophysics Data System (ADS)
Talele, Sunjay E.; Pickard, J. W., Jr.; Owens, Monte A.; Foster, Joseph; Watson, John S.; Amick, Mary Amenda; Anthony, Kenneth
1998-07-01
Realistic backgrounds are necessary to support high-fidelity hardware-in-the-loop testing. Advanced avionics and weapon system sensors are driving the requirement for higher resolution imagery. The model-test-model philosophy being promoted by the T&E community is resulting in the need for backgrounds that are realistic or virtual representations of actual test areas. Combined, these requirements led to a major upgrade of the terrain database used for hardware-in-the-loop testing at the Guided Weapons Evaluation Facility (GWEF) at Eglin Air Force Base, Florida. This paper will describe the process used to generate the high-resolution (1-foot) database of ten sites totaling over 20 square kilometers of the Eglin range. This process involved generating digital elevation maps from stereo aerial imagery and classifying ground cover material using the spectral content. These databases were then optimized for real-time operation at 90 Hz.
Structural health management of aerospace hotspots under fatigue loading
NASA Astrophysics Data System (ADS)
Soni, Sunilkumar
Sustainability and life-cycle assessments of aerospace systems, such as aircraft structures and propulsion systems, represent growing challenges in engineering. Hence, there has been an increasing demand in using structural health monitoring (SHM) techniques for continuous monitoring of these systems in an effort to improve safety and reduce maintenance costs. The current research is part of an ongoing multidisciplinary effort to develop a robust SHM framework resulting in improved models for damage-state awareness and life prediction, and enhancing capability of future aircraft systems. Lug joints, a typical structural hotspot, were chosen as the test article for the current study. The thesis focuses on integrated SHM techniques for damage detection and characterization in lug joints. Piezoelectric wafer sensors (PZTs) are used to generate guided Lamb waves as they can be easily used for onboard applications. Sensor placement in certain regions of a structural component is not feasible due to the inaccessibility of the area to be monitored. Therefore, a virtual sensing concept is introduced to acquire sensor data from finite element (FE) models. A full three dimensional FE analysis of lug joints with piezoelectric transducers, accounting for piezoelectrical-mechanical coupling, was performed in Abaqus and the sensor signals were simulated. These modeled sensors are called virtual sensors. A combination of real data from PZTs and virtual sensing data from FE analysis is used to monitor and detect fatigue damage in aluminum lug joints. Experiments were conducted on lug joints under fatigue loads and sensor signals collected were used to validate the simulated sensor response. An optimal sensor placement methodology for lug joints is developed based on a detection theory framework to maximize the detection rate and minimize the false alarm rate. The placement technique is such that the sensor features can be directly correlated to damage. 
The technique accounts for a number of factors, such as actuation frequency and strength, minimum damage size, damage detection scheme, material damping, signal to noise ratio and sensing radius. Advanced information processing methodologies are discussed for damage diagnosis. A new, instantaneous approach for damage detection, localization and quantification is proposed for applications to practical problems associated with changes in reference states under different environmental and operational conditions. Such an approach improves feature extraction for state awareness, resulting in robust life prediction capabilities.
Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite
2016-09-01
Subject terms: autonomous system, quadrotors, direct method, inverse dynamics in the virtual domain (IDVD), integer linear program (ILP), inertial navigation system (INS), Global Positioning System (GPS). The report covers controller architecture and inverse dynamics in the virtual domain for an aerial platform intended for subsequent visual sensor integration.
Juárez-Aguirre, Raúl; Domínguez-Nicolás, Saúl M.; Manjarrez, Elías; Tapia, Jesús A.; Figueras, Eduard; Vázquez-Leal, Héctor; Aguilera-Cortés, Luz A.; Herrera-May, Agustín L.
2013-01-01
We present a signal processing system with virtual instrumentation of a MEMS sensor to detect magnetic flux density for biomedical applications. This system consists of a magnetic field sensor, electronic components implemented on a printed circuit board (PCB), a data acquisition (DAQ) card, and a virtual instrument. It allows the development of a semi-portable prototype with the capacity to filter small electromagnetic interference signals through digital signal processing. The virtual instrument includes an algorithm to implement different configurations of infinite impulse response (IIR) filters. The PCB contains a precision instrumentation amplifier, a demodulator, a low-pass filter (LPF) and a buffer with operational amplifier. The proposed prototype is used for real-time non-invasive monitoring of magnetic flux density in the thoracic cage of rats. The response of the rat respiratory magnetogram displays a similar behavior as the rat electromyogram (EMG). PMID:24196434
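The virtual instrument described above configures IIR filters in software. As a hedged, minimal example of the idea (a first-order IIR low-pass in the exponential-smoothing form, far simpler than the prototype's configurable filter bank):

```python
import math

def lowpass_iir(samples, cutoff_hz, fs_hz):
    """First-order IIR low-pass of a sample sequence.

    Implements y[n] = y[n-1] + alpha * (x[n] - y[n-1]) with
    alpha = dt / (RC + dt), the discrete analogue of an RC filter.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    alpha = dt / (rc + dt)
    out = []
    y = samples[0] if samples else 0.0   # start at first sample (no startup step)
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out
```

The recursive structure is what makes IIR filters cheap enough for real-time digital signal processing on modest hardware: one multiply-accumulate per sample here, versus a long convolution for an FIR filter of comparable rolloff.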
Heredia, Guillermo; Ollero, Aníbal
2010-01-01
The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take-off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage, and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual sensor based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wings aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena as sensor noise and sampling characteristics and turbulence and wind perturbations. PMID:22294922
Performance analysis of cooperative virtual MIMO systems for wireless sensor networks.
Rafique, Zimran; Seet, Boon-Chong; Al-Anbuky, Adnan
2013-05-28
Multi-Input Multi-Output (MIMO) techniques can be used to increase the data rate for a given bit error rate (BER) and transmission power. Due to the small form factor, energy and processing constraints of wireless sensor nodes, a cooperative Virtual MIMO as opposed to True MIMO system architecture is considered more feasible for wireless sensor network (WSN) applications. Virtual MIMO with Vertical-Bell Labs Layered Space-Time (V-BLAST) multiplexing architecture has been recently established to enhance WSN performance. In this paper, we further investigate the impact of different modulation techniques, and analyze for the first time, the performance of a cooperative Virtual MIMO system based on V-BLAST architecture with multi-carrier modulation techniques. Through analytical models and simulations using real hardware and environment settings, both communication and processing energy consumptions, BER, spectral efficiency, and total time delay of multiple cooperative nodes each with single antenna are evaluated. The results show that cooperative Virtual-MIMO with Binary Phase Shift Keying-Wavelet based Orthogonal Frequency Division Multiplexing (BPSK-WOFDM) modulation is a promising solution for future high data-rate and energy-efficient WSNs.
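For context on the BER comparisons above, the textbook closed form for BPSK over an AWGN channel (a standard baseline, not the paper's simulated Virtual MIMO channel) is Pb = 0.5·erfc(√(Eb/N0)):

```python
import math

def bpsk_ber_awgn(ebn0_db):
    """Theoretical BPSK bit error rate over AWGN: 0.5 * erfc(sqrt(Eb/N0)).

    ebn0_db: per-bit SNR in decibels.
    """
    ebn0 = 10.0 ** (ebn0_db / 10.0)   # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))
```

Curves like this one serve as the reference against which simulated multi-carrier and cooperative schemes are judged; fading and cooperation overheads shift measured BER away from this ideal.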
A source-attractor approach to network detection of radiation sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Barry, M. L.; Grieme, M.
Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
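The virtual point shifting and clustering test can indeed be sketched with simple arithmetic, as the abstract emphasizes. The pull model below (shift each detector's point toward the count-weighted centroid, scaled by its normalized count rate) and the spread statistic are illustrative assumptions, not the published SRD formulation:

```python
import math

def srd_spread(detectors, counts, pull=0.5):
    """Shift each detector's virtual point toward the count-weighted
    centroid by an amount proportional to its normalized count rate, then
    return the mean distance of the shifted points to their own centroid.
    A smaller spread means tighter clustering, i.e. stronger attraction.
    """
    total = sum(counts)
    # Count-weighted centroid acts as the "attractor" estimate.
    cx = sum(c * x for c, (x, y) in zip(counts, detectors)) / total
    cy = sum(c * y for c, (x, y) in zip(counts, detectors)) / total
    cmax = max(counts)
    shifted = []
    for (x, y), c in zip(detectors, counts):
        w = pull * c / cmax            # hotter detectors are pulled harder
        shifted.append((x + w * (cx - x), y + w * (cy - y)))
    mx = sum(p[0] for p in shifted) / len(shifted)
    my = sum(p[1] for p in shifted) / len(shifted)
    return sum(math.hypot(p[0] - mx, p[1] - my) for p in shifted) / len(shifted)
```

A detection rule would compare the spread before and after shifting (or against a background-calibrated threshold); note the per-point cost is a handful of arithmetic operations, matching the abstract's complexity claim.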
Novel Corrosion Sensor for Vision 21 Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heng Ban; Bharat Soni
2007-03-31
Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems and successfully tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate in other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.
NASA Astrophysics Data System (ADS)
Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.
2016-02-01
Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected physical and virtual experiences has the potential to promote connections among ideas. This paper explores the effect of augmenting a virtual lab with physical controls on high school chemistry students' understanding of gas laws. We compared students using the augmented virtual lab to students using a similar sensor-based physical lab with teacher-led discussions. Results demonstrate that students in the augmented virtual lab condition made significant gains from pretest to posttest and outperformed traditional students on some but not all concepts. Results provide insight into incorporating mixed-reality technologies into authentic classroom settings.
Open Source Virtual Worlds and Low Cost Sensors for Physical Rehab of Patients with Chronic Diseases
NASA Astrophysics Data System (ADS)
Romero, Salvador J.; Fernandez-Luque, Luis; Sevillano, José L.; Vognild, Lars
For patients with chronic diseases, exercise is a key part of rehabilitation that helps them deal better with their illness. Some of them do rehabilitation at home with telemedicine systems. However, keeping to their exercise program is challenging and many abandon rehabilitation. We postulate that information technologies for socializing and serious games can encourage patients to keep doing physical exercise and rehabilitation. In this paper we present Virtual Valley, a low-cost telemedicine system for home exercising, based on open source virtual worlds and utilizing popular low-cost motion controllers (e.g. the Wii Remote) and medical sensors. Virtual Valley allows patients to socialize, learn, and play group-based serious games while exercising.
Enhancing patient freedom in rehabilitation robotics using gaze-based intention detection.
Novak, Domen; Riener, Robert
2013-06-01
Several design strategies for rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.
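A common, lightweight mechanism for gaze-based intention detection, offered here only as an illustrative assumption rather than the paper's algorithm, is dwell-time selection: an action target is chosen once gaze remains inside its screen region for a minimum duration.

```python
def dwell_select(gaze_samples, regions, dwell_s, fs_hz):
    """Return the first region id fixated for at least dwell_s seconds.

    gaze_samples: iterable of (x, y) gaze points sampled at fs_hz.
    regions: dict mapping region id -> (xmin, ymin, xmax, ymax).
    Returns None if no region accumulates enough consecutive samples.
    """
    need = int(dwell_s * fs_hz)          # consecutive samples required
    current, run = None, 0
    for x, y in gaze_samples:
        hit = None
        for rid, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = rid
                break
        if hit is not None and hit == current:
            run += 1
        else:
            current, run = hit, 1 if hit is not None else 0
        if current is not None and run >= need:
            return current
    return None
```

In a rehabilitation setting the returned region id would trigger the robot's support for the corresponding kitchen action; requiring consecutive in-region samples suppresses spurious triggers from brief glances.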
Minimizing Input-to-Output Latency in Virtual Environment
NASA Technical Reports Server (NTRS)
Adelstein, Bernard D.; Ellis, Stephen R.; Hill, Michael I.
2009-01-01
A method and apparatus were developed to minimize latency (time delay) in virtual environment (VE) and other discrete-time computer-based systems that require real-time display in response to sensor inputs. Latency in such systems is due to the sum of the finite time required for information processing and communication within and between sensors, software, and displays.
A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.
Gui, Jinsong; Zhou, Kai; Xiong, Naixue
2016-09-25
Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability, which makes it an effective topology control approach. The existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to extend the life of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head functions and the optimization of the cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.
High-fidelity simulation capability for virtual testing of seismic and acoustic sensors
NASA Astrophysics Data System (ADS)
Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.
2005-05-01
This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
Virtual Instrument for Emissions Measurement of Internal Combustion Engines
Pérez, Armando; Montero, Gisela; Coronado, Marcos; García, Conrado; Pérez, Rubén
2016-01-01
Current gas emissions measurement systems for internal combustion engines are complex and expensive. For this reason, a virtual instrument was developed to measure the combustion emissions of an internal combustion diesel engine running on diesel-biodiesel mixtures. This software, called the virtual instrument for emissions measurement (VIEM), was developed on the LabVIEW 2010® virtual programming platform. VIEM works with sensors connected to a signal conditioning system, and a data acquisition system is used as the interface to a computer in order to measure and monitor in real time the emissions of O2, NO, CO, SO2, and CO2 gases. This paper shows the results of the VIEM programming, the integrated circuit diagrams used for the signal conditioning of the sensors, and the characterization of the O2, NO, CO, SO2, and CO2 sensors. VIEM is a low-cost instrument that is simple and easy to use. Besides, it is scalable, making it flexible and user-definable. PMID:27034893
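The sensor-characterization step a virtual instrument of this kind performs can be sketched as a mapping from raw ADC counts to a gas concentration through a linear calibration. All constants below (ADC resolution, reference voltage, zero offset, sensitivity) are invented for illustration and are not VIEM's actual calibration values.

```python
# Hypothetical two-point linear calibration: ADC counts -> volts -> ppm.
def counts_to_ppm(counts, adc_bits=12, v_ref=5.0, v_zero=0.4, ppm_per_volt=250.0):
    """Convert an ADC reading to a gas concentration in ppm."""
    volts = counts * v_ref / (2 ** adc_bits - 1)      # counts -> volts
    return max(0.0, (volts - v_zero) * ppm_per_volt)  # volts -> ppm, clamped at 0

reading = counts_to_ppm(2048)   # mid-scale reading, ~525 ppm with these constants
```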
Sensing and Virtual Worlds - A Survey of Research Opportunities
NASA Technical Reports Server (NTRS)
Moore, Dana
2012-01-01
Virtual Worlds (VWs) have been used effectively in live and constructive military training. An area that remains fertile ground for exploration and a new vision involves integrating various traditional and now non-traditional sensors into virtual worlds. In this paper, we assert that the benefits of this integration are several. First, we maintain that virtual worlds offer improved sensor deployment planning through improved visualization and stimulation of the model, using geo-specific terrain and structure. Secondly, we assert that VWs enhance the mission rehearsal process, and that using a mix of live avatars, non-player characters, and live sensor feeds (e.g., real-time meteorology) can help visualization of the area of operations. Finally, tactical operations are improved via better collaboration and integration of real-world sensing capabilities, and in most situations, 3D VWs improve the state of the art over current "dots on a map" 2D geospatial visualization. However, several capability gaps preclude a fuller realization of this vision. In this paper, we identify many of these gaps and suggest research directions.
Ubiquitous virtual private network: a solution for WSN seamless integration.
Villa, David; Moya, Francisco; Villanueva, Félix Jesús; Aceña, Óscar; López, Juan Carlos
2014-01-06
Sensor networks are becoming an essential part of ubiquitous systems and applications. However, there are no well-defined protocols or mechanisms to access the sensor network from the enterprise information system. We consider this issue as a heterogeneous network interconnection problem, and as a result, the same concepts may be applied. Specifically, we propose the use of object-oriented middlewares to provide a virtual private network in which all involved elements (sensor nodes or computer applications) will be able to communicate as if all of them were in a single and uniform network.
Sensor Network Infrastructure for a Home Care Monitoring System
Palumbo, Filippo; Ullberg, Jonas; Štimec, Ales; Furfari, Francesco; Karlsson, Lars; Coradeschi, Silvia
2014-01-01
This paper presents the sensor network infrastructure for a home care system that allows long-term monitoring of physiological data and everyday activities. The aim of the proposed system is to allow the elderly to live longer in their home without compromising safety, while ensuring the detection of health problems. The system offers the possibility of a virtual visit via a teleoperated robot. During the visit, physiological data and activities occurring over a period of time can be discussed. These data are collected from physiological sensors (e.g., temperature, blood pressure, glucose) and environmental sensors (e.g., motion, bed/chair occupancy, electrical usage). The system can also give alarms if sudden problems occur, like a fall, and warnings based on more long-term trends, such as a detected deterioration of health. It has been implemented and tested in a test environment and has been deployed in six real homes for a year-long evaluation. The key contribution of the paper is the presentation of an implemented system for ambient assisted living (AAL) tested in a real environment, combining the acquisition of sensor data, a flexible and adaptable middleware compliant with the OSGi standard, and a context recognition application. The system has been developed in a European project called GiraffPlus. PMID:24573309
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirements of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery under emergency situations, in both ground and on-board processing systems, this paper proposes an ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship, which is established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between the panchromatic and multispectral images can be well achieved and that image distortion caused by satellite jitter can also be corrected efficiently.
Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.
Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong
2018-01-01
Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and of visual data make WVSNs more complex than conventional Wireless Sensor Networks (WSNs). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. Most of the existing literature focuses on the efficiency brought by the construction of clusters, so that existing methods generally neglect local-balance problems. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed that considers the local-balance factor. Furthermore, the associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given to evaluate its performance. The experimental results show that the proposed DVBDAS achieves higher performance, in terms of both energy preservation and network lifetime extension, than the existing methods.
A Plug-and-Play Human-Centered Virtual TEDS Architecture for the Web of Things.
Hernández-Rojas, Dixys L; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Escudero, Carlos J
2018-06-27
This article presents a Virtual Transducer Electronic Data Sheet (VTEDS)-based framework for the development of intelligent sensor nodes with plug-and-play capabilities in order to contribute to the evolution of the Internet of Things (IoT) toward the Web of Things (WoT). It makes use of new lightweight protocols that allow sensors to self-describe, auto-calibrate, and auto-register. Such protocols enable the development of novel IoT solutions while guaranteeing low latency, low power consumption, and the required Quality of Service (QoS). Thanks to the developed human-centered tools, it is possible to dynamically configure and modify IoT device firmware, managing the active transducers and their communication protocols in an easy and intuitive way, without requiring any prior programming knowledge. In order to evaluate the performance of the system, it was tested with Bluetooth Low Energy (BLE) and Ethernet-based smart sensors in different scenarios. Specifically, user experience was quantified empirically (i.e., how fast the system displays collected data to a user was measured). The obtained results show that the proposed VTEDS-based architecture is very fast, with some smart sensors (located in Europe) able to self-register and self-configure in a remote cloud (in South America) in less than 3 s and to display data to remote users in less than 2 s.
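The self-description idea behind a virtual TEDS can be sketched as a small structured record that a node publishes on registration. The field names below are purely illustrative; they follow neither the IEEE 1451.4 binary TEDS templates nor the authors' actual protocol.

```python
import json

# Hypothetical, minimal "virtual TEDS" record a plug-and-play node might
# serialize and send to a registry so the cloud can interpret its readings.
vteds = {
    "manufacturer_id": 42,
    "model": "demo-thermistor",
    "serial": 1001,
    "measurand": "temperature",
    "units": "degC",
    "range": [-40.0, 125.0],
    "calibration": {"slope": 0.01, "offset": -40.0},  # raw counts -> degC
}

payload = json.dumps(vteds)      # what the node would transmit on registration
restored = json.loads(payload)   # what the registry would reconstruct
```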
NASA Astrophysics Data System (ADS)
Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni
2015-04-01
If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this renders possible the concept of 'retrospective seismology' whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. Using the benefit of hindsight of earthquake location or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km[1]. Here we present the results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to include approximately those receivers that provide a stationary phase contribution to the interferometric integrals. 
We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer non-physical arrivals. Finally, we reconstruct earthquake seismograms at sensors that were previously active but were subsequently removed before the earthquakes occurred; thus we create virtual earthquake seismograms at those sensors, truly retrospectively. Such SRI seismograms can be used to create a catalogue of new, virtual earthquake seismograms that are available to complement real earthquake data in future earthquake seismology studies. [1] E. Entwistle, A. Curtis, E. Galetti, B. Baptie, G. Meles, Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales, JGR, in press.
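The correlation step underlying interferometric methods such as SRI can be illustrated in one dimension: cross-correlating two receivers' recordings of the same random source recovers the inter-receiver time lag, as if one receiver had become a virtual source. This toy omits the stationary-phase integration over a backbone array that the abstract discusses; all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
src = rng.standard_normal(n)                 # random (noise-like) source signal
lag = 37                                     # true extra travel time, in samples
rec_a = src                                  # receiver A: direct recording
rec_b = np.concatenate([np.zeros(lag), src])[:n]   # receiver B: delayed copy

# The cross-correlation acts as the "virtual-source" trace: its peak sits at
# the inter-receiver travel-time lag.
xc = np.correlate(rec_b, rec_a, mode="full")
est = xc.argmax() - (n - 1)                  # convert peak index to a lag
```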
Reliability modelling and analysis of thermal MEMS
NASA Astrophysics Data System (ADS)
Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.
2006-04-01
This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behavior under specific use conditions and applications. The methodology is demonstrated on a U-shaped electrothermal microactuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MatLab and VHDL-AMS. A best-practice FMEA (Failure Mode and Effect Analysis) is applied to the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified faults into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of operating cycles and specific operational conditions.
Spectral Reconstruction for Obtaining Virtual Hyperspectral Images
NASA Astrophysics Data System (ADS)
Perez, G. J. P.; Castro, E. C.
2016-12-01
Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data come from multi-spectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in integrated circuits, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected surface reflectance bands of Landsat 8, three visible (499 nm, 585 nm, 670 nm) and one near-infrared (872 nm), and a spectral library of ground elements acquired from the United States Geological Survey (USGS) are used. The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one-nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied to test cases within the library consisting of vegetation communities. The technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
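The reconstruction described above amounts to a small linear-algebra exercise: derive basis spectra from the SVD of a library, fit their coefficients to a few measured bands, and evaluate the superposition at all wavelengths. The sketch below uses a random stand-in for the USGS library and the four Landsat 8 band centers named in the abstract; the real study of course fits actual reflectances.

```python
import numpy as np

# Hypothetical spectral library: rows = reference materials, columns =
# reflectance sampled at 1 nm from 420-1020 nm (601 samples).
rng = np.random.default_rng(0)
wavelengths = np.arange(420, 1021)            # nm
library = rng.random((40, wavelengths.size))  # stand-in for USGS spectra

# Basis spectra = leading right singular vectors of the library.
k = 4                                         # one basis vector per band used
_, _, vt = np.linalg.svd(library, full_matrices=False)
basis = vt[:k]                                # shape (k, 601)

# Landsat 8 band centers used in the paper: three visible + one NIR.
band_centers = np.array([499, 585, 670, 872])
band_idx = band_centers - 420                 # indices into the 1 nm grid

# Solve for coefficients so the superposition matches the four observed
# band reflectances, then evaluate at every wavelength.
true_spectrum = library[0]
observed = true_spectrum[band_idx]            # the four "measured" values
coeffs, *_ = np.linalg.lstsq(basis[:, band_idx].T, observed, rcond=None)
reconstructed = coeffs @ basis                # virtual hyperspectral estimate
```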
Core body temperature control by total liquid ventilation using a virtual lung temperature sensor.
Nadeau, Mathieu; Micheau, Philippe; Robert, Raymond; Avoine, Olivier; Tissier, Renaud; Germim, Pamela Samanta; Vandamme, Jonathan; Praud, Jean-Paul; Walti, Herve
2014-12-01
In total liquid ventilation (TLV), the lungs are filled with a breathable liquid perfluorocarbon (PFC) while a liquid ventilator ensures proper gas exchange by renewal of a tidal volume of oxygenated and temperature-controlled PFC. Given the rapid changes in core body temperature generated by TLV using the lung as a heat exchanger, it is crucial to have accurate and reliable core body temperature monitoring and control. This study presents the design of a virtual lung temperature sensor to control core temperature. In the first step, the virtual sensor, which uses expired PFC to estimate lung temperature noninvasively, was validated both in vitro and in vivo. The virtual lung temperature was then used to rapidly and automatically control core temperature. Experiments were performed using the Inolivent-5.0 liquid ventilator with a feedback controller modulating the inspired PFC temperature, thereby controlling lung temperature. The in vivo experimental protocol was conducted on seven newborn lambs instrumented with temperature sensors at the femoral artery, pulmonary artery, oesophagus, right ear drum, and rectum. After stabilization in conventional mechanical ventilation, TLV was initiated with fast hypothermia induction, followed by slow posthypothermic rewarming for 1 h, then by fast rewarming to normothermia, and finally a second fast hypothermia induction phase. Results showed that the virtual lung temperature provided an accurate estimate of systemic arterial temperature. Results also demonstrate that TLV can precisely control core body temperature and compares favorably with extracorporeal circulation in terms of speed.
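The closed-loop idea, using inspired PFC temperature as the actuator for core temperature, can be sketched with a first-order thermal model and a PI law. The time constant, gains, limits, and targets below are invented for illustration; they are not the controller or physiology reported in the paper.

```python
# PI loop: command the inspired PFC temperature so a first-order "lung"
# temperature model settles at a hypothermia target (all values assumed).
dt, tau = 1.0, 60.0            # time step and assumed thermal time constant (s)
kp, ki = 2.0, 0.05             # illustrative PI gains
setpoint = 33.5                # degC target (e.g., mild hypothermia)
T = 39.0                       # initial core temperature (degC)
integ = 0.0                    # integral of the error
for _ in range(3000):          # 50 simulated minutes
    err = setpoint - T
    integ += err * dt
    T_insp = setpoint + kp * err + ki * integ   # commanded inspired PFC temp
    T_insp = min(max(T_insp, 15.0), 42.0)       # actuator (heater/chiller) limits
    T += dt / tau * (T_insp - T)                # first-order plant update
```

With these gains the loop is well damped and the integral term removes the steady-state error, so the model temperature settles at the target.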
A new method for aerodynamic test of high altitude propellers
NASA Astrophysics Data System (ADS)
Gong, Xiying; Zhang, Lin
A ground test system is designed for aerodynamic performance tests of high altitude propellers. The system is consisted of stable power supply, servo motors, two-component balance constructed by tension-compression sensors, ultrasonic anemometer, data acquisition module. It is loaded on a truck to simulate propellers' wind-tunnel test for different wind velocities at low density circumstance. The graphical programming language LABVIEW for developing virtual instrument is used to realize the test system control and data acquisition. Aerodynamic performance test of a propeller with 6.8 m diameter was completed by using this system. The results verify the feasibility of the ground test method.
Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments.
Rincon, J A; Poza-Lujan, Jose-Luis; Julian, V; Posadas-Yagüe, Juan-Luis; Carrascosa, C
2016-01-01
This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and a virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system.
Hybrid Feedforward-Feedback Noise Control Using Virtual Sensors
NASA Technical Reports Server (NTRS)
Bean, Jacob; Fuller, Chris; Schiller, Noah
2016-01-01
Several approaches to active noise control using virtual sensors are evaluated for eventual use in an active headrest. Specifically, adaptive feedforward, feedback, and hybrid control structures are compared. Each controller incorporates the traditional filtered-x least mean squares algorithm. The feedback controller is arranged in an internal model configuration to draw comparisons with standard feedforward control theory results. Simulation and experimental results are presented that illustrate each controller's ability to minimize the pressure at both physical and virtual microphone locations. The remote microphone technique is used to obtain pressure estimates at the virtual locations. It is shown that a hybrid controller offers performance benefits over the traditional feedforward and feedback controllers. Stability issues associated with feedback and hybrid controllers are also addressed. Experimental results show that a 15-20 dB reduction in broadband disturbances can be achieved by minimizing the measured pressure, whereas a 10-15 dB reduction is obtained when minimizing the estimated pressure at a virtual location.
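The filtered-x LMS algorithm shared by these controllers can be sketched in its basic single-channel feedforward form: the reference is filtered through a secondary-path estimate before the weight update. The path impulse responses, step size, and filter length below are arbitrary illustrative choices, and the secondary-path estimate is taken as perfect, which real systems only approximate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 4000, 16
x = rng.standard_normal(n)                 # reference signal
P = np.array([0.0, 0.9, 0.4])              # primary path (assumed, illustrative)
S = np.array([0.0, 0.7, 0.25])             # secondary path (assumed)
S_hat = S.copy()                           # idealized secondary-path estimate

d = np.convolve(x, P)[:n]                  # disturbance at the error sensor
xf = np.convolve(x, S_hat)[:n]             # filtered reference for the update
W = np.zeros(L)                            # adaptive control filter
mu = 0.01                                  # LMS step size
e_hist = np.zeros(n)
xbuf = np.zeros(L); xfbuf = np.zeros(L); ybuf = np.zeros(len(S))

for i in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
    y = W @ xbuf                           # control output
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    e = d[i] + S @ ybuf                    # residual pressure at the sensor
    xfbuf = np.roll(xfbuf, 1); xfbuf[0] = xf[i]
    W -= mu * e * xfbuf                    # filtered-x LMS weight update
    e_hist[i] = e

# Residual power should drop substantially once the filter converges.
before = np.mean(e_hist[:200] ** 2)
after = np.mean(e_hist[-200:] ** 2)
```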
Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott;
2010-01-01
This paper describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.
NASA Astrophysics Data System (ADS)
Boulandet, R.; Michau, M.; Micheau, P.; Berry, A.
2016-01-01
This paper deals with an active structural acoustic control approach to reduce the transmission of tonal noise in aircraft cabins. The focus is on the practical implementation of the virtual mechanical impedances method by using sensoriactuators instead of conventional control units composed of separate sensors and actuators. The experimental setup includes two sensoriactuators developed from the electrodynamic inertial exciter and distributed over an aircraft trim panel which is subject to a time-harmonic diffuse sound field. The target mechanical impedances are first defined by solving a linear optimization problem from sound power measurements before being applied to the test panel using a complex envelope controller. Measured data are compared to results obtained with sensor-actuator pairs consisting of an accelerometer and an inertial exciter, particularly as regards sound power reduction. It is shown that the two types of control unit provide similar performance, and that here virtual impedance control stands apart from conventional active damping. In particular, it is clear from this study that extra vibrational energy must be provided by the actuators for optimal sound power reduction, mainly due to the high structural damping in the aircraft trim panel. Concluding remarks on the benefits of using these electrodynamic sensoriactuators to control tonal disturbances are also provided.
OPTICAL FIBER SENSOR TECHNOLOGIES FOR EFFICIENT AND ECONOMICAL OIL RECOVERY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anbo Wang; Kristie L. Cooper; Gary R. Pickrell
2003-06-01
Efficient recovery of petroleum reserves from existing oil wells has proven to be difficult due to the lack of robust instrumentation that can accurately and reliably monitor processes in the downhole environment. Commercially available sensors for measurement of pressure, temperature, and fluid flow exhibit shortened lifetimes in the harsh downhole conditions, which are characterized by high pressures (up to 20 kpsi), temperatures up to 250 C, and exposure to chemically reactive fluids. Development of robust sensors that deliver continuous, real-time data on reservoir performance and petroleum flow pathways will facilitate application of advanced recovery technologies, including horizontal and multilateral wells. This is the final report for the four-year program "Optical Fiber Sensor Technologies for Efficient and Economical Oil Recovery", funded by the National Petroleum Technology Office of the U.S. Department of Energy, and performed by the Center for Photonics Technology of the Bradley Department of Electrical and Computer Engineering at Virginia Tech from October 1, 1999 to March 31, 2003. The main objective of this research program was to develop cost-effective, reliable optical fiber sensor instrumentation for real-time monitoring of various key parameters crucial to efficient and economical oil production. During the program, optical fiber sensors were demonstrated for the measurement of temperature, pressure, flow, and acoustic waves, including three successful field tests in the Chevron/Texaco oil fields in Coalinga, California, and at the world-class oil flow simulation facilities in Tulsa, Oklahoma.
Research efforts included the design and fabrication of sensor probes, development of signal processing algorithms, construction of test systems, development and testing of strategies for the protection of optical fibers and sensors in the downhole environment, development of remote monitoring capabilities allowing real-time monitoring of the field test data from virtually anywhere in the world, and development of novel data processing techniques. Comprehensive testing was performed to systematically evaluate the performance of the fiber optic sensor systems in both lab and field environments.
Virtual Sensors: Using Data Mining to Efficiently Estimate Spectra
NASA Technical Reports Server (NTRS)
Srivastava, Ashok; Oza, Nikunj; Stroeve, Julienne
2004-01-01
Detecting clouds within a satellite image is essential for retrieving surface geophysical parameters, such as albedo and temperature, from optical and thermal imagery, because the retrieval methods tend to be valid for clear skies only. Thus, routine satellite data processing requires reliable automated cloud detection algorithms that are applicable to many surface types. Unfortunately, cloud detection over snow and ice is difficult due to the lack of spectral contrast between clouds and snow. Snow and clouds are both highly reflective in the visible wavelengths and often show little contrast in the thermal infrared. However, at 1.6 microns, the spectral signatures of snow and clouds differ enough to allow improved snow/ice/cloud discrimination. The recent Terra and Aqua Moderate Resolution Imaging Spectro-Radiometer (MODIS) sensors have a channel (channel 6) at 1.6 microns. Presently the most comprehensive, long-term information on surface albedo and temperature over snow- and ice-covered surfaces comes from the Advanced Very High Resolution Radiometer (AVHRR) sensor, which has been providing imagery since July 1981. The earlier AVHRR sensors (e.g., AVHRR/2) did not, however, have a channel designed for discriminating clouds from snow, such as the 1.6 micron channel available on the more recent AVHRR/3 or the MODIS sensors. In the absence of the 1.6 micron channel, the AVHRR Polar Pathfinder (APP) product performs cloud detection using a combination of time-series analysis and multispectral threshold tests based on the satellite's measuring channels to produce a cloud mask. The method has been found to work reasonably well over sea ice, but not so well over the ice sheets. Thus, improving the cloud mask in the APP dataset would be extremely helpful toward increasing the accuracy of the albedo and temperature retrievals, as well as extending the time-series of albedo and temperature retrievals from the more recent sensors to the historical ones.
In this work, we use data mining methods to construct a model of MODIS channel 6 as a function of other channels that are common to both MODIS and AVHRR. The idea is to use the model to generate the equivalent of MODIS channel 6 for AVHRR as a function of the AVHRR equivalents to MODIS channels. We call this a Virtual Sensor because it predicts unmeasured spectra. The goal is to use this virtual channel 6 to yield a cloud mask superior to what is currently used in APP. Our results show that several data mining methods, such as multilayer perceptrons (MLPs), ensemble methods (e.g., bagging), and kernel methods (e.g., support vector machines), generate channel 6 for unseen MODIS images with high accuracy. Because the true channel 6 is not available for AVHRR images, we qualitatively assess the virtual channel 6 for several AVHRR images.
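The modelling step described above can be sketched with standard tools. The following is a minimal, hypothetical illustration using scikit-learn's MLPRegressor on synthetic data; the input channels and the functional form of the synthetic "channel 6" are invented stand-ins, not MODIS radiances.

```python
# Sketch of learning a "virtual" channel 6 from other channels.
# All data here is synthetic; a real experiment would use co-registered
# MODIS pixels, with the channels common to AVHRR as inputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 4))        # stand-ins for shared channels
y = 0.3 * X[:, 0] - 0.5 * X[:, 2] + 0.2 * np.sin(3.0 * X[:, 3])  # fake channel 6

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

virtual_ch6 = model.predict(X_te)                # the virtual-sensor output
r2 = model.score(X_te, y_te)                     # accuracy on held-out pixels
```

Applied to AVHRR, the same trained model would be evaluated on the AVHRR equivalents of the input channels, which is why only a qualitative assessment is possible there.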
Ubiquitous Virtual Private Network: A Solution for WSN Seamless Integration
Villa, David; Moya, Francisco; Villanueva, Félix Jesús; Aceña, Óscar; López, Juan Carlos
2014-01-01
Sensor networks are becoming an essential part of ubiquitous systems and applications. However, there are no well-defined protocols or mechanisms to access the sensor network from the enterprise information system. We consider this issue as a heterogeneous network interconnection problem, and as a result, the same concepts may be applied. Specifically, we propose the use of object-oriented middlewares to provide a virtual private network in which all involved elements (sensor nodes or computer applications) will be able to communicate as if all of them were in a single and uniform network. PMID:24399154
Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.
Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L
2015-07-01
Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless assessment of surgical performance system for objective assessment of surgical skills has proven valid for assessing arthroscopic skills. 
The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
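One dexterity metric that both the simulator and the wireless sensors can report is path length: the summed Euclidean distance between successive tracked positions. A minimal sketch of that computation, on synthetic positions rather than the study's data:

```python
# Path length of a tracked trajectory: sum of distances between
# consecutive 3-D samples. Positions below are a synthetic square loop.
import numpy as np

def path_length(positions):
    """positions: (n, 3) array of tracked 3-D points."""
    return float(np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1)))

square = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0, 0, 0]], float)
print(path_length(square))  # → 4.0
```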
Hybrid architecture for building secure sensor networks
NASA Astrophysics Data System (ADS)
Owens, Ken R., Jr.; Watkins, Steve E.
2012-04-01
Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns for sensor networks. The first area is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes that is designed to communicate with low-cost cloud computing infrastructure in the network is the foundation for delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme, and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This function will primarily be accomplished by encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components into the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high-priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.
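One common building block for serving traffic with diverse QoS requirements is class-based priority dequeueing. The sketch below is a hypothetical illustration, not the paper's algorithm: high-priority readings are delivered first, and ties within a class break on arrival order.

```python
# Minimal priority-based delivery queue for mixed QoS classes.
# Lower number = higher priority; a sequence counter keeps FIFO
# order among readings of the same class.
import heapq

class QoSQueue:
    def __init__(self):
        self._heap, self._seq = [], 0

    def push(self, priority, reading):
        heapq.heappush(self._heap, (priority, self._seq, reading))
        self._seq += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = QoSQueue()
q.push(2, "temperature")   # routine telemetry
q.push(0, "intrusion")     # security alert, highest priority
q.push(1, "vibration")
print([q.pop() for _ in range(3)])  # → ['intrusion', 'vibration', 'temperature']
```

An earliest-deadline-first variant would simply use the deadline timestamp as the heap key instead of the class number.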
NASA Technical Reports Server (NTRS)
1994-01-01
This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.
Compact and high resolution virtual mouse using lens array and light sensor
NASA Astrophysics Data System (ADS)
Qin, Zong; Chang, Yu-Cheng; Su, Yu-Jie; Huang, Yi-Pai; Shieh, Han-Ping David
2016-06-01
A virtual mouse based on an IR source, a lens array, and a light sensor was designed and implemented. The optical architecture, including lens count, lens pitch, baseline length, sensor length, lens-sensor gap, focal length, etc., was carefully designed to achieve low detection error, high resolution, and, simultaneously, compact system volume. System volume is 3.1mm (thickness) × 4.5mm (length) × 2, which is much smaller than that of a camera-based device. A relative detection error of 0.41mm and a minimum resolution of 26ppi were verified in experiments, so that it can replace a conventional touchpad/touchscreen. If the system thickness is relaxed to 20mm, a resolution higher than 200ppi can be achieved to replace a real mouse.
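The depth geometry behind such a lens-array sensor can be sketched as ordinary two-aperture triangulation. The formula is standard stereo geometry, but the numbers below are illustrative, not the paper's design values:

```python
# Two lenslets a baseline b apart image the fingertip onto the line
# sensor; the gap (focal distance) f and the disparity d between the two
# image spots give the fingertip depth z = f * b / d. Values hypothetical.
def depth_from_disparity(focal_mm, baseline_mm, disparity_mm):
    return focal_mm * baseline_mm / disparity_mm

z = depth_from_disparity(focal_mm=1.2, baseline_mm=3.0, disparity_mm=0.09)
print(round(z, 1))  # → 40.0 (mm)
```

This also shows why baseline length and lens-sensor gap trade off against system volume: a shorter baseline or gap shrinks the package but reduces disparity per millimetre of depth, raising the detection error.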
Telemedicine, virtual reality, and surgery
NASA Technical Reports Server (NTRS)
Mccormack, Percival D.; Charles, Steve
1994-01-01
Two types of synthetic experience are covered: virtual reality (VR) and surgery, and telemedicine. The topics are presented in viewgraph form and include the following: geometric models; physiological sensors; surgical applications; virtual cadaver; VR surgical simulation; telesurgery; VR Surgical Trainer; abdominal surgery pilot study; advanced abdominal simulator; examples of telemedicine; and telemedicine spacebridge.
Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.
Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro
2017-01-01
This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.
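The rigidity control described above can be sketched as follows. The stiffness, indentation, contact area, and controller gain are hypothetical values for illustration; the real device closes the loop with its pressure sensor and linear actuator.

```python
# To display a target rigidity k, the controller drives the actuator until
# the balloon's reaction force at indentation depth x matches Hooke's law,
# F = k * x, where the force is inferred from pressure as F = P * A.
def required_pressure(k, x, contact_area):
    """Pressure (Pa) the sensor should read for stiffness k (N/m)
    at indentation x (m) over contact_area (m^2)."""
    return k * x / contact_area

def actuator_step(p_measured, p_target, gain=0.5):
    """One step of a proportional controller on the actuator position."""
    return gain * (p_target - p_measured)

p_target = required_pressure(k=800.0, x=0.005, contact_area=0.002)
print(round(p_target))  # → 2000 (Pa)
```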
An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments
Lemos, Marcus Vinícius de S.; Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L.; de Carvalho, Carlos Giovanni N.; Mendes, Douglas Lopes de S.; Costa, Valney da Gama
2018-01-01
Virtual sensor provisioning is a central issue for sensor cloud middleware, since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle users' queries or applications. Recent works perform provisioning by clustering sensor nodes based on the correlation of their measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors) and are therefore not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the correlations between measurements instead of the physical distance between nodes, as most works in the literature do. The second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to users' queries while satisfying all query parameters and preserving the network's overall energy. Results from initial experiments show that the approach significantly reduces sensor cloud energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios. PMID:29495406
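The clustering idea, grouping nodes by measurement correlation rather than physical distance so that one representative per cluster can answer queries while the rest sleep, can be sketched as below. The greedy threshold scheme and the synthetic readings are illustrative simplifications, not ACASIMv1 itself:

```python
# Cluster sensor nodes whose time series correlate above a threshold,
# regardless of where they sit physically. Data is synthetic.
import numpy as np

def correlation_clusters(readings, threshold=0.9):
    """readings: (n_nodes, n_samples) array; returns clusters of node ids."""
    corr = np.corrcoef(readings)
    unassigned = set(range(len(readings)))
    clusters = []
    while unassigned:
        seed = min(unassigned)
        cluster = {i for i in unassigned if corr[seed, i] >= threshold}
        clusters.append(sorted(cluster))
        unassigned -= cluster
    return clusters

rng = np.random.default_rng(1)
base = rng.normal(size=200)
readings = np.stack([base + 0.05 * rng.normal(size=200),   # node 0
                     base + 0.05 * rng.normal(size=200),   # node 1, redundant
                     rng.normal(size=200)])                # node 2, independent
print(correlation_clusters(readings))  # → [[0, 1], [2]]
```

A provisioning layer would then pick one node from each cluster, which is where the paper's Ant Colony Optimization step comes in for the heterogeneous, multi-parameter case.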
Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T
2015-01-01
To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.
Virtual optical interfaces for the transportation industry
NASA Astrophysics Data System (ADS)
Hejmadi, Vic; Kress, Bernard
2010-04-01
We present a novel implementation of virtual optical interfaces for the transportation industry (automotive and avionics). This implementation combines two functions in a single device: projection of a virtual interface and sensing of the positions of the fingers on top of that interface. Both functions are produced by diffraction of laser light. The device we are developing includes both functions in a compact package with no optical elements to align, since all of them are pre-aligned on a single glass wafer through optical lithography. The package contains a CMOS sensor whose diffractive objective lens is optimized both for the projected interface color and for the IR finger-position sensor based on structured illumination. Two versions are proposed: one that senses the 2D position of the hand and one that senses the hand position in 3D.
NASA Technical Reports Server (NTRS)
Vranish, John M.
2006-01-01
The term "virtual feel" denotes a type of capaciflector (an advanced capacitive proximity sensor) and a methodology for designing and using a sensor of this type to guide a robot in manipulating a tool (e.g., a wrench socket) into alignment with a mating fastener (e.g., a bolt head) or other electrically conductive object. A capaciflector includes at least one sensing electrode, excited with an alternating voltage, that puts out a signal indicative of the capacitance between that electrode and a proximal object.
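As a rough illustration of how a capaciflector reading maps to proximity, the sketch below inverts a parallel-plate approximation; the electrode area and capacitance values are illustrative, not the instrument's calibration.

```python
# To first order, the sensing electrode and a grounded conductive target
# form a parallel-plate capacitor, so the gap can be inverted from the
# measured capacitance: d = eps0 * A / C.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gap_from_capacitance(area_m2, cap_farads):
    """Estimated electrode-to-target gap in metres."""
    return EPS0 * area_m2 / cap_farads

d = gap_from_capacitance(area_m2=1e-3, cap_farads=8.854e-13)
print(round(d * 1000, 3))  # → 10.0 (mm)
```

In practice a robot guidance system would use the monotonic capacitance-distance relationship for servoing rather than an absolute calibration, since fringing fields make the parallel-plate model only approximate.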
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor network researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model-interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
A New User Interface for On-Demand Customizable Data Products for Sensors in a SensorWeb
NASA Technical Reports Server (NTRS)
Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don
2011-01-01
A SensorWeb is a set of sensors, which can consist of ground, airborne and space-based sensors interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products and (4) enables automated delivery of the data products to the users' desktops. A recent addition to the SensorWeb Toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading and execution of new algorithms for processing sensor data. The web service along with the user interface follows the Open Geospatial Consortium (OGC) standard called Web Coverage Processing Service (WCPS). This presentation will detail the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to Amazon online shopping.
Rivera-Gutierrez, Diego; Ferdig, Rick; Li, Jian; Lok, Benjamin
2014-04-01
We have created You, M.D., an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor, and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding is required of how different elements of a virtual human's visual appearance affect user perceptions. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users' reactions, based on their gender and body-mass index (BMI), when facing virtual humans with a BMI either concordant or discordant with their own. The results of the study indicate that concordance between the user's BMI and the virtual human's BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.
Inertial Motion-Tracking Technology for Virtual 3-D
NASA Technical Reports Server (NTRS)
2005-01-01
In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer, giving the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking is the cursor on a computer screen moving in correspondence with the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvers in accurate simulations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.
Sensor Webs as Virtual Data Systems for Earth Science
NASA Astrophysics Data System (ADS)
Moe, K. L.; Sherwood, R.
2008-05-01
The NASA Earth Science Technology Office established a 3-year Advanced Information Systems Technology (AIST) development program in late 2006 to explore the technical challenges associated with integrating sensors, sensor networks, data assimilation and modeling components into virtual data systems called "sensor webs". The AIST sensor web program was initiated in response to a renewed emphasis on the sensor web concepts. In 2004, NASA proposed an Earth science vision for a more robust Earth observing system, coupled with remote sensing data analysis tools and advances in Earth system models. The AIST program is conducting the research and developing components to explore the technology infrastructure that will enable the visionary goals. A working statement for a NASA Earth science sensor web vision is the following: On-demand sensing of a broad array of environmental and ecological phenomena across a wide range of spatial and temporal scales, from a heterogeneous suite of sensors both in-situ and in orbit. Sensor webs will be dynamically organized to collect data, extract information from it, accept input from other sensor / forecast / tasking systems, interact with the environment based on what they detect or are tasked to perform, and communicate observations and results in real time. The focus on sensor webs is to develop the technology and prototypes to demonstrate the evolving sensor web capabilities. There are 35 AIST projects ranging from 1 to 3 years in duration addressing various aspects of sensor webs involving space sensors such as Earth Observing-1, in situ sensor networks such as the southern California earthquake network, and various modeling and forecasting systems. Some of these projects build on proof-of-concept demonstrations of sensor web capabilities like the EO-1 rapid fire response initially implemented in 2003. 
Other projects simulate future sensor web configurations to evaluate the effectiveness of sensor-model interactions for producing improved science predictions. Still other projects are maturing technology to support autonomous operations, communications and system interoperability. This paper will highlight lessons learned by various projects during the first half of the AIST program. Several sensor web demonstrations have been implemented and resulting experience with evolving standards, such as the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) among others, will be featured. The role of sensor webs in support of the intergovernmental Group on Earth Observations' Global Earth Observation System of Systems (GEOSS) will also be discussed. The GEOSS vision is a distributed system of systems that builds on international components to supply observing and processing systems that are, in the whole, comprehensive, coordinated and sustained. Sensor web prototypes are under development to demonstrate how remote sensing satellite data, in situ sensor networks and decision support systems collaborate in applications of interest to GEO, such as flood monitoring. Furthermore, the international Committee on Earth Observation Satellites (CEOS) has stepped up to the challenge to provide the space-based systems component for GEOSS. CEOS has proposed "virtual constellations" to address emerging data gaps in environmental monitoring, avoid overlap among observing systems, and make maximum use of existing space and ground assets. Exploratory applications that support the objectives of virtual constellations will also be discussed as a future role for sensor webs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crussell, Jonathan; Erickson, Jeremy; Fritz, David
minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines, including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and
Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing
Invitto, Sara; Faggiano, Chiara; Sammarco, Silvia; De Luca, Valerio; De Paolis, Lucio T.
2016-01-01
In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user's hand and fingers, which are reproduced on a computer screen by the proper software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex and chosen among university students. The subjects took part in motor imagery training and an immersive affordance condition (a virtual training with Leap Motion and a haptic training with real objects). After each training session the subjects performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which handle visual sensory processing; in contrast, latencies decreased in the frontal lobe, where the brain is mainly activated for attention and action planning. PMID:26999151
Virtual microphone sensing through vibro-acoustic modelling and Kalman filtering
NASA Astrophysics Data System (ADS)
van de Walle, A.; Naets, F.; Desmet, W.
2018-05-01
This work proposes a virtual microphone methodology which enables full-field acoustic measurements for vibro-acoustic systems. The methodology employs a Kalman filtering framework in order to combine a reduced high-fidelity vibro-acoustic model with a structural excitation measurement and a small set of real microphone measurements on the system under investigation. By employing model order reduction techniques, a high-order finite element model can be converted into a much smaller model which preserves the desired accuracy and maintains the main physical properties of the original model. Due to the low order of the reduced-order model, it can be effectively employed in a Kalman filter. The proposed methodology is validated experimentally on a strongly coupled vibro-acoustic system. The virtual sensor vastly improves the accuracy with respect to regular forward simulation. The virtual sensor also makes it possible to recreate the full sound field of the system, which is very difficult, if not impossible, to achieve through classical measurements.
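The virtual-sensing principle can be sketched with a toy two-state model. The dynamics below are an arbitrary stable discretized oscillator, not the paper's reduced vibro-acoustic model: one output is measured (the "real microphone"), the excitation is known, and the Kalman filter reconstructs the unmeasured "virtual microphone" output.

```python
# Kalman filter as a virtual sensor: estimate an unmeasured output of a
# known model from a known excitation and one noisy measurement.
import numpy as np

dt = 1e-3
A = np.array([[1.0, dt], [-0.5 * dt, 1.0 - 0.01 * dt]])  # toy dynamics
B = np.array([[0.0], [dt]])
C_meas = np.array([[1.0, 0.0]])     # real sensor observes state 0
C_virt = np.array([[0.0, 1.0]])     # virtual sensor: unmeasured state 1
Q, R = 1e-6 * np.eye(2), np.array([[1e-4]])

rng = np.random.default_rng(0)
x_true = np.array([[0.1], [0.0]])
x_hat, P = np.zeros((2, 1)), np.eye(2)

for k in range(500):
    u = np.array([[np.sin(0.02 * k)]])            # known structural excitation
    x_true = A @ x_true + B @ u
    y = C_meas @ x_true + rng.normal(0.0, 1e-2, (1, 1))
    # predict with the reduced model
    x_hat = A @ x_hat + B @ u
    P = A @ P @ A.T + Q
    # correct with the real microphone
    K = P @ C_meas.T @ np.linalg.inv(C_meas @ P @ C_meas.T + R)
    x_hat = x_hat + K @ (y - C_meas @ x_hat)
    P = (np.eye(2) - K @ C_meas) @ P

virtual_output = (C_virt @ x_hat).item()          # reconstructed, never measured
error = abs(virtual_output - (C_virt @ x_true).item())
```

The full-field reconstruction in the paper works the same way: because the filter estimates the whole (reduced) state, any output matrix, including ones for locations with no physical microphone, can be applied to the estimate.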
Multiple object, three-dimensional motion tracking using the Xbox Kinect sensor
NASA Astrophysics Data System (ADS)
Rosi, T.; Onorato, P.; Oss, S.
2017-11-01
In this article we discuss the capability of the Xbox Kinect sensor to acquire three-dimensional motion data of multiple objects. Two experiments regarding fundamental features of Newtonian mechanics are performed to test the tracking abilities of our setup. Particular attention is paid to checking and visualising the conservation of linear momentum, angular momentum, and energy. In both experiments, two objects are tracked while falling in the gravitational field. The obtained data is visualised in a 3D virtual environment to help students understand the physics behind the performed experiments. The proposed experiments were analysed with a group of university students who are aspiring physics and mathematics teachers. Their comments are presented in this paper.
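The kind of analysis students can run on such tracked data can be sketched as follows: numerically differentiate each object's vertical trajectory and verify that the acceleration is -g. The trajectories below are ideal synthetic stand-ins for Kinect depth tracks sampled near the sensor's roughly 30 fps rate.

```python
# Check free-fall kinematics from sampled 3-D tracks of two objects.
import numpy as np

g, dt = 9.81, 1.0 / 30.0
t = np.arange(0.0, 0.5, dt)
z1 = 2.0 - 0.5 * g * t**2                 # object 1: dropped from rest
z2 = 1.5 - 1.0 * t - 0.5 * g * t**2       # object 2: thrown downward

def acceleration(z, dt):
    v = np.gradient(z, dt)                # numerical velocity
    return np.gradient(v, dt)             # numerical acceleration

a1 = acceleration(z1, dt)[2:-2]           # trim edges as a precaution
a2 = acceleration(z2, dt)[2:-2]
print(round(float(np.mean(a1)), 2), round(float(np.mean(a2)), 2))  # → -9.81 -9.81
```

With real Kinect data the differentiated signal is noisy, so in practice one would smooth the tracks (or fit a parabola) before checking conservation laws.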
Taylor, Gavin J; Paulk, Angelique C; Pearson, Thomas W J; Moore, Richard J D; Stacey, Jacqui A; Ball, David; van Swinderen, Bruno; Srinivasan, Mandyam V
2015-10-01
When using virtual-reality paradigms to study animal behaviour, careful attention must be paid to how the animal's actions are detected. This is particularly relevant in closed-loop experiments where the animal interacts with a stimulus. Many different sensor types have been used to measure aspects of behaviour, and although some sensors may be more accurate than others, few studies have examined whether, and how, such differences affect an animal's behaviour in a closed-loop experiment. To investigate this issue, we conducted experiments with tethered honeybees walking on an air-supported trackball and fixating a visual object in closed-loop. Bees walked faster and along straighter paths when the motion of the trackball was measured in the classical fashion - using optical motion sensors repurposed from computer mice - than when measured more accurately using a computer vision algorithm called 'FicTrac'. When computer mouse sensors were used to measure bees' behaviour, the bees modified their behaviour and achieved improved control of the stimulus. This behavioural change appears to be a response to a systematic error in the computer mouse sensor that reduces the sensitivity of this sensor system under certain conditions. Although the large perceived inertia and mass of the trackball relative to the honeybee is a limitation of tethered walking paradigms, observing differences depending on the sensor system used to measure bee behaviour was not expected. This study suggests that bees are capable of fine-tuning their motor control to improve the outcome of the task they are performing. Further, our findings show that caution is required when designing virtual-reality experiments, as animals can potentially respond to the artificial scenario in unexpected and unintended ways. © 2015. Published by The Company of Biologists Ltd.
Real-time 3D visualization of volumetric video motion sensor data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.; Stansfield, S.; Shawver, D.
1996-11-01
This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
A New ER Fluid Based Haptic Actuator System for Virtual Reality
NASA Astrophysics Data System (ADS)
Böse, H.; Baumann, M.; Monkman, G. J.; Egersdörfer, S.; Tunayar, A.; Freimuth, H.; Ermert, H.; Khaled, W.
The concept and some steps in the development of a new actuator system which enables the haptic perception of mechanically inhomogeneous virtual objects are introduced. The system consists of a two-dimensional planar array of actuator elements containing an electrorheological (ER) fluid. When a user presses his fingers onto the surface of the actuator array, he perceives locally variable resistance forces generated by vertical pistons which slide in the ER fluid through the gaps between electrode pairs. The voltage in each actuator element can be individually controlled by a novel sophisticated switching technology based on optoelectric gallium arsenide elements. The haptic information which is represented at the actuator array can be transferred from a corresponding sensor system based on ultrasonic elastography. The combined sensor-actuator system may serve as a technology platform for various applications in virtual reality, like telemedicine where the information on the consistency of tissue of a real patient is detected by the sensor part and recorded by the actuator part at a remote location.
Valdivieso Caraguay, Ángel Leonardo; García Villalba, Luis Javier
2017-01-01
This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors. PMID:28362346
Caraguay, Ángel Leonardo Valdivieso; Villalba, Luis Javier García
2017-03-31
This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors.
Design of virtual three-dimensional instruments for sound control
NASA Astrophysics Data System (ADS)
Mulder, Axel Gezienus Elith
An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel.
More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.
Virtual Sensors in a Web 2.0 Digital Watershed
NASA Astrophysics Data System (ADS)
Liu, Y.; Hill, D. J.; Marini, L.; Kooper, R.; Rodriguez, A.; Myers, J. D.
2008-12-01
The lack of rainfall data in many watersheds is one of the major barriers to modeling and studying many environmental and hydrological processes and to supporting decision making. There are simply not enough rain gages on the ground. To overcome this data scarcity issue, a Web 2.0 digital watershed was developed at NCSA (National Center for Supercomputing Applications), where users can point-and-click on a web-based Google Maps interface and create new precipitation virtual sensors at any location within the same coverage region as a NEXRAD station. A set of scientific workflows is implemented to perform spatial, temporal and thematic transformations to the near-real-time NEXRAD Level II data. Such workflows can be triggered by the users' actions and generate either rainfall rate or rainfall accumulation streaming data at a user-specified time interval. We will discuss some underlying components of this digital watershed, which consists of a semantic content management middleware, a semantically enhanced streaming data toolkit, virtual sensor management functionality, and a RESTful (REpresentational State Transfer) web service that can trigger workflow execution. Such a loosely coupled architecture presents a generic framework for constructing a Web 2.0-style digital watershed. An implementation of this architecture for the Upper Illinois River Basin will be presented. We will also discuss the implications of the virtual sensor concept for the broad environmental observatory community and how such a concept will help us move toward a participatory digital watershed.
Live Aircraft Encounter Visualization at FutureFlight Central
NASA Technical Reports Server (NTRS)
Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John
2018-01-01
Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real-time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real-time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters to be observed from all angles. Visualizing the encounters in real-time provides a safe and effective method for observing live flight testing and a strong alternative to traveling to the remote test range.
NASA Technical Reports Server (NTRS)
1990-01-01
While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.
Novel Virtual Environment for Alternative Treatment of Children with Cerebral Palsy
de Oliveira, Juliana M.; Fernandes, Rafael Carneiro G.; Pinto, Cristtiano S.; Pinheiro, Plácido R.; Ribeiro, Sidarta
2016-01-01
Cerebral palsy is a severe condition usually caused by decreased brain oxygenation during pregnancy, at birth or soon after birth. Conventional treatments for cerebral palsy are often tiresome and expensive, leading patients to quit treatment. In this paper, we describe a virtual environment for patients to engage in a playful therapeutic game for neuropsychomotor rehabilitation, based on the experience of the occupational therapy program of the Nucleus for Integrated Medical Assistance (NAMI) at the University of Fortaleza, Brazil. Integration between patient and virtual environment occurs through the hand motion sensor “Leap Motion,” plus the electroencephalographic sensor “MindWave,” responsible for measuring attention levels during task execution. To evaluate the virtual environment, eight clinical experts on cerebral palsy were subjected to a questionnaire regarding the potential of the experimental virtual environment to promote cognitive and motor rehabilitation, as well as the potential of the treatment to enhance risks and/or negatively influence the patient's development. Based on the very positive appraisal of the experts, we propose that the experimental virtual environment is a promising alternative tool for the rehabilitation of children with cerebral palsy. PMID:27403154
Napolitano, Rebecca; Blyth, Anna; Glisic, Branko
2018-01-16
Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge at Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.
Napolitano, Rebecca; Blyth, Anna; Glisic, Branko
2018-01-01
Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge at Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included. PMID:29337877
The use of combined thermal/pressure polyvinylidene fluoride film airflow sensor in polysomnography.
Kryger, Meir; Eiken, Todd; Qin, Li
2013-12-01
The technologies recommended by the American Academy of Sleep Medicine (AASM) to monitor airflow in polysomnography (PSG) include the simultaneous monitoring of two physical variables: air temperature (for thermal airflow) and air pressure (for nasal pressure). To comply with airflow monitoring standards in the sleep lab setting thus often requires the patient to wear two sensors under the nose during testing. We hypothesized that a single combined thermal/pressure sensor using polyvinylidene fluoride (PVDF) film responsive to both airflow temperature and pressure would be effective in documenting abnormal breathing events during sleep. Sixty patients undergoing routine PSG testing to rule out obstructive sleep apnea at two different sleep laboratories were asked to wear a third PVDF airflow sensor in addition to the traditional thermal sensor and pressure sensor. Apnea and hypopnea events were scored by the sleep lab technologists using the AASM guidelines (CMS option), using the thermal sensor for apnea and the pressure sensor for hypopnea (scorer 1). The digital PSG data were also forwarded to an outside registered polysomnographic technologist for scoring of respiratory events detected in the PVDF airflow channels (scorer 2). The Pearson correlation coefficient, r, between apnea and hypopnea indices obtained using the AASM sensors and the combined PVDF sensor was almost unity for the four calculated indices: apnea-hypopnea index (0.990), obstructive apnea index (0.992), hypopnea index (0.958), and central apnea index (1.0). The slope of the four relationships was virtually unity and the coefficient of determination (r²) was also close to 1. Intraclass correlation coefficients (>0.95) and Bland-Altman plots also showed excellent agreement between the combined PVDF sensor and the AASM sensors.
The indices used to calculate apnea severity obtained with the combined PVDF thermal and pressure sensor were equivalent to those obtained using AASM-recommended sensors.
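The agreement statistic reported above is the ordinary Pearson correlation coefficient between the two sets of indices. A minimal computation, with the index lists as hypothetical inputs standing in for the per-patient values the study compared:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples --
    the agreement statistic reported between PVDF-derived and
    AASM-sensor-derived apnea/hypopnea indices."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

An r near 1 with a slope near unity, as the study reports, indicates the two sensor systems rank and scale the events nearly identically; Bland-Altman plots then check that the differences are not systematically biased.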
Development of low cost and accurate homemade sensor system based on Surface Plasmon Resonance (SPR)
NASA Astrophysics Data System (ADS)
Laksono, F. D.; Supardianningsih; Arifin, M.; Abraha, K.
2018-04-01
In this paper, we developed a homemade, computerized sensor system based on Surface Plasmon Resonance (SPR). The developed system consists of a mechanical instrument system, a laser power sensor, and a user interface. The mechanical system, which uses an anti-backlash gear design, successfully enhanced the angular resolution of the laser's angle of incidence to 0.01°. In this system, the laser detector acquisition system and stepper motor controller utilize an Arduino Uno, which is easy to program, flexible, and low cost. Furthermore, we employed a LabVIEW user interface as the virtual instrument for facilitating sample measurement and for recording data directly in digital form. Test results using a gold-deposited half-cylinder prism showed a Total Internal Reflection (TIR) angle of 41.34° ± 0.01° and an SPR angle of 44.20° ± 0.01°. The results demonstrated that the developed system reduces the measurement duration and the data recording errors caused by human error, and that the system's measurements are repeatable and accurate.
Antolín, Diego; Calvo, Belén; Martínez, Pedro A.
2017-01-01
This paper presents a low-cost, high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units, characterization and modelling, are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell with identical characteristics to those of the specific small solar cell installed on the sensor node, and which also allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions. PMID:28777330
Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Martínez, Pedro A
2017-08-04
This paper presents a low-cost, high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units, characterization and modelling, are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell with identical characteristics to those of the specific small solar cell installed on the sensor node, and which also allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions.
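The fractional open-circuit-voltage idea behind a VOC algorithm can be sketched as follows. The network weights, input normalizations, and the factor k below are illustrative stand-ins, not the trained model or parameters from the paper: a tiny perceptron plays the role of the "virtual pilot cell" predicting Voc from environmental inputs, and the harvester then operates near a fixed fraction of that estimate.

```python
import math

def mlp_voc_estimate(irradiance, temperature, weights):
    """Tiny multilayer-perceptron forward pass estimating the panel's
    open-circuit voltage Voc from environmental inputs.  The weights are
    placeholders for values that would be trained on local
    characterization data, as the paper describes."""
    W1, b1, W2, b2 = weights
    x = [irradiance / 1000.0, temperature / 50.0]   # crude normalization
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

def voc_mppt_setpoint(voc, k=0.76):
    """Fractional open-circuit-voltage rule: operate near Vmpp ≈ k · Voc.
    The value of k is an assumed typical fraction, not the paper's."""
    return k * voc
```

The advantage over a physical pilot panel is that the model can be retrained on-node as conditions drift (aging, soiling), which is the adaptation the abstract highlights.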
Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.
Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue
2017-06-06
Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.
Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks
Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue
2017-01-01
Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons decided by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304
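The "seamlessly stitched hexagons" idea can be illustrated with a plain flat-top hexagonal lattice. The side length s below stands in for the theoretically optimal virtual-grid length the paper derives, and no camera-specific full-view condition is modeled here; the sketch only generates the deterministic grid of cell centers a DPA-style deployment would build positions around.

```python
import math

def hex_grid_centers(width, height, s):
    """Centers of regular flat-top hexagons (side s) whose seamless tiling
    covers a width x height region.  Any point of the region lies within
    the circumradius s of some center, since the hexagons tile the plane."""
    dx = 1.5 * s               # horizontal spacing between hex columns
    dy = math.sqrt(3) * s      # vertical spacing between hex rows
    centers = []
    col = 0
    x = 0.0
    while x <= width + dx:
        y = 0.0 if col % 2 == 0 else dy / 2   # odd columns offset half a row
        while y <= height + dy:
            centers.append((x, y))
            y += dy
        x += dx
        col += 1
    return centers
```

Because the hexagons stitch without gaps, checking that every ROI point is within distance s of some center is a quick sanity test of the tiling.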
Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP
NASA Technical Reports Server (NTRS)
Panda, Jayanta
2016-01-01
Typically a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel, placed on the floor of a wind tunnel, that was subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast-response Pressure Sensitive Paint (PSP), which allowed time-resolved measurements of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, correct for spatial non-uniformities and lamp brightness variation, and finally convert fluctuating light intensity to fluctuating pressure. The latter involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence, and phase calculated via PSP and via dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, for which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step only a selected number of pixels were chosen as "virtual sensors" and a correlation-length-based buffet calculation procedure was applied to determine "modeled" force fluctuations.
By progressively decreasing the number of virtual sensors, it was observed that the present calculation procedure was able to make a close estimate of the "true" unsteady forces from only four sensors. It is believed that the present work provides the first validation of the buffet calculation procedure, which has been used for the development of many space vehicles.
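A correlation-length-based force estimate of this general kind can be sketched as a coherence-weighted double sum over sensor patches. The exponential coherence decay, the one-dimensional sensor layout, and all inputs below are assumptions for illustration, not the report's actual procedure: the key idea is only that a frequency-dependent correlation length L(f) decides how coherently neighbouring patches add to the force spectrum.

```python
import math

def buffet_force_psd(freqs, sensor_psd, positions, areas, corr_length):
    """Force power spectral density over a panel built from a few pressure
    sensors: patches separated by less than the correlation length add
    nearly coherently, distant patches add incoherently.

    freqs       : list of frequencies
    sensor_psd  : sensor_psd[i][k] = pressure PSD of sensor i at freqs[k]
    positions   : 1D sensor locations (illustrative layout)
    areas       : panel area assigned to each sensor
    corr_length : callable f -> L(f), an assumed correlation-length model
    """
    force_psd = []
    for k, f in enumerate(freqs):
        L = corr_length(f)
        total = 0.0
        for xi, Ai, psd_i in zip(positions, areas, sensor_psd):
            for xj, Aj, psd_j in zip(positions, areas, sensor_psd):
                # assumed coherence decay with sensor separation
                coh = math.exp(-abs(xi - xj) / L)
                total += Ai * Aj * coh * math.sqrt(psd_i[k] * psd_j[k])
        force_psd.append(total)
    return force_psd
```

With a large L(f) the patches add fully coherently (force scales with total area); with a small L(f) the cross terms vanish and only the diagonal sum survives, which is why the result is so sensitive to the correlation-length model.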
A task scheduler framework for self-powered wireless sensors.
Nordman, Mikael M
2003-10-01
The cost and inconvenience of cabling is a factor limiting widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making the development of wireless sensors feasible. For these sensors, however, energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered to last a lifetime, thus saving the user from concerns related to energy management. The problem, however, is the unpredictability and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is reliably handled without violating the priority scheme assigned to it.
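The queue-and-threshold idea can be sketched minimally. The priority levels, energy costs, and greedy run-when-affordable policy below are illustrative, not the paper's actual state machine or transition conditions: tasks wait in per-priority "virtual queues" and execute, highest priority first, only while the harvested-energy budget covers their cost.

```python
from collections import deque

class EnergyAwareScheduler:
    """Minimal sketch of a priority scheduler for an ambient-powered
    sensor node.  Energy arrives unpredictably via harvest(); run_ready()
    never lets a lower-priority task overtake a higher-priority one."""

    def __init__(self, energy=0.0):
        self.energy = energy                                # reserve (J)
        self.queues = {0: deque(), 1: deque(), 2: deque()}  # 0 = highest

    def submit(self, name, priority, cost):
        """Queue a task with its assumed energy cost in joules."""
        self.queues[priority].append((name, cost))

    def harvest(self, joules):
        """Credit energy delivered by the ambient source."""
        self.energy += joules

    def run_ready(self):
        """Execute queued tasks highest-priority-first while energy lasts."""
        executed = []
        for prio in sorted(self.queues):
            q = self.queues[prio]
            while q and q[0][1] <= self.energy:
                name, cost = q.popleft()
                self.energy -= cost
                executed.append(name)
        return executed
```

A task too expensive for the current reserve simply waits at the head of its queue, which is how the sketch avoids violating the priority scheme when the source is unsteady.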
Practical design and evaluation methods of omnidirectional vision sensors
NASA Astrophysics Data System (ADS)
Ohte, Akira; Tsuzuki, Osamu
2012-01-01
A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
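The circular-to-panoramic conversion such sensors rely on is, at its core, a polar-to-rectangular resampling of the mirror image. The sketch below is generic, not the authors' software: `pixel_lookup`, the image-circle radii, and the output size are hypothetical, and real implementations would add interpolation and the mirror-specific radius-to-elevation mapping.

```python
import math

def unwrap_omni(pixel_lookup, cx, cy, r_in, r_out, out_w, out_h):
    """Unwarp the circular image from an omnidirectional mirror into a
    panoramic strip: each output column is an azimuth angle, each row a
    radius between the inner and outer image circles.

    pixel_lookup(x, y) samples the source image at (possibly fractional)
    coordinates; (cx, cy) is the mirror image center."""
    panorama = []
    for row in range(out_h):
        # top row = outer circle (horizon side), bottom row = inner circle
        r = r_out - (r_out - r_in) * row / max(out_h - 1, 1)
        line = []
        for col in range(out_w):
            theta = 2.0 * math.pi * col / out_w
            x = cx + r * math.cos(theta)
            y = cy + r * math.sin(theta)
            line.append(pixel_lookup(x, y))
        panorama.append(line)
    return panorama
```

The per-pixel mapping is fixed once the mirror geometry is known, so practical systems precompute it as a lookup table to reach the real-time rates the paper targets.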
Avatar - a multi-sensory system for real time body position monitoring.
Jovanov, E; Hanish, N; Courson, V; Stidham, J; Stinson, H; Webb, C; Denny, K
2009-01-01
Virtual reality and computer-assisted physical rehabilitation applications require unobtrusive and inexpensive real-time monitoring systems. Existing systems are usually complex and expensive, and based on infrared monitoring. In this paper we propose Avatar, a hybrid system consisting of off-the-shelf components and sensors. Absolute positioning of a few reference points is determined using infrared diodes on the subject's body and a set of Wii Remotes as optical sensors. Individual body segments are monitored by intelligent inertial sensor nodes (iSense). A network of inertial nodes is controlled by a master node that serves as a gateway for communication with a capture device. Each sensor features a 3D accelerometer and a 2-axis gyroscope. The Avatar system is used to control avatars in virtual reality applications, but could also be used in a variety of augmented reality, gaming, and computer-assisted physical rehabilitation applications.
Open core control software for surgical robots.
Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo
2010-05-01
As a result of recent advances in medical technology, patients and doctors in the operating room are surrounded by many medical devices. However, these cutting-edge devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as removing a tumor while checking its location in neurosurgery). Meanwhile, several surgical robots have been commercialized and are becoming common, but they remain closed to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would become possible through collaboration among surgical robots, various kinds of sensors, navigation systems, and so on. At the same time, most academic software for surgical robots is "home-made" within individual research institutions and not open to the public. Open-source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from actuators, sensors, and various internal devices, and so cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation despite asynchronous data transactions over the network.
In the Open Core Control software, several techniques were introduced for this purpose. A virtual fixture is a well-known technique, a "force guide" that supports operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not part of the Open Core Control system itself; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. The surgical console then generates a reflection force when the operator tries to leave the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. The system also showed stable performance in a duration test with network disturbance. This paper describes the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture. The software was implemented on a surgical robot system and showed stable performance in high-level collaboration tasks. The Open Core Control software is intended to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices.
It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" and IEC 62304. To comply with these regulations, it is important to develop a self-test environment. A test environment is therefore under development to test various sources of interference in the operating room, such as electric-knife noise, while taking into account safety standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make contributions to it.
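The reflection-force behavior of a forbidden-region virtual fixture can be sketched as a simple spring model; the spherical accessible area and the stiffness `K_WALL` are assumptions for illustration, not the paper's implementation:

```python
import math

# Minimal forbidden-region virtual fixture sketch. The accessible area is
# modeled as a sphere; K_WALL is an assumed, illustrative wall stiffness.
K_WALL = 500.0  # N/m

def reflection_force(tip, center, radius):
    """Spring-like force pushing the tool tip back inside the accessible sphere."""
    dx = [t - c for t, c in zip(tip, center)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist <= radius or dist == 0.0:
        return (0.0, 0.0, 0.0)           # inside the fixture: no force
    depth = dist - radius                 # penetration beyond the boundary
    unit = [d / dist for d in dx]
    return tuple(-K_WALL * depth * u for u in unit)  # force toward the center
```

In a real system the area would come from the navigation system over OpenIGTLink and the force would be rendered on the master console.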
Monitoring and Control Interface Based on Virtual Sensors
Escobar, Ricardo F.; Adam-Medina, Manuel; García-Beltrán, Carlos D.; Olivares-Peregrino, Víctor H.; Juárez-Romero, David; Guerrero-Ramírez, Gerardo V.
2014-01-01
In this article, a toolbox based on a monitoring and control interface (MCI) is presented and applied to a heat exchanger. The MCI was programmed to perform sensor fault detection and isolation and fault tolerance using virtual sensors. The virtual sensors were designed from model-based high-gain observers. To carry out the control task, different control laws were included in the interface: PID, MPC, and a nonlinear model-based control law. The MCI keeps the heat exchanger in operation even if an outlet temperature sensor fault occurs; in the case of such a failure, the MCI displays an alarm. The monitoring and control interface is used as a practical tool to help electronic engineering students apply heat transfer and control concepts to a double-pipe heat exchanger pilot plant. The method aims to teach students through observation and manipulation of the main process variables and through interaction with the MCI, developed in LabVIEW©. The MCI gives students insight into heat exchanger behavior, since the interface includes a thermodynamic model that approximates the temperatures and the physical properties of the fluid (density and heat capacity). One advantage of the interface is the easy manipulation of the actuator in automatic or manual operation. Another is that all algorithms can be manipulated and modified by the users. PMID:25365462
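The virtual-sensor idea behind the MCI, an observer that tracks the outlet temperature and keeps running open loop when the physical sensor fails, can be sketched with a first-order discrete model; the model and gains here are assumptions, not the article's heat exchanger model:

```python
# Illustrative discrete-time observer acting as a virtual outlet-temperature
# sensor. The first-order plant and the gains are assumed for the sketch.
A = 0.9       # plant pole: T_out[k+1] = A*T_out[k] + (1-A)*T_in[k]
L_GAIN = 0.5  # observer correction gain (high-gain style correction)

def observer_step(x_hat, t_in, y_meas=None):
    """One prediction/correction step. With y_meas=None the estimate runs
    open loop, which is how a virtual sensor replaces a failed physical one."""
    x_pred = A * x_hat + (1.0 - A) * t_in      # model prediction
    if y_meas is None:
        return x_pred                           # sensor failed: pure virtual sensor
    return x_pred + L_GAIN * (y_meas - x_pred)  # correct with the measurement
```

While the physical sensor is healthy, the correction term keeps the estimate locked to the plant; on failure, the last good estimate propagates through the model.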
An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor
NASA Astrophysics Data System (ADS)
Liscombe, Michael
3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still imposes a fundamental limit on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (i.e., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests were performed on the image sensor's innovative high-dynamic-range technology to determine its effects on range accuracy. As expected, experimental results show that the sensor provides a trade-off between dynamic range and range accuracy.
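The improvement from averaging N spatially uncorrelated spot profiles follows standard noise-averaging statistics, and can be checked with a toy Monte Carlo model (Gaussian position noise, not the paper's optical model):

```python
import random, statistics

def centroid_error(n_profiles, noise=0.5, trials=2000, seed=7):
    """Std. dev. of a spot-position estimate after averaging n_profiles
    uncorrelated speckle-perturbed measurements (toy Gaussian model)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        samples = [rng.gauss(0.0, noise) for _ in range(n_profiles)]
        estimates.append(sum(samples) / n_profiles)  # average the N profiles
    return statistics.pstdev(estimates)
```

Averaging 16 uncorrelated profiles should shrink the position error by about √16 = 4, which is the scaling the abstract reports.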
NASA Technical Reports Server (NTRS)
Schmalzel, John L.; Morris, Jon; Turowski, Mark; Figueroa, Fernando; Oostdyk, Rebecca
2008-01-01
There are a number of architecture models for implementing Integrated Systems Health Management (ISHM) capabilities, for example, approaches based on the OSA-CBM and OSA-EAI models, or specific architectures developed in response to local needs. NASA's John C. Stennis Space Center (SSC) has developed one such extensible architecture in support of rocket engine testing that integrates a palette of functions in order to achieve an ISHM capability. Among the functional capabilities supported by the framework are prognostic models, anomaly detection, a database of supporting health information, root cause analysis, intelligent elements, and integrated awareness. This paper focuses on the role that intelligent elements can play in ISHM architectures. We define an intelligent element as a smart element with sufficient computing capacity to support anomaly detection or other algorithms in support of ISHM functions; a smart element has the capability of supporting networked implementations of the IEEE 1451.x smart sensor and actuator protocols. The ISHM group at SSC has been actively developing intelligent elements in conjunction with several partners at other Centers, universities, and companies as part of our ISHM approach to better supporting rocket engine testing. We have developed several implementations. Among the key features of these intelligent sensors are support for IEEE 1451.1 and incorporation of a suite of algorithms for determining sensor health. Regardless of the potential advantages that can be achieved using intelligent sensors, existing large-scale systems are still based on conventional sensors and data acquisition systems. In order to bring the benefits of intelligent sensors to these environments, we have also developed virtual implementations of intelligent sensors.
The Virtual Environment for Rapid Prototyping of the Intelligent Environment
Francillette, Yannick; Boucher, Eric; Bouzouane, Abdenour; Gaboury, Sébastien
2017-01-01
Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants’ behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs. PMID:29112175
Intelligent approach to prognostic enhancements of diagnostic systems
NASA Astrophysics Data System (ADS)
Vachtsevanos, George; Wang, Peng; Khiripet, Noppadon; Thakker, Ash; Galie, Thomas R.
2001-07-01
This paper introduces a novel methodology for prognostics based on a dynamic wavelet neural network construct and notions from the virtual sensor area. This research has been motivated and supported by the U.S. Navy's active interest in integrating advanced diagnostic and prognostic algorithms into existing naval digital control and monitoring systems. A rudimentary diagnostic platform is assumed to be available, providing timely information about incipient or impending failure conditions. We focus on the development of a prognostic algorithm capable of predicting accurately and reliably the remaining useful lifetime of a failing machine or component. The prognostic module consists of a virtual sensor and a dynamic wavelet neural network as the predictor. The virtual sensor employs process data to map real measurements into difficult-to-monitor fault quantities. The prognosticator uses the dynamic wavelet neural network as a nonlinear predictor. Means to manage uncertainty and performance metrics are suggested for comparison purposes. An interface to an available shipboard Integrated Condition Assessment System is described, and applications to shipboard equipment are discussed. Typical results from pump failures are presented to illustrate the effectiveness of the methodology.
A virtual environment for modeling and testing sensemaking with multisensor information
NASA Astrophysics Data System (ADS)
Nicholson, Denise; Bartlett, Kathleen; Hoppenfeld, Robert; Nolan, Margaret; Schatz, Sae
2014-05-01
Given the challenges of today's irregular warfare, members of small infantry units must be able to function as highly sensitized perceivers throughout large operational areas. Improved situation awareness (SA) in rapidly changing fields of operation may also save the lives of law enforcement personnel and first responders. Critical competencies for these individuals include sociocultural sensemaking, the ability to assess a situation through the perception of essential salient environmental and behavioral cues, and intuitive sensemaking, which allows experts to act with the utmost agility. Intuitive sensemaking and intuitive decision making (IDM), which involve processing information at a subconscious level, have been cited as playing a critical role in saving lives and enabling mission success. This paper discusses the development of a virtual environment for modeling, analysis, and human-in-the-loop testing of perception, sensemaking, intuitive sensemaking, decision making (DM), and IDM performance, using state-of-the-art scene simulation and modeled imagery from multi-source systems, under the "Intuition and Implicit Learning" Basic Research Challenge (I2BRC) sponsored by the Office of Naval Research (ONR). We present results from our human systems engineering approach, including 1) development of requirements and test metrics for individual and integrated system components, 2) the system architecture design, 3) images of the prototype virtual environment testing system, and 4) a discussion of the system's current and future testing capabilities. In particular, we examine an Enhanced Interaction Suite testbed to model, test, and analyze the impact of advances in sensor spatial and temporal resolution on a user's intuitive sensemaking and decision making capabilities.
User-Centered Design of a Controller-Free Game for Hand Rehabilitation.
Proffitt, Rachel; Sevick, Marisa; Chang, Chien-Yen; Lange, Belinda
2015-08-01
The purpose of this study was to develop and test a hand therapy game using the Microsoft (Redmond, WA) Kinect® sensor with a customized videogame. Using the Microsoft Kinect sensor as an input device, a customized game for hand rehabilitation was developed that required players to perform various gestures to accomplish a virtual cooking task. Over the course of two iterative sessions, 11 participants with different levels of wrist, hand, and finger injuries interacted with the game in a single session, and user perspectives and feedback were obtained via a questionnaire and semistructured interviews. Participants reported high levels of enjoyment, specifically related to the challenging nature of the game and the visuals. Participant feedback from the first iterative round of testing was incorporated to produce a second prototype for the second round of testing. Additionally, participants expressed the desire to have the game adapt and be customized to their unique hand therapy needs. The game tested in this study has the potential to be a unique and cutting-edge method for the delivery of hand rehabilitation to a diverse population.
Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation.
Lin, Chi-Ying; Tsai, Chia-Min; Shih, Pei-Cheng; Wu, Hsiao-Ching
2015-01-01
Almost all stroke patients experience some degree of fine motor impairment, and impeded finger movement may limit activities of daily life. Thus, to improve stroke patients' quality of life, designing an efficient training device for fine motor rehabilitation is crucial. This study aimed to develop a novel fine motor training glove that integrates a virtual reality-based interactive environment with vibrotactile feedback for more effective poststroke hand rehabilitation. The proposed haptic rehabilitation device is equipped with small DC vibration motors for vibrotactile feedback stimulation and piezoresistive thin-film force sensors for motor function evaluation. Two virtual reality-based games, "gopher hitting" and "musical note hitting," were developed as a haptic interface. Following the designed rehabilitation program, patients intuitively press with their fingers to improve finger isolation function. Preliminary tests were conducted to assess the feasibility of the developed haptic rehabilitation system and to identify design concerns for practical use in future clinical testing.
Real-Time Mapping: Contemporary Challenges and the Internet of Things as the Way Forward
NASA Astrophysics Data System (ADS)
Bęcek, Kazimierz
2016-12-01
The Internet of Things (IoT) is an emerging technology that was conceived in 1999. The key components of the IoT are intelligent sensors, which represent objects of interest. The adjective `intelligent' is used here in the information gathering sense, not the psychological sense. Some 30 billion sensors that `know' the current status of objects they represent are already connected to the Internet. Various studies indicate that the number of installed sensors will reach 212 billion by 2020. Various scenarios of IoT projects show sensors being able to exchange data with the network as well as between themselves. In this contribution, we discuss the possibility of deploying the IoT in cartography for real-time mapping. A real-time map is prepared using data harvested through querying sensors representing geographical objects, and the concept of a virtual sensor for abstract objects, such as a land parcel, is presented. A virtual sensor may exist as a data record in the cloud. Sensors are identified by an Internet Protocol address (IP address), which implies that geographical objects through their sensors would also have an IP address. This contribution is an updated version of a conference paper presented by the author during the International Federation of Surveyors 2014 Congress in Kuala Lumpur. The author hopes that the use of the IoT for real-time mapping will be considered by the mapmaking community.
NASA Technical Reports Server (NTRS)
Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)
1998-01-01
Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.
A Miniature System for Separating Aerosol Particles and Measuring Mass Concentrations
Liang, Dao; Shih, Wen-Pin; Chen, Chuin-Shan; Dai, Chi-An
2010-01-01
We designed and fabricated a new sensing system consisting of two virtual impactors and two quartz-crystal microbalance (QCM) sensors for measuring particle mass concentration and size distribution. The virtual impactors utilize the different inertial forces of particles in an air flow to classify particle sizes. They were designed to classify particle diameter, d, into three ranges: d < 2.28 μm, 2.28 μm ≤ d ≤ 3.20 μm, and d > 3.20 μm. The QCM sensors were coated with a hydrogel, which was found to be a reliable adhesive for capturing aerosol particles. The hydrogel-coated QCM sensor measures the mass loading of particles through its characteristic resonant frequency shift. An integrated system has been demonstrated. PMID:22319317
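Frequency-shift mass sensing on a QCM is conventionally described by the Sauerbrey relation; whether this system applies it directly or uses a calibration curve is not stated, so the following is a generic sketch:

```python
import math

# Sauerbrey relation: delta_f = -2 f0^2 * (dm/A) / sqrt(rho_q * mu_q),
# valid for thin, rigid films. Quartz constants in SI units.
RHO_Q = 2648.0    # quartz density, kg/m^3
MU_Q = 2.947e10   # quartz shear modulus, Pa

def mass_per_area(delta_f, f0):
    """Areal mass loading (kg/m^2) from a frequency shift delta_f (Hz,
    negative for added mass) on a crystal with fundamental frequency f0 (Hz)."""
    c = 2.0 * f0 ** 2 / math.sqrt(RHO_Q * MU_Q)  # sensitivity, Hz·m^2/kg
    return -delta_f / c
```

For a 10 MHz crystal this gives roughly 1 μg/cm² of captured aerosol per 226 Hz of downward frequency shift.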
Virtual Passive Controller for Robot Systems Using Joint Torque Sensors
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.; Juang, Jer-Nan
1997-01-01
This paper presents a control method based on virtual passive dynamic control that will stabilize a robot manipulator using joint torque sensors and a simple joint model. The method does not require joint position or velocity feedback for stabilization. The proposed control method is stable in the sense of Lyapunov. The control method was implemented on several joints of a laboratory robot. The controller showed good stability robustness to system parameter error and to the exclusion of nonlinear dynamic effects on the joints. The controller enhanced position tracking performance and, in the absence of position control, dissipated joint energy.
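One way to picture torque-only passive control is to drive a simulated (virtual) mass-damper with the measured joint torque and feed its reaction back as the command; the dynamics and gains below are illustrative, not the paper's exact control law:

```python
# Minimal sketch of virtual passive control: the sensed joint torque drives a
# virtual mass-damper, whose reaction torque becomes the command. No position
# or velocity feedback is used, only the torque sensor.
J_V, B_V, DT = 0.1, 2.0, 0.001  # virtual inertia, virtual damping, time step

def make_controller():
    state = {"w_v": 0.0}  # virtual velocity driven by the sensed torque
    def control(tau_meas):
        # integrate virtual dynamics: J_v * dw_v/dt = tau_meas - B_v * w_v
        state["w_v"] += DT * (tau_meas - B_V * state["w_v"]) / J_V
        return -B_V * state["w_v"]  # passive reaction torque fed to the joint
    return control
```

Because the virtual element only stores and dissipates the energy it receives through the torque signal, the closed loop inherits a passivity argument for stability.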
Design and Development of Card-Sized Virtual Keyboard Using Permanent Magnets and Hall Sensors
NASA Astrophysics Data System (ADS)
Demachi, Kazuyuki; Ohyama, Makoto; Kanemoto, Yoshiki; Masaie, Issei
This paper proposes a method to distinguish which keys human fingers fitted with small permanent magnets have typed. Hall sensors arrayed in a credit-card-sized area sense the distribution of the magnetic field produced by the key-typing movement of the fingers, as if a keyboard existed, and the signal is analyzed using a genetic algorithm or a neural network algorithm to identify the typed keys. With this method, the keyboard can be miniaturized to credit-card size (54 mm × 85 mm). We call this system `the virtual keyboard system'.
Intelligent Sensors: An Integrated Systems Approach
NASA Technical Reports Server (NTRS)
Mahajan, Ajay; Chitikeshi, Sanjeevi; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando
2005-01-01
The need for intelligent sensors as a critical component of Integrated System Health Management (ISHM) is fairly well recognized by now. Even the definition of what constitutes an intelligent sensor (or smart sensor) is well documented, and stems from an intuitive desire to get the best-quality measurement data that forms the basis of any complex health monitoring and/or management system. If the sensors, i.e., the elements closest to the measurand, are unreliable, then the whole system works with a tremendous handicap. Hence, there has always been a desire to distribute intelligence down to the sensor level and give it the ability to assess its own health, thereby improving confidence in the quality of the data at all times. This paper proposes the development of intelligent sensors as an integrated systems approach, i.e., one treats the sensor as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow it to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators, or other devices. The immediate application is the monitoring of rocket test stands, but the technology should be generally applicable to the ISHM vision. This paper outlines some fundamental issues in the development of intelligent sensors under two categories: Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).
Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images
Izquierdo, Alberto; Suárez, Luis; Suárez, David
2017-01-01
Using arrays of digital MEMS (Micro-Electro-Mechanical System) microphones with FPGA-based (Field-Programmable Gate Array) acquisition/processing systems makes it possible to build systems with hundreds of sensors at reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. The virtual array is implemented by moving a physical array of 64 (8 × 8) microphones through a grid of 10 × 10 positions using a 2D positioning system, giving the virtual array a spatial aperture of 1 × 1 m2. Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array were analyzed; because the array is large compared with the distance between it and the target (a mannequin), the beamforming algorithms must assume spherical rather than planar wavefronts. Finally, acoustic images of the mannequin were obtained for different frequency and range values, showing high angular resolution and the possibility of identifying different parts of the mannequin's body. PMID:29295485
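Near-field focusing of such an array reduces to delay-and-sum beamforming with spherical-wave (per-sensor distance) delays, which can be sketched as follows; rounding delays to whole samples is a simplification of a real SODAR pipeline:

```python
import math

C = 343.0  # speed of sound in air, m/s

def focus_delays(mic_positions, focal_point):
    """Near-field (spherical-wave) steering: per-microphone propagation delays
    to a focal point, referenced to the closest microphone."""
    dists = [math.dist(p, focal_point) for p in mic_positions]
    d0 = min(dists)
    return [(d - d0) / C for d in dists]

def delay_and_sum(signals, delays, fs):
    """Align each channel by its (rounded) sample delay and average them.
    Toy implementation; fractional-delay filters would be used in practice."""
    shifts = [round(d * fs) for d in delays]
    n = min(len(s) - k for s, k in zip(signals, shifts))
    return [sum(s[i + k] for s, k in zip(signals, shifts)) / len(signals)
            for i in range(n)]
```

Scanning the focal point over a plane of candidate positions and mapping the output power at each point yields the acoustic image.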
A miniature disposable radio (MiDR) for unattended ground sensor systems (UGSS) and munitions
NASA Astrophysics Data System (ADS)
Wells, Jeffrey S.; Wurth, Timothy J.
2004-09-01
Unattended and tactical sensors are used by the U.S. Army's Future Combat Systems (FCS) and Objective Force Warrior (OFW) to detect and identify enemy targets on the battlefield. The radios being developed as part of the Networked Sensors for the Objective Force (NSOF) are too costly and too large to deploy in missions requiring throw-away hardware. A low-cost miniature radio is required to satisfy the communication needs of unmanned sensor and munitions systems that are deployed in a disposable manner. A low-cost miniature disposable communications suite is achieved by leveraging the commercial off-the-shelf market and employing a miniature universal frequency-conversion architecture. Employing universal frequency architecture in a commercially available communication unit delivers a robust disposable transceiver that can operate at virtually any frequency. A low-cost RF communication radio has applicability in the commercial, homeland defense, military, and other government markets. Specific uses include perimeter monitoring, infrastructure defense, unattended ground sensors, tactical sensors, and border patrol. This paper describes a low-cost radio architecture that meets the requirements of throw-away radios and can be easily modified or tuned to virtually any operating frequency required for a specific mission.
Jung, Eui-Hyun; Park, Yong-Jin
2008-01-01
In recent years, several protocol-bridge research projects have been announced that enable seamless integration of Wireless Sensor Networks (WSNs) with the TCP/IP network. These studies have ensured transparent end-to-end communication between the two network sides in a node-centric manner, and researchers expect this integration to trigger the development of various application domains. However, prior projects have not fully explored some essential features of WSNs, especially the reusability of sensing data and data-centric communication. To resolve these issues, we suggest a new protocol-bridge system named TinyONet. In TinyONet, virtual sensors act as virtual counterparts of physical sensors and are dynamically grouped into a functional entity called a Slice. Instead of interacting directly with individual physical sensors, each sensor application uses its own WSN service provided by Slices. If a new kind of service is required in TinyONet, the corresponding function can be added dynamically at runtime. Besides data-centric communication, TinyONet also supports node-centric communication and synchronous access. To show the effectiveness of the system, we implemented TinyONet on an embedded Linux machine and evaluated it in several experimental scenarios. PMID:27873968
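The virtual-sensor/Slice relationship described above can be sketched in a few lines; the class names and the aggregation service are illustrative assumptions, not TinyONet's actual API:

```python
# Sketch of the virtual-sensor/Slice idea: applications query a Slice (a
# dynamic group of virtual sensors), never the physical nodes themselves.
class VirtualSensor:
    """Virtual counterpart of one physical WSN node."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.last_reading = None

    def update(self, value):
        # Called when the protocol bridge receives data from the node;
        # caching here is what makes the sensing data reusable.
        self.last_reading = value

class Slice:
    """A dynamic group of virtual sensors exposing one service."""
    def __init__(self, sensors, aggregate):
        self.sensors = sensors
        self.aggregate = aggregate  # the data-centric service this Slice provides

    def query(self):
        readings = [s.last_reading for s in self.sensors
                    if s.last_reading is not None]
        return self.aggregate(readings) if readings else None
```

A new service is added at runtime simply by constructing another Slice over the same cached virtual sensors.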
Self-localization of wireless sensor networks using self-organizing maps
NASA Astrophysics Data System (ADS)
Ertin, Emre; Priddy, Kevin L.
2005-03-01
Recently there has been a renewed interest in the notion of deploying large numbers of networked sensors for applications ranging from environmental monitoring to surveillance. In a typical scenario a number of sensors are distributed in a region of interest. Each sensor is equipped with sensing, processing, and communication capabilities. The information gathered from the sensors can be used to detect, track, and classify objects of interest. For many applications, a sensor's location is crucial to interpreting the data it collects. Scalability requirements dictate sensor nodes that are inexpensive devices without dedicated localization hardware such as GPS. Therefore, the network has to rely on information collected within the network to self-localize. In the literature, a number of algorithms have been proposed for network localization that use measurements informative of range, angle, or proximity between nodes. Recent work by Patwari and Hero relies on sensor data without explicit range estimates; the assumption is that the correlation structure in the data is a monotone function of the inter-sensor distances. In this paper we propose a new method based on unsupervised learning techniques to extract location information from the sensor data itself. We consider a grid of virtual nodes and fit the grid to the actual sensor network data using self-organizing maps. Known sensor network geometry can then be used to rotate and scale the grid to a global coordinate system. Finally, we illustrate how the virtual nodes' location information can be used to track a target.
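The core step, fitting a grid of virtual nodes to the data with a self-organizing map, can be illustrated in one dimension; the parameters and schedules below are generic SOM choices, not the paper's:

```python
import random

def train_som(samples, n_units=5, iters=3000, seed=1):
    """Tiny 1-D self-organizing map: virtual grid nodes are pulled toward the
    data so their final positions reflect the sensors' spatial ordering."""
    rng = random.Random(seed)
    units = [rng.random() for _ in range(n_units)]
    for t in range(iters):
        x = rng.choice(samples)
        lr = 0.5 * (1.0 - t / iters)                   # decaying learning rate
        radius = max(1, round(2 * (1.0 - t / iters)))  # shrinking neighborhood
        bmu = min(range(n_units), key=lambda i: abs(units[i] - x))
        for i in range(n_units):
            if abs(i - bmu) <= radius:                 # update BMU and neighbors
                units[i] += lr * (x - units[i])
    return units
```

In the paper's setting the samples would be sensor-data features whose correlations encode distance, and the trained grid would then be rotated and scaled into global coordinates using known network geometry.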
Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent
Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we give an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program on the development of OLM algorithms that use sensor outputs, in combination with other available information, to 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions:
• Signal validation
• Virtual sensing
• Sensor response-time assessment
These algorithms incorporate, at their base, a Gaussian-process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods being evaluated in this work can compute confidence levels along with the predicted sensor responses and, as a result, may have the potential to compensate for sensor drift in real time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data).
Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM applicable to monitoring sensor calibration state in nuclear power plants.
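One building block named in this abstract, predicting a sensor's value from a correlated plant variable and flagging drift, can be sketched with simple kernel (Nadaraya-Watson) regression. This is an illustrative stand-in, not the program's actual GP-based implementation; the data and threshold are invented.

```python
import math

def kernel_predict(x_train, y_train, x, h=1.0):
    """Nadaraya-Watson kernel regression: predict one sensor's value from a
    correlated process variable using a Gaussian kernel of bandwidth h."""
    w = [math.exp(-((xt - x) ** 2) / (2.0 * h * h)) for xt in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

def drift_flag(measured, predicted, threshold=0.5):
    """Flag a sensor whose reading strays from its virtual-sensor prediction."""
    return abs(measured - predicted) > threshold

# Historical pairs from a healthy sensor (here y = 2x), then a live check.
x_hist = [0.0, 1.0, 2.0, 3.0, 4.0]
y_hist = [0.0, 2.0, 4.0, 6.0, 8.0]
pred = kernel_predict(x_hist, y_hist, 2.0)   # expect a value close to 4.0
```

A GP would add the predictive variance the abstract mentions, letting the drift threshold scale with model confidence.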
Software as a service approach to sensor simulation software deployment
NASA Astrophysics Data System (ADS)
Webster, Steven; Miller, Gordon; Mayott, Gregory
2012-05-01
Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision and Electronic Sensors Directorate (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, this enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and let the domain community benefit from immediate deployment of lessons learned.
Sensor supervision and multiagent commanding by means of projective virtual reality
NASA Astrophysics Data System (ADS)
Rossmann, Juergen
1998-10-01
When autonomous systems with multiple agents are considered, conventional control and supervision technologies are often inadequate because the amount of information available is presented in a way that effectively overwhelms the user with displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities with new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework for Projective Virtual Reality. The key idea of this new approach to an intuitively operable man-machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the changes, and have an action-planning component automatically generate task descriptions for the agents involved, so that actions carried out by users in the virtual world are projected into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach splits the job between task deduction in the VR and task 'projection' onto the physical automation components by the automatic action-planning component. Besides describing the realized projective virtual reality system, the paper also describes in detail the metaphors and visualization aids used to present different types of (e.g. sensor) information in an intuitively comprehensible manner.
Migrating EO/IR sensors to cloud-based infrastructure as service architectures
NASA Astrophysics Data System (ADS)
Berglie, Stephen T.; Webster, Steven; May, Christopher M.
2014-06-01
The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full-motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IaaS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications for higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.
Integrated Sensor Architecture (ISA) for Live Virtual Constructive (LVC) Environments
2014-03-01
connect, publish their needs and capabilities, and interact with other systems even on disadvantaged networks. Within the ISA project, three levels of...In 2003 the Networked Sensors for the Future Force (NSFF) Advanced Technology Demonstration...While this combination is less optimal over disadvantaged networks, and we do not recommend it there, TCP and TLS perform adequately over networks with
ERIC Educational Resources Information Center
Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.
2016-01-01
Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected…
Air-condition Control System of Weaving Workshop Based on LabVIEW
NASA Astrophysics Data System (ADS)
Song, Jian
An air-conditioning measurement and control system based on LabVIEW is put forward to effectively control the environmental targets in a weaving workshop. The system is built on virtual instrument technology using NI's LabVIEW development platform. It is composed of an upper PC, central control nodes based on the CC2530, sensor nodes, sensor modules, and executive devices. A fuzzy control algorithm is employed to achieve accurate control of temperature and humidity. A user-friendly man-machine interaction interface is designed with virtual instrument technology at the core of the software. Experiments show that the measurement and control system runs stably and reliably and meets the functional requirements for controlling the weaving workshop.
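The fuzzy control idea can be sketched with a single-input rule base using triangular membership functions. This is an illustrative toy, not the paper's actual rule set or tuning: error is mapped through "too cold / ok / too hot" sets and defuzzified to a heater/cooler command.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_heater(error):
    """error = setpoint - temperature (deg C).
    Output in [-1, 1]: +1 full heating, -1 full cooling."""
    mu_cold = tri(error, 0.0, 2.0, 4.0)     # room too cold -> heat
    mu_ok   = tri(error, -2.0, 0.0, 2.0)    # near setpoint  -> idle
    mu_hot  = tri(error, -4.0, -2.0, 0.0)   # room too hot   -> cool
    # Weighted-average (centroid-like) defuzzification of the rule outputs.
    num = mu_cold * 1.0 + mu_ok * 0.0 + mu_hot * (-1.0)
    den = mu_cold + mu_ok + mu_hot
    return num / den if den else 0.0
```

For example, an error of +1 °C blends the "cold" and "ok" rules equally and yields a half-power heating command.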
Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.
Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell
2011-06-01
This paper presents the design of a tele-robotic microsurgical platform intended for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces, and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.
A convertor and user interface to import CAD files into worldtoolkit virtual reality systems
NASA Technical Reports Server (NTRS)
Wang, Peter Hor-Ching
1996-01-01
Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) that can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of presence in that VW. The NASA/MSFC Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as a VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1,020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system that does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and Intergraph EMS and CATIA stereolithography (STL) file formats.
WTK functions are object-oriented in their naming convention, are grouped into classes, and provide a straightforward C language interface. When using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2007-01-01
This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and the secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability between integrated, intelligent sensor webs, both terrestrially and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults by Kalman filter banks and reconstruct the signal by a real-time on-board adaptive model combining a simplified real-time model with an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware, and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, a semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid-prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
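The filter-bank diagnosis idea can be sketched with one scalar Kalman filter per redundant sensor and a consensus check. This is a simplified stand-in for the paper's bank of fault-hypothesis filters; the noise parameters and threshold are illustrative assumptions.

```python
def kalman_update(x, p, z, r=1.0, q=0.01):
    """One scalar Kalman step tracking a (nearly) constant state."""
    p = p + q                  # predict: state constant, uncertainty grows
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # correct with the measurement innovation
    p = (1.0 - k) * p
    return x, p

def diagnose(readings_per_sensor, threshold=2.0):
    """Run one filter per redundant sensor; flag any sensor whose final
    estimate deviates from the median estimate (a toy version of the
    paper's Kalman-filter-bank hypothesis test)."""
    estimates = []
    for zs in readings_per_sensor:
        x, p = zs[0], 1.0
        for z in zs[1:]:
            x, p = kalman_update(x, p, z)
        estimates.append(x)
    median = sorted(estimates)[len(estimates) // 2]
    return [abs(e - median) > threshold for e in estimates]

# Sensors A and B agree near 10; sensor C is biased near 15 (faulty).
flags = diagnose([[10.0, 10.1, 9.9, 10.0],
                  [10.1, 9.9, 10.0, 10.1],
                  [15.0, 15.2, 14.8, 15.1]])
```

In the paper's scheme, the flagged sensor's signal would then be replaced by the on-board adaptive model's reconstruction.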
Determining Spinal Posture for Encumbered Airmen in Crewstations Using the Luna Positioning Sensor
to characterize design-relevant body size and shape variation as it applies to our service personnel. Of particular interest is cockpit accommodation...confidence in virtual assessments. For this effort, the Luna, Inc. fiber optic positioning sensor was evaluated to determine the utility of this
NASA Astrophysics Data System (ADS)
Milner, G. Martin
2005-05-01
ChemSentry is a portable system used to detect, identify, and quantify chemical warfare (CW) agents. Electrochemical (EC) cell sensor technology is used for blood agents, and an array of surface acoustic wave (SAW) sensors is used for nerve and blister agents. The combination of the EC cell and the SAW array provides sufficient sensor information to detect, classify, and quantify all CW agents of concern using smaller, lighter, lower-cost units. Initial development of the SAW array and processing was a key challenge for ChemSentry, requiring several years of fundamental testing of polymers and coating methods to finalize the sensor array design in 2001. Following the finalization of the SAW array, nearly three years of intensive testing in both laboratory and field environments were required in order to gather sufficient data to fully understand the response characteristics. Virtually unbounded permutations of agent and environmental characteristics must be considered in order to operate against all agents and all environments of interest to the U.S. military and other potential users of ChemSentry. The resulting signal processing design, matched to this extensive body of measured data (over 8,000 agent challenges and 10,000 hours of ambient data), is considered to be a significant advance in the state of the art for CW agent detection.
2004-02-09
VIRTUAL COLLABORATION: ADVANTAGES AND DISADVANTAGES IN THE PLANNING AND EXECUTION OF OPERATIONS. Naval War College, Newport, R.I. ...warfare is not one system; it is a system of systems from sensors to information flow. In analyzing the specific advantages and disadvantages of one of...
In-home virtual reality videogame telerehabilitation in adolescents with hemiplegic cerebral palsy.
Golomb, Meredith R; McDonald, Brenna C; Warden, Stuart J; Yonkman, Janell; Saykin, Andrew J; Shirley, Bridget; Huber, Meghan; Rabin, Bryan; Abdelbaky, Moustafa; Nwosu, Michelle E; Barkat-Masih, Monica; Burdea, Grigore C
2010-01-01
Golomb MR, McDonald BC, Warden SJ, Yonkman J, Saykin AJ, Shirley B, Huber M, Rabin B, AbdelBaky M, Nwosu ME, Barkat-Masih M, Burdea GC. In-home virtual reality videogame telerehabilitation in adolescents with hemiplegic cerebral palsy. To investigate whether in-home remotely monitored virtual reality videogame-based telerehabilitation in adolescents with hemiplegic cerebral palsy can improve hand function and forearm bone health, and demonstrate alterations in motor circuitry activation. A 3-month proof-of-concept pilot study. Virtual reality videogame-based rehabilitation systems were installed in the homes of 3 participants and networked via secure Internet connections to the collaborating engineering school and children's hospital. Adolescents (N=3) with severe hemiplegic cerebral palsy. Participants were asked to exercise the plegic hand 30 minutes a day, 5 days a week using a sensor glove fitted to the plegic hand and attached to a remotely monitored videogame console installed in their home. Games were custom developed, focused on finger movement, and included a screen avatar of the hand. Standardized occupational therapy assessments, remote assessment of finger range of motion (ROM) based on sensor glove readings, assessment of plegic forearm bone health with dual-energy x-ray absorptiometry (DXA) and peripheral quantitative computed tomography (pQCT), and functional magnetic resonance imaging (fMRI) of hand grip task. All 3 adolescents showed improved function of the plegic hand on occupational therapy testing, including increased ability to lift objects, and improved finger ROM based on remote measurements. The 2 adolescents who were most compliant showed improvements in radial bone mineral content and area in the plegic arm. 
For all 3 adolescents, fMRI during grip task contrasting the plegic and nonplegic hand showed expanded spatial extent of activation at posttreatment relative to baseline in brain motor circuitry (eg, primary motor cortex and cerebellum). Use of remotely monitored virtual reality videogame telerehabilitation appears to produce improved hand function and forearm bone health (as measured by DXA and pQCT) in adolescents with chronic disability who practice regularly. Improved hand function appears to be reflected in functional brain changes. Copyright (c) 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks
Vollmer, Todd; Manic, Milos
2014-05-01
A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control, and human-induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability of network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.
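The final step of such an algorithm, emitting a Honeyd configuration from gathered host facts, might look like the sketch below. The input dictionary schema is invented for illustration (it is not Ettercap's actual XML layout); only the emitted create/set/add/bind directives follow Honeyd's configuration syntax.

```python
def honeyd_config(hosts):
    """Emit a minimal Honeyd configuration from scanned host facts:
    one virtual-host template per observed device, bound to its IP."""
    lines = []
    for h in hosts:
        # Template name derived from the device's IP address.
        name = "host_" + h["ip"].replace(".", "_")
        lines.append(f"create {name}")
        # Emulated network-stack personality for nmap-style fingerprinting.
        lines.append(f'set {name} personality "{h["os"]}"')
        for port in h.get("open_ports", []):
            lines.append(f"add {name} tcp port {port} open")
        lines.append(f"bind {h['ip']} {name}")
    return "\n".join(lines)

cfg = honeyd_config([{"ip": "192.168.1.2",
                      "os": "Linux 2.4.20",
                      "open_ports": [22, 80]}])
```

A re-scan of the network would regenerate this text, which is what makes the honeypot population self-updating.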
Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments
Rhodes, Paul A.; Anderson, Todd O.
2012-01-01
To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772
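The artificial-evolution loop described above can be sketched as a (1+1) evolution strategy minimizing a stand-in objective. The decay schedule and the toy objective below are assumptions for illustration; in the paper the fitness would be olfactorimotor performance such as time to locate a plume source.

```python
import random

def evolve(fitness, x0, sigma0=0.5, generations=200, seed=1):
    """(1+1) evolution strategy: mutate the parameter vector and keep the
    child only if it improves fitness (lower is better)."""
    rng = random.Random(seed)
    best, best_f = list(x0), fitness(x0)
    for g in range(generations):
        sigma = sigma0 * (0.99 ** g)        # simple decay schedule (assumed)
        child = [p + rng.gauss(0.0, sigma) for p in best]
        f = fitness(child)
        if f < best_f:                      # elitist acceptance
            best, best_f = child, f
    return best, best_f

# Toy objective standing in for "time to locate the plume source";
# its minimum lies at parameters (1, -2).
def search_time(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

best, best_f = evolve(search_time, [0.0, 0.0])
```

In the platform described, each fitness evaluation would be a full virtual-world plume-search trial rather than a closed-form function.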
Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo
2014-05-19
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
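The layer-recurrent architecture behind this virtual sensor can be sketched as a minimal Elman cell with a single hidden unit. The weights below are arbitrary illustrative values, not a trained model; a real implementation would learn them from spindle-power recordings.

```python
import math

def elman_step(x, h, w_in, w_rec, w_out):
    """One step of a minimal Elman (layer-recurrent) cell: the hidden state
    feeds back into itself, giving the network memory of past inputs."""
    h = math.tanh(w_in * x + w_rec * h)
    return w_out * h, h

def estimate_series(power, w_in=0.5, w_rec=0.3, w_out=2.0):
    """Map a spindle-power sequence to a toy wheel-wear estimate series."""
    h, out = 0.0, []
    for x in power:
        y, h = elman_step(x, h, w_in, w_rec, w_out)
        out.append(y)
    return out

wear = estimate_series([1.0, 1.0, 1.0])
```

Because the hidden state carries over between samples, the estimate for a constant power input rises over the first few steps, which is the memory property that suits wear (a cumulative quantity) better than a feed-forward map.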
A strategy for computer-assisted mental practice in stroke rehabilitation.
Gaggioli, Andrea; Meneghini, Andrea; Morganti, Francesca; Alcaniz, Mariano; Riva, Giuseppe
2006-12-01
To investigate the technical and clinical viability of using computer-facilitated mental practice in the rehabilitation of upper-limb hemiparesis following stroke. A single-case study. Academic-affiliated rehabilitation center. A 46-year-old man with stable motor deficit of the upper right limb following subcortical ischemic stroke. Three computer-enhanced mental practice sessions per week at the rehabilitation center, in addition to usual physical therapy. A custom-made virtual reality system equipped with arm-tracking sensors was used to guide mental practice. The system was designed to superimpose over the (unseen) paretic arm a virtual reconstruction of the movement registered from the nonparetic arm. The laboratory intervention was followed by a 1-month home-rehabilitation program making use of a portable display device. Pretreatment and posttreatment clinical assessment measures were the upper-extremity scale of the Fugl-Meyer Assessment of Sensorimotor Impairment and the Action Research Arm Test. Performance of the affected arm was evaluated using the healthy arm as the control condition. The patient's paretic limb improved after the first phase of intervention, with modest increases after home rehabilitation, as indicated by functional assessment scores and sensor data. Results suggest that technology-supported mental training is a feasible and potentially effective approach for improving motor skills after stroke.
Learning a detection map for a network of unattended ground sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Hung D.; Koch, Mark William
2010-03-01
We have developed algorithms to automatically learn a detection map of a deployed sensor field for a virtual presence and extended defense (VPED) system without a priori knowledge of the local terrain. The VPED system is an unattended network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has the ability to detect and classify moving targets at a limited range. By using a network of pods we can form a virtual perimeter with each pod responsible for a certain section of the perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus, a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being installed at a site by a mobile deployment unit (MDU). The MDU wears a GPS unit, so the system not only knows when it can detect the MDU, but also the MDU's location. In this paper, we demonstrate how to handle anisotropic sensor configurations, geography, and soil conditions.
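The core estimation step, turning an instrumented MDU walk-through into a per-cell detection map, can be sketched as follows. The grid size and data layout are assumptions for illustration: each GPS fix is paired with whether any pod detected the MDU at that instant.

```python
def learn_detection_map(track, detections, cell=10.0):
    """Estimate P(detect) per grid cell from an instrumented walk-through.
    `track` holds the MDU's GPS positions; `detections` marks whether any
    pod detected the MDU at the corresponding instant."""
    visits, hits = {}, {}
    for (x, y), det in zip(track, detections):
        c = (int(x // cell), int(y // cell))   # quantize position to a cell
        visits[c] = visits.get(c, 0) + 1
        if det:
            hits[c] = hits.get(c, 0) + 1
    # Empirical detection probability for every visited cell.
    return {c: hits.get(c, 0) / n for c, n in visits.items()}

track = [(5.0, 5.0), (5.0, 6.0), (25.0, 5.0), (25.0, 6.0)]
dets = [True, True, True, False]
dmap = learn_detection_map(track, dets)
```

Cells never visited simply stay absent from the map, which matches the paper's point that coverage is learned only where the MDU actually walks.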
Study on Impact Acoustic—Visual Sensor-Based Sorting of ELV Plastic Materials
Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu
2017-01-01
This paper concentrates on a study of a novel multi-sensor aided method using acoustic and visual sensors for detection, recognition, and separation of end-of-life vehicles' (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the remaining problems results from black and dark-dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting by using impact resonant acoustic emissions (AEs) and laser triangulation scanning is introduced. A pilot sorting system consisting of a 3-dimensional visual sensor and an acoustic sensor was also established; two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of visual data as well as acoustic signals were realized by virtual instruments. Impact acoustic features were recognized using FFT-based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to their respective modified materials.
The probability of scrap material recognition, i.e., the theoretical sorting efficiency, could reach about 50% between PP and PP-EPDM and about 75% between ABS and ABS-PC for diameters ranging from 14 mm to 23 mm; with exclusion of abnormal impacts, the actual separation rates were 39.2% for PP and 41.4% for PP/EPDM scraps, as well as 62.4% for ABS and 70.8% for ABS/PC scraps. Within the diameter range of 8-13 mm, only 25% of PP and 27% of PP/EPDM scraps, as well as 43% of ABS and 47% of ABS/PC scraps, were finally separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials; it is an effective method for ASR reduction and recycling. PMID:28594341
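The FFT-based power spectral density feature extraction can be sketched with a naive DFT, which is adequate for short impact snippets; a production system would use an FFT library, and the test signal here is an invented stand-in for a recorded impact.

```python
import cmath
import math

def power_spectrum(signal, fs):
    """Naive DFT power spectral density of a real-valued snippet."""
    n = len(signal)
    freqs, psd = [], []
    for k in range(n // 2 + 1):            # non-negative frequency bins only
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        freqs.append(k * fs / n)
        psd.append(abs(s) ** 2 / n)
    return freqs, psd

def dominant_frequency(signal, fs):
    """Return the frequency of the strongest non-DC spectral component,
    a simple feature for telling one plastic's impact ring from another."""
    freqs, psd = power_spectrum(signal, fs)
    k = max(range(1, len(psd)), key=psd.__getitem__)   # skip the DC bin
    return freqs[k]

# A 100 Hz impact-ring stand-in sampled at 1 kHz for 100 samples.
fs = 1000.0
sig = [math.sin(2.0 * math.pi * 100.0 * t / fs) for t in range(100)]
```

A classifier would compare such dominant-frequency (or full PSD) features against per-material references, which is the recognition step the abstract reports.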
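The recognition step behind these sorting rates, FFT-based power spectral density analysis of impact acoustic emissions, can be sketched as follows. The sampling rate, band split, resonant frequencies, and nearest-centroid classifier are illustrative assumptions, not the study's actual parameters.

```python
# Sketch of FFT-based power spectral density (PSD) feature extraction for
# impact acoustic emission (AE) signals, used to discriminate plastic types.
# All signal parameters and class centroids here are illustrative.
import numpy as np

def psd_features(signal, fs, n_bands=4):
    """Return mean PSD power in n_bands equal frequency bands up to fs/2."""
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
    psd = (np.abs(spectrum) ** 2) / len(signal)
    bands = np.array_split(psd, n_bands)
    return np.array([band.mean() for band in bands])

def classify(features, centroids):
    """Nearest-centroid classification of a PSD feature vector."""
    names = list(centroids)
    dists = [np.linalg.norm(features - centroids[k]) for k in names]
    return names[int(np.argmin(dists))]

# Synthetic impacts: a stiffer material rings at a higher resonant frequency.
fs = 48_000
t = np.arange(0, 0.01, 1 / fs)
ring = lambda f: np.sin(2 * np.pi * f * t) * np.exp(-t * 800)

centroids = {"PP": psd_features(ring(3_000), fs),
             "ABS": psd_features(ring(9_000), fs)}
label = classify(psd_features(ring(9_100), fs), centroids)
```

A real system would build the centroids from many measured impacts per material rather than from single synthetic tones.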
Autonomous Satellite Operations Via Secure Virtual Mission Operations Center
NASA Technical Reports Server (NTRS)
Miller, Eric; Paulsen, Phillip E.; Pasciuto, Michael
2011-01-01
The science community is interested in improving their ability to respond to rapidly evolving, transient phenomena via autonomous rapid reconfiguration, which derives from the ability to assemble separate but collaborating sensors and data forecasting systems to meet a broad range of research and application needs. Current satellite systems typically require human intervention to respond to triggers from dissimilar sensor systems. Additionally, satellite ground services often need to be coordinated days or weeks in advance. Finally, the boundaries between the various sensor systems that make up such a Sensor Web are defined by such things as link delay and connectivity, data and error rate asymmetry, data reliability, quality of service provisions, and trust, complicating autonomous operations. Over the past ten years, researchers from the NASA Glenn Research Center (GRC), General Dynamics, Surrey Satellite Technology Limited (SSTL), Cisco, Universal Space Networks (USN), the U.S. Geological Survey (USGS), the Naval Research Laboratory, the DoD Operationally Responsive Space (ORS) Office, and others have worked collaboratively to develop a virtual mission operations capability. Called VMOC (Virtual Mission Operations Center), this new capability allows cross-system queuing of dissimilar mission unique systems through the use of a common security scheme and published application programming interfaces (APIs). Collaborative VMOC demonstrations over the last several years have supported the standardization of spacecraft to ground interfaces needed to reduce costs, maximize space effects to the user, and allow the generation of new tactics, techniques and procedures that lead to responsive space employment.
Sensing sheets based on large area electronics for fatigue crack detection
NASA Astrophysics Data System (ADS)
Yao, Yao; Glisic, Branko
2015-03-01
Reliable early-stage damage detection requires continuous structural health monitoring (SHM) over large areas of a structure, with high spatial resolution of sensors. This paper presents the development stage of prototype strain sensing sheets based on Large Area Electronics (LAE), in which thin-film strain gauges and control circuits are integrated on flexible electronics and deposited on a polyimide sheet that can cover large areas. These sensing sheets were applied to fatigue crack detection on small-scale steel plates. Two types of sensing-sheet interconnects were designed and manufactured, and dense arrays of strain gauge sensors were assembled onto the interconnects. In total, four strain sensing sheets (two for each design type) were created and tested, which were sensitive to strain at virtually every point over the whole sensing sheet area. The sensing sheets were bonded to small-scale steel plates, each with a notch on the boundary so that fatigue cracks could be generated under cyclic loading. The fatigue tests were carried out at the Carleton Laboratory of Columbia University, where the steel plates were attached through a fixture to the loading machine that applied cyclic fatigue load. Fatigue cracks then occurred and propagated across the steel plates, leading to the failure of these test samples. The strain sensor close to the notch successfully detected the initiation of the fatigue crack and localized the damage on the plate. The strain sensor away from the crack successfully detected the propagation of the fatigue crack based on the time history of measured strain. Overall, the results of the fatigue tests validated the general principles of the strain sensing sheets for crack detection.
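Detecting a crack from a measured strain time history, in the spirit of the sensing sheets above, can be sketched as flagging a sustained shift from a trailing baseline. The window size, threshold, and strain values below are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of crack detection from a strain time history: a fatigue
# crack passing near a gauge shifts the local strain, so a large deviation
# from the trailing-window mean is flagged. Thresholds are illustrative.
def detect_shift(strain, window=5, threshold=50.0):
    """Return index of first large deviation from the trailing-window mean,
    or None. Strain values are in microstrain."""
    for i in range(window, len(strain)):
        baseline = sum(strain[i - window:i]) / window
        if abs(strain[i] - baseline) > threshold:
            return i
    return None

# Stable cyclic-mean strain, then a jump when a crack reaches the gauge.
history = [300.0] * 20 + [420.0] * 5
crack_at = detect_shift(history)
```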
Open core control software for surgical robots
Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B.; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo
2010-01-01
Object Nowadays, patients and doctors in the operating room are surrounded by many medical devices resulting from recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but these surgical robots are not yet open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" would become possible through collaboration among surgical robots, various kinds of sensors, navigation systems, and so on. Meanwhile, most academic software developments for surgical robots are "home-made" in their research institutions and not open to the public. Therefore, open-source control software for surgical robots can be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. Materials and methods In general, control software has hardware dependencies based on actuators, sensors, and various kinds of internal devices, so it cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation during asynchronous data transactions over the network.
In the Open Core Control software, several techniques were introduced for this purpose. The virtual fixture is a well-known technique that provides a "force guide" to help operators perform precise manipulation with a master–slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not part of the Open Core Control system itself; however, functions such as the virtual fixture cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reflection force when the operator tries to move out of the pre-defined accessible area during surgery. Results The Open Core Control software was implemented on a surgical master–slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot with a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. Conclusion This paper described the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration work. The Open Core Control software is being developed to become a widely used platform for surgical robots. Safety issues are essential for the control software of these complex medical devices.
It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. To comply with these regulations, it is important to develop a self-test environment. Therefore, a test environment is now under development to test various kinds of interference in the operating room, such as noise from an electric knife, taking into account safety and test-environment regulations such as ISO 13849 and IEC 61508. The Open Core Control software is currently being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field. From this perspective, the Open Core Control software can be expected to make contributions in this field. PMID:20033506
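The reflection-force behavior of the virtual fixture described above can be sketched as a simple boundary spring: when the tool tip leaves the pre-defined accessible area, a restoring force proportional to the penetration depth is rendered on the console. The spherical accessible area and the stiffness value are illustrative assumptions; the actual system receives the area from a navigation system over OpenIGTLink.

```python
# Sketch of a virtual fixture as a "force guide": if the slave tool tip
# leaves a pre-defined accessible sphere, the console renders a restoring
# force proportional to the penetration depth. Geometry and stiffness
# values are illustrative assumptions.
import math

def fixture_force(tip, center, radius, stiffness=200.0):
    """Return (fx, fy, fz) pushing the tip back inside the sphere."""
    d = [t - c for t, c in zip(tip, center)]
    dist = math.sqrt(sum(x * x for x in d))
    if dist <= radius:
        return (0.0, 0.0, 0.0)      # inside the accessible area: no force
    depth = dist - radius            # penetration beyond the boundary
    unit = [x / dist for x in d]
    return tuple(-stiffness * depth * u for u in unit)

inside = fixture_force((0.0, 0.0, 0.01), (0.0, 0.0, 0.0), 0.05)
outside = fixture_force((0.0, 0.0, 0.08), (0.0, 0.0, 0.0), 0.05)
```

With a 0.05 m sphere, a tip 0.03 m past the boundary along z feels a force of 6 N back toward the center under the assumed stiffness.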
Sensor data fusion for textured reconstruction and virtual representation of alpine scenes
NASA Astrophysics Data System (ADS)
Häufel, Gisela; Bulatov, Dimitri; Solbrig, Peter
2017-10-01
The concept of remote sensing is to provide information about a wide area without making physical contact with that area. If, in addition to satellite imagery, images and videos taken by drones provide more up-to-date data at a higher resolution, or accurate vector data is downloadable from the Internet, one speaks of sensor data fusion. The concept of sensor data fusion is relevant for many applications, such as virtual tourism, automatic navigation, hazard assessment, etc. In this work, we describe sensor data fusion aiming to create a semantic 3D model of an extremely interesting yet challenging dataset: an alpine region in Southern Germany. A particular challenge of this work is that rock faces, including overhangs, are present in the input airborne laser point cloud. The proposed procedure for identification and reconstruction of overhangs from point clouds comprises four steps: point cloud preparation, filtering out vegetation, mesh generation, and texturing. Further object types are extracted in several interesting subsections of the dataset: building models with textures from UAV (Unmanned Aerial Vehicle) videos, hills reconstructed as generic surfaces and textured by the orthophoto, individual trees detected by the watershed algorithm, as well as vector data for roads retrieved from openly available shapefiles and GPS-device tracks. We pursue geo-specific reconstruction by assigning texture and width to roads of several pre-determined types and modeling isolated trees and rocks using commercial software. For visualization and simulation of the area, we have chosen the simulation system Virtual Battlespace 3 (VBS3). It becomes clear that the proposed concept of sensor data fusion allows a coarse reconstruction of a large scene and, at the same time, an accurate and up-to-date representation of its relevant subsections, in which simulation can take place.
Subscale Test Methods for Combustion Devices
NASA Technical Reports Server (NTRS)
Anderson, W. E.; Sisco, J. C.; Long, M. R.; Sung, I.-K.
2005-01-01
Stated goals for long-life LREs have been between 100 and 500 cycles: 1) inherent technical difficulty of accurately defining the transient and steady-state thermochemical environments and structural response (strain); 2) limited statistical basis on failure mechanisms and effects of design and operational variability; and 3) very high test costs and budget-driven need to protect test hardware (aversion to test-to-failure). Ambitious goals will require development of new databases: a) advanced materials, e.g., tailored composites with virtually unlimited property variations; b) innovative functional designs to exploit the full capabilities of advanced materials; and c) different cycles/operations. Subscale testing is one way to address technical and budget challenges: 1) prototype subscale combustors exposed to controlled simulated conditions; 2) complementary to conventional laboratory specimen database development; 3) instrumented with sensors to measure thermostructural response; and 4) coupled with analysis.
González, Fernando Cornelio Jiménez; Villegas, Osslan Osiris Vergara; Ramírez, Dulce Esperanza Torres; Sánchez, Vianey Guadalupe Cruz; Domínguez, Humberto Ochoa
2014-01-01
Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabVIEW, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes for the temperature sensor to stabilize before detecting hyperthermia, hypothermia, or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia. PMID:25230306
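The classification of readings into hyperthermia, hypothermia, tachycardia, and bradycardia conditions mentioned above can be sketched with simple thresholds. The cut-off values below are common clinical rules of thumb assumed for illustration, not the paper's calibrated values.

```python
# Illustrative threshold classification of the two vital signs the system
# monitors. Cut-offs are assumed rules of thumb, not the paper's values.
def temperature_status(celsius):
    if celsius < 35.0:
        return "hypothermia"
    if celsius > 37.5:
        return "hyperthermia"
    return "normal"

def heart_rate_status(bpm):
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal"

statuses = (temperature_status(38.2), heart_rate_status(55))
```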
Robust controller designs for second-order dynamic system: A virtual passive approach
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1990-01-01
A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
Virtual Simulation Capability for Deployable Force Protection Analysis (VSCDFP) FY 15 Plan
2014-07-30
Unmanned Aircraft Systems (SUAS) outfitted with a baseline two-axis steerable "Infini-spin" electro-optic/infrared (EO/IR) sensor payload. The current...Payload (EPRP) enhanced sensor system to the Puma SUAS will be beneficial for Soldiers executing RCP mission sets. • Develop the RCP EPRP Concept of
New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems
NASA Astrophysics Data System (ADS)
Eckardt, Andreas; Börner, Anko; Lehmann, Frank
2007-10-01
The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years of experience with high-resolution imaging technology. Technology changes in the development of detectors, as well as significant changes in manufacturing accuracy, in combination with engineering research, define the next generation of spaceborne sensor systems focusing on Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors, and new focal-plane concepts opens the door to new remote-sensing instruments. This class of instruments is feasible for high-resolution sensor systems with respect to geometry and radiometry and for their data products, such as 3D virtual reality. Systematic approaches are essential for the design of such complex sensor systems for dedicated tasks. System theory applied to the instrument inside a simulated environment is the beginning of the optimization process for the optical, mechanical, and electrical designs. Single modules and the entire system have to be calibrated and verified. Suitable procedures must be defined at the component, module, and system levels for the assembly, test, and verification process. This kind of development strategy allows hardware-in-the-loop design. The paper gives an overview of the current activities at DLR in the field of innovative sensor systems for photogrammetric and remote sensing purposes.
Virtual reality and telepresence for military medicine.
Satava, R M
1995-03-01
The profound changes brought about by technology in the past few decades are leading to a total revolution in medicine. The advanced technologies of telepresence and virtual reality are but two of the manifestations emerging from our new information age; now all of medicine can be empowered because of this digital technology. The leading edge is on the digital battlefield, where an entire new concept in military medicine is evolving. Using remote sensors, intelligent systems, telepresence surgery and virtual reality surgical simulations, combat casualty care is prepared for the 21st century.
The Evolution of Sonic Ecosystems
NASA Astrophysics Data System (ADS)
McCormack, Jon
This chapter describes a novel type of artistic artificial life software environment. Agents that have the ability to make and listen to sound populate a synthetic world. An evolvable, rule-based classifier system drives agent behavior. Agents compete for limited resources in a virtual environment that is influenced by the presence and movement of people observing the system. Electronic sensors create a link between the real and virtual spaces, virtual agents evolve implicitly to try to maintain the interest of the human audience, whose presence provides them with life-sustaining food.
Virtual groups for patient WBAN monitoring in medical environments.
Ivanov, Stepan; Foley, Christopher; Balasubramaniam, Sasitharan; Botvich, Dmitri
2012-11-01
Wireless body area networks (WBANs) provide a tremendous opportunity for remote health monitoring. However, engineering WBAN health monitoring systems encounters a number of challenges, including efficient extraction of WBAN monitoring information, dynamically fine-tuning the monitoring process to suit the quality of data, and translating the high-level requirements of medical officers into low-level sensor reconfiguration. This paper addresses these challenges by proposing an architecture that allows virtual groups to be formed between the devices of patients, nurses, and doctors in order to enable remote analysis of WBAN data. Group formation and modification are performed with respect to patients' conditions and medical officers' requirements, which can be easily adjusted through high-level policies. We also propose a new metric, called the Quality of Health Monitoring, which allows medical officers to provide feedback on the quality of WBAN data received. The WBAN data gathered are transmitted to the virtual group members through an underlying environmental sensor network. The proposed approach is evaluated through a series of simulations.
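The policy-driven formation of virtual groups described above can be sketched as follows. The severity scores, threshold policy, and two-role split are illustrative assumptions rather than the paper's architecture.

```python
# Sketch of policy-driven virtual group formation: a patient's WBAN feed is
# attached to a doctor's group when a high-level policy flags the condition
# as severe, and to a nurse's group otherwise. Scores and the threshold are
# illustrative assumptions.
def form_groups(patients, policy):
    """patients: name -> severity score in [0, 1]."""
    groups = {"doctor": [], "nurse": []}
    for name, severity in patients.items():
        role = "doctor" if severity >= policy["doctor_threshold"] else "nurse"
        groups[role].append(name)
    return groups

patients = {"p1": 0.9, "p2": 0.2, "p3": 0.6}
groups = form_groups(patients, {"doctor_threshold": 0.5})
```

Adjusting the high-level policy (here, just the threshold) re-forms the groups without touching the sensor layer, which is the point of the architecture.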
Embodied collaboration support system for 3D shape evaluation in virtual space
NASA Astrophysics Data System (ADS)
Okubo, Masashi; Watanabe, Tomio
2005-12-01
Collaboration mainly consists of two tasks: one is each partner's individual task, and the other is communication with each other. Both are very important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task: the view from behind the user's own avatar for smooth communication, and the avatar's-eye view for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for 3D shape evaluation and communication. The system basically consists of a PC, an HMD, and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users can restrict nonverbal communication. We therefore compensate for the loss of the partner avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. Sensory evaluation by paired comparison of 3D shapes in the collaborative situation in virtual space and in real space, together with a questionnaire, was performed. The result demonstrates the effectiveness of InterActor's nodding in the collaborative situation.
Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo
2014-01-01
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055
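The virtual-sensor principle above, estimating a hard-to-measure variable from the easily measured spindle power, can be sketched with a simple least-squares calibration standing in for the paper's Layer-Recurrent ANN. The power and wear values below are synthetic, purely for illustration.

```python
# Sketch of a virtual sensor: calibrate a mapping from spindle power (easy
# to measure) to wheel wear (hard to measure), then use it on-line. A linear
# least-squares fit stands in for the paper's recurrent ANN; data synthetic.
import numpy as np

def calibrate(power, wear):
    """Fit wear ~ a*power + b from calibration measurements."""
    A = np.vstack([power, np.ones_like(power)]).T
    coeffs, *_ = np.linalg.lstsq(A, wear, rcond=None)
    return coeffs  # (a, b)

def virtual_wear_sensor(power, coeffs):
    """Estimate wear from a new power reading."""
    a, b = coeffs
    return a * power + b

power = np.array([1.0, 2.0, 3.0, 4.0])      # spindle power, synthetic
wear = np.array([10.0, 21.0, 29.0, 41.0])   # measured wear, synthetic
coeffs = calibrate(power, wear)
estimate = virtual_wear_sensor(2.5, coeffs)
```

The calibration step mirrors the paper's methodology: the sensor is trained against ground-truth measurements once, then runs from the power signal alone.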
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component, and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual-Reality-based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations that facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate, in real time, information from sensors at different levels of abstraction helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization, built on an open-source real-time operating system, is presented. The software design and the features of the architecture that make it generally applicable to the distributed control of automation agents in real-world applications are explained. Its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
Robot Position Sensor Fault Tolerance
NASA Technical Reports Server (NTRS)
Aldridge, Hal A.
1997-01-01
Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault-tolerant designs require the addition of directly redundant position sensors, which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow for continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and to improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer-based methods are proposed to determine the position of the joint. The joint-specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor. The joint-specific method is not computationally complex, and the position error is bounded. The system-wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the position of sets of multiple joints. The system-wide method requires fewer accelerometers than the joint-specific method to make all joint position sensors fault tolerant, but it is more computationally complex and has poorer convergence properties. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.
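A much-simplified sketch of the joint-specific position determination idea: in the static case, a link-mounted triaxial accelerometer measures the gravity direction, from which the joint angle can be recovered. The axis convention here is an assumption, and the paper's method additionally uses two accelerometers and handles dynamic motion.

```python
# Sketch of accelerometer-based joint position determination, static case
# only: gravity components measured along the link's axes give the link's
# rotation about a horizontal joint axis. Axis convention is assumed.
import math

def joint_angle_from_accel(ax, az):
    """Joint angle (rad) from gravity components along the link's x and z
    axes, both in units of g."""
    return math.atan2(ax, az)

# Simulated accelerometer reading with the link rotated 30 deg from vertical.
g = 1.0
theta = math.radians(30.0)
ax, az = g * math.sin(theta), g * math.cos(theta)
recovered = math.degrees(joint_angle_from_accel(ax, az))
```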
ERIC Educational Resources Information Center
Gendreau, Audrey
2014-01-01
Efficient self-organizing virtual clusterheads that supervise data collection based on their wireless connectivity, risk, and overhead costs, are an important element of Wireless Sensor Networks (WSNs). This function is especially critical during deployment when system resources are allocated to a subsequent application. In the presented research,…
Network-Capable Application Process and Wireless Intelligent Sensors for ISHM
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Morris, Jon; Turowski, Mark; Wang, Ray
2011-01-01
Intelligent sensor technology and systems are increasingly becoming attractive means to serve as frameworks for intelligent rocket test facilities with embedded intelligent sensor elements, distributed data acquisition elements, and onboard data acquisition elements. Networked intelligent processors enable users and systems integrators to automatically configure their measurement automation systems for analog sensors. NASA and leading sensor vendors are working together to apply the IEEE 1451 standard for adding plug-and-play capabilities for wireless analog transducers through the use of a Transducer Electronic Data Sheet (TEDS) in order to simplify sensor setup, use, and maintenance, to automatically obtain calibration data, and to eliminate manual data entry and error. A TEDS contains the critical information needed by an instrument or measurement system to identify, characterize, interface, and properly use the signal from an analog sensor. A TEDS is deployed for a sensor in one of two ways. First, the TEDS can reside in embedded, nonvolatile memory (typically flash memory) within the intelligent processor. Second, a virtual TEDS can exist as a separate file, downloadable from the Internet. This concept of virtual TEDS extends the benefits of the standardized TEDS to legacy sensors and applications where the embedded memory is not available. An HTML-based user interface provides a visual tool to interface with those distributed sensors that a TEDS is associated with, to automate the sensor management process. Implementing and deploying the IEEE 1451.1-based Network-Capable Application Process (NCAP) can achieve support for intelligent process in Integrated Systems Health Management (ISHM) for the purpose of monitoring, detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, mitigation to maintain operability, and integrated awareness of system health by the operator. It can also support local data collection and storage. 
This invention enables wide-area sensing and employs numerous globally distributed sensing devices that observe the physical world through the existing sensor network. This innovation enables distributed storage, distributed processing, distributed intelligence, and the availability of DiaK (Data, Information, and Knowledge) to any element as needed. It also enables the simultaneous execution of multiple processes, and represents models that contribute to the determination of the condition and health of each element in the system. The NCAP (intelligent process) can configure data-collection and filtering processes in reaction to sensed data, allowing it to decide when and how to adapt collection and processing with regard to sophisticated analysis of data derived from multiple sensors. The user will be able to view the sensing device network as a single unit that supports a high-level query language. Each query would be able to operate over data collected from across the global sensor network just as a search query encompasses millions of Web pages. The sensor web can preserve ubiquitous information access between the querier and the queried data. Pervasive monitoring of the physical world raises significant data and privacy concerns. This innovation enables different authorities to control portions of the sensing infrastructure, and sensor service authors may wish to compose services across authority boundaries.
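The TEDS-based conversion described above (identify the sensor, then apply its stored calibration to the raw signal) can be sketched as below. The byte layout and fields are a simplified illustration, not the actual IEEE 1451 TEDS encoding.

```python
# Sketch of reading calibration fields from a TEDS-like record so a
# processor can convert a raw ADC count to engineering units. The layout
# (little-endian: manufacturer id, model, linear scale, offset) is a
# simplified illustration, not the IEEE 1451 TEDS format.
import struct

def pack_teds(manufacturer_id, model, scale, offset):
    return struct.pack("<HHff", manufacturer_id, model, scale, offset)

def read_teds(blob):
    manufacturer_id, model, scale, offset = struct.unpack("<HHff", blob)
    return {"manufacturer_id": manufacturer_id, "model": model,
            "scale": scale, "offset": offset}

def raw_to_engineering(raw, teds):
    """Apply the linear calibration stored with the sensor."""
    return raw * teds["scale"] + teds["offset"]

# A hypothetical temperature sensor: counts -> degrees C.
blob = pack_teds(17, 4051, 0.05, -40.0)
teds = read_teds(blob)
temperature = raw_to_engineering(1000, teds)
```

Whether the blob lives in the transducer's flash memory or is fetched as a virtual TEDS file, the consuming code is the same, which is the portability argument made above.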
Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures
Bryson, David M.; Ofria, Charles
2013-01-01
We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general-purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments, along with versions of each of the remaining architecture modifications that showed significant improvements in multiple environments. However, some tested modifications were detrimental, though most exhibited no systematic effects on evolutionary potential, highlighting the robustness of digital evolution.
Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges. PMID:24376669
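The kind of register-based virtual CPU the abstract studies can be sketched in a few lines. The instruction set below is invented for illustration (it is not Avida's actual ISA), but it shows the features discussed: extra registers, a circular genome, and conditional flow control:

```python
# Toy register-based virtual CPU for a linear genetic program. The three
# opcodes and register names are invented for illustration only.
def run(genome, steps=100):
    regs = {"AX": 0, "BX": 0, "CX": 0}   # extra registers = the "memory" feature
    ip = 0                               # instruction pointer
    for _ in range(steps):
        op, *args = genome[ip]
        if op == "inc":                  # regs[r] += 1
            regs[args[0]] += 1
        elif op == "add":                # regs[dst] = regs[a] + regs[b]
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "jump-if-zero":       # conditional flow control
            if regs[args[0]] == 0:
                ip = args[1]
                continue
        ip = (ip + 1) % len(genome)      # circular genome, as in digital organisms
    return regs

genome = [("inc", "AX"), ("inc", "AX"), ("add", "BX", "AX", "AX")]
print(run(genome, steps=3)["BX"])  # 4
```

Mutations in such systems rewrite the genome tuples themselves; the architectural questions in the abstract concern which opcode and argument conventions make those mutations most likely to be productive.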
Interreality in the management of psychological stress: a clinical scenario.
Riva, Giuseppe; Raspelli, Simona; Pallavicini, Federica; Grassi, Alessandra; Algeri, Davide; Wiederhold, Brenda K; Gaggioli, Andrea
2010-01-01
The term "psychological stress" describes a situation in which a subject perceives that environmental demands tax or exceed his or her adaptive capacity. According to the Cochrane Database of Systematic Reviews, the best validated approach covering both stress management and stress treatment is the Cognitive Behavioral Therapy (CBT) approach. We aim to design, develop and test an advanced ICT-based solution for the assessment and treatment of psychological stress that is able to improve on the current CBT approach. To reach this goal we will use the "interreality" paradigm, integrating assessment and treatment within a hybrid environment that creates a bridge between the physical and virtual worlds. Our claim is that bridging virtual experiences (fully controlled by the therapist, used to learn coping skills and emotional regulation) with real experiences (allowing both the identification of any critical stressors and the assessment of what has been learned) using advanced technologies (virtual worlds, advanced sensors and PDA/mobile phones) is the best way to address the above limitations. To illustrate the proposed concept, a clinical scenario is also presented and discussed: Paola, a 45-year-old nurse, with a mother affected by progressive senile dementia.
Interreality: A New Paradigm for E-health.
Riva, Giuseppe
2009-01-01
"Interreality" is a personalized immersive e-therapy whose main novelty is a hybrid, closed-loop empowering experience bridging physical and virtual worlds. The main feature of interreality is a twofold link between the virtual and the real world: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through: (1) 3D Shared Virtual Worlds: role-playing experiences in which one or more users interact with one another within a 3D world; (2) Bio and Activity Sensors (From the Real to the Virtual World): They are used to track the emotional/health/activity status of the user and to influence his/her experience in the virtual world (aspect, activity and access); (3) Mobile Internet Appliances (From the Virtual to the Real One): In interreality, the social and individual user activity in the virtual world has a direct link with the users' life through a mobile phone/digital assistant. The different technologies that are involved in the interreality vision and its clinical rationale are addressed and discussed.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors, computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.
A virtual robot to model the use of regenerated legs in a web-building spider.
Krink; Vollrath
1999-01-01
The garden cross orb-spider, Araneus diadematus, shows behavioural responses to leg loss and regeneration that are reflected in the geometry of the web's capture spiral. We created a virtual spider robot that mimicked the web construction behaviour of thus handicapped real spiders. We used this approach to test the correctness and consistency of hypotheses about orb web construction. The behaviour of our virtual robot was implemented in a rule-based system supervising behaviour patterns that communicated with the robot's sensors and motors. Our first model failed by building the typical web of a nonhandicapped spider, which led to new observations on real spiders. We realized that in addition to leg position, leg posture could also be of importance. The implementation of this new hypothesis greatly improved the results of our simulation of a handicapped spider. Now simulated webs, like the real webs of handicapped spiders, had significantly more gaps in successive spiral turns compared with webs of nonhandicapped spiders. Moreover, webs built by the improved virtual spiders intercepted prey as well as the digitized real webs. However, the main factors that affected web interception frequency were prey size, size of capture area and individual variance; having a regenerated leg, surprisingly, was relatively unimportant for this trait. Copyright 1999 The Association for the Study of Animal Behaviour.
Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines
NASA Astrophysics Data System (ADS)
Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya
2017-11-01
Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. 
The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1.093 pixels, which were respectively 42.1%, 48.3%, and 54.8% better than those achieved before the nonlinear bias compensation.
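The core of the compensation step above can be sketched numerically. The residual-error curves below are synthesized sinusoids standing in for the oscillation-induced biases at virtual control points; the paper's actual RPC generation and parameter estimation are not reproduced here:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthesized residual errors of virtual control points along the image line
# (row) direction; the 0.8 px / 0.5 px amplitudes and 2500-line period are
# invented stand-ins for attitude-oscillation biases.
line = np.linspace(0, 5000, 50)
row_resid = 0.8 * np.sin(2 * np.pi * line / 2500)
col_resid = 0.5 * np.cos(2 * np.pi * line / 2500)

# One cubic spline per image direction, as in the proposed approach.
spline_row = CubicSpline(line, row_resid)
spline_col = CubicSpline(line, col_resid)

def compensate(row_rpc, col_rpc):
    """Subtract the spline-modeled nonlinear bias from RPC-projected coords."""
    return row_rpc - spline_row(row_rpc), col_rpc - spline_col(row_rpc)

row_c, col_c = compensate(1300.0, 700.0)
```

An affine transformation alone cannot remove a sinusoidal bias like this, which is why the splines are estimated first and the affine parameters mop up only the residual (linear) biases afterward.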
Automatic building identification under bomb damage conditions
NASA Astrophysics Data System (ADS)
Woodley, Robert; Noll, Warren; Barker, Joseph; Wunsch, Donald C., II
2009-05-01
Given the vast amount of image intelligence utilized in support of planning and executing military operations, a passive automated image processing capability for target identification is urgently required. Furthermore, transmitting large image streams from remote locations would quickly use available bandwidth (BW), precipitating the need for processing to occur at the sensor location. This paper addresses the problem of automatic target recognition for battle damage assessment (BDA). We utilize an Adaptive Resonance Theory approach to cluster templates of target buildings. The results show that the network successfully classifies targets from non-targets in a virtual test bed environment.
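The template-clustering idea can be illustrated with a heavily simplified ART1-style routine. This is a sketch of the general technique, not the paper's network; the vigilance value and binary patterns are invented, and real ART1 additionally uses a choice function to order category search:

```python
import numpy as np

# Simplified ART1-style clustering over binary templates (illustrative only).
def art1(patterns, vigilance=0.7):
    prototypes = []          # learned cluster templates
    labels = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        for i, proto in enumerate(prototypes):
            # Fraction of the input's active bits matched by the prototype.
            match = np.logical_and(p, proto).sum() / p.sum()
            if match >= vigilance:                        # resonance
                prototypes[i] = np.logical_and(p, proto)  # fast learning
                labels.append(i)
                break
        else:                                             # no resonance: new cluster
            prototypes.append(p)
            labels.append(len(prototypes) - 1)
    return labels

labels_out = art1([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
print(labels_out)  # [0, 0, 1]
```

The attraction of ART for this application is that it learns templates online without a fixed number of clusters, so novel building appearances (e.g., after bomb damage) can spawn new categories rather than corrupting existing ones.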
The application of smart sensor techniques to a solid-state array multispectral sensor
NASA Technical Reports Server (NTRS)
Mcfadin, L. W.
1978-01-01
The solid-state array spectroradiometer (SAS) developed at JSC for remote sensing applications is a multispectral sensor which has no moving parts, is virtually maintenance-free, and has the ability to provide data which requires a minimum of processing. The instrument is based on the 42 x 342 element charge injection device (CID) detector. This system allows the combination of spectral scanning and across-track spatial scanning along with its associated digitization electronics into a single detector.
Pereira, G. F.; Mikkelsen, L. P.; McGugan, M.
2015-01-01
In a fibre-reinforced polymer (FRP) structure designed using the emerging damage tolerance and structural health monitoring philosophy, sensors and models that describe crack propagation will enable a structure to operate despite the presence of damage by fully exploiting the material’s mechanical properties. When applying this concept to different structures, sensor systems and damage types, a combination of damage mechanics, monitoring technology, and modelling is required. The primary objective of this article is to demonstrate such a combination. This article is divided into three main topics: the damage mechanism (delamination of FRP), the structural health monitoring technology (fibre Bragg gratings to detect delamination), and the finite element method model of the structure that incorporates these concepts into a final and integrated damage-monitoring concept. A novel method for assessing a crack growth/damage event in fibre-reinforced polymer or structural adhesive-bonded structures using embedded fibre Bragg grating (FBG) sensors is presented by combining conventional measured parameters, such as wavelength shift, with parameters associated with measurement errors, typically ignored by the end-user. Conjointly, a novel model for sensor output prediction (virtual sensor) was developed using this FBG sensor crack monitoring concept and implemented in a finite element method code. The monitoring method was demonstrated and validated using glass fibre double cantilever beam specimens instrumented with an array of FBG sensors embedded in the material and tested using an experimental fracture procedure. The digital image correlation technique was used to validate the model prediction by correlating the specific sensor response caused by the crack with the developed model. PMID:26513653
In-vehicle group activity modeling and simulation in sensor-based virtual environment
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with comparable physical attributes and appearances that are linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial expressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capabilities to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.
Sensors and Algorithms for an Unmanned Surf-Zone Robot
2015-12-01
[Snippet fragments from the report] Data Fusion and Filtering … Virtual Potential Field (VPF) Path Planning … iron effects are clearly seen: soft iron de-calibration (sphere distortion) was caused by proximity of circuit boards. Offset of the center of the … information to perform global tasks such as path planning, sensor and actuator commands, external communications, etc. Python3 is used as the primary …
Oversampling in virtual visual sensors as a means to recover higher modes of vibration
NASA Astrophysics Data System (ADS)
Shariati, Ali; Schumacher, Thomas
2015-03-01
Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information to calibrate finite element models. There are several types of physical sensors that can measure the response over a range of frequencies. For most of those sensors, however, accessibility, limitation of measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording, while having no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras based on virtual visual sensors (VVS). In our initial study, where we worked with commercially available inexpensive digital video cameras, we found that for multiple-degree-of-freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling, enabled by the use of high-end high-frame-rate video cameras, allows recovering all three natural frequencies from a three-story lab-scale structure.
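The frequency-extraction step behind a virtual visual sensor can be sketched as follows. The pixel-intensity trace is synthesized here from three invented modal frequencies; in practice it would come from the time history of a pixel region in the high-frame-rate video:

```python
import numpy as np

# Synthesized VVS trace: a pixel's intensity over time carries three modal
# contributions (frequencies and amplitudes are invented for illustration).
fs = 240.0                       # high frame rate -> oversampling, Hz
t = np.arange(0, 10, 1 / fs)     # 10 s of video
intensity = (1.0 * np.sin(2 * np.pi * 2.0 * t)
             + 0.4 * np.sin(2 * np.pi * 6.5 * t)
             + 0.2 * np.sin(2 * np.pi * 11.0 * t))

# FFT of the intensity trace; peaks mark the natural frequencies.
spectrum = np.abs(np.fft.rfft(intensity))
freqs = np.fft.rfftfreq(len(intensity), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-3:]]  # three largest spectral peaks
print(sorted(peaks.round(1).tolist()))    # [2.0, 6.5, 11.0]
```

With a low frame rate or coarse quantization the weaker higher-mode peaks sink toward the noise floor, which is the failure mode the oversampling in this study is meant to overcome.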
NASA Technical Reports Server (NTRS)
Ross, M. D.
2001-01-01
Safety of astronauts during long-term space exploration is a priority for NASA. This paper describes efforts to produce Earth-based models for providing expert medical advice when unforeseen medical emergencies occur on spacecraft. These models are Virtual Collaborative Clinics that reach into remote sites using telecommunications and emerging stereo-imaging and sensor technologies. © 2001 Elsevier Science Ltd. All rights reserved.
Virtual Instrumentation for a Fiber-Optics-Based Artificial Nerve
NASA Technical Reports Server (NTRS)
Lyons, Donald R.; Kyaw, Thet Mon; Griffin, DeVon (Technical Monitor)
2001-01-01
A LabVIEW-based computer interface for fiber-optic artificial nerves has been devised as a Masters thesis project. This project involves the use of outputs from wavelength-multiplexed optical fiber sensors (artificial nerves), which are capable of producing dense optical data outputs for physical measurements. A key advantage of using optical fiber sensors for sensory function restoration is that well defined WDM-modulated signals can be transmitted to and from the sensing region, allowing networked units to replace low-level nerve functions for persons desirous of "intelligent artificial limbs." Various FO sensors can be designed with high sensitivity and the ability to be interfaced with a wide range of devices, including miniature shielded electrical conversion units. Our Virtual Instrument (VI) interface software package was developed in LabVIEW (Laboratory Virtual Instrument Engineering Workbench). The virtual instrument has been configured to arrange and encode the data to develop an intelligent response in the form of encoded digitized signal outputs. The architectural layout of our nervous system is such that different touch stimuli from different artificial fiber-optic nerve points correspond to gratings of a distinct resonant wavelength and physical location along the optical fiber. Thus, when an automated, tunable diode laser scans the wavelength spectrum of the artificial nerve, it triggers responses that are encoded with different touch stimuli by way of wavelength shifts in the reflected Bragg resonances. The reflected light is detected and the resulting analog signal is fed into an ADC board and a DAQ card. Finally, the software has been written such that the experimenter is able to set the response range during data acquisition.
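The wavelength-to-stimulus mapping described above can be sketched in a few lines. The nerve-point names, nominal Bragg wavelengths, and tolerance below are all invented; a real system would read them from the grating fabrication data:

```python
# Hypothetical nominal Bragg wavelengths (nm) for three nerve points along
# one fiber; values invented for illustration.
NOMINAL_NM = {"fingertip": 1530.0, "palm": 1540.0, "wrist": 1550.0}

def decode_touch(reflected_nm, tolerance_nm=2.0):
    """Return (nerve_point, wavelength_shift) for the nearest grating.

    The grating's physical location identifies *where* the touch occurred;
    the strain-induced shift from its nominal resonance encodes *how hard*.
    """
    point = min(NOMINAL_NM, key=lambda k: abs(reflected_nm - NOMINAL_NM[k]))
    shift = reflected_nm - NOMINAL_NM[point]
    if abs(shift) > tolerance_nm:
        raise ValueError("reflection outside any grating's expected band")
    return point, shift

print(decode_touch(1540.8))  # nearest grating is the palm; shift ~ +0.8 nm
```

Because each grating owns a distinct wavelength band, a single laser sweep interrogates every nerve point in one pass, which is what makes the WDM arrangement attractive for dense sensory arrays.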
Yan, Jing; Li, Xiaolei; Luo, Xiaoyuan; Guan, Xinping
2017-01-01
Due to the lack of a physical line of defense, intrusion detection becomes one of the key issues in applications of underwater wireless sensor networks (UWSNs), especially when confidentiality is of prime importance. However, resource constraints in UWSNs, such as sparse deployment and limited energy, make intrusion detection a challenging issue. This paper considers a virtual-lattice-based approach to the intrusion detection problem in UWSNs. Different from most existing works, the UWSNs consist of two kinds of nodes, i.e., sensor nodes (SNs), which cannot move autonomously, and actuator nodes (ANs), which can move autonomously according to the performance requirement. With the cooperation of SNs and ANs, the intruder detection probability is defined. Then, a virtual lattice-based monitor (VLM) algorithm is proposed to detect the intruder. In order to reduce the redundancy of communication links and improve detection probability, an optimal and coordinative lattice-based monitor patrolling (OCLMP) algorithm is further provided for UWSNs, wherein an equal price search strategy is given for ANs to find the shortest patrolling path. Under the VLM and OCLMP algorithms, the detection probabilities are calculated, while topology connectivity can be guaranteed. Finally, simulation results are presented to show that the proposed method can improve detection accuracy and reduce energy consumption compared with conventional methods. PMID:28531127
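The notion of a lattice-based detection probability can be illustrated with a small Monte Carlo sketch. The lattice spacing, sensing radius, and static-nodes-only setup are invented simplifications; the paper's VLM/OCLMP algorithms additionally move actuator nodes along patrolling paths, which this toy model omits:

```python
import math
import random

# Monte Carlo estimate of the probability that an intruder entering a lattice
# cell is within sensing range of at least one corner node. Spacing and
# radius are invented parameters for illustration.
def detection_probability(spacing=10.0, radius=6.0, trials=100_000, seed=1):
    rng = random.Random(seed)
    corners = [(0, 0), (spacing, 0), (0, spacing), (spacing, spacing)]
    hits = 0
    for _ in range(trials):
        # Intruder position, uniform over one lattice cell.
        x, y = rng.uniform(0, spacing), rng.uniform(0, spacing)
        if any(math.hypot(x - cx, y - cy) <= radius for cx, cy in corners):
            hits += 1
    return hits / trials

print(detection_probability())  # coverage fraction of one lattice cell
```

Tightening the lattice (smaller spacing relative to the sensing radius) drives this probability toward 1 at the cost of more nodes and links, which is the redundancy/coverage trade-off the OCLMP patrolling strategy is designed to manage.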
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
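The megapixel figure quoted above follows from simple arithmetic: 20/20 acuity corresponds to resolving 1 arcminute, i.e. 60 pixels per degree. The field of view used below (120 deg x 100 deg) is an assumption chosen to reproduce the quoted ~43 MP HMD figure; the full binocular visual field (roughly 200 deg x 130 deg) would push the count well past 90 MP, consistent with the larger figure quoted for an immersive CAVE:

```python
# Back-of-the-envelope pixel count for 20/20 acuity across an HMD field of
# view. The FOV values are assumptions, not from the paper.
PX_PER_DEG = 60            # 1 arcminute per pixel = 20/20 visual acuity
h_fov, v_fov = 120, 100    # assumed HMD field of view, degrees

pixels = (h_fov * PX_PER_DEG) * (v_fov * PX_PER_DEG)
print(pixels / 1e6)  # 43.2 (megapixels)
```

The same count applies to the sensor side, since delivering acuity-limited imagery requires a matching number of sensor pixels (or equivalent rendering throughput), which is the bandwidth and compute burden the abstract highlights.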
Gerber, Stephan M; Jeitziner, Marie-Madlen; Wyss, Patric; Chesham, Alvin; Urwyler, Prabitha; Müri, René M; Jakob, Stephan M; Nef, Tobias
2017-10-16
After prolonged stay in an intensive care unit (ICU) patients often complain about cognitive impairments that affect health-related quality of life after discharge. The aim of this proof-of-concept study was to test the feasibility and effects of controlled visual and acoustic stimulation in a virtual reality (VR) setup in the ICU. The VR setup consisted of a head-mounted display in combination with an eye tracker and sensors to assess vital signs. The stimulation consisted of videos featuring natural scenes and was tested in 37 healthy participants in the ICU. The VR stimulation led to a reduction of heart rate (p = 0.049) and blood pressure (p = 0.044). Fixation/saccade ratio (p < 0.001) was increased when a visual target was presented superimposed on the videos (reduced search activity), reflecting enhanced visual processing. Overall, the VR stimulation had a relaxing effect as shown in vital markers of physical stress and participants explored less when attending the target. Our study indicates that VR stimulation in ICU settings is feasible and beneficial for critically ill patients.
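A fixation/saccade ratio of the kind reported above can be computed from gaze samples by thresholding angular velocity. The threshold and the synthetic gaze trace below are invented for illustration; the study's actual eye-tracking pipeline is not specified in the abstract:

```python
# Illustrative fixation/saccade ratio from gaze angular velocities (deg/s).
# Samples slower than the threshold count as fixation, faster as saccade;
# the 30 deg/s threshold is a common convention, assumed here.
def fixation_saccade_ratio(velocities_deg_s, threshold=30.0):
    fixation = sum(1 for v in velocities_deg_s if v < threshold)
    saccade = len(velocities_deg_s) - fixation
    return fixation / saccade if saccade else float("inf")

gaze = [2, 3, 1, 250, 4, 2, 310, 3]   # two saccadic spikes among fixations
print(fixation_saccade_ratio(gaze))   # 3.0
```

A higher ratio means proportionally more time fixating and less time searching, which is why the increase under the superimposed target was read as reduced search activity.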
Fixed Base Modal Survey of the MPCV Orion European Service Module Structural Test Article
NASA Technical Reports Server (NTRS)
Winkel, James P.; Akers, J. C.; Suarez, Vicente J.; Staab, Lucas D.; Napolitano, Kevin L.
2017-01-01
Recently, the MPCV Orion European Service Module Structural Test Article (E-STA) underwent sine vibration testing using the multi-axis shaker system at NASA GRC Plum Brook Station Mechanical Vibration Facility (MVF). An innovative approach using measured constraint shapes at the interface of E-STA to the MVF allowed high-quality fixed base modal parameters of the E-STA to be extracted, which have been used to update the E-STA finite element model (FEM), without the need for a traditional fixed base modal survey. This innovative approach provided considerable program cost and test schedule savings. This paper documents this modal survey, which includes the modal pretest analysis sensor selection, the fixed base methodology using measured constraint shapes as virtual references and measured frequency response functions, and post-survey comparison between measured and analysis fixed base modal parameters.
Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel
2015-01-01
Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors that lack common features while avoiding the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element on the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
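Applying a depth-dependent homography lookup table can be sketched as follows. The two 3x3 homographies and their working distances below are invented; in the method above they would be estimated from ground control points at known distances:

```python
import numpy as np

# Invented Hlut: working distance (m) -> 3x3 homography mapping ToF pixel
# coordinates into the RGB image. Real entries come from calibration.
hlut = {
    1.0: np.array([[1.05, 0.0, 12.0], [0.0, 1.05, 8.0], [0.0, 0.0, 1.0]]),  # near
    3.0: np.array([[1.01, 0.0, 4.0],  [0.0, 1.01, 3.0], [0.0, 0.0, 1.0]]),  # far
}

def tof_to_rgb(u, v, depth_m):
    """Map a ToF pixel to RGB coordinates via the nearest-depth homography."""
    dist = min(hlut, key=lambda d: abs(d - depth_m))   # nearest working distance
    x, y, w = hlut[dist] @ np.array([u, v, 1.0])
    return x / w, y / w                                # dehomogenize

print(tof_to_rgb(100, 50, 2.8))  # depth 2.8 m selects the 3.0 m homography
```

Because the per-pixel depth selects the homography, each ToF pixel lands at its depth-appropriate position in the high-resolution RGB frame, which is how the method sidesteps both feature matching and the colour loss of a single global transform.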
On requirements for a satellite mission to measure tropical rainfall
NASA Technical Reports Server (NTRS)
Thiele, Otto W. (Editor)
1987-01-01
Tropical rainfall data are crucial in determining the role of tropical latent heating in driving the circulation of the global atmosphere. Also, the data are particularly important for testing the realism of climate models, and their ability to simulate and predict climate accurately on the seasonal time scale. Other scientific issues such as the effects of El Nino on climate could be addressed with a reliable, extended time series of tropical rainfall observations. A passive microwave sensor is planned to provide information on the integrated column precipitation content, its areal distribution, and its intensity. An active microwave sensor (radar) will define the layer depth of the precipitation and provide information about the intensity of rain reaching the surface, the key to determining the latent heat input to the atmosphere. A visible/infrared sensor will provide very high resolution information on cloud coverage, type, and top temperatures and also serve as the link between these data and the long and virtually continuous coverage by the geosynchronous meteorological satellites. The unique combination of sensor wavelengths, coverages, and resolving capabilities together with the low-altitude, non-Sun synchronous orbit provide a sampling capability that should yield monthly precipitation amounts to a reasonable accuracy over a 500- by 500-km grid.
Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar
2014-12-05
This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.
Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment
Pouke, Matti; Häkkilä, Jonna
2013-01-01
Homecare systems for elderly people are becoming increasingly important for both economic reasons and patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential especially due to the privacy-preserving and simplified information presentation style, and secondly that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747
Virtual Reality Simulation of the International Space Welding Experiment
NASA Technical Reports Server (NTRS)
Phillips, James A.
1996-01-01
Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted-display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.
Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation
Placidi, Giuseppe; Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo
2018-01-01
Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be really effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient-specific and hand-specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that VG operated in real time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when switching from one sensor to the other. A video demonstrating the good performance of VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications. PMID:29534448
A Self-Referenced Optical Intensity Sensor Network Using POFBGs for Biomedical Applications
Moraleda, Alberto Tapetado; Montero, David Sánchez; Webb, David J.; García, Carmen Vázquez
2014-01-01
This work bridges the gap between the remote interrogation of multiple optical sensors and the advantages of using inherently biocompatible low-cost polymer optical fiber (POF)-based photonic sensing. A novel hybrid sensor network combining both silica fiber Bragg gratings (FBG) and polymer FBGs (POFBG) is analyzed. The topology is compatible with WDM networks so multiple remote sensors can be addressed providing high scalability. A central monitoring unit with virtual data processing is implemented, which could be remotely located up to units of km away. The feasibility of the proposed solution for potential medical environments and biomedical applications is shown. PMID:25615736
Park, Jung Jin; Hyun, Woo Jin; Mun, Sung Cik; Park, Yong Tae; Park, O Ok
2015-03-25
Because of their outstanding electrical and mechanical properties, graphene strain sensors have attracted extensive attention for electronic applications in virtual reality, robotics, medical diagnostics, and healthcare. Although several strain sensors based on graphene have been reported, the stretchability and sensitivity of these sensors remain limited, and there is also a pressing need to develop a practical fabrication process. This paper reports the fabrication and characterization of new types of graphene strain sensors based on stretchable yarns. Highly stretchable, sensitive, and wearable sensors are realized by a layer-by-layer assembly method that is simple, low-cost, scalable, and solution-processable. Because of the yarn structures, these sensors exhibit high stretchability (up to 150%) and versatility, and can detect both large- and small-scale human motions. For this study, wearable electronics are fabricated with implanted sensors that can monitor diverse human motions, including joint movement, phonation, swallowing, and breathing.
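The readout of such a piezoresistive strain sensor is commonly described by the linearized gauge-factor model ΔR/R0 = GF · ε. The sketch below inverts that relation; the function name and example values are illustrative, not taken from the paper:

```python
def strain_from_resistance(r, r0, gauge_factor):
    """Invert the linearized piezoresistive model dR/R0 = GF * strain.

    r:  measured resistance (ohms)
    r0: unstrained resistance (ohms)
    gauge_factor: dimensionless sensitivity of the sensor
    """
    return (r - r0) / (r0 * gauge_factor)
```

For instance, a yarn with an unstrained resistance of 100 Ω reading 130 Ω at a gauge factor of 2 implies a strain of 0.15 (15%).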
Autonomic Intelligent Cyber Sensor to Support Industrial Control Network Awareness
Vollmer, Todd; Manic, Milos; Linda, Ondrej
2013-06-01
The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of Autonomic computing and a SOAP-based IF-MAP external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, self-managed framework. The contribution of this paper is two-fold: 1) A flexible two-level communication layer based on Autonomic computing and Service Oriented Architecture is detailed and 2) Three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof of concept prototype was deployed on a mixed-use test network showing possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific Operating System and port configurations. Additionally the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
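The clustering-and-fuzzy-logic idea behind the traffic-monitoring module can be sketched in a much-simplified form: fit a single cluster to normal-traffic feature vectors and assign a fuzzy "normal" membership by distance. This is an illustrative toy, not the paper's C++/PERL implementation, and the membership function and cutoff are assumptions:

```python
import math

def train_center(samples):
    """Centroid and mean radius of normal-traffic feature vectors."""
    dim = len(samples[0])
    center = [sum(s[i] for s in samples) / len(samples) for i in range(dim)]
    radius = sum(math.dist(s, center) for s in samples) / len(samples)
    return center, radius

def membership(sample, center, radius):
    """Fuzzy 'normal' membership in [0, 1]: 1 at the centroid, decaying with distance."""
    d = math.dist(sample, center)
    return 1.0 / (1.0 + (d / radius) ** 2) if radius > 0 else float(d == 0)

def is_anomalous(sample, center, radius, cutoff=0.2):
    """Flag traffic whose 'normal' membership falls below an illustrative cutoff."""
    return membership(sample, center, radius) < cutoff
```

A production sensor would use several clusters and richer features, but the shape of the decision is the same: distance from learned normal behavior, softened by a membership function.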
Daamen, Ruby C.; Edwin A. Roehl, Jr.; Conrads, Paul
2010-01-01
A technology often used in industrial applications is the “inferential sensor.” Rather than installing a redundant sensor to measure a process, such as an additional water-level gage, an inferential sensor, or virtual sensor, is developed that estimates the process measured by the physical sensor. The advantage of an inferential sensor is that it provides a redundant signal to the sensor in the field but without exposure to environmental threats. In the event that a gage does malfunction, the inferential sensor provides an estimate for the period of missing data. The inferential sensor also can be used in the quality assurance and quality control of the data. Inferential sensors for gages in the EDEN network are currently (2010) under development. The inferential sensors will be automated so that the real-time EDEN data will continuously be compared to the inferential-sensor signal and digital reports of the status of the real-time data will be sent periodically to the appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
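The gist of an inferential sensor can be sketched as a regression fitted on historical records from a correlated neighboring gage. The class below is a minimal illustrative version (single predictor, ordinary least squares), not the EDEN implementation, and the tolerance logic is an assumption:

```python
def fit_linear(x, y):
    """Closed-form least-squares fit y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

class InferentialSensor:
    """Estimates a gage's reading from a correlated neighboring gage."""

    def __init__(self, neighbor_hist, target_hist):
        self.a, self.b = fit_linear(neighbor_hist, target_hist)

    def estimate(self, neighbor_value):
        return self.a * neighbor_value + self.b

    def flag(self, neighbor_value, measured, tol):
        """Quality-control check: flag a field reading that strays from the estimate."""
        return abs(measured - self.estimate(neighbor_value)) > tol
```

When the physical gage malfunctions, `estimate` fills the gap; when it reports, `flag` supports the quality-assurance comparison described above.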
Development of real-time motion capture system for 3D on-line games linked with virtual character
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck
2004-10-01
With the development of 3-D virtual reality, motion tracking is becoming an essential part of the entertainment, medical, sports, education and industrial fields. Virtual human characters in digital animation and game applications have been controlled by interface devices: mice, joysticks, MIDI sliders, and so on. Such devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems in the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion-capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experiment setup is successfully applied to a boxing game which requires very fast movement of the human character.
Huang, Ping-Tzan; Jong, Tai-Lang; Li, Chien-Ming; Chen, Wei-Ling; Lin, Chia-Hung
2017-08-01
Blood leakage and blood loss are serious complications during hemodialysis. Hemodialysis survey reports show that these life-threatening events still occur, drawing the attention of nephrology nurses and of patients themselves. When the venous needle and blood line are disconnected, it takes only a few minutes for an adult patient to lose over 40% of his/her blood, a loss sufficient to cause death. Therefore, we propose integrating a flexible sensor and self-organizing algorithm to design a cloud computing-based warning device for blood leakage detection. The flexible sensor is fabricated via a screen-printing technique using metallic materials on a soft substrate in an array configuration. The self-organizing algorithm constructs a virtual direct current grid-based alarm unit in an embedded system. This warning device is employed to identify blood leakage levels via a wireless network and cloud computing. It has been validated experimentally, and the experimental results suggest specifications for its commercial designs. The proposed model can also be implemented in an embedded system.
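The mapping from a printed electrode array to discrete leakage levels can be sketched as a simple wetted-cell fraction classifier. The thresholds and level names below are illustrative assumptions, not the device's actual calibration:

```python
def leakage_level(grid, minor=0.05, severe=0.25):
    """Classify leakage from a printed electrode array.

    grid: 2D list of booleans, True where conduction (blood contact) is sensed.
    minor/severe: illustrative wetted-fraction thresholds, not from the paper.
    """
    cells = [c for row in grid for c in row]
    frac = sum(cells) / len(cells)
    if frac >= severe:
        return "severe"
    if frac >= minor:
        return "minor"
    return "none"
```

In the device described above, a level change would trigger a warning over the wireless network rather than just returning a label.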
Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused without human supervision to identify target anatomical structures within multi-body orthodontic models. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning. PMID:23385416
Sea-Based Automated Launch and Recovery System Virtual Testbed
2013-12-02
... integrated with an Extended Kalman Filter to study sensor fusion in a fixed-wing aircraft shipboard recovery scenario. ... The sensors and filter performance are graded both on pure estimation error and by examining the touchdown performance of the aircraft on the ship. ... The u, v, and w body-axis velocity components of the aircraft, while the velocities applied to the extremities are used to calculate estimated rotational ...
2016-12-01
... an algorithm and quaternion-based complementary filter developed at the Naval Postgraduate School are developed. The performance of a consumer-grade nine-degrees-of-freedom IMU ... Subject terms: inertial measurement unit, complementary filter, gait phase detection, zero velocity update, MEMS, IMU, AHRS, GPS denied, distributed sensor, virtual sensor.
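The record above only survives as snippet fragments, but the core idea of a complementary filter is standard: high-pass the integrated gyro rate and low-pass the accelerometer-derived angle. The scalar sketch below illustrates that principle under stated assumptions; the actual Naval Postgraduate School filter is quaternion-based and more elaborate:

```python
class ComplementaryFilter:
    """Scalar tilt estimator: blends the integrated gyro rate (trusted at
    high frequency) with the accelerometer angle (trusted at low frequency)."""

    def __init__(self, alpha=0.98, angle=0.0):
        self.alpha = alpha  # blending weight; ~0.98 is a common starting point
        self.angle = angle  # current angle estimate (rad)

    def update(self, gyro_rate, accel_angle, dt):
        # Propagate with the gyro, then pull gently toward the accelerometer.
        self.angle = (self.alpha * (self.angle + gyro_rate * dt)
                      + (1.0 - self.alpha) * accel_angle)
        return self.angle
```

With a constant accelerometer reading the estimate converges to it geometrically, while short gyro transients pass through almost unattenuated; that trade-off is set by `alpha`.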
Pose and Wind Estimation for Autonomous Parafoils
2014-09-01
... The method used is a nonlinear estimator that combines the visual sensor measurements with those of an inertial measurement unit (IMU) ... isolated on the left side of the equation. On the other hand, when the measurement equation of (3.27) is implemented, the probability ...
Fernández Peruchena, Carlos M; Prado-Velasco, Manuel
2010-01-01
Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, pushed by the aging and change of life styles. Despite the huge resources dedicated to improve their quality of life, mortality and morbidity rates, these are still very poor. In this work, DM pathology is revised from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, under the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide patient's information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer), is presented. This technology was designed for a renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies. PMID:21625646
Guzsvinecz, Tibor; Szucs, Veronika; Sik Lányi, Cecília
2015-01-01
Nowadays the development of virtual reality-based applications is one of the most dynamically growing areas. These applications have a wide user base, and more and more devices providing several kinds of user interaction are available on the market. Devices that need not be held in the hand have potential in educational, entertainment and rehabilitation applications. The purpose of this paper is to examine the precision and efficiency of user interaction with such devices in virtual reality-based applications. The first task of the developed application is to support the rehabilitation process of stroke patients in their homes. A newly developed application will be introduced in this paper, which uses two popular devices, the Shimmer sensor and the Microsoft Kinect sensor. To identify and to validate the actions of the user, these sensors work together in parallel. The application can record an educational pattern, and the software then compares this pattern to the action of the user. The goal of the current research is to examine how precisely the two sensors identify the predefined actions, and the extent of the difference between them in gesture recognition. This could affect the rehabilitation process of stroke patients and influence the efficiency of the rehabilitation. The application was developed in the C# programming language and uses the original Shimmer connecting application as a base. Five different movements can be taught with each of the Shimmer and Microsoft Kinect sensors, and the application can recognize these actions at any later time. It uses a file-based database and the application's runtime memory to store the saved data so that the actions can be accessed easily. The conclusion is that much more precise data were collected from the Microsoft Kinect sensor than from the Shimmer sensors.
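The abstract does not say how the recorded pattern is compared with the user's action; one common choice for matching gesture traces of different lengths is dynamic time warping. The 1D sketch below is therefore an assumption about the matching step, not the application's actual algorithm:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1D gesture traces.

    Classic O(len(a)*len(b)) dynamic program; smaller means more similar,
    and time-stretched copies of the same motion score (near) zero.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A gesture would be accepted when the distance to a taught template falls below a per-gesture threshold.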
Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas
2008-01-01
PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes an exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
Symphony: A Framework for Accurate and Holistic WSN Simulation
Riliskis, Laurynas; Osipov, Evgeny
2015-01-01
Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144
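The clock emulator with skew models mentioned above can be sketched in its simplest form: each emulated node's local clock runs at a slightly different constant rate from simulation time. This toy assumes a constant-skew model only; the framework's emulator supports several skew models:

```python
class SkewedClock:
    """Emulated node clock with a constant skew (in parts per million)
    and an initial offset relative to the simulator's global time."""

    def __init__(self, skew_ppm=0.0, offset=0.0):
        self.rate = 1.0 + skew_ppm * 1e-6  # local seconds per simulated second
        self.offset = offset               # initial offset (s)

    def local_time(self, sim_time):
        """Local timestamp this node would report at global time sim_time."""
        return self.offset + self.rate * sim_time
```

Even a 50 ppm skew accumulates 50 ms of drift over 1000 s, which is enough to perturb MAC-layer schedules in a simulated WSN.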
Grüt: A Gardening Sensor Kit for Children
Valpreda, Fabrizio; Zonda, Ilaria
2016-01-01
Food waste is one of the main problems in our society. This is mainly caused by people’s behaviors and attitudes, which influence the whole food chain, from production to final consumption. In fact, food is generally perceived as a commodity by adults, who transmit this behavior to children, who in turn do not develop any consciousness about food’s source. One way to reduce the problem seems to be by changing consumers’ attitudes, which develop during the early years of childhood. Research has shown that after attending school garden classes, children’s food-related behavior changes. Growing crops is not always easy, as it can’t be done in the domestic space, and this leads to a loss of the long-term positive effects. This paper presents a project that tries to teach children how to grow their own food indoors and outdoors, mixing real and virtual reality, connecting something natural like a plant to the Internet of Things (or IOT, a network of physical objects virtually connected to each other and to the web). The use of sensors related to an app makes this process more fun and useful for educational purposes. The aim of the project is to change children’s attitude towards food, increasing their knowledge about production and consumption, in order to reduce waste on a long-term basis. The research has been developed in collaboration with Cisco NL and MediaLAB Amsterdam. The user testing has been executed with Dutch children in Amsterdam. PMID:26891301
A Cluster-Based Architecture to Structure the Topology of Parallel Wireless Sensor Networks
Lloret, Jaime; Garcia, Miguel; Bri, Diana; Diaz, Juan R.
2009-01-01
A wireless sensor network is a self-configuring network of mobile nodes connected by wireless links where the nodes have limited capacity and energy. In many cases, the application environment requires the design of an exclusive network topology for a particular case. Cluster-based network developments and proposals in existence have been designed to build a network for just one type of node, where all nodes can communicate with any other nodes in their coverage area. Let us suppose a set of clusters of sensor nodes where each cluster is formed by different types of nodes (e.g., they could be classified by the sensed parameter using different transmitting interfaces, by the node profile or by the type of device: laptops, PDAs, sensors, etc.) and exclusive networks, as virtual networks, are needed with the same type of sensed data, or the same type of devices, or even the same type of profiles. In this paper, we propose an algorithm that is able to structure the topology of different wireless sensor networks to coexist in the same environment. It allows control and management of the topology of each network. The architecture operation and the protocol messages will be described. Measurements from a real test-bench will show that the designed protocol has low bandwidth consumption and also demonstrates the viability and the scalability of the proposed architecture. Our cluster-based algorithm is compared with other algorithms reported in the literature in terms of architecture and protocol measurements. PMID:22303185
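The grouping step at the heart of the proposal, forming exclusive virtual networks from heterogeneous nodes that share a type, can be sketched as a simple partition by attribute. This toy ignores the paper's protocol and coverage constraints and only illustrates the classification idea:

```python
from collections import defaultdict

def form_virtual_networks(nodes):
    """Partition heterogeneous nodes into virtual networks by shared type.

    nodes: iterable of (node_id, node_type) pairs, where node_type could be
    the sensed parameter, the node profile, or the device class.
    """
    groups = defaultdict(list)
    for node_id, node_type in nodes:
        groups[node_type].append(node_id)
    return dict(groups)
```

In the real architecture each resulting group would then elect cluster heads and exchange the topology-management messages the paper describes.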
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-12-17
Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
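The core of the DGPS/Vision "virtual attitude sensor" is geometric: differential GPS gives the chief-to-deputy baseline in a local NED frame, the camera gives the same line of sight in the chief's body frame, and comparing the two constrains the chief's attitude. The planar sketch below illustrates the yaw case only; the paper's algorithm is three-dimensional and feeds an Extended Kalman Filter, and these function names are illustrative:

```python
import math

def baseline_ned(chief_pos, deputy_pos):
    """Relative position (N, E, D) of the deputy w.r.t. the chief,
    e.g. obtained from differential GPS (metres)."""
    return tuple(d - c for c, d in zip(chief_pos, deputy_pos))

def yaw_from_dgps_vision(baseline, los_body_az):
    """Chief yaw estimate: azimuth of the DGPS baseline in NED minus the
    camera line-of-sight azimuth measured in the chief's body frame,
    wrapped to (-pi, pi]."""
    az_ned = math.atan2(baseline[1], baseline[0])
    return (az_ned - los_body_az + math.pi) % (2 * math.pi) - math.pi
```

In the sensor-fusion architecture this derived angle would enter the EKF as one more measurement alongside the inertial and magnetic sensors.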
Nesaratnam, N; Thomas, P; Vivian, A
2017-10-01
Introduction: Dissociated tests of strabismus provide valuable information for diagnosis and monitoring of ocular misalignment in patients with normal retinal correspondence. However, they are vulnerable to operator error and rely on a fixed head position. Virtual reality headsets obviate the need for head fixation, while providing other clear theoretical advantages, including complete control over the illumination and targets presented for the patient's interaction. Purpose: We compared the performance of a virtual reality-based test of ocular misalignment to that of the traditional Lees screen, to establish the feasibility of using virtual reality technology in ophthalmic settings in the future. Methods: Three patients underwent a traditional Lees screen test, and a virtual reality headset-based test of ocular motility. The virtual reality headset-based programme consisted of an initial test to measure horizontal and vertical deviation, followed by a test for torsion. Results: The pattern of deviation obtained using the virtual reality-based test showed agreement with that obtained from the Lees screen for patients with a fourth nerve palsy, comitant esotropia, and restrictive thyroid eye disease. Conclusions: This study reports the first use of a virtual reality headset in assessing ocular misalignment, and demonstrates that it is a feasible dissociative test of strabismus.
Vibration sensing in smart machine rotors using internal MEMS accelerometers
NASA Astrophysics Data System (ADS)
Jiménez, Samuel; Cole, Matthew O. T.; Keogh, Patrick S.
2016-09-01
This paper presents a novel topology for enhanced vibration sensing in which wireless MEMS accelerometers embedded within a hollow rotor measure vibration in a synchronously rotating frame of reference. Theoretical relations between rotor-embedded accelerometer signals and the vibration of the rotor in an inertial reference frame are derived. It is thereby shown that functionality as a virtual stator-mounted displacement transducer can be achieved through appropriate signal processing. Experimental tests on a prototype rotor confirm that both magnitude and phase information of synchronous vibration can be measured directly without additional stator-mounted key-phasor sensors. Displacement amplitudes calculated from accelerometer signals will become erroneous at low rotational speeds due to accelerometer zero-g offsets, hence a corrective procedure is introduced. Impact tests are also undertaken to examine the ability of the internal accelerometers to measure transient vibration. A further capability is demonstrated, whereby the accelerometer signals are used to measure rotational speed of the rotor by analysing the signal component due to gravity. The study highlights the extended functionality afforded by internal accelerometers and demonstrates the feasibility of internal sensor topologies, which can provide improved observability of rotor vibration at externally inaccessible rotor locations.
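The frame relation the paper exploits can be illustrated with a hedged sketch: for a circular synchronous whirl, signals expressed in the rotor-fixed frame reduce to constants, so amplitude and phase follow from a simple average. Note this sketch runs in the opposite direction to the paper's processing (which starts from rotor-embedded accelerometer signals), and it ignores zero-g offsets and centripetal terms; the function name is invented for illustration.

```python
import numpy as np

def demodulate_synchronous(x, y, theta):
    """Recover amplitude and phase of a circular synchronous (1x) whirl
    x = A*cos(theta + phi), y = A*sin(theta + phi) by rotating the
    inertial-frame signals into the rotor-fixed frame, where the
    synchronous component becomes a constant, and averaging."""
    xr = np.cos(theta) * x + np.sin(theta) * y
    yr = -np.sin(theta) * x + np.cos(theta) * y
    c, s = xr.mean(), yr.mean()
    return np.hypot(c, s), np.arctan2(s, c)  # amplitude A, phase phi
```

Averaging over whole revolutions rejects non-synchronous components, which is why rotor-embedded sensing can report magnitude and phase of the 1x vibration without a key-phasor.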
Design and application of a small size SAFT imaging system for concrete structure
NASA Astrophysics Data System (ADS)
Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian
2011-07-01
A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using the synthetic aperture focusing technique (SAFT). A low-cost ultrasonic sensor array consisting of 12 market-available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of different transducers. The controlling devices for array scan as well as the virtual instrument for SAFT imaging are designed. In the coarse scan mode, with a scan step of 50 mm, the system can quickly give an image display of a cross section of 600 mm (L) × 300 mm (D) in one measurement. In the refined scan mode, the system can reduce the scan step and give an image display of the same cross section by moving the sensor array several times. Experiments on a staircase specimen, a concrete slab with an embedded target, and a building floor with an underground pipeline all verify the efficiency of the proposed method.
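The core of SAFT is delay-and-sum focusing: each A-scan contributes to an image pixel at the two-way travel time from the transducer position to that pixel. Below is a minimal monostatic sketch of that general principle, not the 12-element array's actual beamforming code; the `saft_image` name and argument layout are assumptions for illustration.

```python
import numpy as np

def saft_image(traces, x_tx, t, grid_x, grid_z, c):
    """Minimal monostatic SAFT (delay-and-sum) sketch. traces[i] is the
    pulse-echo A-scan recorded with the transducer at lateral position
    x_tx[i] on the surface (z = 0); t is the common time axis and c the
    wave speed. Each trace contributes to pixel (x, z) at the two-way
    travel time from the transducer to that pixel."""
    img = np.zeros((len(grid_z), len(grid_x)))
    dt = t[1] - t[0]
    for trace, xs in zip(traces, x_tx):
        for ix, x in enumerate(grid_x):
            for iz, z in enumerate(grid_z):
                tof = 2.0 * np.hypot(x - xs, z) / c  # two-way time of flight
                k = int(round((tof - t[0]) / dt))
                if 0 <= k < len(t):
                    img[iz, ix] += trace[k]
    return img
```

Summing coherently across transducer positions is what synthesizes the large aperture: echoes from a true scatterer add in phase at its pixel and smear out elsewhere.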
NASA Astrophysics Data System (ADS)
van Aardt, J. A.; van Leeuwen, M.; Kelbe, D.; Kampe, T.; Krause, K.
2015-12-01
Remote sensing is widely accepted as a useful technology for characterizing the Earth surface in an objective, reproducible, and economically feasible manner. To date, the calibration and validation of remote sensing data sets and biophysical parameter estimates remain challenging due to the requirement to sample large areas for ground-truth data collection, and restrictions to sample these data within narrow temporal windows centered around flight campaigns or satellite overpasses. The computer graphics community has taken significant steps to ameliorate some of these challenges by providing an ability to generate synthetic images based on geometrically and optically realistic representations of complex targets and imaging instruments. These synthetic data can be used for conceptual and diagnostic tests of instrumentation prior to sensor deployment, or to examine linkages between biophysical characteristics of the Earth surface and at-sensor radiance. In the last two decades, the use of image generation techniques for remote sensing of the vegetated environment has evolved from the simulation of simple, homogeneous, hypothetical vegetation canopies to advanced scenes and renderings with a high degree of photo-realism. Reported virtual scenes comprise up to 100M surface facets; however, despite the tighter coupling between hardware and software development, the full potential of image generation techniques for forestry applications remains to be fully explored. In this presentation, we examine the potential computer graphics techniques have for the analysis of forest structure-function relationships and demonstrate techniques that provide for the modeling of extremely highly faceted virtual forest canopies, comprising billions of scene elements.
We demonstrate the use of ray tracing simulations for the analysis of gap size distributions and characterization of foliage clumping within spatial footprints that allow for a tight matching between characteristics derived from these virtual scenes and typical pixel resolutions of remote sensing imagery.
Gao, Xiang; Yan, Shenggang; Li, Bin
2017-01-01
Magnetic detection techniques have been widely used in many fields, such as virtual reality, surgical robotics systems, and so on. A large number of methods have been developed to obtain the position of a ferromagnetic target. However, the angular rotation of the target relative to the sensor is rarely studied. In this paper, a new method for localization of a moving object, determining both the position and the rotation angle with three magnetic sensors, is proposed. Trajectory localization estimates with three magnetic sensors, arranged both collinearly and noncollinearly, were obtained in simulations, and experimental results demonstrated that the position and rotation angle of a ferromagnetic target having roll, pitch, or yaw in its movement could be calculated accurately and effectively with three noncollinear vector sensors. PMID:28892006
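Localization methods of this kind typically model the target as a point magnetic dipole and invert the field model, for example by nonlinear least squares over position, orientation, and moment. A sketch of the forward model only, under the point-dipole assumption (the function name is illustrative):

```python
import numpy as np

def dipole_field(r, m):
    """Magnetic flux density (tesla) of a point dipole with moment m
    (A*m^2), observed at offset r (m) from the dipole:
    B = mu0/(4*pi) * (3*r*(m.r)/|r|^5 - m/|r|^3)."""
    mu0_over_4pi = 1e-7  # mu0/(4*pi) in SI units
    rn = np.linalg.norm(r)
    return mu0_over_4pi * (3.0 * r * np.dot(m, r) / rn**5 - m / rn**3)
```

Each vector sensor contributes three equations of this form; with three noncollinear sensors the overdetermined system constrains both the target position and its rotation angle.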
Villiger, Michael; Bohli, Dominik; Kiper, Daniel; Pyk, Pawel; Spillmann, Jeremy; Meilick, Bruno; Curt, Armin; Hepp-Reymond, Marie-Claude; Hotz-Boendermaker, Sabina; Eng, Kynan
2013-10-01
Neurorehabilitation interventions to improve lower limb function and neuropathic pain have had limited success in people with chronic, incomplete spinal cord injury (iSCI). We hypothesized that intense virtual reality (VR)-augmented training of observed and executed leg movements would improve limb function and neuropathic pain. Patients used a VR system with a first-person view of virtual lower limbs, controlled via movement sensors fitted to the patient's own shoes. Four tasks were used to deliver intensive training of individual muscles (tibialis anterior, quadriceps, leg ad-/abductors). The tasks engaged motivation through feedback of task success. Fourteen chronic iSCI patients were treated over 4 weeks in 16 to 20 sessions of 45 minutes. Outcome measures were 10 Meter Walking Test, Berg Balance Scale, Lower Extremity Motor Score, Spinal Cord Independence Measure, Locomotion and Neuropathic Pain Scale (NPS), obtained at the start and at 4 to 6 weeks before intervention. In addition to positive changes reported by the patients (Patients' Global Impression of Change), measures of walking capacity, balance, and strength revealed improvements in lower limb function. Intensity and unpleasantness of neuropathic pain in half of the affected participants were reduced on the NPS test. Overall findings remained stable 12 to 16 weeks after termination of the training. In a pretest/posttest, uncontrolled design, VR-augmented training was associated with improvements in motor function and neuropathic pain in persons with chronic iSCI, several of which reached the level of a minimal clinically important change. A controlled trial is needed to compare this intervention to active training alone or in combination.
Wang, Wen-Bin; Li, Jang-Yuan; Wu, Qi-Jun
2007-01-01
A LabVIEW-based self-constructed chemical virtual instrument (VI) has been developed for determining temperatures and pressures. It can be put together easily and quickly by selecting hardware modules, such as a PCI-DAQ card or serial-port interface, different kinds of sensors, signal-conditioning circuits or finished chemical instruments, and software modules for data acquisition, saving, and processing. The VI system provides individual and extremely flexible solutions for automatic measurements in physical chemistry research. PMID:17671611
Schwenk, Michael; Grewal, Gurtej S; Honarvar, Bahareh; Schwenk, Stefanie; Mohler, Jane; Khalsa, Dharma S; Najafi, Bijan
2014-12-13
Wearable sensor technology can accurately measure body motion and provide incentive feedback during exercising. The aim of this pilot study was to evaluate the effectiveness and user experience of a balance training program in older adults integrating data from wearable sensors into a human-computer interface designed for interactive training. Senior living community residents (mean age 84.6) with confirmed fall risk were randomized to an intervention (IG, n = 17) or control group (CG, n = 16). The IG underwent 4 weeks (twice a week) of balance training including weight shifting and virtual obstacle crossing tasks with visual/auditory real-time joint movement feedback using wearable sensors. The CG received no intervention. Outcome measures included changes in center of mass (CoM) sway and ankle and hip joint sway measured during eyes-open (EO) and eyes-closed (EC) balance tests at baseline and post-intervention. Ankle-hip postural coordination was quantified by a reciprocal compensatory index (RCI). Physical performance was quantified by the Alternate Step Test (AST), Timed Up and Go (TUG), and gait assessment. User experience was measured by a standardized questionnaire. After the intervention, sway of CoM, hip, and ankle was reduced in the IG compared to the CG during both the EO and EC conditions (p = .007-.042). Improvement was obtained for the AST (p = .037), TUG (p = .024), and fast gait speed (p = .010), but not normal gait speed (p = .264). Effect sizes were moderate for all outcomes. RCI did not change significantly. Users expressed a positive training experience including fun, safety, and helpfulness of sensor feedback. Results of this proof-of-concept study suggest that older adults at risk of falling can benefit from the balance training program. Study findings may help to inform future exercise interventions integrating wearable sensors for guided game-based training in home and community environments.
Future studies should evaluate the added value of the proposed sensor-based training paradigm compared to traditional balance training programs and commercial exergames. Trial registration: http://www.clinicaltrials.gov NCT02043834.
Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto
2013-01-01
In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680
Ubiquitous health in practice: the interreality paradigm.
Gaggioli, Andrea; Raspelli, Simona; Grassi, Alessandra; Pallavicini, Federica; Cipresso, Pietro; Wiederhold, Brenda K; Riva, Giuseppe
2011-01-01
In this paper we introduce a new ubiquitous computing paradigm for behavioral health care: "Interreality". Interreality integrates assessment and treatment within a hybrid environment that creates a bridge between the physical and virtual worlds. Our claim is that bridging virtual experiences (fully controlled by the therapist, used to learn coping skills and emotional regulation) with real experiences (allowing both the identification of any critical stressors and the assessment of what has been learned) using advanced technologies (virtual worlds, advanced sensors and PDA/mobile phones) may improve existing psychological treatment. To illustrate the proposed concept, a clinical scenario is also presented and discussed: Daniela, a 40-year-old teacher whose mother is affected by Alzheimer's disease.
Novel graphical environment for virtual and real-world operations of tracked mobile manipulators
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.
1993-08-01
A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
Investigation of HV/HR-CMOS technology for the ATLAS Phase-II Strip Tracker Upgrade
NASA Astrophysics Data System (ADS)
Fadeyev, V.; Galloway, Z.; Grabas, H.; Grillo, A. A.; Liang, Z.; Martinez-Mckinney, F.; Seiden, A.; Volk, J.; Affolder, A.; Buckland, M.; Meng, L.; Arndt, K.; Bortoletto, D.; Huffman, T.; John, J.; McMahon, S.; Nickerson, R.; Phillips, P.; Plackett, R.; Shipsey, I.; Vigani, L.; Bates, R.; Blue, A.; Buttar, C.; Kanisauskas, K.; Maneuski, D.; Benoit, M.; Di Bello, F.; Caragiulo, P.; Dragone, A.; Grenier, P.; Kenney, C.; Rubbo, F.; Segal, J.; Su, D.; Tamma, C.; Das, D.; Dopke, J.; Turchetta, R.; Wilson, F.; Worm, S.; Ehrler, F.; Peric, I.; Gregor, I. M.; Stanitzki, M.; Hoeferkamp, M.; Seidel, S.; Hommels, L. B. A.; Kramberger, G.; Mandić, I.; Mikuž, M.; Muenstermann, D.; Wang, R.; Zhang, J.; Warren, M.; Song, W.; Xiu, Q.; Zhu, H.
2016-09-01
ATLAS has formed a strip CMOS project to study the use of CMOS MAPS devices as silicon strip sensors for the Phase-II Strip Tracker Upgrade. This choice of sensors promises several advantages over the conventional baseline design, such as better resolution, less material in the tracking volume, and faster construction. At the same time, many design features of the sensors are driven by the requirement of minimizing the impact on the rest of the detector. Hence the target devices feature long pixels which are grouped to form a virtual strip with a binary-encoded z position. The key performance aspects are radiation hardness compatible with the HL-LHC environment, as well as extraction of the full hit position with a full-reticle readout architecture. To date, several test chips have been submitted using two different CMOS technologies. The AMS 350 nm process is a high-voltage CMOS (HV-CMOS) process that supports a sensor bias of up to 120 V. The TowerJazz 180 nm high-resistivity CMOS (HR-CMOS) process uses a high-resistivity epitaxial layer to provide the depletion region on top of the substrate. We have evaluated passive pixel performance and charge collection projections. The results strongly support the tolerance of these devices to the radiation dose of the HL-LHC in the strip tracker region. We also describe design features for the next chip submission that are motivated by our technology evaluation.
NASA Technical Reports Server (NTRS)
Delin, K. A.; Harvey, R. P.; Chabot, N. A.; Jackson, S. P.; Adams, Mike; Johnson, D. W.; Britton, J. T.
2003-01-01
The most rigorous tests of the ability to detect extant life will occur where biotic activity is limited by severe environmental conditions. Cryogenic environments are among the most severe: the energy and nutrients needed for biological activity are in short supply, while the climate itself is actively destructive to biological mechanisms. In such settings biological activity is often limited to brief flourishes, occurring only when and where conditions are at their most favorable. The closer that typical regional conditions approach conditions that are actively hostile, the more widely distributed biological blooms will be in both time and space. On a spatial dimension of a few meters or a time dimension of a few days, biological activity becomes much more difficult to detect. One way to overcome this difficulty is to establish a Sensor Web that can monitor microclimates over appropriate scales of time and distance, allowing a continuous virtual presence for instant recognition of favorable conditions. A more sophisticated Sensor Web, incorporating metabolic sensors, can effectively meet the challenge of being in "the right place at the right time". This is particularly of value in planetary surface missions, where limited mobility and mission timelines require extremely efficient sample and data acquisition. Sensor Webs can be an effective way to fill the gap between broad-scale orbital data collection and fine-scale surface lander science. We are in the process of developing an intelligent, distributed and autonomous Sensor Web that will allow us to monitor microclimate under severe cryogenic conditions, approaching those extant on the surface of Mars. Ultimately this Sensor Web will include the ability to detect and/or establish limits on extant microbiological activity through incorporation of novel metabolic gas sensors.
Here we report the results of our first deployment of a Sensor Web prototype in a previously unexplored high altitude East Antarctic Plateau "micro-oasis" at the MacAlpine Hills, Law Glacier, Antarctica.
A novel vibration structure for dynamic balancing measurement
NASA Astrophysics Data System (ADS)
Qin, Peng; Cai, Ping; Hu, Qinghan; Li, Yingxia
2006-11-01
Based on the concept of the instantaneous center of motion in theoretical mechanics, this paper presents a novel virtual vibration structure for dynamic balancing measurement with high precision. The structural features and the unbalance response characteristics of this vibration structure are analyzed in depth, and the relation between the real measuring system and the virtual one is expounded. Theoretical analysis indicates that the flexibly hinged integrative plate-spring sets hold a fixed vibration center, so that this vibration system has an excellent plane-separation effect. In addition, the sensors are mounted on the same longitudinal section, which eliminates the influence of phase error on the primary unbalance reduction ratio. Furthermore, performance changes in the sensors caused by environmental factors have less influence on the accuracy of the measurement. The result is more accurate measurement with a lower requirement for a second correction run.
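For context, the vibration readings such a structure produces are commonly consumed by influence-coefficient balancing, where runs with and without a known trial mass yield the correction unbalance. A single-plane sketch of that standard method (not the paper's plane-separation structure; the function name is illustrative), with complex numbers encoding amplitude and phase:

```python
def balance_correction(v0, v1, trial):
    """Single-plane influence-coefficient balancing sketch.

    v0: complex vibration reading (amplitude and phase) of the initial run;
    v1: reading with a known trial unbalance `trial` (complex: mass at an
    angular position) attached. Under the linear model v = alpha * U,
    returns the correction unbalance that nulls the initial vibration."""
    alpha = (v1 - v0) / trial  # measured influence coefficient
    return -v0 / alpha         # correction unbalance
```

Good plane separation and low phase error in the measurement directly improve the estimate of `alpha`, which is what reduces the need for a second correction run.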
Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.
Stępień, Grzegorz
2018-03-17
The following article presents a new isometric transformation algorithm based on transformation in a newly normed Hilbert type space. The presented method is based on so-called virtual translations, already known in advance, of two relatively oblique orthogonal coordinate systems (the interior and exterior orientation of sensors) to a common point known in both systems. Each of the systems is translated along its axis (the systems have common origins), while the angular relative orientation of both coordinate systems remains constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert type space. As such, the displacement of the two relatively oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for the test data set and measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10^-6 of the input data, which confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and, by reducing the translation of the two relatively oblique orthogonal coordinate systems to zero, enables direct calculation of the exterior orientation of the sensors.
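For comparison, the widely used centroid-based estimator for the same isometric problem removes the centroids of corresponding point sets and solves for the rotation by SVD (the Kabsch/Procrustes method). MCIT's Hilbert-type-space translation differs from this, but the sketch shows the baseline idea of eliminating translation before solving for rotation; the function name is illustrative.

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate the rotation R and translation t with Q ~= R @ P + t
    from two corresponding 3xN point sets, by removing the centroids
    and solving for R via SVD (the Kabsch/Procrustes method)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

As in MCIT, once the translation component is eliminated the rotation can be computed directly, which is what makes such methods robust to large rotation angles.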
Chalil Madathil, Kapil; Greenstein, Joel S
2017-11-01
Collaborative virtual reality-based systems have integrated high-fidelity voice-based communication, immersive audio, and screen-sharing tools into virtual environments. Such three-dimensional collaborative virtual environments can mirror the collaboration among usability test participants and facilitators when they are physically collocated, potentially enabling moderated usability tests to be conducted effectively when the facilitator and participant are located in different places. We developed a virtual collaborative three-dimensional remote moderated usability testing laboratory and employed it in a controlled study to compare the effectiveness of moderated usability testing in a collaborative virtual reality-based environment with two other moderated usability testing methods: the traditional lab approach and Cisco WebEx, a web-based conferencing and screen-sharing approach. Using a mixed-methods experimental design, 36 test participants and 12 test facilitators were asked to complete representative tasks on a simulated online shopping website. The dependent variables included the time taken to complete the tasks; the usability defects identified and their severity; and subjective ratings on workload index, presence, and satisfaction questionnaires. The remote moderated usability testing methodology using a collaborative virtual reality system performed similarly to the other two methodologies in terms of the total number of defects identified, the number of high-severity defects identified, and the time taken to complete the tasks. The overall workload experienced by the test participants and facilitators was the least in the traditional lab condition. No significant differences were identified for the workload experienced in the virtual reality and WebEx conditions. However, test participants experienced greater involvement and a more immersive experience in the virtual environment than in the WebEx condition.
The ratings for the virtual environment condition were not significantly different from those for the traditional lab condition. The results of this study suggest that participants were productive and enjoyed the virtual lab condition, indicating the potential of a virtual world based approach as an alternative to conventional approaches for synchronous usability testing.
Estimating Three-Dimensional Orientation of Human Body Parts by Inertial/Magnetic Sensing
Sabatini, Angelo Maria
2011-01-01
User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics, virtual reality, where they can also be applied for real-time tracking of the orientation of human body parts in the three-dimensional (3D) space. Although they are a promising choice as wearable sensors under many respects, the inertial and magnetic sensors currently in use offer measuring performance that are critical in order to achieve and maintain accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation. PMID:22319365
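The simplest inertial/magnetic orientation estimate in this line of work is the static accelerometer-plus-magnetometer solution: roll and pitch from the measured gravity direction, heading from the tilt-compensated magnetic field. A sketch of that textbook recipe (not a specific filter from the review), assuming a NED-style convention in which the accelerometer reports the gravity direction, down-positive, in body axes:

```python
import numpy as np

def tilt_heading(acc, mag):
    """Static orientation from one accelerometer and one magnetometer
    sample. Assumes the accelerometer reports the gravity direction
    (down-positive) in body axes and the navigation-frame magnetic field
    has only north and down components. Returns ZYX Euler angles
    (roll, pitch, yaw) in radians."""
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag
    # Rotate the magnetometer reading back to the horizontal plane
    # (tilt compensation) before taking the heading.
    mxh = (mx * np.cos(pitch) + my * np.sin(pitch) * np.sin(roll)
           + mz * np.sin(pitch) * np.cos(roll))
    myh = my * np.cos(roll) - mz * np.sin(roll)
    yaw = np.arctan2(-myh, mxh)
    return roll, pitch, yaw
```

Sensor fusion filters (complementary or Kalman) blend this drift-free but noisy estimate with integrated gyroscope rates to track orientation during motion.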
Kansei Biosensor and IT Society
NASA Astrophysics Data System (ADS)
Toko, Kiyoshi
A taste sensor with global selectivity is composed of several kinds of lipid/polymer membranes for transforming information on taste substances into an electric signal. The sensor output shows different patterns for chemical substances that have different taste qualities, such as saltiness and sourness. Taste interactions such as the suppression effect, which occurs between bitterness and sweetness, can be detected and quantified using the taste sensor. The taste and also the smell of foodstuffs such as beer, coffee, mineral water, soup and milk can be discussed quantitatively. The taste sensor provides an objective scale for human sensory expression. Multi-modal communication becomes possible using a taste/smell recognition microchip, which produces virtual taste. We are now standing at the beginning of a new age of communication using digitized taste.
Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation
Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo
2011-07-26
This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, & 3) that contains a 'Kinnogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054
NASA Astrophysics Data System (ADS)
Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.
2015-12-01
The confluence of 3-D printing, low-cost solid-state sensors, low-cost, low-power digital controllers (e.g., Arduinos), and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a Maker lab space where people may collaborate in person or virtually via an online forum for the publication and discussion of environmental sensing technology (Corvallis, Oregon, USA; please feel free to request a free reservation for space and equipment use). The physical lab houses a test bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein students and scientists worldwide can publish, with peer review and a DOI, novel and evolutionary advancements in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems. The physical lab will include all of the machines required to produce these sensing systems. These tools can be used in person or virtually, creating a truly global venue for advancement in monitoring Earth's environment and agricultural systems. In this talk we will present an example of the process of designing and publishing the OPENS-Permeameter, including its design and data. The publication includes 3-D printing code; Arduino (or other control/logging platform) operational code; sample data sets; and a full discussion of the design set in the scientific context of previous related devices. Editors for the peer-review process are currently sought - contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.
Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.
La, Hung Manh; Sheng, Weihua
2013-04-01
In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted average. Second, we develop the distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors know the information of the leader. Experimental results are provided to demonstrate our proposed algorithms.
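The fusion scheme described above rests on consensus iterations across the sensor graph. As an illustrative sketch only (not the authors' implementation, and omitting the second, confidence-weighting phase), a plain average-consensus filter over a fixed, connected sensor network can be written as:

```python
import numpy as np

def consensus_filter(measurements, adjacency, epsilon=0.2, steps=200):
    """Average consensus: each node repeatedly nudges its estimate toward
    its neighbors' estimates; all nodes converge to the network-wide mean.
    Illustrative sketch; stability requires epsilon < 1 / max_degree."""
    x = np.array(measurements, dtype=float)
    A = np.array(adjacency, dtype=float)
    for _ in range(steps):
        # x <- x - epsilon * L x, where L is the graph Laplacian
        x = x + epsilon * (A @ x - A.sum(axis=1) * x)
    return x

# Four sensor nodes in a line topology, noisy readings of a true value near 10.0
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
z = [9.2, 10.5, 10.1, 9.8]
est = consensus_filter(z, A)
```

After enough iterations every node holds the same value, the mean of the initial measurements, which is the building block the paper's weighted and confidence-tracking variants extend.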
Design of an Intelligent Front-End Signal Conditioning Circuit for IR Sensors
NASA Astrophysics Data System (ADS)
de Arcas, G.; Ruiz, M.; Lopez, J. M.; Gutierrez, R.; Villamayor, V.; Gomez, L.; Montojo, Mª. T.
2008-02-01
This paper presents the design of an intelligent front-end signal conditioning system for IR sensors. The system has been developed as an interface between a PbSe IR sensor matrix and a TMS320C67x digital signal processor. The system architecture ensures scalability, so it can be used with sensors of different matrix sizes. It includes an integrator-based signal conditioning circuit, a data acquisition converter block, and an FPGA-based advanced control block that allows high-level image preprocessing routines, such as faulty pixel detection and sensor calibration, to be included in the signal conditioning front-end. During the design phase, virtual instrumentation technologies proved to be a very valuable prototyping tool when choosing the best A/D converter type for the application. Development time was significantly reduced due to the use of this technology.
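One of the preprocessing routines mentioned, faulty pixel detection, is commonly implemented by flagging pixels that deviate strongly from the median of their local neighborhood. A minimal sketch of that generic approach (an assumption for illustration, not the paper's FPGA implementation):

```python
import numpy as np

def faulty_pixels(frame, threshold=5.0):
    """Flag pixels whose value deviates from the median of their 3x3
    neighborhood by more than `threshold`. Generic sketch of a common
    faulty-pixel test, not the FPGA routine described in the paper."""
    padded = np.pad(frame.astype(float), 1, mode='edge')
    h, w = frame.shape
    # Stack the nine shifted views that make up each pixel's 3x3 neighborhood
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    local_median = np.median(windows, axis=0)
    return np.abs(frame - local_median) > threshold

frame = np.full((8, 8), 100.0)   # uniform hypothetical IR frame
frame[3, 4] = 250.0              # one "stuck" pixel
mask = faulty_pixels(frame)
```

Flagged pixels would then be replaced (e.g., by the same local median) during calibration.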
Secure Autonomous Automated Scheduling (SAAS). Rev. 1.1
NASA Technical Reports Server (NTRS)
Walke, Jon G.; Dikeman, Larry; Sage, Stephen P.; Miller, Eric M.
2010-01-01
This report describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the UK-DMC, is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.
NASA Astrophysics Data System (ADS)
DeVries, B.; Huang, W.; Huang, C.; Jones, J. W.; Lang, M. W.; Creed, I. F.; Carroll, M.
2017-12-01
The function of wetlandscapes in hydrological and biogeochemical cycles is largely governed by surface inundation, with small wetlands that experience periodic inundation playing a disproportionately large role in these processes. However, the spatial distribution and temporal dynamics of inundation in these wetland systems are still poorly understood, resulting in large uncertainties in global water, carbon and greenhouse gas budgets. Satellite imagery provides synoptic and repeat views of the Earth's surface and presents opportunities to fill this knowledge gap. Despite the proliferation of Earth Observation satellite missions in the past decade, no single satellite sensor can simultaneously provide the spatial and temporal detail needed to adequately characterize inundation in small, dynamic wetland systems. Surface water data products must therefore integrate observations from multiple satellite sensors in order to address this objective, requiring the development of improved and coordinated algorithms to generate consistent estimates of surface inundation. We present a suite of algorithms designed to detect surface inundation in wetlands using data from a virtual constellation of optical and radar sensors comprising the Landsat and Sentinel missions (DeVries et al., 2017). Both optical and radar algorithms were able to detect inundation in wetlands without the need for external training data, allowing for high-efficiency monitoring of wetland inundation at large spatial and temporal scales. Applying these algorithms across a gradient of wetlands in North America, preliminary findings suggest that while these fully automated algorithms can detect wetland inundation at higher spatial and temporal resolutions than currently available surface water data products, limitations specific to the satellite sensors and their acquisition strategies are responsible for uncertainties in inundation estimates. 
Further research is needed to investigate strategies for integrating optical and radar data from virtual constellations, with a focus on reducing uncertainties, maximizing spatial and temporal detail, and establishing consistent records of wetland inundation over time. The findings and conclusions in this article do not necessarily represent the views of the U.S. Government.
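The optical inundation-detection algorithms referenced above are not reproduced here, but a common index-based approach to mapping open water in optical imagery (an assumption for illustration, not the authors' training-free method) is thresholding the Normalized Difference Water Index:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Water reflects green light and absorbs near-infrared, so
    NDWI = (G - NIR) / (G + NIR) is positive over open water.
    Illustrative sketch; real products tune the threshold per sensor."""
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / np.clip(green + nir, 1e-9, None)
    return ndwi > threshold

# Hypothetical 2x2 reflectance chips: left column water, right column vegetation
green = np.array([[0.30, 0.05], [0.28, 0.06]])
nir   = np.array([[0.05, 0.40], [0.06, 0.35]])
mask = ndwi_water_mask(green, nir)
```

Combining such per-sensor masks across acquisition dates is one way a virtual constellation can build an inundation time series.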
Yi, Meng; Chen, Qingkui; Xiong, Neal N
2016-11-03
This paper considers the distributed access and control problem of massive wireless sensor networks' data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes: it enhances the accessibility of service requests, reduces network delay, and achieves higher load-balancing capacity and resource utilization.
Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems
NASA Astrophysics Data System (ADS)
Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.
2014-12-01
The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special-purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system is flexible, efficient and robust. The system was developed and tested in three science applications: 1) a fresh water limnology deployment in Wisconsin, 2) a near coastal marine science deployment at the UCSD Scripps Pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services are the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.
Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco
2009-01-01
3D imaging sensors for the acquisition of three-dimensional (3D) shapes have attracted, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in achieving compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Equally important as physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618
Fiber-Optic Continuous Liquid Sensor for Cryogenic Propellant Gauging
NASA Technical Reports Server (NTRS)
Xu, Wei
2010-01-01
An innovative fiber-optic sensor has been developed for low-thrust-level settled mass gauging with measurement uncertainty <0.5 percent over cryogenic propellant tank fill levels from 2 to 98 percent. The proposed sensor uses a single optical fiber to measure liquid level and liquid distribution of cryogenic propellants. Every point of the sensing fiber is a point sensor that not only distinguishes liquid and vapor, but also measures temperature. The sensor is able to determine the physical location of each point sensor with 1-mm spatial resolution. Acting as a continuous array of numerous liquid/vapor point sensors, the truly distributed optical sensing fiber can be installed in a propellant tank in the same manner as silicon diode point sensor stripes, using only a single feedthrough to connect to an optical signal interrogation unit outside the tank. Either water or liquid nitrogen levels can be measured with 1-mm spatial resolution up to a distance of 70 meters from the optical interrogation unit. This liquid-level sensing technique was also compared to the pressure-gauge measurement technique in water and liquid nitrogen contained in a vertical copper pipe, with reasonable agreement. It has been demonstrated that the sensor can measure liquid levels in multiple containers containing water or liquid nitrogen with one signal interrogation unit. The liquid levels measured by the multiple fiber sensors were consistent with those measured by a ruler. The sensing performance of various optical fibers has been measured, demonstrating that they can survive immersion at cryogenic temperatures. The fiber strength in liquid nitrogen has also been measured.
Multiple water-level tests were also conducted under various actual and theoretical vibration conditions, and demonstrated that the signal-to-noise ratio under these conditions, insofar as it affects measurement accuracy, is manageable and robust enough for a wide variety of spacecraft applications. A simple solution has been developed to absorb optical energy at the termination of the optical sensor, thereby avoiding any feedback to the optical interrogation unit.
Providing a virtual tour of a glacial watershed
NASA Astrophysics Data System (ADS)
Berner, L.; Habermann, M.; Hood, E.; Fatland, R.; Heavner, M.; Knuth, E.
2007-12-01
SEAMONSTER, a NASA-funded sensor web project, is the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research. SEAMONSTER leverages existing open-source software and implements existing sensor web technologies, and is intended to act as a sensor web testbed, an educational tool, a scientific resource, and a public resource. The primary focus area of the initial SEAMONSTER deployment is the Lemon Creek watershed, which includes the Lemon Creek Glacier studied as part of the 1957-58 IPY. This presentation describes our year-one efforts to maximize the education and public outreach activities of SEAMONSTER. During the first summer, 37 sensors were deployed throughout two partially glaciated watersheds and facilitated data acquisition in temperate rain forest, alpine, lacustrine, and glacial environments. Understanding these environments is important for public understanding of climate change. These environments are geographically isolated, limiting public access to, and understanding of, such locales. In an effort to inform the general public and primary educators about the basic processes occurring in these unique natural systems, we are developing an interactive website. This web portal will supplement and enhance primary environmental science education by providing educators and students with interactive access to basic information from the glaciological, hydrological, and meteorological systems we are studying. In addition, we are developing an interactive virtual tour of the Lemon Glacier and its watershed. This effort will use Google Earth as a means of real-time data visualization and will take advantage of time-lapse movies, photographs, maps, and satellite imagery to promote an understanding of these unique natural systems and the role of sensor webs in education.
Inertial Head-Tracker Sensor Fusion by a Complementary Separate-Bias Kalman Filter
NASA Technical Reports Server (NTRS)
Foxlin, Eric
1996-01-01
Current virtual environment and teleoperator applications are hampered by the need for an accurate, quick-responding head-tracking system with a large working volume. Gyroscopic orientation sensors can overcome problems with jitter, latency, interference, line-of-sight obscurations, and limited range, but suffer from slow drift. Gravimetric inclinometers can detect attitude without drifting, but are slow and sensitive to transverse accelerations. This paper describes the design of a Kalman filter to integrate the data from these two types of sensors in order to achieve the excellent dynamic response of an inertial system without drift, and without the acceleration sensitivity of inclinometers.
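Foxlin's separate-bias Kalman filter is more involved, but the underlying idea of complementary fusion (fast but drifting gyro, slow but drift-free inclinometer) can be sketched with a first-order complementary filter. This is an illustrative simplification under assumed signals, not the paper's filter:

```python
def complementary_filter(gyro_rates, incl_angles, dt, alpha=0.98):
    """Blend the integrated gyro (trusted at high frequency) with the
    inclinometer (trusted at low frequency). alpha close to 1 keeps the
    gyro's dynamic response while the inclinometer bounds the drift."""
    angle = incl_angles[0]
    out = []
    for rate, incl in zip(gyro_rates, incl_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * incl
        out.append(angle)
    return out

# Hypothetical signals: constant true angle of 10 deg, gyro with a
# +0.5 deg/s bias, noiseless inclinometer, 20 s at 100 Hz.
dt = 0.01
gyro = [0.5] * 2000    # biased rate readings, deg/s
incl = [10.0] * 2000   # inclinometer readings, deg
angles = complementary_filter(gyro, incl, dt)
```

Pure integration of this biased gyro would drift to 20 deg over the run; the complementary blend instead settles near 10.25 deg, showing how the inclinometer arrests the drift. The Kalman formulation in the paper goes further by explicitly estimating the bias itself.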
Integrating Fiber Optic Strain Sensors into Metal Using Ultrasonic Additive Manufacturing
NASA Astrophysics Data System (ADS)
Hehr, Adam; Norfolk, Mark; Wenning, Justin; Sheridan, John; Leser, Paul; Leser, Patrick; Newman, John A.
2018-03-01
Ultrasonic additive manufacturing, a rather new three-dimensional (3D) printing technology, uses ultrasonic energy to produce metallurgical bonds between layers of metal foils near room temperature. This low temperature attribute of the process enables integration of temperature sensitive components, such as fiber optic strain sensors, directly into metal structures. This may be an enabling technology for Digital Twin applications, i.e., virtual model interaction and feedback with live load data. This study evaluates the consolidation quality, interface robustness, and load sensing limits of commercially available fiber optic strain sensors embedded into aluminum alloy 6061. Lastly, an outlook on the technology and its applications is described.
NASA Astrophysics Data System (ADS)
Da Silva, A.; Sánchez Prieto, S.; Polo, O.; Parra Espada, P.
2013-05-01
Because of the tough robustness requirements in space software development, it is imperative to carry out verification tasks at a very early development stage to ensure that the implemented exception mechanisms work properly. All this should be done long before the real hardware is available. But even when real hardware is available, the verification of software fault tolerance mechanisms can be difficult, since real faulty situations must be systematically and artificially brought about, which can be impossible on real hardware. To solve this problem the Alcala Space Research Group (SRG) has developed a LEON2 virtual platform (Leon2ViP) with fault injection capabilities. This way it is possible to run exactly the same target binary software as runs on the physical system, in a more controlled and deterministic environment, allowing stricter requirements verification. Leon2ViP enables unmanned and tightly focused fault injection campaigns, not possible otherwise, in order to expose and diagnose flaws in the software implementation early. Furthermore, the use of a virtual hardware-in-the-loop approach makes it possible to carry out preliminary integration tests with the spacecraft emulator or the sensors. The use of Leon2ViP has meant a significant improvement, in both time and cost, in the development and verification processes of the Instrument Control Unit boot software on board Solar Orbiter's Energetic Particle Detector.
Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio
2016-01-01
Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
NASA Astrophysics Data System (ADS)
Blöcher, Johanna; Kuraz, Michal
2017-04-01
In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions and metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test whether a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allow for different conceptual models of the domain and matrix coupling. The different variants of the dual permeability model are implemented in the open-source library DRUtES, written in FORTRAN 2003/2008, in 1D and 2D. For parameter identification we use adaptations of particle swarm optimization (PSO) and teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic search algorithms that do not require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need for a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes to account for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding the TDR sensors, creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and which coupling term is most suitable.
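Of the metaheuristics named, particle swarm optimization is the simplest to sketch. A minimal single-objective PSO on a toy parameter-identification problem (illustrative only; the study uses adapted and bi-objective variants, and the target function here is hypothetical):

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: particles fly through the search space, pulled toward
    their personal best and the swarm's global best. No gradients needed."""
    dim = len(bounds)
    rnd = random.Random(42)  # fixed seed for reproducibility
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy inverse problem: recover hypothetical parameters (a, b) = (2.5, -1.0)
# by minimizing a sum-of-squares residual.
best, best_val = pso(lambda p: (p[0] - 2.5) ** 2 + (p[1] + 1.0) ** 2,
                     bounds=[(-5, 5), (-5, 5)])
```

In the inverse-modeling setting of the paper, the objective would instead run the dual permeability model and score the misfit against the TDR readings.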
NASA Technical Reports Server (NTRS)
Roscoe, Stanley N.
1989-01-01
For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.
Application of Virtual, Augmented, and Mixed Reality to Urology.
Hamacher, Alaric; Kim, Su Jin; Cho, Sung Tae; Pardeshi, Sunil; Lee, Seung Hyun; Eun, Sung-Jong; Whangbo, Taeg Keun
2016-09-01
Recent developments in virtual, augmented, and mixed reality have introduced a considerable number of new devices into the consumer market. This momentum is also affecting the medical and health care sector. Although many of the theoretical and practical foundations of virtual reality (VR) were already researched and experienced in the 1980s, the vastly improved features of displays, sensors, interactivity, and computing power currently available in devices offer a new field of applications to the medical sector and also to urology in particular. The purpose of this review article is to review the extent to which VR technology has already influenced certain aspects of medicine, the applications that are currently in use in urology, and the future development trends that could be expected. PMID:27706017
de Vries, Aijse W; Faber, Gert; Jonkers, Ilse; Van Dieen, Jaap H; Verschueren, Sabine M P
2018-01-01
Virtual reality (VR) balance training may have advantages over regular exercise training in older adults. However, results so far are conflicting, potentially due to the lack of challenge imposed by the movements in those games. Therefore, the aim of this study was to assess to what extent two similar skiing games challenge balance, as reflected in center of mass (COM) movements relative to the Functional Limits of Stability (FLOS). Thirty young and elderly participants performed two skiing games, one on the Wii Balance Board (Wiiski), which uses a force plate, and one with the Kinect sensor (Kinski), which performs motion tracking. During gameplay, kinematics were captured using seven opto-electronic cameras. FLOS were obtained for eight directions. The influence of games and trials on COM displacement in each of the eight directions, and on maximal COM speed, was tested with generalized estimating equations. In all directions with an anterior or medio-lateral component, but not those with a posterior component, subjects showed significantly larger maximal %FLOS displacements during the Kinski game than during the Wiiski game. Furthermore, maximal COM displacement and COM speed in Kinski remained similar or increased over trials, whereas for Wiiski they decreased. Our results show the importance of assessing the movement challenge in games used for balance training. Similar games impose different challenges, with the control sensors and their gain settings playing an important role. Furthermore, adaptations led to a decrease in challenge in Wiiski, which might limit the effectiveness of the game as a balance-training tool.
Comparison of methods of temperature measurement in swine.
Hanneman, S K; Jesurum-Urbaitis, J T; Bickel, D R
2004-07-01
The purpose of these experiments was to test the equivalence of pulmonary artery, urinary bladder, tympanic, rectal and femoral artery methods of temperature measurement in healthy and critically ill swine under clinical intensive care unit (ICU) conditions using a prospective, time series design. First, sensors were tested for error and sensitivity to change in temperature with a precision-controlled water bath and a laboratory-certified digital thermometer for temperatures 34-42 degrees C. There was virtually no systematic (bias) or random (precision) error (<0.2 degrees C). The bladder sensor had the slowest response time to change in temperature (105-120 s). Next, testing was done in an experimental porcine ICU in a non-profit research institution with four male, sedated, and mechanically ventilated domestic farm pigs. The in vivo experiments were conducted over periods of 41-168 h with temperatures measured every 1-5 s. The bladder, tympanic and rectal methods had unacceptable bias (>or=0.5 degrees C) and/or precision (>or=0.2 degrees C). Response time varied from 7 s with the femoral artery method to 280 s (4.7 min) with the tympanic method. We concluded that equivalence of the methods was insufficient for them to be used interchangeably in the porcine ICU. Intravascular monitoring of core body temperature produces optimal measurement of porcine temperature under varying conditions of physiological stability.
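The bias (systematic error) and precision (random error) statistics used in this study are straightforward to compute from paired device/reference readings: bias is the mean of the differences and precision their standard deviation. A sketch with hypothetical readings (the study's actual data are not reproduced here):

```python
from statistics import mean, stdev

def bias_precision(device_readings, reference_readings):
    """Method-comparison summary: bias = mean difference from the
    reference; precision = standard deviation of those differences."""
    diffs = [d - r for d, r in zip(device_readings, reference_readings)]
    return mean(diffs), stdev(diffs)

# Hypothetical water-bath check against a certified thermometer, deg C
reference = [34.0, 36.0, 38.0, 40.0, 42.0]
bladder   = [34.6, 36.5, 38.4, 40.6, 42.4]
bias, precision = bias_precision(bladder, reference)
```

Against the study's acceptability criteria (bias < 0.5 deg C, precision < 0.2 deg C), this hypothetical sensor would fail on bias (0.5 deg C) while passing on precision (0.1 deg C).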
A Prototype Land Information Sensor Web: Design, Implementation and Implication for the SMAP Mission
NASA Astrophysics Data System (ADS)
Su, H.; Houser, P.; Tian, Y.; Geiger, J. K.; Kumar, S. V.; Gates, L.
2009-12-01
Land Surface Model (LSM) predictions are regular in time and space, but these predictions are influenced by errors in model structure, input variables, and parameters, and by inadequate treatment of sub-grid-scale spatial variability. Consequently, LSM predictions are significantly improved through observation constraints made in a data assimilation framework. Several multi-sensor satellites are currently operating which provide multiple global observations of the land surface and its related near-atmospheric properties. However, these observations are not optimal for addressing current and future land surface environmental problems. To meet future earth system science challenges, NASA will develop constellations of smart satellites in sensor web configurations which provide timely on-demand data and analysis to users, and which can be reconfigured based on the changing needs of science and available technology. A sensor web is more than a collection of satellite sensors: it is a system composed of multiple platforms interconnected by a communication network for the purpose of performing specific observations and processing the data required to support specific science goals. Sensor webs can eclipse the value of disparate sensor components by reducing response time and increasing scientific value, especially when two-way interaction between the model and the sensor web is enabled. The prototype Land Information Sensor Web (LISW) study, sponsored by NASA, integrates the Land Information System (LIS) in a sensor web framework that allows optimal two-way information flow: land surface modeling is enhanced using sensor web observations, and in turn the sensor web can be reconfigured to minimize overall system uncertainty. This prototype is based on a simulated interactive sensor web, which is then used to exercise and optimize the sensor web modeling interfaces.
The Land Information Sensor Web Service-Oriented Architecture (LISW-SOA) has been developed; it is the first sensor web framework developed specifically for land surface studies. Synthetic experiments based on the LISW-SOA and the virtual sensor web provide a controlled environment in which to examine the end-to-end performance of the prototype, the impact of various sensor web design trade-offs, and the eventual value of sensor webs for a particular prediction or decision support. In this paper, the design and implementation of the LISW-SOA and its implications for the Soil Moisture Active and Passive (SMAP) mission are presented. Particular attention is focused on the relationship between the economic investment in a sensor web (space- and airborne, ground based) and the accuracy of the model-predicted soil moisture that can be achieved by using such sensor observations. The study of the virtual Land Information Sensor Web (LISW) is expected to provide necessary a priori knowledge for designing and deploying the next-generation Global Earth Observation System of Systems (GEOSS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woo, L Y; Glass, R S; Novak, R F
2009-09-23
Solid-state electrochemical sensors using two different sensing electrode compositions, gold and strontium-doped lanthanum manganite (LSM), were evaluated for gas-phase sensing of NO{sub x} (NO and NO{sub 2}) using an impedance-metric technique. An asymmetric cell design utilizing porous YSZ electrolyte exposed both electrodes to the test gas (i.e., no reference gas). Sensitivity to less than 5 ppm NO and response/recovery times (10-90%) of less than 10 s were demonstrated. Using an LSM sensing electrode, virtually identical sensitivity towards NO and NO{sub 2} was obtained, indicating that the equilibrium gas concentration was measured by the sensing electrode. In contrast, for cells employing a gold sensing electrode the NO{sub x} sensitivity varied depending on the cell design: increasing the amount of porous YSZ electrolyte on the sensor surface produced higher NO{sub 2} sensitivity compared to NO. In order to achieve comparable sensitivity for both NO and NO{sub 2}, the cell with the LSM sensing electrode required operation at a lower temperature (575 °C) than the cell with the gold sensing electrode (650 °C). The role of surface reactions is proposed to explain the differences in NO and NO{sub 2} selectivity of the two different electrode materials.
Concordance of Motion Sensor and Clinician-Rated Fall Risk Scores in Older Adults.
Elledge, Julie
2017-12-01
As the older adult population in the United States continues to grow, developing reliable, valid, and practical methods for identifying fall risk is a high priority. Falls are prevalent in older adults and contribute significantly to morbidity and mortality rates and rising health costs. Identifying at-risk older adults and intervening in a timely manner can reduce falls. Conventional fall risk assessment tools require a health professional trained in the use of each tool for administration and interpretation. Motion sensor technology, which uses three-dimensional cameras to measure patient movements, is promising for assessing older adults' fall risk because it could eliminate or reduce the need for provider oversight. The purpose of this study was to assess the concordance of fall risk scores as measured by a motion sensor device, the OmniVR Virtual Rehabilitation System, with clinician-rated fall risk scores in older adult outpatients undergoing physical rehabilitation. Three standardized fall risk assessments were administered by the OmniVR and by a clinician. Validity of the OmniVR was assessed by measuring the concordance between the two assessment methods. Stability of the OmniVR fall risk ratings was assessed by measuring test-retest reliability. The OmniVR scores showed high concordance with the clinician-rated scores and high stability over time, demonstrating comparability with provider measurements.
Knock probability estimation through an in-cylinder temperature model with exogenous noise
NASA Astrophysics Data System (ADS)
Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.
2018-01-01
This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature with an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function, and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties in the in-cylinder temperature estimation. The model has only one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four-stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model predicts the knock probability with sufficient accuracy over a range of operating conditions. The trapped mass obtained by the acoustical model was validated in steady conditions against a fuel balance and a lambda sensor, and differences below 1% were found.
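The idea of an Arrhenius-like autoignition model with exogenous temperature noise can be illustrated with a Monte Carlo sketch of the Livengood-Wu knock integral; the constants A, B, and the noise level sigma below are illustrative placeholders, not the paper's calibrated values.

```python
import math, random

def knock_probability(temps_K, dt, A=1e-6, B=6000.0,
                      sigma=15.0, n_draws=2000, seed=0):
    """Fraction of noisy temperature realizations in which the
    Livengood-Wu integral of dt/tau(T), with Arrhenius-like ignition
    delay tau = A * exp(B / T), reaches 1 before the end of the trace."""
    rng = random.Random(seed)
    knocks = 0
    for _ in range(n_draws):
        offset = rng.gauss(0.0, sigma)   # one exogenous-noise realization
        integral = 0.0
        for T in temps_K:
            integral += dt / (A * math.exp(B / (T + offset)))
            if integral >= 1.0:          # autoignition criterion met
                knocks += 1
                break
    return knocks / n_draws
```

Because the deterministic integral is a monotone function of the temperature offset, the knock probability varies smoothly from 0 to 1 as the nominal trace gets hotter, matching the paper's observation that temperature uncertainty alone can explain the random nature of knock.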
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priedhorsky, Reid; Randles, Tim
Charliecloud is a set of scripts to let users run a virtual cluster of virtual machines (VMs) on a desktop or supercomputer. Key functions include: 1. Creating (typically by installing an operating system from vendor media) and updating VM images; 2. Running a single VM; 3. Running multiple VMs in a virtual cluster. The virtual machines can talk to one another over the network and (in some cases) the outside world. This is accomplished by calling external programs such as QEMU and the Virtual Distributed Ethernet (VDE) suite. The goal is to let users have a virtual cluster containing nodes where they have privileged access, while isolating that privilege within the virtual cluster so it cannot affect the physical compute resources. Host configuration enforces security; this is not included in Charliecloud, though security guidelines are included in its documentation and Charliecloud is designed to facilitate such configuration. Charliecloud manages passing information from host computers into and out of the virtual machines, such as parameters of the virtual cluster, input data specified by the user, output data from virtual compute jobs, VM console display, and network connections (e.g., SSH or X11). Parameters for the virtual cluster (number of VMs, RAM and disk per VM, etc.) are specified by the user or gathered from the environment (e.g., SLURM environment variables). Example job scripts are included. These include computation examples (such as a "hello world" MPI job) as well as performance tests. They also include a security test script to verify that the virtual cluster is appropriately sandboxed. Tests include: 1. Pinging hosts inside and outside the virtual cluster to explore connectivity; 2. Port scans (again inside and outside) to see what services are available; 3. Sniffing tests to see what traffic is visible to running VMs; 4. IP address spoofing to test network functionality in this case; 5.
File access tests to make sure host access permissions are enforced. This test script is not a comprehensive scanner and does not test for specific vulnerabilities. Importantly, no information about physical hosts or network topology is included in this script (or any of Charliecloud); while part of a sensible test, such information is specified by the user when the test is run. That is, one cannot learn anything about the LANL network or computing infrastructure by examining Charliecloud code.
NASA Astrophysics Data System (ADS)
Makki, Noaman; Pop-Iliev, Remon
2011-06-01
An in-wheel, wireless, battery-less, piezo-powered tire pressure sensor is developed. Whereas conventional battery-powered Tire Pressure Monitoring Systems (TPMS) are marred by limited battery life, TPMS based on power-harvesting modules provide virtually unlimited sensor life. Furthermore, the elimination of a permanent energy reservoir simplifies the overall sensor design by excluding the extra circuitry required to sense vehicle motion and conserve precious battery capacity during vehicle idling periods. In this paper, two design solutions are presented: 1) very low-cost, highly flexible piezoceramic (PZT) bender elements bonded directly to the tire to generate the power required to run the sensor, and 2) a novel rim-mounted PZT harvesting unit that can be used to power pressure sensors incorporated into the valve stem, requiring minimal change to presently used sensors. While both designs eliminate the environmentally unfriendly battery from the TPMS, they also offer the advantages of being very low cost, service-free, and easily replaceable during tire repair and replacement.
Simulation of Smart Home Activity Datasets
Synnott, Jonathan; Nugent, Chris; Jeffers, Paul
2015-01-01
A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation. PMID:26087371
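The model-based approach surveyed here can be sketched minimally: activities form a Markov chain and each activity fires a characteristic set of binary sensors. The activity labels, sensor names, and transition weights below are hypothetical illustrations, not taken from any reviewed dataset.

```python
import random

# Hypothetical activity transition model and per-activity sensor sets.
TRANSITIONS = {
    "sleeping":  {"toileting": 0.6, "cooking": 0.4},
    "toileting": {"sleeping": 0.3, "cooking": 0.7},
    "cooking":   {"eating": 1.0},
    "eating":    {"sleeping": 1.0},
}
SENSORS = {
    "sleeping":  ["bed_pressure"],
    "toileting": ["bathroom_motion", "bathroom_door"],
    "cooking":   ["kitchen_motion", "stove_contact"],
    "eating":    ["kitchen_motion", "chair_pressure"],
}

def simulate(start="sleeping", steps=5, seed=1):
    """Generate (time step, activity, sensor) events by walking the
    activity Markov chain and emitting each activity's sensor firings."""
    rng = random.Random(seed)
    activity, events = start, []
    for t in range(steps):
        for s in SENSORS[activity]:
            events.append((t, activity, s))
        activity = rng.choices(list(TRANSITIONS[activity]),
                               weights=list(TRANSITIONS[activity].values()))[0]
    return events
```

The resulting event stream is ground-truth-annotated by construction, which is precisely the advantage of simulated datasets noted in the review.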
DOT National Transportation Integrated Search
2016-05-01
As driving becomes more automated, vehicles are being equipped with more sensors generating even higher data rates. Radars (RAdio Detection and Ranging) are used for object detection, visual cameras as virtual mirrors, and LIDARs (LIght Detection and...
Virtual Proprioception for eccentric training.
LeMoyne, Robert; Mastroianni, Timothy
2017-07-01
Wireless inertial sensors enable quantified feedback, which can be applied to evaluate the efficacy of therapy and rehabilitation. In particular, eccentric training constitutes a beneficial rehabilitation and strength-training strategy. Virtual Proprioception for eccentric training applies real-time feedback from a wireless gyroscope platform enabled through a smartphone software application. It is applied to the eccentric phase of biceps brachii strength training and contrasted with a strength-training scenario without feedback. During operation of Virtual Proprioception, the intent is to not exceed a prescribed gyroscope signal threshold, based on the real-time presentation of the gyroscope signal, in order to emphasize the eccentric aspect of the exercise. The experimental trial data are transmitted wirelessly over the Internet as an email attachment for remote post-processing. A feature set is derived from the gyroscope signal for machine learning classification of the two scenarios: real-time Virtual Proprioception feedback for eccentric training and eccentric training without feedback. Considerable classification accuracy is achieved through the application of a multilayer perceptron neural network for distinguishing between the two scenarios.
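A feature set of the kind described (summary statistics of the gyroscope signal per repetition) might be sketched as follows; the specific features and the threshold-exceedance fraction are plausible assumptions, not the authors' published feature list.

```python
from statistics import mean, stdev

def gyro_features(signal, threshold):
    """Summary features for one repetition of the angular-rate signal;
    'frac_over' is the fraction of samples exceeding the prescribed
    threshold, which the real-time feedback aims to keep at zero."""
    over = sum(1 for w in signal if abs(w) > threshold)
    return {
        "max_abs": max(abs(w) for w in signal),
        "mean": mean(signal),
        "std": stdev(signal),
        "frac_over": over / len(signal),
    }
```

Feature vectors of this form, one per repetition, would then be fed to a classifier such as a multilayer perceptron to distinguish the feedback and no-feedback scenarios.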
Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.
Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh
2011-01-01
We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input.
NASA Astrophysics Data System (ADS)
Packard, Corey D.; Viola, Timothy S.; Klein, Mark D.
2017-10-01
The ability to predict spectral electro-optical (EO) signatures for various targets against realistic, cluttered backgrounds is paramount for rigorous signature evaluation. Knowledge of background and target signatures, including plumes, is essential for a variety of scientific and defense-related applications including contrast analysis, camouflage development, automatic target recognition (ATR) algorithm development and scene material classification. The capability to simulate any desired mission scenario with forecast or historical weather is a tremendous asset for defense agencies, serving as a complement to (or substitute for) target and background signature measurement campaigns. In this paper, a systematic process for the physical temperature and visible-through-infrared radiance prediction of several diverse targets in a cluttered natural environment scene is presented. The ability of a virtual airborne sensor platform to detect and differentiate targets from a cluttered background, from a variety of sensor perspectives and across numerous wavelengths in differing atmospheric conditions, is considered. The process described utilizes the thermal and radiance simulation software MuSES and provides a repeatable, accurate approach for analyzing wavelength-dependent background and target (including plume) signatures in multiple band-integrated wavebands (multispectral) or hyperspectrally. The engineering workflow required to combine 3D geometric descriptions, thermal material properties, natural weather boundary conditions, all modes of heat transfer and spectral surface properties is summarized. This procedure includes geometric scene creation, material and optical property attribution, and transient physical temperature prediction. Radiance renderings, based on ray-tracing and the Sandford-Robertson BRDF model, are coupled with MODTRAN for the inclusion of atmospheric effects. 
This virtual hyperspectral/multispectral radiance prediction methodology has been extensively validated and provides a flexible process for signature evaluation and algorithm development.
Noncontact Measurement of Humidity and Temperature Using Airborne Ultrasound
NASA Astrophysics Data System (ADS)
Kon, Akihiko; Mizutani, Koichi; Wakatsuki, Naoto
2010-04-01
We describe a noncontact method for measuring humidity and dry-bulb temperature. Conventional humidity sensors are single-point measurement devices, so a noncontact method for measuring relative humidity is required. Ultrasonic temperature sensors, in contrast, measure without contact; however, because water vapor in the air increases sound velocity, conventional ultrasonic temperature sensors measure the virtual temperature, which is higher than the dry-bulb temperature. We performed experiments using an ultrasonic delay line, an atmospheric pressure sensor, and either a thermometer or a relative humidity sensor to confirm the validity of our measurement method at relative humidities of 30, 50, 75, and 100% and at temperatures of 283.15, 293.15, 308.15, and 323.15 K. The results show that the proposed method measures relative humidity with an error of less than 16.4% and dry-bulb temperature with an error of less than 0.7 K. Adaptations of the measurement method for use in air-conditioning control systems are discussed.
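The virtual-temperature relation underlying acoustic thermometry can be sketched with textbook formulas: sound speed gives the virtual temperature, and a humidity measurement corrects it back to dry-bulb temperature. The constants and the T_v ≈ T(1 + 0.61q) approximation are standard sonic-thermometry assumptions, not necessarily the paper's exact formulation.

```python
def virtual_temperature(c):
    """Acoustic 'virtual' temperature (K) from sound speed c (m/s),
    using c^2 = gamma * R_d * T_v with gamma = 1.4 and
    R_d = 287.05 J/(kg K) -- the standard sonic-thermometry relation."""
    return c * c / (1.4 * 287.05)

def dry_bulb(c, q):
    """Correct to dry-bulb temperature (K) given specific humidity q
    (kg water vapor per kg moist air), via T_v ~= T * (1 + 0.61 * q):
    humid air sounds 'warmer', so the dry-bulb value is lower."""
    return virtual_temperature(c) / (1.0 + 0.61 * q)
```

Inverting in the other direction (measuring both temperature and humidity from the delay line plus one reference sensor) follows the same pair of equations.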
Analysis of a ferrofluid core differential transformer tilt measurement sensor
NASA Astrophysics Data System (ADS)
Medvegy, T.; Molnár, Á.; Molnár, G.; Gugolya, Z.
2017-04-01
In our work, we developed a ferrofluid-core differential transformer sensor that can be used to measure tilt and acceleration. The proposed sensor consists of three coils, of which the primary is excited with an alternating current. In the space surrounded by the coils is a cell half-filled with ferrofluid; therefore, when the sensor is horizontal, the fluid is distributed equally among the three sections of the cell surrounded by the three coils. However, when the cell is tilted or accelerated (in the direction of the axis of the coils), a different amount of ferrofluid is present in each of the three sections. The voltage induced in the secondary coils strongly depends on the amount of ferrofluid in the core surrounded by them, so the tilt or the acceleration of the cell becomes measurable. We constructed the sensor in several layouts. The linearly coiled sensor had an excellent resolution. Another version with a toroidal cell had almost perfect linearity and a virtually infinite measuring range.
Optimal Deployment of Sensor Nodes Based on Performance Surface of Underwater Acoustic Communication
Choi, Jee Woong
2017-01-01
The underwater acoustic sensor network (UWASN) is a system that exchanges data between numerous sensor nodes deployed in the sea. The UWASN uses an underwater acoustic communication technique to exchange data. Therefore, it is important to design a robust system that will function even in severely fluctuating underwater communication conditions, along with variations in the ocean environment. In this paper, a new algorithm to find the optimal deployment positions of underwater sensor nodes is proposed. The algorithm uses the communication performance surface, which is a map showing the underwater acoustic communication performance of a targeted area. A virtual force-particle swarm optimization algorithm is then used as an optimization technique to find the optimal deployment positions of the sensor nodes, using the performance surface information to estimate the communication radii of the sensor nodes in each generation. The algorithm is evaluated by comparing simulation results between two different seasons (summer and winter) for an area located off the eastern coast of Korea as the selected targeted area. PMID:29053569
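The virtual-force component of the optimization can be shown in isolation: neighbors closer than the optimal spacing push a node away, farther ones pull it in, and in the paper's algorithm the resultant force steers the particle updates of the PSO. The linear force law and gain k below are illustrative assumptions.

```python
import math

def virtual_force(node, neighbors, d_opt, k=1.0):
    """Resultant 2-D virtual force on one sensor node: neighbors closer
    than the optimal spacing d_opt repel it, farther ones attract it."""
    fx = fy = 0.0
    for nx, ny in neighbors:
        dx, dy = node[0] - nx, node[1] - ny
        d = math.hypot(dx, dy)
        if d == 0.0:
            continue  # coincident nodes: direction undefined, skip
        mag = k * (d_opt - d) / d_opt   # >0 repulsive, <0 attractive
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy
```

In the full algorithm, d_opt would be derived from the communication radius estimated off the performance surface, so the force field itself adapts to the acoustic environment.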
Bioulac, Stéphanie; Micoulaud-Franchi, Jean-Arthur; Maire, Jenna; Bouvard, Manuel P; Rizzo, Albert A; Sagaspe, Patricia; Philip, Pierre
2018-03-01
Virtual environments have been used to assess children with ADHD but have never been tested as therapeutic tools. We tested a new virtual classroom cognitive remediation program to improve symptoms in children with ADHD. In this randomized clinical trial, 51 children with ADHD (7-11 years) were assigned to a virtual cognitive remediation group, a methylphenidate group, or a psychotherapy group. All children were evaluated before and after therapy with an ADHD Rating Scale, a Continuous Performance Test (CPT), and a virtual classroom task. After therapy by virtual remediation, children exhibited significantly higher numbers of correct hits on the virtual classroom and CPT. These improvements were equivalent to those observed with methylphenidate treatment. Our study demonstrates for the first time that a cognitive remediation program delivered in a virtual classroom reduces distractibility in children with ADHD and could replace methylphenidate treatment in specific cases.
Li, Jisheng; Xin, Xiaohu; Luo, Yongfen; Ji, Haiying; Li, Yanming; Deng, Junbo
2013-11-01
A conformal combined sensor is designed and used in partial discharge (PD) location experiments in transformer oil. The sensor includes a cross-shaped ultrasonic phased array of 13 elements and an ultra-high-frequency (UHF) electromagnetic rectangular array of 2 × 2 elements. Through virtual expansion with high-order cumulants, the ultrasonic array can achieve the effect of an array with 61 elements. This greatly improves the aperture and directional sharpness of the original array and reduces the cost of the follow-up hardware. With the cross-shaped ultrasonic array, the results of the PD location experiments are precise, and the maximum error of the direction of arrival (DOA) is less than 5°.
NASA Astrophysics Data System (ADS)
Ward, Dennis W.; Bennett, Kelly W.
2017-05-01
The Sensor Information Testbed COllaborative Research Environment (SITCORE) and the Automated Online Data Repository (AODR) are significant enablers of the U.S. Army Research Laboratory (ARL)'s Open Campus Initiative and together create a highly collaborative research laboratory and testbed environment focused on sensor data and information fusion. SITCORE creates a virtual research and development environment allowing collaboration from other locations, including DoD, industry, academia, and coalition facilities. SITCORE combined with AODR provides end-to-end algorithm development, experimentation, demonstration, and validation. The AODR enterprise allows ARL, as well as other government organizations, industry, and academia, to store and disseminate multiple-intelligence (Multi-INT) datasets collected at field exercises and demonstrations, and to facilitate research and development (R&D) and the advancement of analytical tools and algorithms supporting the Intelligence, Surveillance, and Reconnaissance (ISR) community. The AODR provides a potential central repository for standards-compliant datasets to serve as the "go-to" location for lessons learned and reference products. Many of the AODR datasets have associated ground truth and other metadata, which provides a rich and robust data suite for researchers to develop, test, and refine their algorithms. Researchers download the test data to their own environments using a sophisticated web interface. The AODR allows researchers to request copies of stored datasets and the government to process the requests and approvals in an automated fashion. Access to the AODR requires two-factor authentication in the form of a Common Access Card (CAC) or External Certificate Authority (ECA).
MEMS cantilever sensor for THz photoacoustic chemical sensing and spectroscopy
NASA Astrophysics Data System (ADS)
Glauvitz, Nathan E.
Sensitive Microelectromechanical System (MEMS) cantilever designs were modeled, fabricated, and tested to measure the photoacoustic (PA) response of gases to terahertz (THz) radiation. Surface and bulk micromachining technologies were employed to create extremely sensitive devices that could detect very small changes in pressure. Fabricated devices were then tested in a custom-made THz PA vacuum test chamber, where the cantilever deflections caused by the photoacoustic effect were measured with laser interferometer and iris beam-clipping methods. The sensitive cantilever designs achieved a normalized noise-equivalent absorption coefficient of 2.83×10⁻¹⁰ cm⁻¹ W Hz⁻½ using a 25 µW radiation source power and a 1 s sampling time. Traditional gas-phase molecular spectroscopy absorption cells are large and bulky. The outcome of this research was a photoacoustic detection method that is virtually independent of the absorption path length, which allows the chamber dimensions to be greatly reduced, leading to the possibility of a compact, portable chemical detection and spectroscopy system.
Assessing sensor accuracy for non-adjunct use of continuous glucose monitoring.
Kovatchev, Boris P; Patek, Stephen D; Ortiz, Edward Andrew; Breton, Marc D
2015-03-01
The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessment of this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to "replay" real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments each initiated by insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3-22%) testing six scenarios: insulin dosing using sensor values, threshold, and predictive alarms, each without or with considering CGM trend arrows. In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and BG levels ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD = 10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and BG levels ≥400 mg/dL) increased and displayed an abrupt slope change at MARD = 10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold, and predictive alarms resulted in improvement in average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD = 10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes.
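The replay idea can be sketched in simplified form: corrupt a reference BG trace with multiplicative sensor error calibrated to a target MARD and count low sensor readings. This counts readings rather than post-dosing outcomes, so it is only a toy version of the study's simulation; the Gaussian error model is an assumption (for zero-mean Gaussian noise, E|error| = sigma·sqrt(2/pi), hence sigma = MARD·sqrt(pi/2)).

```python
import math, random

def replay_with_error(bg_trace, mard, n_draws=1000, seed=0):
    """Fraction of sensor readings at or below 50 mg/dL when a reference
    BG trace is corrupted with multiplicative Gaussian error whose mean
    absolute relative difference matches the target MARD."""
    sigma = mard * math.sqrt(math.pi / 2.0)  # so that E|error| = mard
    rng = random.Random(seed)
    hypo = total = 0
    for _ in range(n_draws):
        for bg in bg_trace:
            reading = bg * (1.0 + rng.gauss(0.0, sigma))
            hypo += reading <= 50.0   # count low readings (sketch only)
            total += 1
    return hypo / total
```

Repeating this at several MARD levels reproduces, qualitatively, the monotone growth of hypoglycemia frequency with sensor error described above.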
Hardware Support for Malware Defense and End-to-End Trust
2017-02-01
… (IoT) sensors and actuators, mobile devices and servers; cloud-based, stand-alone, and traditional mainframes. The prototype developed demonstrated … virtual machines. For mobile platforms we developed and prototyped an architecture supporting separation of personalities on the same platform …
Effective environmental stewardship requires timely geospatial information about ecology and environment for informed environmental decision support. Unprecedented public access to high-resolution imagery from earth-looking sensors via online virtual earth browsers …
DOT National Transportation Integrated Search
2014-04-01
Trip origin-destination (O-D) demand matrices are critical components in transportation network modeling, and provide essential information on trip distributions and corresponding spatiotemporal traffic patterns in traffic zones in vehicular netw...
A Novel Topology Link-Controlling Approach for Active Defense of a Node in a Network.
Li, Jun; Hu, HanPing; Ke, Qiao; Xiong, Naixue
2017-03-09
With the rapid development of virtual machine technology and cloud computing, distributed denial of service (DDoS) attacks, or some peak traffic, pose a great threat to the security of the network. In this paper, a novel topology link-controlling technique for mitigating attacks in real-time environments is proposed. Firstly, a non-invasive method of deploying virtual sensors in the nodes is built, which uses the resource manager of each monitored node as a sensor. Secondly, a general topology-controlling approach of resisting the tolerant invasion is proposed. In the proposed approach, a prediction model is constructed by using copula functions for predicting the peak of a resource through another resource. The result of prediction determines whether or not to initiate the active defense. Finally, a minority game with incomplete strategy is employed to suppress attack flows and improve the permeability of the normal flows. The simulation results show that the proposed approach is very effective in protecting nodes.
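The copula-based trigger described above can be illustrated with a Gaussian copula, which is one common choice (the abstract does not name a copula family, so that is an assumption here). Given the current probability rank of one monitored resource, the sketch estimates the chance that a second, correlated resource exceeds its peak threshold; crossing 0.5 would initiate the active defense:

```python
from statistics import NormalDist

def conditional_exceedance(u_obs, u_thresh, rho):
    """P(U2 > u_thresh | U1 = u_obs) under a Gaussian copula with
    correlation rho; U1, U2 are the two resources' probability ranks."""
    nd = NormalDist()
    z1 = nd.inv_cdf(u_obs)
    z_t = nd.inv_cdf(u_thresh)
    # Conditional law: Z2 | Z1 = z1 ~ Normal(rho * z1, sqrt(1 - rho^2))
    return 1.0 - nd.cdf((z_t - rho * z1) / (1.0 - rho**2) ** 0.5)

# Under strong dependence, seeing one resource at its 99th percentile makes
# a peak in the other far more likely than its unconditional 5% chance.
p = conditional_exceedance(0.99, 0.95, rho=0.8)
defend = p > 0.5  # hypothetical rule: trigger active defense
print(defend)  # True
```

With `rho = 0` the conditional probability collapses to the unconditional 5%, as expected.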
A Novel Topology Link-Controlling Approach for Active Defense of Nodes in Networks
Li, Jun; Hu, HanPing; Ke, Qiao; Xiong, Naixue
2017-01-01
With the rapid development of virtual machine technology and cloud computing, distributed denial of service (DDoS) attacks, or some peak traffic, pose a great threat to the security of the network. In this paper, a novel topology link-controlling technique for mitigating attacks in real-time environments is proposed. Firstly, a non-invasive method of deploying virtual sensors in the nodes is built, which uses the resource manager of each monitored node as a sensor. Secondly, a general topology-controlling approach of resisting the tolerant invasion is proposed. In the proposed approach, a prediction model is constructed by using copula functions for predicting the peak of a resource through another resource. The result of prediction determines whether or not to initiate the active defense. Finally, a minority game with incomplete strategy is employed to suppress attack flows and improve the permeability of the normal flows. The simulation results show that the proposed approach is very effective in protecting nodes. PMID:28282962
Terrain Model Registration for Single Cycle Instrument Placement
NASA Technical Reports Server (NTRS)
Deans, Matthew; Kunz, Clay; Sargent, Randy; Pedersen, Liam
2003-01-01
This paper presents an efficient and robust method for registration of terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation which minimizes the norm. An initial coarse search is done using rover pose information from odometry and orientation sensing. A fine search is done using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact in a single command cycle.
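The projection-and-robust-norm idea above can be sketched compactly: render both terrain models into a virtual depth map, compare depths wherever the two models land in the same cell, and search for the shift that minimizes a robust (Huber) cost. This toy version performs only a coarse translation search on synthetic points, not the paper's full rigid-body Levenberg-Marquardt refinement:

```python
def huber(r, k=0.5):
    """Robust Huber norm: quadratic near zero, linear in the tails,
    so outlier depth deviations do not dominate the objective."""
    return 0.5 * r * r if abs(r) <= k else k * (abs(r) - 0.5 * k)

def depth_map(points, cell=1.0):
    """Render 3-D points into a virtual range sensor: keep the nearest
    depth observed in each (x, y) grid cell."""
    grid = {}
    for x, y, z in points:
        key = (round(x / cell), round(y / cell))
        grid[key] = min(grid.get(key, float("inf")), z)
    return grid

def registration_cost(model_a, model_b, shift):
    """Mean robust deviation where the two models project to the same
    cell, after shifting model_b by (dx, dy, dz)."""
    dx, dy, dz = shift
    a = depth_map(model_a)
    b = depth_map([(x + dx, y + dy, z + dz) for x, y, z in model_b])
    common = a.keys() & b.keys()
    if not common:
        return float("inf")
    return sum(huber(a[k] - b[k]) for k in common) / len(common)

# Coarse search: model_b is model_a offset by +2 in depth; the cost is
# minimized by the transform that undoes the offset.
model_a = [(x * 0.5, y * 0.5, 5.0) for x in range(8) for y in range(8)]
model_b = [(x, y, z + 2.0) for x, y, z in model_a]
best = min((registration_cost(model_a, model_b, (0, 0, -d)), d)
           for d in [0.0, 1.0, 2.0, 3.0])
print(best[1])  # 2.0
```

In the paper the coarse stage is seeded by rover odometry and orientation sensing rather than a grid sweep.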
Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks
Murakami, Masaya; Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki
2018-01-01
Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes. PMID:29642483
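The EDR link-assignment step above amounts to sampling inter-modular pairs with probability decaying exponentially in distance. A small sketch, with made-up module coordinates and decay constant:

```python
import math
import random

def edr_links(module_pos, n_links, lam=1.0, seed=0):
    """Assign inter-modular links with probability proportional to
    exp(-lam * distance), per the exponential distance rule (EDR)."""
    rng = random.Random(seed)
    mods = list(module_pos)
    pairs = [(a, b) for i, a in enumerate(mods) for b in mods[i + 1:]]
    weights = [math.exp(-lam * math.dist(module_pos[a], module_pos[b]))
               for a, b in pairs]
    return rng.choices(pairs, weights=weights, k=n_links)

# Hypothetical VWSN modules: A and B are close, C is far away.
positions = {"A": (0, 0), "B": (1, 0), "C": (10, 0)}
links = edr_links(positions, n_links=1000, lam=1.5)
# The short-range pair dominates, as the EDR model intends.
print(links.count(("A", "B")) > 900)  # True
```

The paper then tunes which endpoint *nodes* inside each module terminate these links by controlling assortativity, a step omitted here.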
Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks.
Murakami, Masaya; Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki
2018-04-08
Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes.
Walker, Martha L; Ringleb, Stacie I; Maihafer, George C; Walker, Robert; Crouch, Jessica R; Van Lunen, Bonnie; Morrison, Steven
2010-01-01
Walker ML, Ringleb SI, Maihafer GC, Walker R, Crouch JR, Van Lunen B, Morrison S. Virtual reality-enhanced partial body weight-supported treadmill training poststroke: feasibility and effectiveness in 6 subjects. To determine whether the use of a low-cost virtual reality (VR) system used in conjunction with partial body weight-supported treadmill training (BWSTT) was feasible and effective in improving the walking and balance abilities of patients poststroke. A before-after comparison of a single group with BWSTT intervention. University research laboratory. A convenience sample of 7 adults who were within 1 year poststroke and who had completed traditional rehabilitation but still exhibited gait deficits. Six participants completed the study. Twelve treatment sessions of BWSTT with VR. The VR system generated a virtual environment that was shown on a television screen in front of the treadmill to give participants the sensation of walking down a city street. A head-mounted position sensor provided postural feedback. Functional Gait Assessment (FGA) score, Berg Balance Scale (BBS) score, and overground walking speed. One subject dropped out of the study. All other participants made significant improvements in their ability to walk. FGA scores increased from a mean of 13.8 to 18. BBS scores increased from a mean of 43.8 to 48.8, although a ceiling effect was seen for this test. Overground walking speed increased from a mean of .49 m/s to .68 m/s. A low-cost VR system combined with BWSTT is feasible for improving the gait and balance of patients poststroke. Copyright (c) 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Data Convergence - An Australian Perspective
NASA Astrophysics Data System (ADS)
Allen, S. S.; Howell, B.
2012-12-01
Coupled numerical physical, biogeochemical, and sediment models are increasingly being used as integrators to help understand the cumulative or far-field effects of change in the coastal environment. This reliance on modeling has forced observations to be delivered as data streams ingestible by modeling frameworks. This has made it easier to create near-real-time or forecasting models than to try to recreate the past, and has led in turn to the conversion of historical data into data streams so that they can be ingested by the same frameworks. The model and observation frameworks under development within Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) are now feeding into the Australian Ocean Data Network's (AODN's) MARine Virtual Laboratory (MARVL). The sensor, or data stream, brokering solution is centred around the "message": all data flowing through the gateway is wrapped as a message. Messages consist of a topic and a data object, and their routing through the gateway to pre-processors and listeners is determined by the topic. The Sensor Message Gateway (SMG) method allows data from different sensors measuring the same quantity, but with different temporal resolutions, units, or spatial coverage, to be ingested or visualized seamlessly. At the same time, the use of model output as a virtual sensor is being explored, again enabled by the SMG. Rigorous adherence to standards is needed only for two-way communications with sensors; by accepting existing data in less-than-ideal formats but exposing them through the SMG, we can move a step closer to the Internet of Things by creating an Internet of Industries, where each vested interest can continue with business as usual, contribute to data convergence, and adopt more open standards when investment seems appropriate to that sector or business.
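The message-routing scheme described above (a message is a topic plus a data object, routed by topic) can be caricatured in a few lines. This is a generic publish/subscribe sketch, not the actual SMG implementation; names and topics are invented:

```python
class SensorMessageGateway:
    """Minimal topic-routed gateway: every datum is wrapped as a message
    (topic + data object) and delivered to listeners by topic prefix."""

    def __init__(self):
        self.listeners = []  # list of (topic_prefix, callback)

    def subscribe(self, topic_prefix, callback):
        self.listeners.append((topic_prefix, callback))

    def publish(self, topic, data):
        # Routing is determined entirely by the topic, not the payload.
        for prefix, cb in self.listeners:
            if topic.startswith(prefix):
                cb(topic, data)

gw = SensorMessageGateway()
received = []
gw.subscribe("sensor/temperature", lambda t, d: received.append(d))
gw.publish("sensor/temperature/site42", {"celsius": 18.3})
gw.publish("model/forecast", {"celsius": 19.0})  # no matching listener
print(len(received))  # 1
```

A model output becomes a "virtual sensor" simply by publishing on a sensor-style topic, which is the convergence trick the abstract describes.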
[Initial results with the Munich knee simulator].
Frey, M; Riener, R; Burgkart, R; Pröll, T
2002-01-01
In orthopaedics, more than 50 different clinical knee joint evaluation tests exist that have to be trained in orthopaedic education. Often it is not possible to obtain sufficient practical training in a clinical environment. The training can be improved by virtual reality technology. Within the Munich knee joint simulation project, an artificial leg with anatomical properties is attached by a force-torque sensor to an industrial robot. The recorded forces and torques are the input for a simple biomechanical model of the human knee joint. The robot is controlled in such a way that the user has the feeling of moving a real leg. The leg is embedded in a realistic environment with a couch and a patient on it.
Freeman, Daniel; Bradley, Jonathan; Antley, Angus; Bourke, Emilie; DeWeever, Natalie; Evans, Nicole; Černis, Emma; Sheaves, Bryony; Waite, Felicity; Dunn, Graham; Slater, Mel; Clark, David M
2016-07-01
Persecutory delusions may be unfounded threat beliefs maintained by safety-seeking behaviours that prevent disconfirmatory evidence being successfully processed. Use of virtual reality could facilitate new learning. To test the hypothesis that enabling patients to test the threat predictions of persecutory delusions in virtual reality social environments with the dropping of safety-seeking behaviours (virtual reality cognitive therapy) would lead to greater delusion reduction than exposure alone (virtual reality exposure). Conviction in delusions and distress in a real-world situation were assessed in 30 patients with persecutory delusions. Patients were then randomised to virtual reality cognitive therapy or virtual reality exposure, both with 30 min in graded virtual reality social environments. Delusion conviction and real-world distress were then reassessed. In comparison with exposure, virtual reality cognitive therapy led to large reductions in delusional conviction (reduction 22.0%, P = 0.024, Cohen's d = 1.3) and real-world distress (reduction 19.6%, P = 0.020, Cohen's d = 0.8). Cognitive therapy using virtual reality could prove highly effective in treating delusions. © The Royal College of Psychiatrists 2016.
Freeman, Daniel; Bradley, Jonathan; Antley, Angus; Bourke, Emilie; DeWeever, Natalie; Evans, Nicole; Černis, Emma; Sheaves, Bryony; Waite, Felicity; Dunn, Graham; Slater, Mel; Clark, David M.
2016-01-01
Background Persecutory delusions may be unfounded threat beliefs maintained by safety-seeking behaviours that prevent disconfirmatory evidence being successfully processed. Use of virtual reality could facilitate new learning. Aims To test the hypothesis that enabling patients to test the threat predictions of persecutory delusions in virtual reality social environments with the dropping of safety-seeking behaviours (virtual reality cognitive therapy) would lead to greater delusion reduction than exposure alone (virtual reality exposure). Method Conviction in delusions and distress in a real-world situation were assessed in 30 patients with persecutory delusions. Patients were then randomised to virtual reality cognitive therapy or virtual reality exposure, both with 30 min in graded virtual reality social environments. Delusion conviction and real-world distress were then reassessed. Results In comparison with exposure, virtual reality cognitive therapy led to large reductions in delusional conviction (reduction 22.0%, P = 0.024, Cohen's d = 1.3) and real-world distress (reduction 19.6%, P = 0.020, Cohen's d = 0.8). Conclusion Cognitive therapy using virtual reality could prove highly effective in treating delusions. PMID:27151071
Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions
NASA Astrophysics Data System (ADS)
Rasshofer, R. H.; Gresser, K.
2005-05-01
Automotive radar and lidar sensors represent key components for next-generation driver assistance functions (Jones, 2001). Today, their use is limited to comfort applications in premium-segment vehicles, although an evolution towards more safety-oriented functions is taking place. Radar sensors available on the market today suffer from low angular resolution and poor target detection at medium ranges (30 to 60 m) over azimuth angles larger than ±30°. In contrast, lidar sensors are highly sensitive to environmental influences (e.g., snow, fog, dirt). Both sensor technologies today have a rather high cost level, forbidding their widespread usage in mass markets. A common approach to overcoming individual sensor drawbacks is the employment of data fusion techniques (Bar-Shalom, 2001). Raw data fusion requires a common, standardized data interface to easily integrate a variety of asynchronous sensor data into a fusion network. Moreover, next-generation sensors should be able to adapt dynamically to new situations and should have the ability to work in cooperative sensor environments. As vehicular function development today is being shifted more and more towards virtual prototyping, mathematical sensor models should be available. These models should take into account the sensor's functional principle as well as all typical measurement errors generated by the sensor.
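A mathematical sensor model of the kind the authors call for can be sketched as a toy radar: ideal range/azimuth computation, a field-of-view and range gate, and Gaussian measurement noise. All parameter values are illustrative, loosely echoing the ±30° and 60 m figures quoted above:

```python
import math
import random

def radar_model(targets, max_range=60.0, fov_deg=30.0,
                sigma_r=0.25, sigma_az=1.0, seed=0):
    """Toy radar model for virtual prototyping. Targets are (x, y) in the
    sensor frame (x forward); returns noisy (range_m, azimuth_deg)
    detections for targets inside the range gate and field of view."""
    rng = random.Random(seed)
    detections = []
    for x, y in targets:
        r = math.hypot(x, y)
        az = math.degrees(math.atan2(y, x))
        if r <= max_range and abs(az) <= fov_deg:
            # Typical measurement errors: additive Gaussian noise on
            # range and azimuth, per the sensor's functional principle.
            detections.append((r + rng.gauss(0.0, sigma_r),
                               az + rng.gauss(0.0, sigma_az)))
    return detections

# One target in view; one beyond max range; one outside the azimuth FOV.
dets = radar_model([(40.0, 5.0), (100.0, 0.0), (10.0, 30.0)])
print(len(dets))  # 1
```

A lidar variant would swap the error model (e.g., dropouts in fog) behind the same interface, which is exactly the standardized-interface argument the abstract makes.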
Yi, Meng; Chen, Qingkui; Xiong, Neal N.
2016-01-01
This paper considers the distributed access and control problem of massive wireless sensor networks’ data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and makes full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination scheme between the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms the existing schemes in terms of enhancing the accessibility of service requests effectively, reducing network delay, and has higher load balancing capacity and higher resource utility rate. PMID:27827878
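The grouping step above uses an optimized self-organizing feature map; a winner-take-all simplification of that idea (no neighborhood function, made-up request coordinates, deterministic initialization) looks like this:

```python
import random

def som_group(points, n_groups=2, epochs=300, seed=0):
    """Winner-take-all simplification of a self-organizing feature map:
    map 2-D service-request locations onto n_groups virtual groups."""
    rng = random.Random(seed)
    units = [list(p) for p in points[:n_groups]]  # deterministic init
    for e in range(epochs):
        lr = 0.5 * (1.0 - e / epochs)  # decaying learning rate
        px, py = rng.choice(points)
        # Best-matching unit moves toward the sampled request.
        i = min(range(n_groups),
                key=lambda j: (units[j][0] - px) ** 2 + (units[j][1] - py) ** 2)
        units[i][0] += lr * (px - units[i][0])
        units[i][1] += lr * (py - units[i][1])
    return [min(range(n_groups),
                key=lambda j: (units[j][0] - x) ** 2 + (units[j][1] - y) ** 2)
            for x, y in points]

# Two well-separated clusters of requests form two virtual groups.
pts = [(0.0, 0.0), (9.0, 9.0), (0.1, 0.2), (9.2, 8.8)]
print(som_group(pts))  # [0, 1, 0, 1]
```

The paper's full pipeline then schedules migration between such groups with a bee-colony/chaos-search hybrid, which this sketch does not attempt.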
An Integrated Simulation Module for Cyber-Physical Automation Systems †
Ferracuti, Francesco; Freddi, Alessandro; Monteriù, Andrea; Prist, Mariorosario
2016-01-01
The integration of Wireless Sensors Networks (WSNs) into Cyber Physical Systems (CPSs) is an important research problem to solve in order to increase the performances, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called “GILOO” (Graphical Integration of Labview and cOOja). It allows one to develop and to debug control strategies over the WSN both using virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new “Advanced Sky GUI” have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performances of the proposed GILOO software module, several experimental tests have been made, and interesting preliminary results are reported. The GILOO module has been applied to a smart home mock-up where a networked control has been developed for the LED lighting system. PMID:27164109
An Integrated Simulation Module for Cyber-Physical Automation Systems.
Ferracuti, Francesco; Freddi, Alessandro; Monteriù, Andrea; Prist, Mariorosario
2016-05-05
The integration of Wireless Sensors Networks (WSNs) into Cyber Physical Systems (CPSs) is an important research problem to solve in order to increase the performances, safety, reliability and usability of wireless automation systems. Due to the complexity of real CPSs, emulators and simulators are often used to replace the real control devices and physical connections during the development stage. The most widespread simulators are free, open source, expandable, flexible and fully integrated into mathematical modeling tools; however, the connection at a physical level and the direct interaction with the real process via the WSN are only marginally tackled; moreover, the simulated wireless sensor motes are not able to generate the analogue output typically required for control purposes. A new simulation module for the control of a wireless cyber-physical system is proposed in this paper. The module integrates the COntiki OS JAva Simulator (COOJA), a cross-level wireless sensor network simulator, and the LabVIEW system design software from National Instruments. The proposed software module has been called "GILOO" (Graphical Integration of Labview and cOOja). It allows one to develop and to debug control strategies over the WSN both using virtual or real hardware modules, such as the National Instruments Real-Time Module platform, the CompactRio, the Supervisory Control And Data Acquisition (SCADA), etc. To test the proposed solution, we decided to integrate it with one of the most popular simulators, i.e., the Contiki OS, and wireless motes, i.e., the Sky mote. As a further contribution, the Contiki Sky DAC driver and a new "Advanced Sky GUI" have been proposed and tested in the COOJA Simulator in order to provide the possibility to develop control over the WSN. To test the performances of the proposed GILOO software module, several experimental tests have been made, and interesting preliminary results are reported. The GILOO module has been applied to a smart home mock-up where a networked control has been developed for the LED lighting system.
Virtual Induction Loops Based on Cooperative Vehicular Communications
Gramaglia, Marco; Bernardos, Carlos J.; Calderon, Maria
2013-01-01
Induction loop detectors have become the most utilized sensors in traffic management systems. The gathered traffic data is used to improve traffic efficiency (i.e., warning users about congested areas or planning new infrastructures). Despite their usefulness, their deployment and maintenance costs are expensive. Vehicular networks are an emerging technology that can support novel strategies for ubiquitous and more cost-effective traffic data gathering. In this article, we propose and evaluate VIL (Virtual Induction Loop), a simple and lightweight traffic monitoring system based on cooperative vehicular communications. The proposed solution has been experimentally evaluated through simulation using real vehicular traces. PMID:23348033
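A virtual induction loop as described above can be emulated by watching cooperative vehicle beacons cross a fixed road position, with no physical sensor in the pavement. A 1-D sketch with hypothetical beacon tuples (vehicle id, time, position, speed):

```python
def virtual_loop(beacons, loop_pos, window):
    """Virtual induction loop (sketch): count vehicles whose cooperative
    beacons show them crossing loop_pos within the time window, and
    average their reported speeds. Positions are metres along a 1-D road."""
    t0, t1 = window
    crossed, speeds = set(), []
    last_pos = {}
    for vid, t, pos, spd in sorted(beacons, key=lambda b: b[1]):
        if t0 <= t <= t1 and vid in last_pos:
            # A crossing: previous beacon before the loop, this one past it.
            if last_pos[vid] < loop_pos <= pos and vid not in crossed:
                crossed.add(vid)
                speeds.append(spd)
        last_pos[vid] = pos
    flow = len(crossed)
    mean_speed = sum(speeds) / len(speeds) if speeds else None
    return flow, mean_speed

beacons = [
    ("car1", 0.0, 90.0, 20.0), ("car1", 1.0, 110.0, 20.0),  # crosses 100 m
    ("car2", 0.0, 50.0, 15.0), ("car2", 1.0, 65.0, 15.0),   # does not
]
print(virtual_loop(beacons, loop_pos=100.0, window=(0.0, 2.0)))  # (1, 20.0)
```

The same flow/occupancy quantities a buried loop reports thus fall out of message data alone, which is the cost argument the abstract makes.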
Developing Flexible Networked Lighting Control Systems
… Bluetooth, ZigBee, and others are increasingly used for building control purposes. Low-cost computation: bundling digital intelligence at the sensors and lights adds virtually no incremental cost. … Research Goals and Objectives: This project, "Developing Flexible, Networked Lighting Control …
Distributed Pervasive Worlds: The Case of Exergames
ERIC Educational Resources Information Center
Laine, Teemu H.; Sedano, Carolina Islas
2015-01-01
Pervasive worlds are computing environments where a virtual world converges with the physical world through context-aware technologies such as sensors. In pervasive worlds, technology is distributed among entities that may be distributed geographically. We explore the concept, possibilities, and challenges of distributed pervasive worlds in a case…
Interacting With A Near Real-Time Urban Digital Watershed Using Emerging Geospatial Web Technologies
NASA Astrophysics Data System (ADS)
Liu, Y.; Fazio, D. J.; Abdelzaher, T.; Minsker, B.
2007-12-01
The value of real-time hydrologic data dissemination, including river stage, streamflow, and precipitation, for operational stormwater management efforts is particularly high for communities where flash flooding is common and costly. Ideally, such data would be presented within a watershed-scale geospatial context to portray a holistic view of the watershed. Local hydrologic sensor networks usually lack comprehensive integration with sensor networks managed by other agencies sharing the same watershed due to administrative, political, but mostly technical barriers. Recent efforts on providing unified access to hydrological data have concentrated on creating new SOAP-based web services and common data formats (e.g., WaterML and the Observation Data Model) for users to access the data (e.g., HIS and HydroSeek). Geospatial Web technology, including OGC Sensor Web Enablement (SWE), GeoRSS, geotags, geospatial browsers such as Google Earth and Microsoft Virtual Earth, and other location-based service tools, provides possibilities for us to interact with a digital watershed in near real time. OGC SWE proposes a revolutionary concept towards web-connected and web-controllable sensor networks. However, these efforts have not provided the capability for dynamic data integration/fusion among heterogeneous sources, data filtering, or support for workflows or domain-specific applications where both push and pull modes of retrieving data may be needed. We propose a lightweight integration framework that extends SWE with an open-source Enterprise Service Bus (e.g., Mule) as a backbone component to dynamically transform, transport, and integrate both heterogeneous sensor data sources and simulation model outputs. We will report our progress on building this framework, in which multiple agencies' sensor data and hydro-model outputs (with map layers) will be integrated and disseminated in a geospatial browser (e.g., Microsoft Virtual Earth).
This is a collaborative project among NCSA, USGS Illinois Water Science Center, Computer Science Department at UIUC funded by the Adaptive Environmental Infrastructure Sensing and Information Systems initiative at UIUC.
Design of virtual display and testing system for moving mass electromechanical actuator
NASA Astrophysics Data System (ADS)
Gao, Zhigang; Geng, Keda; Zhou, Jun; Li, Peng
2015-12-01
Aiming at the problems of control, measurement, and virtual display of the movement of a moving mass electromechanical actuator (MMEA), a virtual testing system for the MMEA was developed based on the PC-DAQ architecture and the LabVIEW software platform; it can accomplish comprehensive test tasks such as drive control of the MMEA, measurement of kinematic parameters, measurement of centroid position, and virtual display of movement. The system could align acquisition times between multiple measurement channels in different DAQ cards. On this basis, the research focused on dynamic 3D virtual display in LabVIEW, and the virtual display of the MMEA was realized both by calling a DLL and by using 3D graph-drawing controls. Considering collaboration with the virtual testing system, including the hardware drivers and the data-acquisition measurement software, the 3D graph-drawing controls method was selected, which provided synchronized measurement, control, and display. The system can measure the dynamic centroid position and kinematic position of the movable mass block while controlling the MMEA, and the 3D virtual display interface has a realistic appearance and smooth motion, solving the problem of display and playback of the MMEA inside its closed shell.
Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Schairer, Edward T.
2011-01-01
Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.
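At the core of such virtual-imaging checks is an ordinary pinhole projection: a candidate camera pose and lens either do or do not put a target on the sensor. A minimal sketch (not NASA's planning tool; the focal length and sensor size are illustrative):

```python
def projects_onto_sensor(point, focal_mm, sensor_mm):
    """Pinhole-camera check: does a 3-D point (camera frame, camera
    looking down +z, coordinates in metres) land within the sensor area?
    focal_mm is the focal length; sensor_mm = (width, height) in mm."""
    x, y, z = point
    if z <= 0:
        return False  # behind the camera
    u = focal_mm * x / z  # image-plane coordinates in mm
    v = focal_mm * y / z
    w, h = sensor_mm
    return abs(u) <= w / 2 and abs(v) <= h / 2

# A target 5 m out and 1 m off-axis, with a 25 mm lens on a 36 x 24 mm
# sensor: image coordinate u = 25 * 1/5 = 5 mm, so it is visible.
print(projects_onto_sensor((1.0, 0.0, 5.0), 25.0, (36.0, 24.0)))  # True
```

Sweeping such a check over candidate camera positions, before any hardware is mounted, is the essence of the planning workflow described above.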
Development of a novel virtual reality gait intervention.
Boone, Anna E; Foreman, Matthew H; Engsberg, Jack R
2017-02-01
Improving gait speed and kinematics can be a time-consuming and tiresome process. We hypothesize that incorporating virtual reality videogame play into variable improvement goals will improve levels of enjoyment and motivation and lead to improved gait performance. To develop a feasible, engaging VR gait intervention for improving gait variables. Completing this investigation involved four steps: 1) identify gait variables that could be manipulated to improve gait speed and kinematics using the Microsoft Kinect and free software, 2) identify free internet videogames that could successfully manipulate the chosen gait variables, 3) experimentally evaluate the ability of the videogames and software to manipulate the gait variables, and 4) evaluate enjoyment and motivation in a small sample of persons without disability. The Kinect sensor was able to detect stride length, cadence, and joint angles. The FAAST software was able to identify predetermined gait variable thresholds and use the thresholds to play free online videogames. Videogames that involved continuous pressing of a keyboard key were found to be most appropriate for manipulating the gait variables. Five participants without disability evaluated the effectiveness of modifying the gait variables and their enjoyment and motivation during play. Participants were able to modify gait variables to permit successful videogame play. Motivation and enjoyment were high. A clinically feasible and engaging virtual reality intervention for improving gait speed and kinematics has been developed and initially tested. It may provide an engaging avenue for achieving the thousands of repetitions necessary for neural plastic changes and improved gait. Copyright © 2016 Elsevier B.V. All rights reserved.
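The threshold mechanism above (FAAST-style emulation of a held key while a tracked gait variable meets its goal) can be sketched as a simple state machine; the key events here are simulated strings rather than real OS input, and the 0.6 m goal is an invented example:

```python
def gait_key_events(stride_lengths, threshold):
    """Hold a (simulated) keyboard key while the tracked stride length
    meets its goal threshold; release it when the stride falls short."""
    events, key_down = [], False
    for stride in stride_lengths:
        if stride >= threshold and not key_down:
            events.append("key_down")
            key_down = True
        elif stride < threshold and key_down:
            events.append("key_up")
            key_down = False
    return events

# Strides improving past a 0.6 m goal press the key; falling short releases it.
print(gait_key_events([0.4, 0.55, 0.65, 0.7, 0.5], threshold=0.6))
# ['key_down', 'key_up']
```

Any browser game driven by a continuously held key then rewards sustained goal-level strides, which is the engagement loop the study exploits.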
A Direct Comparison of Real-World and Virtual Navigation Performance in Chronic Stroke Patients.
Claessen, Michiel H G; Visser-Meily, Johanna M A; de Rooij, Nicolien K; Postma, Albert; van der Ham, Ineke J M
2016-04-01
An increasing number of studies have presented evidence that various patient groups with acquired brain injury suffer from navigation problems in daily life. This skill is, however, scarcely addressed in current clinical neuropsychological practice, and suitable diagnostic instruments are lacking. Real-world navigation tests are limited by geographical location and associated with practical constraints. It was, therefore, investigated whether virtual navigation might serve as a useful alternative. To investigate the convergent validity of virtual navigation testing, performance on the Virtual Tubingen test was compared to that on an analogous real-world navigation test in 68 chronic stroke patients. The same eight subtasks, addressing route and survey knowledge aspects, were assessed in both tests. In addition, navigation performance of stroke patients was compared to that of 44 healthy controls. A correlation analysis showed moderate overlap (r = .535) between composite scores of overall real-world and virtual navigation performance in stroke patients. Route knowledge composite scores correlated somewhat more strongly (r = .523) than survey knowledge composite scores (r = .442). When comparing group performance, patients obtained lower scores than controls on seven subtasks. Whereas the real-world test was found to be easier than its virtual counterpart, no significant interaction effects were found between group and environment. Given the moderate overlap of the total scores between the two navigation tests, we conclude that virtual testing of navigation ability is a valid alternative to navigation tests that rely on real-world route exposure.
Riva, Giuseppe; Raspelli, Simona; Algeri, Davide; Pallavicini, Federica; Gorini, Alessandra; Wiederhold, Brenda K; Gaggioli, Andrea
2010-02-01
The use of new technologies, particularly virtual reality, is not new in the treatment of posttraumatic stress disorders (PTSD): VR is used to facilitate the activation of the traumatic event during exposure therapy. However, during the therapy, VR is a new and distinct realm, separate from the emotions and behaviors experienced by the patient in the real world: the behavior of the patient in VR has no direct effects on the real-life experience; the emotions and problems experienced by the patient in the real world are not directly addressed in the VR exposure. In this article, we suggest that the use of a new technological paradigm, Interreality, may improve the clinical outcome of PTSD. The main feature of Interreality is a twofold link between the virtual and real worlds: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through 3D shared virtual worlds; biosensors and activity sensors (from the real to the virtual world); and personal digital assistants and/or mobile phones (from the virtual world to the real one). We describe different technologies that are involved in the Interreality vision and its clinical rationale. To illustrate the concept of Interreality in practice, a clinical scenario is also presented and discussed: Rosa, a 55-year-old nurse, involved in a major car accident.
NASA Astrophysics Data System (ADS)
S, Sreekanth T.
Rain Drop Charge Sensor. Sreekanth T S*, Suby Symon*, G. Mohan Kumar (1), S. Murali Das (2). *Atmospheric Sciences Division, Centre for Earth Science Studies, Thiruvananthapuram 695011; (1) D-330, Swathi Nagar, West Fort, Thiruvananthapuram 695023; (2) Kavyam, Manacaud, Thiruvananthapuram 695009. ABSTRACT: To study the inter-relations between precipitation electricity and precipitation microphysical parameters, a rain drop charge sensor was designed and developed at the CESS Electronics & Instrumentation Laboratory. The sensor permits simultaneous measurement of the electric charge and fall speed of rain drops. A cylindrical metal tube (sensor tube) of 30 cm length is placed inside a thicker metal cover, open at top and bottom, for electromagnetic shielding. The mouth of the sensor tube is exposed, and the bottom of the shielding cover is closed with metal net. The instrument is designed so that rain drops pass unhindered through the sensor tube. When an electrically charged rain drop passes through the sensor tube, the tube acquires a charge of the same magnitude but opposite polarity. The sensor tube is electrically connected to the inverting input of a current-to-voltage converter built around an AD549 operational amplifier. Since the sensor is connected to the virtual ground of the op-amp, the charge flows to ground, and the resulting current is converted to an amplified voltage. This output voltage is recorded using a high-frequency (1 kHz) voltage recorder. From the recorded pulse, the charge magnitude, polarity, and fall speed of the rain drop are calculated; the drop diameter can also be derived from the fall speed. The prototype is now under test at the CESS campus. Because the magnitude of charge on rain drops indicates the charge accumulated in clouds during lightning, this instrument has potential applications in risk and disaster management.
By estimating gross cloud charge from the charge of the initial drops of a precipitation event, necessary precautions can be taken during convective cloud events. Kerala, a state in India with a high incidence of tropical lightning, calls for particular attention to lightning hazard mitigation. Installing this charge sensor together with an atmospheric electric field mill could support a better warning system.
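The pulse arithmetic described above can be sketched as follows: with a transimpedance stage, the recorded voltage is v = -Rf * i, so the drop charge is the negated time integral of the voltage divided by the feedback resistance, and the fall speed is the tube length divided by the pulse duration. The feedback resistance and sample values are illustrative assumptions, not the instrument's calibration.

```python
# Hedged sketch of recovering drop charge and fall speed from a recorded
# pulse. Assumes an ideal current-to-voltage converter with feedback
# resistor rf_ohms (value assumed, not from the paper).

def drop_charge_coulombs(voltage_samples, dt_s, rf_ohms=1e9):
    """Trapezoid-rule integral of the pulse: Q = -(1/Rf) * integral(v dt)."""
    area = sum((voltage_samples[i] + voltage_samples[i + 1]) / 2.0 * dt_s
               for i in range(len(voltage_samples) - 1))
    return -area / rf_ohms

def fall_speed_m_s(pulse_duration_s, tube_length_m=0.30):
    """Fall speed from transit time through the 30 cm sensor tube."""
    return tube_length_m / pulse_duration_s
```

The drop diameter would then follow from the fall speed via an empirical terminal-velocity relation (e.g. a Gunn-Kinzer-type curve), which the paper does not specify.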
Development of a GNSS Buoy for Monitoring Water Surface Elevations in Estuaries and Coastal Areas.
Lin, Yen-Pin; Huang, Ching-Jer; Chen, Sheng-Hsueh; Doong, Dong-Jiing; Kao, Chia Chuen
2017-01-18
In this work, a Global Navigation Satellite System (GNSS) buoy that utilizes a Virtual Base Station (VBS) combined with Real-Time Kinematic (RTK) positioning technology was developed to monitor water surface elevations in estuaries and coastal areas. The GNSS buoy includes a buoy hull, an RTK GNSS receiver, data-transmission devices, a data logger, and General Packet Radio Service (GPRS) modems for transmitting data to the desired land locations. Laboratory and field tests were conducted to test the capability of the buoy and verify the accuracy of the monitored water surface elevations. For the field tests, the GNSS buoy was deployed in the waters of Suao (northeastern Taiwan). Tide data obtained from the GNSS buoy were consistent with those obtained from the neighboring tide station. Significant wave heights, zero-crossing periods, and peak wave directions obtained from the GNSS buoy were generally consistent with those obtained from an accelerometer-tilt-compass (ATC) sensor. The field tests demonstrate that the developed GNSS buoy can be used to obtain accurate real-time tide and wave data in estuaries and coastal areas.
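The wave statistics reported above are standard derivations from a surface-elevation time series. A minimal sketch of two of them: the spectral estimate of significant wave height (four times the standard deviation of elevation) and the mean zero-crossing period (record duration over the number of upward zero crossings). This is textbook wave analysis, not the buoy's proprietary processing chain.

```python
import math

def significant_wave_height(eta):
    """Hm0 = 4 * standard deviation of the surface elevation record [m]."""
    n = len(eta)
    mean = sum(eta) / n
    var = sum((e - mean) ** 2 for e in eta) / n
    return 4.0 * math.sqrt(var)

def mean_zero_crossing_period(eta, dt_s):
    """Record duration divided by the number of upward zero crossings [s]."""
    ups = sum(1 for a, b in zip(eta, eta[1:]) if a < 0.0 <= b)
    return (len(eta) - 1) * dt_s / ups if ups else float("nan")
```

Given RTK-derived elevations sampled at a fixed rate, these two summaries are directly comparable with tide-gauge and ATC-sensor outputs, as the field tests did.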
Jan, Shau-Shiun; Hsu, Li-Ta; Tsai, Wen-Ming
2010-01-01
To provide seamless navigation and positioning services for indoor environments, an indoor location-based service (LBS) test bed was developed that integrates an indoor positioning system with an indoor three-dimensional (3D) geographic information system (GIS). A wireless sensor network (WSN) is used in the developed indoor positioning system. Considering power consumption, the ZigBee radio is used as the wireless protocol, and received signal strength (RSS) fingerprinting is applied as the primary indoor positioning algorithm. The user-location matching methods include the nearest neighbor (NN) algorithm, the K-weighted nearest neighbors (KWNN) algorithm, and a probabilistic approach. To enhance positioning accuracy for a dynamic user, a particle filter is used to improve positioning performance. As part of this research, a 3D indoor GIS was developed for use with the indoor positioning system, employing computer-aided design (CAD) software and the Virtual Reality Modeling Language (VRML) to implement a prototype indoor LBS test bed. Thus, a rapid and practical procedure for constructing a 3D indoor GIS is proposed, and this GIS is easy for users to update and maintain. The building of the Department of Aeronautics and Astronautics at National Cheng Kung University in Taiwan is used as an example to assess the performance of the various algorithms of the indoor positioning system.
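The KWNN matching step named above can be sketched as follows: the live RSS vector is compared against a surveyed radio map, and the position estimate is the inverse-distance-weighted average of the K closest fingerprint locations. The fingerprint values and K are made-up assumptions for illustration.

```python
import math

def kwnn_position(rss, fingerprints, k=3):
    """K-weighted nearest neighbors over an RSS radio map.

    fingerprints: list of ((x, y), [rss per anchor node]) survey entries.
    Returns the weighted-average (x, y) of the k best-matching entries.
    """
    scored = sorted(
        ((math.dist(rss, f_rss), pos) for pos, f_rss in fingerprints),
        key=lambda t: t[0],
    )[:k]
    weights = [1.0 / (d + 1e-9) for d, _ in scored]  # closer -> heavier
    total = sum(weights)
    x = sum(w * p[0] for w, (_, p) in zip(weights, scored)) / total
    y = sum(w * p[1] for w, (_, p) in zip(weights, scored)) / total
    return x, y
```

The plain NN algorithm is the k=1 special case; the paper's particle filter would then smooth this per-epoch estimate over time for a moving user.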
ERIC Educational Resources Information Center
Burris, Justin T.
2013-01-01
Technology permeates every aspect of daily life, from the sensors that control the traffic signals to the cameras that allow real-time video chats with family around the world. At times, technology may make life easier, faster, and more productive. However, does technology do the same in schools and classrooms? Will the benefits of technology…
MATREX: A Unifying Modeling and Simulation Architecture for Live-Virtual-Constructive Applications
2007-05-23
Acronym-list excerpt (only fragments survive extraction): CMS2 – Comprehensive Munitions & Sensor Server; CSAT – C4ISR Static Analysis Tool; C4ISR – Command & Control, Communications, Computers...
An Interactive Logistics Centre Information Integration System Using Virtual Reality
NASA Astrophysics Data System (ADS)
Hong, S.; Mao, B.
2018-04-01
The logistics industry plays a very important role in the operation of modern cities. Meanwhile, its development has raised various problems that urgently need to be solved, such as the safety of logistics products. This paper combines the study of logistics traceability and logistics centre environmental safety supervision with virtual reality technology to create an interactive logistics centre information integration system. The proposed system utilizes the immersive character of virtual reality to simulate a real logistics centre scene distinctly, so that operating staff can conduct safety supervision training at any time, without regional restrictions. On the one hand, a large volume of sensor data can be used to simulate a variety of disaster emergency situations. On the other hand, personnel operation data are collected and analysed for improper operations, which can greatly improve training efficiency.
Virtual environment application with partial gravity simulation
NASA Technical Reports Server (NTRS)
Ray, David M.; Vanchau, Michael N.
1994-01-01
To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial and microgravity. A partial gravity simulator (Pogo), which uses pneumatic suspension, is being studied for use in virtual reality training. Pogo maintains a constant partial gravity simulation with a variation of simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo, including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs which drive Pogo's sensors and the data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.
A virtual pointer to support the adoption of professional vision in laparoscopic training.
Feng, Yuanyuan; McGowan, Hannah; Semsar, Azin; Zahiri, Hamid R; George, Ivan M; Turner, Timothy; Park, Adrian; Kleinsmith, Andrea; Mentis, Helena M
2018-05-23
To assess a virtual pointer in supporting surgical trainees' development of professional vision in laparoscopic surgery. We developed a virtual pointing and telestration system utilizing the Microsoft Kinect movement sensor as an overlay for any imaging system. Training with the application was compared to a standard condition, i.e., verbal instruction with unmediated gestures, in a laparoscopic training environment. Seven trainees performed four simulated laparoscopic tasks guided by an experienced surgeon as the trainer. Trainee performance was subjectively assessed by the trainee and trainer, and objectively measured by number of errors, time to task completion, and economy of movement. No significant differences in errors and time to task completion were obtained between the virtual pointer and standard conditions. Economy of movement in the non-dominant hand was significantly improved when using the virtual pointer ([Formula: see text]). The trainers perceived a significant improvement in trainee performance in the virtual pointer condition ([Formula: see text]), while the trainees perceived no difference. The trainers' perception of economy of movement was similar between the two conditions in the initial three runs and became significantly improved in the virtual pointer condition in the fourth run ([Formula: see text]). Results show that the virtual pointer system improves the trainer's perception of trainee performance, and this is reflected in the objective performance measures in the third and fourth training runs. The benefit of a virtual pointing and telestration system may be perceived by the trainers early on in training, but it is not evident in objective trainee performance until further mastery has been attained. In addition, the improvement in economy of motion specifically shows that the virtual pointer improves the adoption of professional vision: an improved ability to see and use laparoscopic video results in more direct instrument movement.
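One common way "economy of movement" is computed in laparoscopic skills assessment can be sketched as follows: the straight-line distance between the start and end of an instrument-tip trajectory divided by the actual path length, so a value closer to 1.0 means more direct movement. The paper does not state its exact formula; this definition is an illustrative assumption.

```python
import math

def economy_of_movement(points):
    """Directness ratio for a list of (x, y, z) instrument-tip positions.

    Returns direct-line distance / traveled path length, in (0, 1].
    """
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return direct / path if path > 0 else 1.0
```

A perfectly straight reach scores 1.0; detours lower the score, which is the direction of improvement the study reports for the non-dominant hand.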
Vehicle Lateral State Estimation Based on Measured Tyre Forces
Tuononen, Ari J.
2009-01-01
Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535
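The vehicle-model core of such an estimator can be sketched as one Euler step of a planar single-track ("bicycle") model driven by the measured front and rear lateral tyre forces. The Kalman-filter correction and the adaptive measurement covariance from the article are omitted here, and all vehicle parameters are illustrative assumptions.

```python
# Hedged sketch: propagate lateral velocity vy [m/s] and yaw rate r [rad/s]
# given measured lateral tyre forces fyf, fyr [N], forward speed vx [m/s],
# and assumed mass m, yaw inertia iz, and axle distances a, b.

def propagate_lateral_state(vy, r, fyf, fyr, vx, dt,
                            m=1500.0, iz=2500.0, a=1.2, b=1.4):
    """One Euler step of the planar single-track lateral dynamics."""
    vy_dot = (fyf + fyr) / m - vx * r   # lateral force balance
    r_dot = (a * fyf - b * fyr) / iz    # yaw moment balance
    return vy + vy_dot * dt, r + r_dot * dt
```

In the article's full estimator this prediction would be blended with sensor measurements in a Kalman update, with the measurement covariance adapted to the driving condition.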
Martín, Angel; Padín, Jorge; Anquela, Ana Belén; Sánchez, Juán; Belda, Santiago
2009-01-01
Magnetic data consists of a sequence of collected points with spatial coordinates and magnetic information. The spatial location of these points needs to be as exact as possible in order to develop a precise interpretation of magnetic anomalies. GPS is a valuable tool for accomplishing this objective, especially if the RTK approach is used. In this paper the VRS (Virtual Reference Station) technique is introduced as a new approach for real-time positioning of magnetic sensors. The main advantages of the VRS approach are, firstly, that only a single GPS receiver is needed (no base station is necessary), reducing field work and equipment costs. Secondly, VRS can operate at distances separated 50–70 km from the reference stations without degrading accuracy. A compact integration of a GSM-19 magnetometer sensor with a geodetic GPS antenna is presented; this integration does not diminish the operational flexibility of the original magnetometer and can work with the VRS approach. The coupled devices were tested in marshlands around Gandia, a city located approximately 100 km South of Valencia (Spain), thought to be the site of a Roman cemetery. The results obtained show adequate geometry and high-precision positioning for the structures to be studied (a comparison with the original low precision GPS of the magnetometer is presented). Finally, the results of the magnetic survey are of great interest for archaeological purposes. PMID:22574055
Virtualization of System of Systems Test and Evaluation
2012-06-04
computers and is the primary enabler for virtualization. 2. Virtualization System Elements. Parmalee, Peterson, Tillman, & Hatfield (1972) outlined the... The work of Abu-Taieh and El Sheikh, based on the work of Balci (1994, 1995) and Balci et al. (1996), seeks to organize types of tests and to... and testing. In A. Dasso & A. Funes (Eds.), Verification, validation, and testing in software engineering (pp. 155–184). Hershey, PA: Idea Group
Use of Occupancy Sensors in LED Parking Lot and Garage Applications: Early Experiences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinzey, Bruce R.; Myer, Michael; Royer, Michael P.
2012-11-07
Occupancy sensor systems are gaining traction as an effective technological approach to reducing energy use in exterior commercial lighting applications. Done correctly, occupancy sensors can substantially enhance the savings from an already efficient lighting system. However, this technology is confronted by several potential challenges and pitfalls that can leave a significant amount of the prospective savings on the table. This report describes anecdotal experiences from field installations of occupancy-sensor-controlled light-emitting diode (LED) lighting at two parking structures and two parking lots. The relative levels of success at these installations reflect a marked range of potential outcomes: from an additional 76% in energy savings to virtually no additional savings. Several issues that influenced savings were encountered in these early stage installations and are detailed in the report. Ultimately, care must be taken in the design, selection, and commissioning of a sensor-controlled lighting installation, or the only guaranteed result may be its cost.
Occupant detection using support vector machines with a polynomial kernel function
NASA Astrophysics Data System (ADS)
Destefanis, Eduardo A.; Kienzle, Eberhard; Canali, Luis R.
2000-10-01
The use of air bags in the presence of bad passenger and baby-seat positions in car seats can injure or kill these individuals when the device inflates in an accident. A proposed solution is the use of range sensors to detect risky passenger and baby-seat positions. Such sensors allow the airbag inflation to be controlled. This work is concerned with the application of different classification schemes to a real-world problem and the optimization of a sensor as a function of classification performance. The sensor is constructed using a new technology called the Photo-Mixer-Device (PMD). A systematic analysis of the occupant detection problem was made using real and virtual environments. The challenge is to find the best sensor geometry and to adapt a classification scheme under the current technological constraints. Passenger head position detection is also a desirable feature. A couple of classifiers have been combined in a simple configuration to reach this goal. Experiences and results are described.
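The classifier family named in the title relies on the polynomial kernel, K(x, z) = (gamma * <x, z> + c0) ** d. A minimal sketch of that kernel and the Gram matrix a kernel SVM would train on is given below; the degree and coefficients are assumed example values, since the paper's settings are not stated in the abstract.

```python
def poly_kernel(x, z, degree=3, gamma=1.0, coef0=1.0):
    """Polynomial kernel on two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(x, z))
    return (gamma * dot + coef0) ** degree

def kernel_matrix(xs, degree=3, gamma=1.0, coef0=1.0):
    """Gram matrix a kernel SVM would use over range-sensor feature vectors."""
    return [[poly_kernel(a, b, degree, gamma, coef0) for b in xs] for a in xs]
```

In practice the feature vectors would be derived from the PMD range image (e.g. depth statistics per seat region, an assumption here), and an SVM solver would operate on this symmetric, positive semi-definite matrix.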
A TEOM (tm) particulate monitor for comet dust, near Earth space, and planetary atmospheres
NASA Astrophysics Data System (ADS)
1988-04-01
Scientific missions to comets, near-Earth space, and planetary atmospheres require particulate and mass accumulation instrumentation for both scientific and navigation purposes. The Rupprecht & Patashnick tapered element oscillating microbalance can accurately measure both mass flux and mass distribution of particulates over a wide range of particle sizes and loadings. Individual particles of milligram size down to a few picograms can be resolved and counted, and the accumulation of smaller particles or molecular deposition can be accurately measured using the sensors perfected and toughened under this contract. No other sensor has the dynamic range or sensitivity attained by these picogram direct mass measurement sensors. The purpose of this contract was to develop and implement reliable and repeatable manufacturing methods; build and test prototype sensors; and outline a quality control program. A dust 'thrower' was to be designed, built, and used to verify performance. Characterization and improvement of the optical motion detection system and drive feedback circuitry was to be undertaken, with emphasis on reliability, low noise, and low power consumption. All the goals of the contract were met or exceeded. An automated glass puller was built and used to make repeatable tapered elements. Materials and assembly methods were standardized, and controllers and calibrated fixtures were developed and used in all phases of preparing, coating, and assembling the sensors. Quality control and reliability resulted from the use of calibrated manufacturing equipment with measurable working parameters. Thermal and vibration testing of completed prototypes showed low temperature sensitivity and high vibration tolerance. An electrostatic dust thrower was used in vacuum to throw particles from 2 × 10^-6 g down to 7 × 10^-12 g in size. Using long averaging times, particles as small as 0.7 to 4 × 10^-11 g were weighed to resolutions in the 5 to 9 × 10^-13 g range.
The drive circuit and optics systems were developed beyond what was anticipated in the contract, and are now virtually flight prototypes. There is already commercial interest in the developed capability of measuring picogram mass losses and gains. One area is contamination and outgassing research, both measuring picogram losses from samples and collecting products of outgassing.
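The measurement principle behind the tapered element oscillating microbalance is that the element's resonant frequency is set by its spring constant and tip mass, so accumulated mass follows from the frequency shift as dm = K0 * (1/f1^2 - 1/f0^2). A sketch of that relation is below; the calibration constant K0 is an illustrative assumption, not Rupprecht & Patashnick's value.

```python
def teom_mass_change_g(f0_hz, f1_hz, k0=15000.0):
    """Mass accumulated between initial frequency f0 and later frequency f1.

    Uses the standard TEOM relation dm = K0 * (1/f1**2 - 1/f0**2); an
    added mass lowers the frequency, giving a positive dm.
    """
    return k0 * (1.0 / f1_hz ** 2 - 1.0 / f0_hz ** 2)
```

This is why long averaging times improve resolution: averaging the frequency estimate narrows the uncertainty in f, which propagates directly to the mass estimate.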
Grewal, Gurtej Singh; Schwenk, Michael; Lee-Eng, Jacqueline; Parvaneh, Saman; Bharara, Manish; Menzies, Robert A; Talal, Talal K; Armstrong, David G; Najafi, Bijan
2015-01-01
Individuals with diabetic peripheral neuropathy (DPN) have deficits in sensory and motor skills leading to inadequate proprioceptive feedback, impaired postural balance and higher fall risk. This study investigated the effect of sensor-based interactive balance training on postural stability and daily physical activity in older adults with diabetes. Thirty-nine older adults with DPN were enrolled (age 63.7 ± 8.2 years, BMI 30.6 ± 6, 54% females) and randomized to either an intervention (IG) or a control (CG) group. The IG received sensor-based interactive exercise training tailored for people with diabetes (twice a week for 4 weeks). The exercises focused on shifting weight and crossing virtual obstacles. Body-worn sensors were implemented to acquire kinematic data and provide real-time joint visual feedback during the training. Outcome measurements included changes in center of mass (CoM) sway, ankle and hip joint sway measured during a balance test while the eyes were open and closed at baseline and after the intervention. Daily physical activities were also measured during a 48-hour period at baseline and at follow-up. Analysis of covariance was performed for the post-training outcome comparison. Compared with the CG, the patients in the IG showed a significantly reduced CoM sway (58.31%; p = 0.009), ankle sway (62.7%; p = 0.008) and hip joint sway (72.4%; p = 0.017) during the balance test with open eyes. The ankle sway was also significantly reduced in the IG group (58.8%; p = 0.037) during measurements while the eyes were closed. The number of steps walked showed a substantial but nonsignificant increase (+27.68%; p = 0.064) in the IG following training. The results of this randomized controlled trial demonstrate that people with DPN can significantly improve their postural balance with diabetes-specific, tailored, sensor-based exercise training. 
The results promote the use of wearable technology in exercise training; however, future studies comparing this technology with commercially available systems are required to evaluate the benefit of interactive visual joint movement feedback. © 2015 S. Karger AG, Basel.
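One standard summary behind "CoM sway" outcomes like those above can be sketched as the root-mean-square radial excursion of the centre-of-mass trajectory about its mean. The study's exact sway definition is not stated in the abstract, so this formula is an illustrative assumption.

```python
import math

def rms_sway(xs, ys):
    """RMS radial excursion of a 2-D CoM (or CoP) sway trajectory.

    xs, ys: equal-length mediolateral and anteroposterior position series.
    """
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                         for x, y in zip(xs, ys)) / len(xs))
```

A reduction in this value after training (computed from the body-worn sensor kinematics) is the kind of change the reported percentage improvements describe.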
Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System.
Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama
2017-01-01
Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of falls. Conventional balance training, though powerful, suffers from a scarcity of trained therapists, frequent visits to clinics to get therapy, one-on-one therapy sessions, and the monotony of repetitive exercise tasks. Thus, technology-assisted balance rehabilitation can be an alternative solution. Here, we chose virtual reality as a technology-based platform to develop motivating balance tasks. This platform was augmented with off-the-shelf sensors such as the Nintendo Wii Balance Board and Kinect to estimate one's center of mass (CoM). The virtual reality-based CoM-assisted balance tasks (Virtual CoMBaT) were designed to be adaptive to one's individualized weight-shifting capability, quantified through CoM displacement. Participants were asked to interact with Virtual CoMBaT, which offered tasks of varying challenge levels while adhering to an ankle strategy for weight shifting. To help patients use the ankle strategy during weight shifting, we designed a heel-lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to contribute to improving one's overall performance in balance-related tasks belonging to different difficulty levels.
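The centre-of-pressure arithmetic a Wii Balance Board affords can be sketched as a weighted average of its four corner load cells; systems like the one above combine such a reading with Kinect skeleton data to estimate the CoM. The board half-dimensions below are illustrative assumptions (the actual board surface is roughly 433 x 238 mm).

```python
def center_of_pressure(tl, tr, bl, br, half_w=0.2165, half_l=0.119):
    """CoP (x, y) in metres from four corner forces.

    tl, tr, bl, br: top-left, top-right, bottom-left, bottom-right
    load-cell readings (any consistent force unit).
    """
    total = tl + tr + bl + br
    x = half_w * ((tr + br) - (tl + bl)) / total   # mediolateral
    y = half_l * ((tl + tr) - (bl + br)) / total   # anteroposterior
    return x, y
```

Tracking this point against per-patient CoM-displacement limits is one plausible way a task like Virtual CoMBaT could adapt its challenge level.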
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation can't be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of the database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway.
The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
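The registration step described above can be sketched as nearest-neighbour gating: each object extracted from the radar image is matched to the closest database or data-link object and labelled "known" if it falls within a gating distance, else "unknown" (a potential obstacle). The gate value and 2-D point representation are illustrative assumptions, not the system's actual matching logic.

```python
import math

def classify_radar_objects(radar_objs, reference_objs, gate_m=50.0):
    """Label each radar-image object against database/data-link objects.

    radar_objs, reference_objs: lists of (x, y) positions in metres.
    Returns a list of (object, 'known' | 'unknown') pairs.
    """
    labels = []
    for obj in radar_objs:
        nearest = min((math.dist(obj, ref) for ref in reference_objs),
                      default=float("inf"))
        labels.append((obj, "known" if nearest <= gate_m else "unknown"))
    return labels
```

A high fraction of "unknown" objects, or known objects at consistently offset positions, would be one signal of a database or navigation integrity problem.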
Testing the Data Assimilation Capability of the Profiler Virtual Module
2016-02-01
ARL-TR-7601 ● FEB 2016 ● US Army Research Laboratory ● Testing the Data Assimilation Capability of the Profiler Virtual Module
NASA Astrophysics Data System (ADS)
Rosyidah, T. H.; Firman, H.; Rusyati, L.
2017-02-01
This research compared virtual and paper-based tests for measuring students' critical thinking in relation to the VAK (Visual-Auditory-Kinesthetic) learning style model. A quasi-experimental method with a one-group post-test-only design was applied to analyze the data. Forty eighth-grade students at a public junior high school in Bandung formed the sample. The quantitative data were obtained through 26 questions about living things and environmental sustainability, constructed from the eight elements of critical thinking and provided in both virtual and paper-based forms. The analysis shows that, for visual, auditory, and kinesthetic learners alike, results did not differ significantly between the virtual and paper-based tests. In addition, the results were supported by a questionnaire on students' response to the virtual test, which scored 3.47 on a scale of 4, meaning students responded positively on all aspects measured: interest, impression, and expectation.
Leveraging simulation to evaluate system performance in presence of fixed pattern noise
NASA Astrophysics Data System (ADS)
Teaney, Brian P.
2017-05-01
The development of image simulation techniques which map the effects of a notional, modeled sensor system onto an existing image can be used to evaluate the image quality of camera systems prior to the development of prototypes. In addition, image simulation or "virtual prototyping" can be utilized to reduce the time and expense associated with conducting extensive field trials. In this paper we examine the development of a perception study designed to assess the NVESD imager performance metrics as a function of fixed pattern noise. This paper discusses the development of the model theory and the implementation and execution of the perception study. In addition, other applications of the image simulation component, including the evaluation of limiting resolution and other test targets, are provided.
Virtual gaming simulation of a mental health assessment: A usability study.
Verkuyl, Margaret; Romaniuk, Daria; Mastrilli, Paula
2018-05-18
Providing safe and realistic virtual simulations could be an effective way to facilitate the transition from the classroom to clinical practice. As nursing programs begin to include virtual simulations as a learning strategy, it is critical to first assess the technology for ease of use and usefulness. A virtual gaming simulation was developed, and a usability study was conducted to assess its ease of use and usefulness for students and faculty. The Technology Acceptance Model provided the framework for the study, which included expert review and testing by nursing faculty and nursing students. This study highlighted the importance of assessing the ease of use and usefulness of a virtual gaming simulation and provided feedback for the development of an effective one. The study participants said the virtual gaming simulation was engaging, realistic, and similar to a clinical experience. Participants found the game easy to use and useful, and testing provided the development team with ideas to improve the user interface. The usability methodology provided is a replicable approach to testing virtual experiences before a research study or before implementing them into a curriculum. Copyright © 2018 Elsevier Ltd. All rights reserved.
Simulation of a group of rangefinders adapted to alterations of measurement angle
NASA Astrophysics Data System (ADS)
Baikov, D. V.; Pastushkova, A. A.; Danshin, V. V.; Chepin, E. V.
2017-01-01
The Department of Computer Systems and Technologies at the National Research Nuclear University MEPhI operates the "Robotics" laboratory. University teachers and laboratory staff run the training program for the master's degree "Computer technology in robotics." Undergraduates and graduate students conduct laboratory research and development in several promising areas of robotics. One methodology actively used in dissertation research is the modeling of advanced hardware and software systems for robotics. This article presents the results of such a study. Its purpose is to simulate a sensor comprised of a group of laser rangefinders. The rangefinders are simulated according to the following principle: beams originate from one point, each with a deviation from the normal, providing simultaneous scanning of different points. The data obtained in our virtual test room are used to indicate, in real time, the average distance from the device to obstacles for all four sensors. By varying the divergence angle of the beams we can simulate different kinds of rangefinders (laser and ultrasonic ones). By adjusting noise parameters we can match the results of real rangefinders and obtain a surface map displaying irregularities. A model of an aircraft (quadcopter) serves as the platform for the sensor. The article reviews work on rangefinder simulation undertaken at institutions around the world and reports our tests. It concludes with the relevance of the suggested approach and methods, and the necessity and feasibility of further research in this area.
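The beam model described above can be sketched in a few lines. The following is a minimal illustration (our own code, not the laboratory's): each beam leaves a common origin with a fixed deviation from the normal, so the measured slant range exceeds the perpendicular distance, and Gaussian noise stands in for the adjustable noise parameters mentioned in the abstract.

```python
import math
import random

def simulate_rangefinder_group(true_distances, divergence_deg, noise_std, seed=0):
    """Simulate a group of rangefinder beams leaving one point.

    true_distances: perpendicular distances to the obstacle for each beam.
    divergence_deg: each beam's deviation from the normal; varying it mimics
    different rangefinder types (near 0 for a laser, larger for ultrasonic).
    noise_std: standard deviation of additive Gaussian sensor noise.
    Returns the per-beam readings and their average distance.
    """
    rng = random.Random(seed)
    readings = []
    for d in true_distances:
        slant = d / math.cos(math.radians(divergence_deg))  # off-normal path
        readings.append(slant + rng.gauss(0.0, noise_std))  # noisy reading
    return readings, sum(readings) / len(readings)
```

With zero divergence and zero noise the readings reduce to the true perpendicular distances, which makes the model easy to sanity-check before noise is introduced.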
NASA Astrophysics Data System (ADS)
Dhitareka, P. H.; Firman, H.; Rusyati, L.
2018-05-01
This research compared a science virtual test and a paper-based test in measuring grade 7 students' critical thinking based on Multiple Intelligences and gender. A quasi-experimental method with a within-subjects design was conducted to obtain the data. The population was all seventh-grade students in ten classes of one public secondary school in Bandung; 71 students in two randomly selected classes became the sample. The data were obtained through 28 questions on living things and environmental sustainability, constructed from the eight critical thinking elements proposed by Inch and provided in both science virtual and paper-based form. The data were analysed using a paired-samples t test when parametric and the Wilcoxon signed-ranks test when non-parametric. In the general comparison, the p-value between the science virtual and paper-based test scores is 0.506, indicating no significant difference between the two formats. The results are further supported by the students' attitude score of 3.15 on a scale from 1 to 4, indicating positive attitudes towards the Science Virtual Test.
Virtual reality 3D headset based on DMD light modulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernacki, Bruce E.; Evans, Allan; Tang, Edward
We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMDs). Our approach leverages silicon micro-mirrors offering 720p-resolution displays in a small form factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics, and consumer gaming. In our design, light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina.
Reconfigurable routing protocol for free space optical sensor networks.
Xie, Rong; Yang, Won-Hyuk; Kim, Young-Chon
2012-01-01
Recently, free space optical sensor networks (FSOSNs), which are based on free space optics (FSO) instead of radio frequency (RF), have gained increasing visibility over traditional wireless sensor networks (WSNs) due to advantages such as larger capacity, higher security, and lower cost. However, the performance of FSOSNs is restricted by the requirement of a direct line-of-sight (LOS) path between a sender and receiver pair. Once a node dies of energy depletion, the network will probably suffer a dramatic decrease in connectivity, resulting in a huge loss of data packets. Thus, this paper proposes a reconfigurable routing protocol (RRP) to overcome this problem by dynamically reconfiguring the network's virtual topology. The RRP works in three phases: (1) virtual topology construction, (2) routing establishment, and (3) reconfigurable routing. When data transmission begins, the data packets are first routed through the shortest-hop paths. Then a reconfiguration is initiated by any node whose residual energy falls below a threshold. Nodes affected by this dying node are classified into two types, maintenance nodes and adjustment nodes, and are reconfigured according to their type. An energy model is designed to evaluate the performance of RRP through OPNET simulation. Our simulation results indicate that the RRP achieves better performance than the simple-link protocol and a direct reconfiguration scheme in terms of connectivity, network lifetime, packet delivery ratio, and the number of living nodes.
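The three-phase behaviour described above can be illustrated with a much-simplified sketch (our own code, not the authors'): packets follow shortest-hop paths over the virtual topology, and a relay whose residual energy drops below the threshold triggers a reconfiguration. The paper's maintenance/adjustment node classification is omitted here; neighbours simply drop their LOS links to the dying node and later routes avoid it.

```python
from collections import deque

def shortest_hop_path(links, src, dst):
    """BFS over the virtual topology: fewest-hop LOS path from src to dst."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in sorted(links.get(u, ())):  # sorted for determinism
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return None

def send(links, energy, src, dst, threshold, cost=1.0):
    """Route one packet; reconfigure if a relay's energy falls below threshold."""
    path = shortest_hop_path(links, src, dst)
    for node in path[1:-1]:              # relays spend energy forwarding
        energy[node] -= cost
        if energy[node] < threshold:     # dying node triggers reconfiguration:
            for v in links[node]:        # neighbours drop the LOS link so
                links[v].discard(node)   # future routes bypass this node
            links[node] = set()
    return path
```

In a diamond topology A-B-D / A-C-D, the first packet uses the B relay; once B's energy crosses the threshold its links are dropped and the next packet reroutes through C.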
Can a virtual reality assessment of fine motor skill predict successful central line insertion?
Mohamadipanah, Hossein; Parthiban, Chembian; Nathwani, Jay; Rutherford, Drew; DiMarco, Shannon; Pugh, Carla
2016-10-01
Due to the increased use of peripherally inserted central catheter lines, central lines are not performed as frequently. The aim of this study is to evaluate whether a virtual reality (VR)-based assessment of fine motor skills can be used as a valid and objective assessment of central line skills. Surgical residents (N = 43) from 7 general surgery programs performed a subclavian central line in a simulated setting. Then, they participated in a force discrimination task in a VR environment. Hand movements from the subclavian central line simulation were tracked by electromagnetic sensors. Gross movements as monitored by the electromagnetic sensors were compared with the fine motor metrics calculated from the force discrimination tasks in the VR environment. Long periods of inactivity (idle time) during needle insertion and a lack of smooth movements, as detected by the electromagnetic sensors, showed a significant correlation with poor force discrimination in the VR environment. Long needle insertion times also correlated with poor force discrimination performance in the VR environment. This study shows that force discrimination in a defined VR environment correlates with needle insertion time, idle time, and hand smoothness when performing subclavian central line placement. Fine motor force discrimination may serve as a valid and objective assessment of the skills required for successful needle insertion when placing central lines. Copyright © 2016 Elsevier Inc. All rights reserved.
Generalized compliant motion primitive
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor)
1994-01-01
This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, each of which produces telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. A virtual return-spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
Wide-angle vision for road views
NASA Astrophysics Data System (ADS)
Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.
2013-03-01
The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.
Digital Photography and Its Impact on Instruction.
ERIC Educational Resources Information Center
Lantz, Chris
Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…
Final Report-Rail Sensor Testbed Program: Active Agents in Containers for Transport Chain Security
2011-03-21
Landsat's role in ecological applications of remote sensing.
Warren B. Cohen; Samuel N. Goward
2004-01-01
Remote sensing, geographic information systems, and modeling have combined to produce a virtual explosion of growth in ecological investigations and applications that are explicitly spatial and temporal. Of all remotely sensed data, those acquired by Landsat sensors have played the most pivotal role in spatial and temporal scaling. Modern terrestrial ecology relies on...
NASA Astrophysics Data System (ADS)
Montalto, F. A.; Yu, Z.; Soldner, K.; Israel, A.; Fritch, M.; Kim, Y.; White, S.
2017-12-01
Urban stormwater utilities are increasingly using decentralized green stormwater infrastructure (GSI) systems to capture stormwater and achieve compliance with regulations. Because environmental conditions and designs vary by GSI facility, monitoring of GSI systems under a range of conditions is essential. Conventional monitoring efforts can be costly because in-field data logging requires high data transmission rates. The Internet of Things (IoT) can be used to collect, store, and publish GSI monitoring data more cost-effectively. Using 3G mobile networks, a cloud-based database was built on an Amazon Web Services (AWS) EC2 virtual machine to store and publish data collected with environmental sensors deployed in the field. This database can store multi-dimensional time series data, as well as photos and other observations logged by citizen scientists through a public-engagement mobile app, via a new Application Programming Interface (API). Also on the AWS EC2 virtual machine, a real-time QAQC flagging algorithm was developed to validate the sensor data streams.
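As an illustration of what such a real-time QAQC flagging algorithm might look like (the actual algorithm is not detailed in the abstract; the checks and names below are our assumptions), each incoming reading can be screened against a plausible physical range and a maximum step from the last accepted value:

```python
def qaqc_flags(series, valid_range, max_step):
    """Flag each reading in a sensor stream: 'ok', 'range' (missing or
    outside plausible physical limits), or 'spike' (jump from the last
    accepted value larger than max_step)."""
    lo, hi = valid_range
    flags, last_good = [], None
    for v in series:
        if v is None or not (lo <= v <= hi):
            flags.append('range')          # failed gross range check
        elif last_good is not None and abs(v - last_good) > max_step:
            flags.append('spike')          # failed rate-of-change check
        else:
            flags.append('ok')
            last_good = v                  # only accepted values anchor the step test
    return flags
```

Running this per data stream as records arrive would let the database store a quality flag alongside each time-series value before publication.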
Prediction of dynamic strains on a monopile offshore wind turbine using virtual sensors
NASA Astrophysics Data System (ADS)
Iliopoulos, A. N.; Weijtjens, W.; Van Hemelrijck, D.; Devriendt, C.
2015-07-01
The monitoring of the condition of an offshore wind turbine during its operational states offers the possibility of performing accurate assessments of the remaining lifetime as well as supporting maintenance decisions during its entire life. The efficacy of structural monitoring of an offshore wind turbine, though, is undermined by practical limitations of the measurement system in terms of cost, weight, and feasibility of sensor mounting (e.g. at mudline level, 30 m below the water level). This limitation is overcome by reconstructing the full-field response of the structure from a limited number of measured accelerations and a calibrated finite element model of the system. A modal decomposition and expansion approach is used to reconstruct the responses at all degrees of freedom of the finite element model. The paper demonstrates the possibility of predicting dynamic strains from acceleration measurements based on this methodology. These virtual dynamic strains are then evaluated and validated against actual strain measurements obtained from a monitoring campaign on an offshore Vestas V90 3 MW wind turbine on a monopile foundation.
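The modal decomposition and expansion step can be sketched as follows. Assuming mode shapes extracted from the calibrated finite element model, the measured responses at the sensor locations are projected onto the modal basis by a least-squares fit, and the resulting modal coordinates are expanded through the strain mode shapes to obtain virtual strains at unmeasured locations (a minimal NumPy sketch; the matrix names are ours, and the time-integration details of the actual method are omitted):

```python
import numpy as np

def expand_strains(Phi_meas, Phi_strain, y_measured):
    """Modal decomposition & expansion for virtual sensing.

    Phi_meas   : (n_sensors x n_modes) mode shapes at measured DOFs
    Phi_strain : (n_strain x n_modes) strain mode shapes from the FE model
    y_measured : responses at the sensor locations

    Least-squares fit of modal coordinates, then expansion to strains.
    """
    q = np.linalg.pinv(Phi_meas) @ y_measured   # modal coordinates
    return Phi_strain @ q                       # virtual strain response
```

The approach hinges on having more sensors than retained modes, so the pseudo-inverse yields a well-conditioned least-squares estimate of the modal coordinates.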
Cyber entertainment system using an immersive networked virtual environment
NASA Astrophysics Data System (ADS)
Ihara, Masayuki; Honda, Shinkuro; Kobayashi, Minoru; Ishibashi, Satoshi
2002-05-01
The authors are examining a cyber entertainment system that applies IPT (Immersive Projection Technology) displays to the entertainment field. This system enables users who are in remote locations to communicate with each other so that they feel as if they are together. Moreover, the system enables those users to experience a high degree of presence, due to the provision of stereoscopic vision as well as a haptic interface and stereo sound. This paper introduces the system from the viewpoint of space sharing across the network and elucidates its operation using the theme of golf. The system is developed by integrating avatar control, an I/O device, communication links, virtual interaction, mixed reality, and physical simulations. Pairs of these environments are connected across the network, allowing two players to compete. An avatar of each player is displayed by the other player's IPT display in the remote location and is driven by only two magnetic sensors. That is, in the proposed system, users do not need to wear a data suit with many sensors and are able to play golf without any encumbrance.
Mansano, Raul K; Godoy, Eduardo P; Porto, Arthur J V
2014-12-18
Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to increasing interest in using wireless networks for closed-loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are reconfigurability, easy commissioning, and the possibility of installation in places where cabling is impossible. Despite these advantages, two main problems must be considered for practical implementations of WNCSs. One is the sampling period constraint of industrial wireless sensors: wireless transmission has a high energy cost and the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other is the energy efficiency of the devices: as the sensors are battery powered, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime, and the required control performance of the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is to generate virtual data, allowing actuation on the process faster than the maximum sampling period available from the wireless sensor. Experimental results have shown that the soft sensor solves the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrate the soft sensor's potential for implementing energy-efficient WNCSs by saving the batteries of industrial wireless sensors.
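A soft sensor of this kind can be illustrated with a toy first-order process model (our assumption for illustration; the paper's actual model is not given in the abstract). The model is advanced every control period to supply virtual data to the controller, and each sparse wireless measurement re-anchors the estimate:

```python
def soft_sensor(a, b, u_seq, measurements):
    """Model-based soft sensor sketch for a WNCS.

    A first-order model x[k+1] = a*x[k] + b*u[k] runs every control period;
    the sparse wireless measurements (dict: period index -> value) correct
    the estimate, so the loop can act faster than the wireless sensor's
    maximum sampling period allows.
    """
    x, virtual = 0.0, []
    for k, u in enumerate(u_seq):
        if k in measurements:      # real wireless sample arrives: re-anchor
            x = measurements[k]
        virtual.append(x)          # virtual datum fed to the controller
        x = a * x + b * u          # predict forward to the next period
    return virtual
```

Between wireless samples the controller consumes the predicted values, which is what allows the radio duty cycle (and hence battery drain) to be reduced without slowing the control loop.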
Virtual prototyping and testing of in-vehicle interfaces.
Bullinger, Hans-Jörg; Dangelmaier, Manfred
2003-01-15
Electronic innovations that are slowly but surely changing the very nature of driving need to be tested before being introduced to the market. To meet this need a system for integrated virtual prototyping and testing has been developed. Functional virtual prototypes of various traffic systems, such as driver assistance, driver information, and multimedia systems can now be easily tested in a driving simulator by a rapid prototyping approach. The system has been applied in recent R&D projects.
An interactive VR system based on full-body tracking and gesture recognition
NASA Astrophysics Data System (ADS)
Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru
2016-10-01
Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions using sensors like Leap Motion to recognize users' gestures in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only part of the user's body is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movements of the tracked user. The movements of the feet are detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared with the traditional navigation approach using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize gestures such as swiping, pressing, and manipulating virtual objects. Combining full-body tracking and gesture recognition using Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.
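The walking-state decision can be sketched as follows (a hypothetical threshold rule of our own; the paper's exact criterion is not given): frame-to-frame displacement of either foot joint beyond a threshold marks the user as walking, and that boolean would then start or stop the walk animation on the model.

```python
def walking_state(left_foot, right_foot, threshold=0.05):
    """Per-frame walking decision from tracked foot-joint positions.

    left_foot, right_foot: per-frame positions (e.g. height in metres) of
    the two foot joints from a Kinect-style skeleton. A frame counts as
    'walking' when either foot moved more than threshold since the last frame.
    """
    states = [False]  # no previous frame for the first sample
    for i in range(1, len(left_foot)):
        dl = abs(left_foot[i] - left_foot[i - 1])
        dr = abs(right_foot[i] - right_foot[i - 1])
        states.append(dl > threshold or dr > threshold)
    return states
```

In the actual system this flag would drive the Unity animation controller, activating the walk cycle while True and returning to idle when the feet settle.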
Learning Desert Geomorphology Virtually versus in the Field
ERIC Educational Resources Information Center
Stumpf, Richard J., II; Douglass, John; Dorn, Ronald I.
2008-01-01
Statistical analyses of pre-test and post-test results, as well as qualitative insight obtained by essays, compared introductory physical geography college students who learned desert geomorphology only virtually, in the field and both ways. With the exception of establishing geographic context, the virtual field trip was statistically…
Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi
2014-01-14
Scientists have traditionally limited the mechanisms of social cognition to one brain, but recent approaches claim that interaction also realizes cognitive work. Experiments under constrained virtual settings revealed that interaction dynamics implicitly guide social cognition. Here we show that embodied social interaction can be constitutive of agency detection and of experiencing another's presence. Pairs of participants moved their "avatars" along an invisible virtual line and could make haptic contact with three identical objects, two of which embodied the other's motions, but only one, the other's avatar, also embodied the other's contact sensor and thereby enabled responsive interaction. Co-regulated interactions were significantly correlated with identifications of the other's avatar and reports of the clearest awareness of the other's presence. These results challenge folk psychological notions about the boundaries of mind, but make sense from evolutionary and developmental perspectives: an extendible mind can offload cognitive work into its environment.
Temporally coherent 4D video segmentation for teleconferencing
NASA Astrophysics Data System (ADS)
Ehmann, Jana; Guleryuz, Onur G.
2013-09-01
We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds, similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, produce noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
Magnetosensitive e-skins with directional perception for augmented reality
Cañón Bermúdez, Gilbert Santiago; Karnaushenko, Dmitriy D.; Karnaushenko, Daniil; Lebanov, Ana; Bischoff, Lothar; Kaltenbrunner, Martin; Fassbender, Jürgen; Schmidt, Oliver G.; Makarov, Denys
2018-01-01
Electronic skins equipped with artificial receptors are able to extend our perception beyond the modalities that have naturally evolved. These synthetic receptors offer complementary information on our surroundings and endow us with novel means of manipulating physical or even virtual objects. We realize highly compliant magnetosensitive skins with directional perception that enable magnetic cognition, body position tracking, and touchless object manipulation. Transfer printing of eight high-performance spin valve sensors arranged into two Wheatstone bridges onto 1.7-μm-thick polyimide foils ensures mechanical imperceptibility. This represents a new class of interactive devices extracting information from the surroundings through magnetic tags. We demonstrate this concept in augmented reality systems with virtual knob-turning functions and the operation of virtual dialing pads, based on the interaction with magnetic fields. This technology will enable a cornucopia of applications from navigation, motion tracking in robotics, regenerative medicine, and sports and gaming to interaction in supplemented reality. PMID:29376121
VEVI: A Virtual Reality Tool For Robotic Planetary Explorations
NASA Technical Reports Server (NTRS)
Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik
1994-01-01
The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly-evolved human sensory system. VEVI utilizes real-time, interactive, 3D graphics and position / orientation sensors to produce a range of interface modalities from the flat panel (windowed or stereoscopic) screen displays to head mounted/head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.
Planar maneuvering control of underwater snake robots using virtual holonomic constraints.
Kohl, Anna M; Kelasidi, Eleni; Mohammadi, Alireza; Maggiore, Manfredi; Pettersen, Kristin Y
2016-11-24
This paper investigates the problem of planar maneuvering control for bio-inspired underwater snake robots that are exposed to unknown ocean currents. The control objective is to make a neutrally buoyant snake robot which is subject to hydrodynamic forces and ocean currents converge to a desired planar path and traverse the path with a desired velocity. The proposed feedback control strategy enforces virtual constraints which encode biologically inspired gaits on the snake robot configuration. The virtual constraints, parametrized by states of dynamic compensators, are used to regulate the orientation and forward speed of the snake robot. A two-state ocean current observer based on relative velocity sensors is proposed. It enables the robot to follow the path in the presence of unknown constant ocean currents. The efficacy of the proposed control algorithm for several biologically inspired gaits is verified both in simulations for different path geometries and in experiments.
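The gait-encoding virtual constraints can be illustrated with the standard lateral-undulation parametrization used for snake robots (a generic sketch; the paper's constraints are parametrized by dynamic compensator states and are richer than this): each joint tracks a phase-shifted sinusoid, where the amplitude, frequency, and offset regulate forward speed and orientation.

```python
import math

def lateral_undulation(n_joints, t, alpha, omega, delta, phi0=0.0):
    """Reference joint angles for a lateral-undulation gait at time t.

    alpha : amplitude of the undulation
    omega : angular frequency (sets forward speed)
    delta : phase shift between adjacent joints (travelling wave)
    phi0  : joint offset used to steer the heading
    Joint i tracks alpha*sin(omega*t + i*delta) + phi0, a common
    bio-inspired virtual-constraint parametrization.
    """
    return [alpha * math.sin(omega * t + i * delta) + phi0
            for i in range(n_joints)]
```

A feedback controller enforcing these references as virtual constraints shapes the robot's configuration into the travelling wave, while an outer loop (and, in the paper, the current observer) adjusts phi0 and omega to converge to the desired path and speed.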
Liu, Lu; Masfary, Osama; Antonopoulos, Nick
2012-01-01
The increasing electrical consumption of data centres is a growing concern for business owners, as it is quickly becoming a large fraction of the total cost of ownership. Ultra-small sensors could be deployed within a data centre to monitor environmental factors, lower electrical costs, and improve energy efficiency. Since servers and air conditioners are the top consumers of electrical power in the data centre, this research explores methods from each subsystem of the data centre as part of an overall energy-efficient solution. In this paper, we investigate current trends in Green IT awareness and how the deployment of small environmental sensors and site-infrastructure equipment optimization techniques can address a global issue by reducing carbon emissions.
Virtual GEOINT Center: C2ISR through an avatar's eyes
NASA Astrophysics Data System (ADS)
Seibert, Mark; Tidbal, Travis; Basil, Maureen; Muryn, Tyler; Scupski, Joseph; Williams, Robert
2013-05-01
As the number of devices collecting and sending data in the world increases, finding ways to visualize and understand that data is becoming more and more of a problem, often called the problem of "Big Data." The Virtual Geoint Center (VGC) aims to help solve that problem by combining the virtual world with outside tools. Using open-source software such as OpenSim and Blender, the VGC uses a visually rich 3D environment to display the data sent to it. The VGC comprises two major components: the Kinect Minimap and the Geoint Map. The Kinect Minimap uses the Microsoft Kinect and its open-source software to make a miniature display of people the Kinect detects in front of it. The Geoint Map collects smartphone sensor information from online databases and displays it in real time on a map generated by Google Maps. By combining outside tools and the virtual world, the VGC can help a user visualize data, and provides additional tools to understand the data.
NASA Astrophysics Data System (ADS)
Jagodziński, Piotr; Wolski, Robert
2015-02-01
Natural User Interfaces (NUIs) are now widely used in electronic devices such as smartphones, tablets, and gaming consoles. We have tried to apply this technology to the teaching of chemistry in middle school and high school. A virtual chemical laboratory was developed in which students can simulate laboratory activities similar to those they perform in a real laboratory. A Kinect sensor was used for the detection and analysis of the student's hand movements, an example of an NUI. The studies conducted confirmed the effectiveness of the educational virtual laboratory and examined the extent to which this teaching aid increased students' progress in learning chemistry. The results indicate that the use of NUIs creates opportunities to both enhance and improve the quality of chemistry education. Working in a virtual laboratory using the Kinect interface results in greater emotional involvement and an increased sense of self-efficacy in laboratory work among students. As a consequence, students earn higher marks and are more interested in the subject of chemistry.
Augmenting the thermal flux experiment: A mixed reality approach with the HoloLens
NASA Astrophysics Data System (ADS)
Strzys, M. P.; Kapp, S.; Thees, M.; Kuhn, J.; Lukowicz, P.; Knierim, P.; Schmidt, A.
2017-09-01
In the field of Virtual Reality (VR) and Augmented Reality (AR), technologies have made huge progress in recent years and have also reached the field of education. The virtuality continuum, ranging from pure virtuality on one side to the real world on the other, has been successfully covered by the use of immersive technologies like head-mounted displays, which allow one to embed virtual objects into the real surroundings, leading to a Mixed Reality (MR) experience. In such an environment, digital and real objects not only coexist, but are moreover able to interact with each other in real time. These concepts can be used to merge human perception of reality with digitally visualized sensor data, thereby making the invisible visible. As a first example, in this paper we introduce, alongside the basic idea of this column, an MR experiment in thermodynamics for a laboratory course for freshman students in physics or other science and engineering subjects that uses physical data from mobile devices for analyzing and displaying physical phenomena to students.
Virtual environment assessment for laser-based vision surface profiling
NASA Astrophysics Data System (ADS)
ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.
2015-03-01
Oil and gas businesses have been raising the demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This certainly mandates a deviation from the commonly used surface measurement gauges, which are not only operator dependent, but also limited to discrete measurements along the weld. Due to its potential accuracy and speed, the use of laser-based vision surface profiling systems has been progressively rising as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data is inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer to robust quality control applications in a manufacturing environment.
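The scan-versus-reference comparison described above amounts to subtracting the known input profile from the measured one and summarizing the residual. A minimal sketch of that validation step; the array layout and metric names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def profile_deviation(scanned, reference):
    """Compare a scanned weld profile against its known reference.
    Both inputs are surface heights sampled at the same positions
    along the scan line; returns summary deviation statistics."""
    dev = np.asarray(scanned, dtype=float) - np.asarray(reference, dtype=float)
    return {
        "rms": float(np.sqrt(np.mean(dev ** 2))),   # overall accuracy
        "max_abs": float(np.max(np.abs(dev))),      # worst-case point
        "mean": float(np.mean(dev)),                # systematic bias
    }

# Example: a flat reference profile scanned with a uniform 0.01 mm offset
stats = profile_deviation(np.full(100, 0.01), np.zeros(100))
```

A nonzero mean with small RMS-about-mean would indicate a calibration bias rather than random measurement noise.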
Fast in-situ tool inspection based on inverse fringe projection and compact sensor heads
NASA Astrophysics Data System (ADS)
Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard
2016-11-01
Inspection of machine elements is an important task in production processes in order to ensure the quality of produced parts and to gather feedback for the continuous improvement process. A new measuring system is presented, which is capable of performing the inspection of critical tool geometries, such as gearing elements, inside the forming machine. To meet the constraints on sensor head size and inspection time imposed by the limited space inside the machine and the cycle time of the process, the measuring device employs a combination of endoscopy techniques with the fringe projection principle. Compact gradient index lenses enable a compact design of the sensor head, which is connected to a CMOS camera and a flexible micro-mirror based projector via flexible fiber bundles. Using common fringe projection patterns, the system achieves measuring times of less than five seconds. To further reduce the time required for inspection, the generation of inverse fringe projection patterns has been implemented for the system. Inverse fringe projection speeds up the inspection process by employing object-adapted patterns, which enable the detection of geometry deviations in a single image. Two different approaches to generate object-adapted patterns are presented. The first approach uses a reference measurement of a manufactured tool master to generate the inverse pattern. The second approach is based on a virtual master geometry in the form of a CAD file and a ray-tracing model of the measuring system. Virtual modeling of the measuring device and inspection setup allows for geometric tolerancing for free-form surfaces by the tool designer in the CAD file. A new approach is presented, which uses virtual tolerance specifications and additional simulation steps to enable fast checking of metric tolerances. Following the description of the pattern generation process, the image processing steps required for inspection are demonstrated on captures of gearing geometries.
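For the common (non-inverse) fringe patterns mentioned above, phase recovery is typically done with a standard phase-shifting scheme. A sketch of the classic four-step variant; the pattern amplitudes are illustrative, not the system's actual parameters:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each:
    I_k = A + B*cos(phi + k*pi/2). Then I3 - I1 = 2B*sin(phi) and
    I0 - I2 = 2B*cos(phi), so phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic check: recover a known phase ramp (kept inside (-pi, pi]
# so no phase unwrapping is needed)
phi = np.linspace(0.0, np.pi / 2, 64)
i0, i1, i2, i3 = (128 + 100 * np.cos(phi + k * np.pi / 2) for k in range(4))
recovered = four_step_phase(i0, i1, i2, i3)
```

Inverse fringe projection effectively bakes this phase map of the master geometry into the projected pattern, so a conforming part returns straight fringes and deviations show up directly.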
Virtual reality applied to teletesting
NASA Astrophysics Data System (ADS)
van den Berg, Thomas J.; Smeenk, Roland J. M.; Mazy, Alain; Jacques, Patrick; Arguello, Luis; Mills, Simon
2003-05-01
The activity "Virtual Reality applied to Teletesting" is related to a wider European Space Agency (ESA) initiative of cost reduction, in particular the reduction of test costs. Cost reduction for space-related projects has to address both test centre operating costs and customer company costs. This can be accomplished by increasing the automation and remote testing ("teletesting") capabilities of the test centre. The main problems related to teletesting are a lack of situational awareness and the separation of control over the test environment. The objective of the activity is to evaluate the use of distributed computing and Virtual Reality technology to support the teletesting of a payload under vacuum conditions, and to provide a unified man-machine interface for the monitoring and control of payload, vacuum chamber and robotics equipment. The activity includes the development and testing of a "Virtual Reality Teletesting System" (VRTS). The VRTS is deployed at one of the ESA certified test centres to perform an evaluation and test campaign using a real payload. The VRTS is entirely written in the Java programming language, using the J2EE application model. The Graphical User Interface runs as an applet in a Web browser, enabling easy access from virtually any place.
Decentralized real-time simulation of forest machines
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael
2000-10-01
Developing realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as a simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and realistic simulation of the complex motion of the vehicle even in rough terrain, in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. the real control devices like the joysticks or the board computer system to control the crane, the aggregate, etc. In addition, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information of the control devices and the sensors. So, while the driver is practicing, the instructor can immerse into the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.
Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality
Cushman, Laura A.; Stein, Karen; Duffy, Charles J.
2008-01-01
Background: Older adults get lost, in many cases because of recognized or incipient Alzheimer disease (AD). In either case, getting lost can be a threat to individual and public safety, as well as to personal autonomy and quality of life. Here we compare our previously described real-world navigation test with a virtual reality (VR) version simulating the same navigational environment. Methods: Quantifying real-world navigational performance is difficult and time-consuming. VR testing is a promising alternative, but it has not been compared with closely corresponding real-world testing in aging and AD. We have studied navigation using both real-world and virtual environments in the same subjects: young normal controls (YNCs, n = 35), older normal controls (ONCs, n = 26), patients with mild cognitive impairment (MCI, n = 12), and patients with early AD (EAD, n = 14). Results: We found close correlations between real-world and virtual navigational deficits that increased across groups from YNC to ONC, to MCI, and to EAD. Analyses of subtest performance showed similar profiles of impairment in real-world and virtual testing in all four subject groups. The ONC, MCI, and EAD subjects all showed greatest difficulty in self-orientation and scene localization tests. MCI and EAD patients also showed impaired verbal recall about both test environments. Conclusions: Virtual environment testing provides a valid assessment of navigational skills. Aging and Alzheimer disease (AD) share the same patterns of difficulty in associating visual scenes and locations, which is complicated in AD by the accompanying loss of verbally mediated navigational capacities. We conclude that virtual navigation testing reveals deficits in aging and AD that are associated with potentially grave risks to our patients and the community. GLOSSARY AD = Alzheimer disease; EAD = early Alzheimer disease; MCI = mild cognitive impairment; MMSE = Mini-Mental State Examination; ONC = older normal control; std. wt. = standardized weight; THSD = Tukey honestly significant difference; VR = virtual reality; YNC = young normal control. PMID:18794491
Krpič, Andrej; Savanović, Arso; Cikajlo, Imre
2013-06-01
Telerehabilitation can offer prolonged rehabilitation for patients with stroke after being discharged from the hospital, whilst remote diagnostics may reduce the frequency of the outpatient services required. Here, we compared a novel telerehabilitation system for virtual reality-supported balance training with balance training with only a standing frame and with conventional therapy in the hospital. The proposed low-cost experimental system for balance training enabling multiple home systems, real-time tracking of task's performance and different views of captured data with balance training, consists of a standing frame equipped with a tilt sensor, a low-cost computer, display, and internet connection. Goal-based tasks for balance training in the virtual environment proved motivating for the participating individuals. The physiotherapist, located in the remote healthcare center, could remotely adjust the level of complexity and difficulty or preview the outcomes and instructions with the application on the mobile smartphone. Patients using the virtual reality-supported balance training showed an improvement in the task performance time of 45% and number of collisions of 68%, showing significant improvements in the Berg Balance Scale, Timed 'Up and Go', and 10 m Walk Test. The clinical outcomes were not significantly different from balance training with only the standing frame or conventional therapy. The proposed telerehabilitation can facilitate the physiotherapists' work and thus enable rehabilitation to a larger number of patients after release from the hospital because it requires less time and infrequent presence of the clinical staff. However, a comprehensive clinical evaluation is required to confirm the applicability of the concept.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmid, Beat; Tomlinson, Jason M.; Hubbe, John M.
2014-05-01
The Department of Energy Atmospheric Radiation Measurement (ARM) Program is a climate research user facility operating stationary ground sites that provide long-term measurements of climate relevant properties, mobile ground- and ship-based facilities to conduct shorter field campaigns (6-12 months), and the ARM Aerial Facility (AAF). The airborne observations acquired by the AAF enhance the surface-based ARM measurements by providing high-resolution in-situ measurements for process understanding, retrieval-algorithm development, and model evaluation that are not possible using ground- or satellite-based techniques. Several ARM aerial efforts were consolidated into the AAF in 2006. With the exception of a small aircraft used for routine measurements of aerosols and carbon cycle gases, AAF at the time had no dedicated aircraft and only a small number of instruments at its disposal. In this "virtual hangar" mode, AAF successfully carried out several missions contracting with organizations and investigators who provided their research aircraft and instrumentation. In 2009, AAF started managing operations of the Battelle-owned Gulfstream I (G-1) large twin-turboprop research aircraft. Furthermore, the American Recovery and Reinvestment Act of 2009 provided funding for the procurement of over twenty new instruments to be used aboard the G-1 and other AAF virtual-hangar aircraft. AAF now executes missions in the virtual- and real-hangar mode producing freely available datasets for studying aerosol, cloud, and radiative processes in the atmosphere. AAF is also engaged in the maturation and testing of newly developed airborne sensors to help foster the next generation of airborne instruments.
Integration of the Shuttle RMS/CBM Positioning Virtual Environment Simulation
NASA Technical Reports Server (NTRS)
Dumas, Joseph D.
1996-01-01
Constructing the International Space Station, or other structures, in space presents a number of problems. In particular, payload restrictions for the Space Shuttle and other launch mechanisms prohibit assembly of large space-based structures on Earth. Instead, a number of smaller modules must be boosted into orbit separately and then assembled to form the final structure. The assembly process is difficult, as docking interfaces such as Common Berthing Mechanisms (CBMs) must be precisely positioned relative to each other to be within the "capture envelope" (approximately +/- 1 inch and +/- 0.3 degrees from the nominal position) and attach properly. In the case of the Space Station, the docking mechanisms are to be positioned robotically by an astronaut using the 55-foot-long Remote Manipulator System (RMS) robot arm. Unfortunately, direct visual or video observation of the placement process is difficult or impossible in many scenarios. One method that has been tested for aligning the CBMs uses a boresighted camera mounted on one CBM to view a standard target on the opposing CBM. While this method might be sufficient to achieve proper positioning with considerable effort, it does not provide a high level of confidence that the mechanisms have been placed within capture range of each other. It also does nothing to address the risk of inadvertent contact between the CBMs, which could result in RMS control software errors. In general, constraining the operator to a single viewpoint with few, if any, depth cues makes the task much more difficult than it would be if the target could be viewed in three-dimensional space from various viewpoints. The actual work area could be viewed by an astronaut during EVA; however, it would be extremely impractical to have an astronaut control the RMS while spacewalking. 
On the other hand, a view of the RMS and CBMs to be positioned in a virtual environment aboard the Space Shuttle orbiter or Space Station could provide similar benefits more safely and conveniently with little additional cost. In order to render and view the RMS and CBMs in a virtual world, the position and orientation of the end effector in three-dimensional space must be known with a high degree of accuracy. A precision video alignment sensor has been developed which can determine the position and orientation of the controlled element relative to the target CBM within approximately one-sixteenth inch and 0.07 angular degrees. Such a sensor could replace or augment the boresighted camera mentioned above. The computer system used to render the virtual world and the position tracking systems which might be used to monitor the user's movements (in order to adjust the viewpoint in virtual space) are small enough to carry to orbit. Thus, such a system would be feasible for use in constructing structures in space.
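The capture-envelope criterion quoted above (approximately +/- 1 inch and +/- 0.3 degrees) reduces to a six-axis tolerance check on the relative pose the alignment sensor reports. A minimal sketch; the axis conventions and parameter names are assumptions for illustration:

```python
def within_capture_envelope(dx, dy, dz, droll, dpitch, dyaw,
                            pos_tol_in=1.0, ang_tol_deg=0.3):
    """True if a relative CBM pose offset (inches, degrees) lies
    inside the nominal capture envelope on every axis."""
    return (all(abs(v) <= pos_tol_in for v in (dx, dy, dz)) and
            all(abs(a) <= ang_tol_deg for a in (droll, dpitch, dyaw)))

# The ~1/16-inch, 0.07-degree sensor resolution quoted above comfortably
# resolves offsets near the envelope boundary.
ok = within_capture_envelope(0.5, -0.2, 0.9, 0.1, -0.25, 0.05)
bad = within_capture_envelope(1.2, 0.0, 0.0, 0.0, 0.0, 0.0)   # 1.2 in > 1 in
```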
Fernandez Montenegro, Juan Manuel; Argyriou, Vasileios
2017-05-01
Alzheimer's screening tests are commonly used by doctors to diagnose the patient's condition and stage as early as possible. Most of these tests are based on pen-and-paper interaction and do not embrace the advantages provided by new technologies. This paper proposes novel Alzheimer's screening tests based on virtual environments and game principles using new immersive technologies combined with advanced Human Computer Interaction (HCI) systems. These new tests are focused on the immersion of the patient in a virtual room, in order to mislead and deceive the patient's mind. In addition, we propose two novel variations of the Turing Test as a method to detect dementia. As a result, four tests are introduced, demonstrating the wide range of screening mechanisms that could be designed using virtual environments and game concepts. The proposed tests focus on the evaluation of memory loss related to common objects, recent conversations and events; the diagnosis of problems in expressing and understanding language; the ability to recognize abnormalities; and the ability to differentiate between virtual worlds and reality, or humans and machines. The proposed screening tests were evaluated using both patients and healthy adults in a comparative study with state-of-the-art Alzheimer's screening tests. The results show the capacity of the new tests to distinguish healthy people from Alzheimer's patients.
From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles
NASA Technical Reports Server (NTRS)
Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric
1994-01-01
In the Fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely-Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant for space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual reality technology in the operator interface to a remote vehicle, while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel was used to provide a bi-directional Internet link to the vehicle control computer through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors. 
The virtual environment contained an animated graphic model of the vehicle which reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle was driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.
Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System
Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama
2018-01-01
Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of falls. Conventional balance training, though powerful, suffers from a scarcity of trained therapists, frequent visits to clinics to get therapy, one-on-one therapy sessions, and the monotony of repetitive exercise tasks. Thus, technology-assisted balance rehabilitation can be an alternative solution. Here, we chose virtual reality as a technology-based platform to develop motivating balance tasks. This platform was augmented with off-the-shelf sensors such as the Nintendo Wii balance board and Kinect to estimate one's center of mass (CoM). The virtual reality-based CoM-assisted balance tasks (Virtual CoMBaT) were designed to be adaptive to one's individualized weight-shifting capability quantified through CoM displacement. Participants were asked to interact with Virtual CoMBaT, which offered tasks of varying challenge levels while adhering to the ankle strategy for weight shifting. To encourage patients to use the ankle strategy during weight shifting, we designed a heel-lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to contribute to improving one's overall performance in balance-related tasks of different difficulty levels. PMID:29359128
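Board-based CoM estimates of the kind described above typically start from the center of pressure computed from the board's four corner load cells as a weight-weighted average of sensor positions. A minimal sketch; the board dimensions and sensor naming are assumptions, not the authors' values:

```python
def center_of_pressure(tl, tr, bl, br, width_mm=433.0, depth_mm=238.0):
    """Center of pressure (x, y in mm, board center at the origin)
    from the four corner load-cell readings (kg) of a Wii-balance-
    board-style platform. width_mm/depth_mm approximate the sensor
    spacing; both are illustrative assumptions."""
    total = tl + tr + bl + br
    if total <= 0.0:
        return 0.0, 0.0
    x = (width_mm / 2.0) * ((tr + br) - (tl + bl)) / total
    y = (depth_mm / 2.0) * ((tl + tr) - (bl + br)) / total
    return x, y

# Even loading puts the CoP at the board center; shifting weight onto
# the right-hand cells moves it along +x, as in an ankle-strategy lean.
centered = center_of_pressure(10.0, 10.0, 10.0, 10.0)
leaning = center_of_pressure(2.0, 18.0, 2.0, 18.0)
```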
NASA Astrophysics Data System (ADS)
Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella
In this paper, we propose a novel approach of using interactive virtual environment technology in Vision Restoration Therapy for vision loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of already existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvements can be seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.
ERIC Educational Resources Information Center
Parton, Becky Sue
2006-01-01
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based…
Asensio, C; Gasco, L; Ruiz, M; Recuero, M
2015-02-01
This paper describes a methodology and case study for the implementation of educational virtual laboratories for practice training on acoustic tests according to international standards. The objectives of this activity are (a) to help the students understand and apply the procedures described in the standards and (b) to familiarize the students with the uncertainty in measurement and its estimation in acoustics. The virtual laboratory will not focus on the handling and set-up of real acoustic equipment but rather on procedures and uncertainty. The case study focuses on the application of the virtual laboratory for facade sound insulation tests according to ISO 140-5:1998 (International Organization for Standardization, Geneva, Switzerland, 1998), and the paper describes the causal and stochastic models and the constraints applied in the virtual environment under consideration. With a simple user interface, the laboratory will provide measurement data that the students will have to process to report the insulation results that must converge with the "virtual true values" in the laboratory. The main advantage of the virtual laboratory is derived from the customization of factors in which the student will be instructed or examined (for instance, background noise correction, the detection of sporadic corrupted observations, and the effect of instrument precision).
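One of the customizable factors mentioned, background noise correction, follows the usual energetic-subtraction rule in acoustic measurement standards. A sketch of that single step; the 10 dB cutoff follows common ISO practice and is stated here as an assumption rather than the paper's exact model:

```python
import math

def background_corrected_level(l_sb_db, l_b_db):
    """Correct a measured signal-plus-background sound pressure level
    (dB) for background noise by energetic subtraction:
        L = 10*log10(10^(Lsb/10) - 10^(Lb/10))
    l_sb_db: level with the source on; l_b_db: background-only level.
    If the margin exceeds 10 dB the correction is negligible and skipped."""
    if l_sb_db - l_b_db > 10.0:
        return l_sb_db
    return 10.0 * math.log10(10.0 ** (l_sb_db / 10.0) - 10.0 ** (l_b_db / 10.0))

# A 5 dB signal-to-background margin corrects the level downward.
corrected = background_corrected_level(65.0, 60.0)   # about 63.35 dB
untouched = background_corrected_level(80.0, 60.0)   # margin > 10 dB, unchanged
```

In a virtual laboratory the instructor can deliberately shrink the margin so students must apply (and report) this correction themselves.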
Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming
2009-01-01
Smart homes offer potential solutions for various forms of independent living for the elderly. The assistive and protective environment afforded by smart homes offers a safe, relatively inexpensive, dependable and viable alternative for vulnerable inhabitants. Nevertheless, the success of a smart home rests upon the quality of information its decision support system receives, and this in turn places great importance on the issue of correct sensor deployment. In this article we present a software tool that has been developed to address the elusive issue of sensor distribution within smart homes. Details of the tool will be presented, and it will be shown how it can be used to emulate any real-world environment whereby virtual sensor distributions can be rapidly implemented and assessed without the requirement for physical deployment. As such, this approach offers the potential of tailoring sensor distributions to the specific needs of a patient in a non-invasive manner. The heuristics-based tool presented here has been developed as the first part of a three-stage project.
Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał
2016-01-01
Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper. PMID:27649186
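The virtual potential function concept described above can be sketched as pairwise forces: neighbors beyond a reference distance attract (preserving connectivity to the data sink), while closer ones repel (spreading coverage). The gains, reference distance, and synchronous update below are illustrative assumptions, not the authors' algorithms:

```python
import math

def potential_force(p, neighbors, d_ref=10.0, k_att=0.1, k_rep=50.0):
    """Net virtual force on one mobile node: neighbors farther than
    d_ref attract (connectivity maintenance), closer ones repel
    (coverage spreading)."""
    fx = fy = 0.0
    for qx, qy in neighbors:
        dx, dy = qx - p[0], qy - p[1]
        d = math.hypot(dx, dy) or 1e-9
        # positive magnitude pulls toward the neighbor, negative pushes away
        mag = k_att * (d - d_ref) if d > d_ref else -k_rep / d ** 2
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy

def step(nodes, dt=0.1):
    """One synchronous motion update of every node along its force."""
    forces = [potential_force(p, nodes[:i] + nodes[i + 1:])
              for i, p in enumerate(nodes)]
    return [(px + dt * fx, py + dt * fy)
            for (px, py), (fx, fy) in zip(nodes, forces)]

# Two nodes closer than d_ref push apart; two far apart drift together.
spread = step([(0.0, 0.0), (2.0, 0.0)])
gather = step([(0.0, 0.0), (20.0, 0.0)])
```

Iterating `step` until the forces vanish settles the platforms near the equilibrium spacing `d_ref`, which is the mechanism that lets the deployment adapt online as the environment changes.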
A simulator for airborne laser swath mapping via photon counting
NASA Astrophysics Data System (ADS)
Slatton, K. C.; Carter, W. E.; Shrestha, R.
2005-06-01
Commercially marketed airborne laser swath mapping (ALSM) instruments currently use laser rangers with sufficient energy per pulse to work with return signals of thousands of photons per shot. The resulting high signal to noise level virtually eliminates spurious range values caused by noise, such as background solar radiation and sensor thermal noise. However, the high signal level approach requires laser repetition rates of hundreds of thousands of pulses per second to obtain contiguous coverage of the terrain at sub-meter spatial resolution, and with currently available technology, affords little scalability for significantly downsizing the hardware, or reducing the costs. A photon-counting ALSM sensor has been designed by the University of Florida and Sigma Space, Inc. for improved topographic mapping with lower power requirements and weight than traditional ALSM sensors. Major elements of the sensor design are presented along with preliminary simulation results. The simulator is being developed so that data phenomenology and target detection potential can be investigated before the system is completed. Early simulations suggest that precise estimates of terrain elevation and target detection will be possible with the sensor design.
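The data phenomenology such a simulator explores can be mimicked with a toy per-shot model: a signal photon arriving at the round-trip time with Poisson probability, plus uniformly timed background and dark counts inside the range gate, with the earliest event stopping the timer. All rates and gate widths below are illustrative assumptions, not the University of Florida/Sigma Space design values:

```python
import numpy as np

def simulate_ranges(true_range_m, n_shots=2000, signal_mean=0.8,
                    noise_per_gate=0.05, gate_s=5e-6, seed=0):
    """Toy photon-counting shot simulator. Per shot: noise counts are
    Poisson in number and uniform in arrival time over the range gate;
    a signal photon arrives at the round-trip time with probability
    1 - exp(-signal_mean); the first event is the recorded range."""
    rng = np.random.default_rng(seed)
    c = 299_792_458.0
    t_true = 2.0 * true_range_m / c        # ~4 microseconds at 600 m
    recorded = []
    for _ in range(n_shots):
        times = list(rng.uniform(0.0, gate_s, rng.poisson(noise_per_gate)))
        if rng.random() < 1.0 - np.exp(-signal_mean):
            times.append(t_true)
        if times:
            recorded.append(min(times) * c / 2.0)
    return np.array(recorded)

# At low background rates most recorded shots sit at the true range,
# so a simple median already rejects the sparse early noise triggers.
ranges = simulate_ranges(600.0)
```

Raising `noise_per_gate` toward daytime solar-background levels shows why photon-counting designs lean on spatial/temporal coincidence filtering rather than per-shot signal strength.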
Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał
2016-09-14
Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper.
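The connectivity-maintaining mobility model based on a virtual potential function can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithms: the force terms, gains, and ranges are all assumptions.

```python
import math

def potential_step(pos, neighbors, sink, r_comm=30.0, d0=10.0, gain=0.1):
    """One motion step for a platform carrying a sensor, driven by a virtual
    potential: attracted toward the data sink, repelled by too-close
    neighbors (to spread coverage), and pulled back toward any neighbor
    about to leave communication range (to maintain connectivity)."""
    fx = fy = 0.0
    # Unit attraction toward the sink (base station).
    dx, dy = sink[0] - pos[0], sink[1] - pos[1]
    d = math.hypot(dx, dy) or 1e-9
    fx += dx / d
    fy += dy / d
    for nx, ny in neighbors:
        dx, dy = nx - pos[0], ny - pos[1]
        d = math.hypot(dx, dy) or 1e-9
        if d < d0:                      # repulsion: spread out for coverage
            fx -= (d0 - d) * dx / d
            fy -= (d0 - d) * dy / d
        elif d > r_comm:                # attraction: keep the link alive
            fx += (d - r_comm) * dx / d
            fy += (d - r_comm) * dy / d
    return (pos[0] + gain * fx, pos[1] + gain * fy)
```

Iterating such a step for every platform drives the network to spread over the monitored area while no link stretches beyond the communication range, which is the behavior the self-configuring deployment approach relies on.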
Magnetic sensor technology for detecting mines, UXO, and other concealed security threats
NASA Astrophysics Data System (ADS)
Czipott, Peter V.; Iwanowski, Mark D.
1997-01-01
Magnetic sensors have been the sensor of choice in the detection and classification of buried mines and unexploded ordnance (UXO), both on land and underwater. Quantum Magnetics (QM), together with its research partner IBM, has developed a variety of advanced, very high sensitivity superconducting and room-temperature magnetic sensors to meet military needs. This work has led to the development and utilization of a three-sensor gradiometer (TSG) patented by IBM, which can not only detect, but also localize mines and ordnance. QM is also working with IBM and the U.S. Navy to develop an advanced superconducting gradiometer for buried underwater mine detection. Detecting and classifying buried non-metallic mines is virtually impossible with existing magnetic sensors. To solve this problem, Quantum Magnetics, building on work of the Naval Research Laboratory (NRL), is pioneering the development of quadrupole resonance (QR) methods, which can be used to detect the explosive material directly. Based on recent laboratory work done at QM and previous work done in the U.S., Russia, and the United Kingdom, we are confident that QR can be effectively applied to the non-metallic mine identification problem.
Mansano, Raul K.; Godoy, Eduardo P.; Porto, Arthur J. V.
2014-01-01
Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are the reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of the wireless transmission, since the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCS is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is to generate virtual data, allowing actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown the soft sensor is a solution to the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor potential for implementing energy efficient WNCS through the battery saving of industrial wireless sensors. PMID:25529208
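The central idea, generating virtual measurements from a process model between sparse wireless samples, can be sketched as below. The first-order plant model, the class name, and all coefficients are illustrative assumptions, not the authors' implementation.

```python
class SoftSensor:
    """Model-based soft sensor: propagates a first-order discrete plant
    model x[k+1] = a*x[k] + b*u[k] between sparse wireless measurements,
    and re-synchronizes whenever a real (energy-costly) sample arrives."""

    def __init__(self, a, b, x0=0.0):
        self.a, self.b, self.x = a, b, x0

    def update(self, u, measurement=None):
        if measurement is not None:
            # A real wireless sample arrived: reset the model state.
            self.x = measurement
        else:
            # No sample this period: emit virtual data from the model.
            self.x = self.a * self.x + self.b * u
        return self.x
```

A controller would call `update()` every control period, passing a real measurement only when the wireless sensor's slow, battery-limited sampling period elapses; in between, the control loop closes on the virtual data.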
Umchid, S.; Gopinath, R.; Srinivasan, K.; Lewin, P. A.; Daryoush, A. S.; Bansal, L.; El-Sherif, M.
2009-01-01
The primary objective of this work was to develop and optimize the calibration techniques for ultrasonic hydrophone probes used in acoustic field measurements up to 100 MHz. A dependable, 100 MHz calibration method was necessary to examine the behavior of a sub-millimeter spatial resolution fiber optic (FO) sensor and assess the need for such a sensor as an alternative tool for high frequency characterization of ultrasound fields. Also, it was of interest to investigate the feasibility of using FO probes in high intensity fields such as those employed in HIFU (High Intensity Focused Ultrasound) applications. In addition to the development and validation of a novel, 100 MHz calibration technique, the innovative elements of this research include implementation and testing of a prototype FO sensor with an active diameter of about 10 μm that exhibits uniform sensitivity over the considered frequency range and does not require any spatial averaging corrections up to about 75 MHz. The results of the calibration measurements are presented and it is shown that the optimized calibration technique allows the sensitivity of the hydrophone probes to be determined as a virtually continuous function of frequency and is also well suited to verify the uniformity of the FO sensor frequency response. As anticipated, the overall uncertainty of the calibration was dependent on frequency and determined to be about ±12% (±1 dB) up to 40 MHz, ±20% (±1.5 dB) from 40 to 60 MHz and ±25% (±2 dB) from 60 to 100 MHz. The outcome of this research indicates that once fully developed and calibrated, the combined acousto-optic system will constitute a universal reference tool in the wide, 100 MHz bandwidth. PMID:19110289
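The reported frequency-dependent uncertainty budget can be captured in a small lookup; the function name and the treatment of band edges as inclusive upper bounds are illustrative assumptions.

```python
def calibration_uncertainty(f_mhz):
    """Overall calibration uncertainty reported in the text: about
    +/-12% (+/-1 dB) up to 40 MHz, +/-20% (+/-1.5 dB) from 40 to 60 MHz,
    and +/-25% (+/-2 dB) from 60 to 100 MHz."""
    if f_mhz <= 40:
        return 0.12
    if f_mhz <= 60:
        return 0.20
    if f_mhz <= 100:
        return 0.25
    raise ValueError("outside the 100 MHz calibrated bandwidth")
```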
Prospects for infrasound bolide detections from balloon-borne platforms
NASA Astrophysics Data System (ADS)
Young, Eliot; Bowman, Daniel; Arrowsmith, Stephen; Boslough, Marc; Klein, Viliam; Ballard, Courtney; Lees, Jonathan
2017-04-01
We report on an experiment to assess whether balloon-borne instruments can improve sensitivities to bolides exploding in the Earth's atmosphere (essentially using the atmosphere as a witness plate to characterize the small end of the NEO (Near Earth Object) population). The CTBTO's infrasound network regularly detects infrasound disturbances caused by bolides, including the 15-FEB-2013 Chelyabinsk impact. Balloon-borne infrasound sensors should have two important advantages over ground-based infrasound stations: there should be virtually no wind noise on a free-floating platform, and a sensor in the stratosphere should benefit from its location within the stratospheric duct. Balloon-borne sensors also have the disadvantage that the amplitude of infrasound waves decreases with altitude. To test the performance of balloon-borne sensors, we conducted an experiment on a NASA high altitude (35 km) balloon launched from Ft Sumner, NM on 28-SEP-2016. We were able to put two independent infrasound payloads on this flight. We arranged for three 3000-lb ANFO explosions to be detonated from Socorro, NM at 12:00, 14:00 and 16:29:59 MST. The first two explosions were detected from the NASA balloon, with the first explosion showing three separate waveforms arriving within a 25-s span. The peak-to-peak amplitude of the waveforms was about 0.06 Pa, and the cleanest microphone channel detected this waveform with an SNR greater than 20. A second balloon at 15 km altitude also detected the second explosion. We have signals from a dozen ground stations at various positions from Socorro to Ft Sumner. We will report on wave propagation models and how they compare with observations from the two balloons and the various ground-stations.
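The cited disadvantage, that infrasound amplitude decreases with altitude, can be roughed out by assuming the wave's energy flux is conserved (so pressure amplitude scales with the square root of ambient density) in an isothermal exponential atmosphere. Both assumptions are crude simplifications for illustration; they are not the wave propagation models used in the study.

```python
import math

def relative_amplitude(z_km, scale_height_km=7.0):
    """Rough relative acoustic pressure amplitude at altitude z, assuming
    amplitude ~ sqrt(ambient density) and rho ~ exp(-z/H) with an assumed
    scale height H of about 7 km. Ignores ducting, winds, and absorption."""
    return math.sqrt(math.exp(-z_km / scale_height_km))
```

At the 35 km float altitude this crude estimate gives roughly 8% of the ground-level amplitude, illustrating why the near-total absence of wind noise on a free-floating platform must offset the weaker signal.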
Development of a bio-magnetic measurement system and sensor configuration analysis for rats
NASA Astrophysics Data System (ADS)
Kim, Ji-Eun; Kim, In-Seon; Kim, Kiwoong; Lim, Sanghyun; Kwon, Hyukchan; Kang, Chan Seok; Ahn, San; Yu, Kwon Kyu; Lee, Yong-Ho
2017-04-01
Magnetoencephalography (MEG) based on superconducting quantum interference devices enables the measurement of very weak magnetic fields (10-1000 fT) generated from the human or animal brain. In this article, we introduce a small MEG system that we developed specifically for use with rats. Our system has the following characteristics: (1) variable distance between the pick-up coil and outer Dewar bottom (~5 mm), (2) small pick-up coil (4 mm) for high spatial resolution, (3) good field sensitivity (45-80 fT/cm/√Hz), (4) the sensor interval satisfies the Nyquist spatial sampling theorem, and (5) small source localization error for the region to be investigated. To reduce source localization error, it is necessary to establish an optimal sensor layout. To this end, we simulated confidence volumes at each point on a grid on the surface of a virtual rat head. In this simulation, we used locally fitted spheres as model rat heads. This enabled us to consider more realistic volume currents. We constrained the model such that the dipoles could have only four possible orientations: the x- and y-axes from the original coordinates, and two tangentially layered dipoles (local x- and y-axes) in the locally fitted spheres. We considered the confidence volumes according to the sensor layout and dipole orientation and positions. We then conducted a preliminary test with a 4-channel MEG system prior to manufacturing the multi-channel system. Using the 4-channel MEG system, we measured rat magnetocardiograms. We obtained well defined P-, QRS-, and T-waves in rats with a maximum value of 15 pT/cm. Finally, we measured auditory evoked fields and steady state auditory evoked fields with maximum values of 400 fT/cm and 250 fT/cm, respectively.
Communication Architecture in Mixed-Reality Simulations of Unmanned Systems.
Selecký, Martin; Faigl, Jan; Rollo, Milan
2018-03-14
Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations where both virtual and physical entities can coexist and interact have been shown to be beneficial for development, testing, and verification of such systems. This paper deals with the problems of designing a certain communication subsystem for such highly desirable realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, or message management, are specified prior to designing an appropriate solution. Then, a suitable architecture of this communication subsystem is proposed together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which support the architecture's viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture.
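The required communication-subsystem behavior, addressing, message management, and a visibility model applied uniformly to virtual and physical entities, can be sketched with a range-based router. The class, its methods, and the purely distance-based visibility model are illustrative assumptions, not the described architecture.

```python
import math

class MixedRealityRouter:
    """Minimal sketch of the mixed-reality communication idea: every entity,
    physical or virtual alike, registers a position; the router delivers a
    message only when a range-based visibility model says the radio link
    would exist between the simulated positions."""

    def __init__(self, comm_range):
        self.comm_range = comm_range
        self.entities = {}  # id -> (position, inbox)

    def register(self, eid, position):
        self.entities[eid] = (position, [])

    def move(self, eid, position):
        self.entities[eid] = (position, self.entities[eid][1])

    def send(self, src, payload):
        src_pos, _ = self.entities[src]
        for eid, (pos, inbox) in self.entities.items():
            if eid != src and math.dist(src_pos, pos) <= self.comm_range:
                inbox.append((src, payload))

    def inbox(self, eid):
        return self.entities[eid][1]
```

Because positions can come from real vehicle trajectories, the same router doubles as a network simulator with an implicit mobility model, which is the property the paper exploits.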
Virtual Control Policy for Binary Ordered Resources Petri Net Class.
Rovetto, Carlos A; Concepción, Tomás J; Cano, Elia Esther
2016-08-18
Prevention and avoidance of deadlocks in sensor networks that use the wormhole routing algorithm is an active research domain. Diverse control policies address this problem; our approach offers a new method. In this paper we present a virtual control policy for the new specialized Petri net subclass called Binary Ordered Resources Petri Net (BORPN). Essentially, it is an ordinary class constructed from various state machines that share unitary resources in a complex form, which allows branching and joining of processes. The reduced structure of this new class gives advantages that allow analysis of the entire system's behavior, which is a prohibitive task for large systems because of their complexity and routing algorithms.
Simulation of Detecting Damage in Composite Stiffened Panel Using Lamb Waves
NASA Technical Reports Server (NTRS)
Wang, John T.; Ross, Richard W.; Huang, Guo L.; Yuan, Fuh G.
2013-01-01
Lamb wave damage detection in a composite stiffened panel is simulated by performing explicit transient dynamic finite element analyses and using signal imaging techniques. This virtual test process does not need to use real structures, actuators/sensors, or laboratory equipment. Quasi-isotropic laminates are used for the stiffened panels. Two types of damage are studied. One type is damage in the skin bay and the other type is a debond between the stiffener flange and the skin. Innovative approaches for identifying the damage location and imaging the damage were developed. The damage location is identified by finding the intersection of the damage locus and the path of the time reversal wave packet re-emitted from the sensor nodes. The damage locus is a circle that envelops the potential damage locations. Its center is at the actuator location and its radius is computed by multiplying the group velocity by the time of flight to damage. To create a damage image for estimating the size of damage, a group of nodes in the neighborhood of the damage location is identified for applying an image condition. The image condition, computed at a finite element node, is the zero-lag cross-correlation (ZLCC) of the time-reversed incident wave signal and the time reversal wave signal from the sensor nodes. This damage imaging process is computationally efficient since only the ZLCC values of a small number of nodes in the neighborhood of the identified damage location are computed instead of those of the full model.
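Two steps of the procedure lend themselves to a direct sketch: the damage-locus radius (group velocity multiplied by the time of flight, as stated above) and the zero-lag cross-correlation image condition. The discrete-signal form of the ZLCC below is an assumption of this sketch, not the paper's finite element implementation.

```python
def damage_locus_radius(group_velocity, time_of_flight):
    """Radius of the circular damage locus centered at the actuator:
    group velocity times the time of flight to the damage."""
    return group_velocity * time_of_flight

def zlcc(signal_a, signal_b):
    """Zero-lag cross-correlation image condition at a node: the inner
    product of the time-reversed incident wave signal and the time
    reversal wave signal; a large value flags the node as part of the
    damage image."""
    return sum(a * b for a, b in zip(signal_a, signal_b))
```

For example, with an assumed group velocity of 5000 m/s and a 40 μs time of flight, the locus radius is 0.2 m; the ZLCC is then evaluated only at nodes near where that locus intersects the re-emitted wave path.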
Development of design parameters for virtual cement and concrete testing.
DOT National Transportation Integrated Search
2013-12-01
The development, testing, and certification of new concrete mix designs is an expensive and time-consuming aspect of the concrete industry. A software package, named the Virtual Concrete and Cement Testing Laboratory (VCCTL), has been developed b...
Generic Helicopter-Based Testbed for Surface Terrain Imaging Sensors
NASA Technical Reports Server (NTRS)
Alexander, James; Goldberg, Hannah; Montgomery, James; Spiers, Gary; Liebe, Carl; Johnson, Andrew; Gromov, Konstantin; Konefat, Edward; Lam, Raymond; Meras, Patrick
2008-01-01
To be certain that a candidate sensor system will perform as expected during missions, we have developed a field test system and have executed test flights with a helicopter-mounted sensor platform over desert terrains, which simulate Lunar features. A key advantage to this approach is that different sensors can be tested and characterized in an environment relevant to the flight needs prior to flight. Testing the various sensors required the development of a field test system, including an instrument to validate the truth of the sensor system under test. The field test system was designed to be flexible enough to cover the test needs of many sensors (lidar, radar, cameras) that require an aerial test platform, including helicopters, airplanes, unmanned aerial vehicles (UAV), or balloons. To validate the performance of the sensor under test, the dynamics of the test platform must be known with sufficient accuracy to provide accurate models for input into algorithm development. The test system provides support equipment to measure the dynamics of the field test sensor platform, and allow computation of the truth position, velocity, attitude, and time.
Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin
NASA Astrophysics Data System (ADS)
Downey, Austin; Laflamme, Simon; Ubertini, Filippo
2016-12-01
The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, in structural health monitoring for instance, such signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm’s accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) to improve on the boundary condition assumptions. The system’s boundary conditions are enforced using unidirectional RSGs and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to sensors’ layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces.
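A minimal sketch of the decomposition idea: each SEC reading is the additive sum of the two orthogonal strain components, and unidirectional RSG readings act as boundary conditions that make the components separable. The planar (linear-in-coordinates) strain fields, the sensor layout, and the least-squares solver below are illustrative assumptions, not the authors' dense-network algorithm.

```python
def fit_strain_fields(sec_pts, sec_vals, rsg_pts, rsg_vals):
    """Least-squares fit of assumed planar strain fields
        eps_x = a0 + a1*x + a2*y,   eps_y = b0 + b1*x + b2*y
    from SEC readings of (eps_x + eps_y) plus unidirectional RSG readings
    of eps_x that break the additive ambiguity."""
    rows, rhs = [], []
    for (x, y), s in zip(sec_pts, sec_vals):   # SEC: additive measurement
        rows.append([1.0, x, y, 1.0, x, y])
        rhs.append(s)
    for (x, y), e in zip(rsg_pts, rsg_vals):   # RSG: eps_x only
        rows.append([1.0, x, y, 0.0, 0.0, 0.0])
        rhs.append(e)
    n = 6
    # Normal equations A^T A c = A^T b, solved by Gaussian elimination.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(n)]
    for col in range(n):                       # forward elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):             # back substitution
        coef[r] = (atb[r] - sum(ata[r][c] * coef[c] for c in range(r + 1, n))) / ata[r][r]
    return coef[:3], coef[3:]                  # (a0,a1,a2), (b0,b1,b2)
```

Without the RSG rows the system is rank-deficient (only the sums of coefficients are observable), which mirrors why the paper's algorithm needs enforced boundary conditions from unidirectional sensors.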
Software platform virtualization in chemistry research and university teaching.
Kind, Tobias; Leamy, Tim; Leary, Julie A; Fiehn, Oliver
2009-11-16
Modern chemistry laboratories operate with a wide range of software applications under different operating systems, such as Windows, LINUX or Mac OS X. Instead of installing software on different computers it is possible to install those applications on a single computer using Virtual Machine software. Software platform virtualization allows a single guest operating system to execute multiple other operating systems on the same computer. We apply and discuss the use of virtual machines in chemistry research and teaching laboratories. Virtual machines are commonly used for cheminformatics software development and testing. Benchmarking multiple chemistry software packages we have confirmed that the computational speed penalty for using virtual machines is low and around 5% to 10%. Software virtualization in a teaching environment allows faster deployment and easy use of commercial and open source software in hands-on computer teaching labs. Software virtualization in chemistry, mass spectrometry and cheminformatics is needed for software testing and development of software for different operating systems. In order to obtain maximum performance the virtualization software should be multi-core enabled and allow the use of multiprocessor configurations in the virtual machine environment. Server consolidation, by running multiple tasks and operating systems on a single physical machine, can lead to lower maintenance and hardware costs especially in small research labs. The use of virtual machines can prevent software virus infections and security breaches when used as a sandbox system for internet access and software testing. Complex software setups can be created with virtual machines and are easily deployed later to multiple computers for hands-on teaching classes. We discuss the popularity of bioinformatics compared to cheminformatics as well as the missing cheminformatics education at universities worldwide.
Multi-Sensor Testing for Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Lab
NASA Technical Reports Server (NTRS)
Brewster, Linda L.; Howard, Richard T.; Johnston, A. S.; Carrington, Connie; Mitchell, Jennifer D.; Cryan, Scott P.
2008-01-01
The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders.
The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL) using the FRL's 6-DOF gantry system, called the Dynamic Overhead Target System (DOTS). The target vehicle for "docking" in the laboratory was a mockup that was representative of the proposed CEV docking system, with added retroreflectors for the AVGS. The multi-sensor test configuration used 35 open-loop test trajectories covering three major objectives: (1) sensor characterization trajectories designed to test a wide range of performance parameters; (2) CEV-specific trajectories designed to test performance during CEV-like approach and departure profiles; and (3) sensor characterization tests designed for evaluating sensor performance under more extreme conditions as might be induced during a spacecraft failure or during contingency situations. This paper describes the test development, test facility, test preparations, test execution, and test results of the multi-sensor series of trajectories.
NASA Technical Reports Server (NTRS)
Poppel, G. L.; Marple, D. T. F.; Kingsley, J. D.
1981-01-01
Analyses and the design, fabrication, and testing of an optical tip clearance sensor with intended application in aircraft propulsion control systems are reported. The design of a sensor test rig, evaluation of optical sensor components at elevated temperatures, sensor design principles, sensor test results at room temperature, and estimations of sensor accuracy at temperatures of an aircraft engine environment are discussed. Room temperature testing indicated possible measurement accuracies of less than 12.7 microns (0.5 mils). Ways to improve performance at engine operating temperatures are recommended. The potential of this tip clearance sensor is assessed.
An Improved Method of Pose Estimation for Lighthouse Base Station Extension.
Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang
2017-10-22
In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, the cutting-edge space positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are flawed concerning occlusion of moving targets; that is, the system is unable to calculate their poses with a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object with a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm can achieve precise positioning when only a few sensors detect the signal.
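As a planar stand-in for the unified-dataset idea, pooling correspondences from all base stations and solving a single pose, the closed-form 2D rigid alignment below recovers rotation and translation from any three or more sensor correspondences. The 2D reduction and all names are illustrative assumptions; Lighthouse itself solves the full 6-DOF problem from optical angle measurements.

```python
import math

def pose_2d(body_pts, world_pts):
    """Closed-form 2D rigid pose (rotation angle + translation) from pooled
    point correspondences: body-frame sensor positions vs. their observed
    world positions, regardless of which base station saw each sensor."""
    n = len(body_pts)
    bcx = sum(p[0] for p in body_pts) / n
    bcy = sum(p[1] for p in body_pts) / n
    wcx = sum(q[0] for q in world_pts) / n
    wcy = sum(q[1] for q in world_pts) / n
    s_cos = s_sin = 0.0
    for (bx, by), (wx, wy) in zip(body_pts, world_pts):
        bx, by, wx, wy = bx - bcx, by - bcy, wx - wcx, wy - wcy
        s_cos += bx * wx + by * wy        # aligned component
        s_sin += bx * wy - by * wx        # perpendicular component
    theta = math.atan2(s_sin, s_cos)
    tx = wcx - (bcx * math.cos(theta) - bcy * math.sin(theta))
    ty = wcy - (bcx * math.sin(theta) + bcy * math.cos(theta))
    return theta, (tx, ty)
```

The key property mirrored here is that the solver does not care which "base station" contributed each correspondence: three sensors seen by different stations constrain the pose just as well as three seen by one.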
NASA Astrophysics Data System (ADS)
Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.
2014-08-01
The paper addresses a promising visualization concept related to combination of sensor and synthetic images in order to enhance situation awareness of a pilot during an aircraft landing. A real-time algorithm for a fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and the synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, whose idea is to project the edge map onto a horizontal plane in the object space (the runway plane) and then to calculate intensity projections of edge pixels on different directions of intensity gradient. The performed experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.
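The first step of the Projective Hough Transform, projecting the edge map onto the runway plane in object space, amounts to a ray/ground-plane intersection per edge pixel. The sketch below assumes a pinhole camera, flat ground, and zero roll; these simplifications (and the function name) are not from the paper.

```python
import math

def pixel_to_runway(u, v, f, h, pitch):
    """Back-project an edge pixel onto the horizontal runway plane.
    u, v: pixel offsets from the principal point (v positive downward),
    f: focal length in pixels, h: camera height above the runway,
    pitch: camera tilt below the horizon, in radians."""
    # Depth along the ray at which it reaches the ground plane.
    denom = v * math.cos(pitch) + f * math.sin(pitch)
    if denom <= 0:
        raise ValueError("pixel ray does not intersect the ground plane")
    t = h / denom
    x = t * u                                            # lateral offset
    y = t * (f * math.cos(pitch) - v * math.sin(pitch))  # forward distance
    return x, y
```

Applying this mapping to every edge pixel yields the ground-plane edge map on which the directional intensity projections are then accumulated.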
Liu, Lu; Masfary, Osama; Antonopoulos, Nick
2012-01-01
The increasing trends of electrical consumption within data centres are a growing concern for business owners as they are quickly becoming a large fraction of the total cost of ownership. Ultra small sensors could be deployed within a data centre to monitor environmental factors to lower the electrical costs and improve the energy efficiency. Since servers and air conditioners represent the top users of electrical power in the data centre, this research sets out to explore methods from each subsystem of the data centre as part of an overall energy efficient solution. In this paper, we investigate the current trends of Green IT awareness and how the deployment of small environmental sensors and Site Infrastructure equipment optimization techniques can offer a solution to a global issue by reducing carbon emissions. PMID:22778660
Al-Dahir, Sara; Bryant, Kendrea; Kennedy, Kathleen B; Robinson, Donna S
2014-05-15
To evaluate the efficacy of faculty-led problem-based learning (PBL) vs online simulated-patient case in fourth-year (P4) pharmacy students. Fourth-year pharmacy students were randomly assigned to participate in either online branched-case learning using a virtual simulation platform or a small-group discussion. Preexperience and postexperience student assessments and a survey instrument were completed. While there were no significant differences in the preexperience test scores between the groups, there was a significant increase in scores in both the virtual-patient group and the PBL group between the preexperience and postexperience tests. The PBL group had higher postexperience test scores (74.8±11.7) than did the virtual-patient group (66.5±13.6) (p=0.001). The PBL method demonstrated significantly greater improvement in postexperience test scores than did the virtual-patient method. Both were successful learning methods, suggesting that a diverse approach to simulated patient cases may reach more student learning styles.
NASA Astrophysics Data System (ADS)
Lukes, George E.; Cain, Joel M.
1996-02-01
The Advanced Distributed Simulation (ADS) Synthetic Environments Program seeks to create robust virtual worlds from operational terrain and environmental data sources of sufficient fidelity and currency to interact with the real world. While some applications can be met by direct exploitation of standard digital terrain data, more demanding applications -- particularly those supporting operations 'close to the ground' -- are well served by emerging capabilities for 'value-adding' by the user working with controlled imagery. For users to rigorously refine and exploit controlled imagery within functionally different workstations, they must have a shared framework that allows interoperability within and between these environments in terms of passing image and object coordinates and other information using a variety of validated sensor models. The Synthetic Environments Program is now being expanded to address rapid construction of virtual worlds with research initiatives in digital mapping, softcopy workstations, and cartographic image understanding. The Synthetic Environments Program is also participating in a joint initiative for a sensor model applications programmer's interface (API) to ensure that a common controlled-imagery exploitation framework is available to all researchers, developers and users. This presentation provides an introduction to ADS and the associated requirements for synthetic environments to support synthetic theaters of war. It provides a technical rationale for exploring applications of image understanding technology to automated cartography in support of ADS and related programs benefitting from automated analysis of mapping, earth resources and reconnaissance imagery. And it provides an overview and status of the joint initiative for a sensor model API.
Simulation of Attacks for Security in Wireless Sensor Network.
Diaz, Alvaro; Sanchez, Pablo
2016-11-18
The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.
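The kind of attack-impact estimate such a simulator produces can be illustrated with a toy per-node energy model. All parameters and the attack pattern below are hypothetical, chosen only to show the shape of the analysis; the paper's virtual platform models real node hardware and embedded software rather than a closed-form formula.

```python
# Toy per-node energy model: idle power plus per-packet radio costs (mW * s = mJ).
# Every numeric parameter here is an illustrative assumption, not from the paper.

def node_energy_mj(duration_s, tx_pkts, rx_pkts,
                   idle_mw=0.06, tx_cost_mj=0.3, rx_cost_mj=0.25):
    return idle_mw * duration_s + tx_pkts * tx_cost_mj + rx_pkts * rx_cost_mj

# One hour of normal duty: one report every 10 s, one command received per report
baseline = node_energy_mj(3600, tx_pkts=360, rx_pkts=360)

# An attacker that corrupts frames so each report needs 3 transmissions on average
attacked = node_energy_mj(3600, tx_pkts=3 * 360, rx_pkts=360)

overhead = attacked / baseline - 1.0         # fractional extra energy drawn
```

Comparing the two runs quantifies how much battery life the attack costs, which is exactly the kind of output the simulator feeds back to the developer.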
NASA Astrophysics Data System (ADS)
Hristoforou, E.; Vlachos, D. S.; Giouroudi, I.; Kar-Narayan, S.; Potirakis, S.
2016-03-01
The 5th International Conference on Materials and Applications for Sensors and Transducers (IC-MAST), held on the island of Mykonos, Greece, hosted about 110 oral and poster papers and more than 90 participants. IC-MAST, as an annual international conference that tries to meet the needs for various types of sensors, particularly those that may be manufactured by low-cost methods (i.e., hybrid sensors, smart specialization devices, and particular applications not necessarily requiring integrated micro-nano technologies), covering all types of materials and physical effects, appears to be a necessity. IC-MAST has been established as a high-quality international conference by: I. Gathering together researchers from all over the world, working on different materials for sensors and transducers and on technical applications of sensors, and in some cases on the management of the data coming from sensors and transducers. The careful selection of the conference venue (the Aegean Sea, Budapest, Prague, Bilbao, Mykonos, etc.) allows participants to enjoy the local hospitality and sightseeing. II. Emphasizing hybrid sensors and smart specialization devices produced by inexpensive methods, without of course excluding micro-nano technology, from all kinds of solid-state, liquid and gaseous materials, as well as particular transducer applications (design and development, as well as use of sensing data). III. Innovatively implementing the Virtual Paper Concept, allowing for a large impact of research works presented at the conference by authors who have either no time or no funding support for visiting a conference; this year more than 12 virtual papers are presented at the 5th IC-MAST, following a standardized procedure via our conference site (www.icmast.net). IV.
Allowing for lengthy technical and managerial discussions of sensor, material and instrumentation development; furthermore, the different research groups gathered together are offered the particular advantage of arranging and concluding research proposals and projects that would otherwise have little visible possibility of realization. The 5th IC-MAST organizing committee is proud that the Conference Keynote Speaker was Prof. George Hadjipanayis, University of Delaware. We are also proud of the invited speakers of the conference: • Stergios Logothetidis, Aristotle University of Thessaloniki, Greece • Dimitris Tsoukalas, National Technical University of Athens, Greece • Susana Cardoso de Freitas, INESC Microsistemas e Nanotecnologias • Yuris Dzenis, University of Nebraska-Lincoln, USA. The IC-MAST 2015 organizers believe that the target of the conference has been successfully met by enhancing the knowledge of all participants in sensors, accelerating the achievement of results and optimizing the products under design, in a quite friendly way! Therefore, participants made an appointment for next year in Athens, Greece, where the 6th International MAST Conference will be held!
Petkewich, Matthew D.; Daamen, Ruby C.; Roehl, Edwin A.; Conrads, Paul
2016-09-29
The Everglades Depth Estimation Network (EDEN), with over 240 real-time gaging stations, provides hydrologic data for freshwater and tidal areas of the Everglades. These data are used to generate daily water-level and water-depth maps of the Everglades that are used to assess biotic responses to hydrologic change resulting from the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. The generation of EDEN daily water-level and water-depth maps is dependent on high-quality real-time data from water-level stations. Real-time data are automatically checked for outliers by assigning minimum and maximum thresholds for each station. Small errors in the real-time data, such as gradual drift of malfunctioning pressure transducers, are more difficult to immediately identify with visual inspection of time-series plots and may only be identified during on-site inspections of the stations. Correcting these small errors in the data often is time consuming, and water-level data may not be finalized for several months. To provide daily water-level and water-depth maps on a near real-time basis, EDEN needed an automated process to identify errors in water-level data and to provide estimates for missing or erroneous water-level data. The Automated Data Assurance and Management (ADAM) software uses inferential sensor technology often used in industrial applications. Rather than installing a redundant sensor to measure a process, such as an additional water-level station, inferential sensors, or virtual sensors, were developed for each station that make accurate estimates of the process measured by the hard sensor (the water-level gaging station). The inferential sensors in the ADAM software are empirical models that use inputs from one or more proximal stations. The advantage of ADAM is that it provides a redundant signal to the sensor in the field without the environmental threats associated with field conditions at stations (flood or hurricane, for example).
In the event that a station does malfunction, ADAM provides an accurate estimate for the period of missing data. The ADAM software also is used in the quality assurance and quality control of the data. The virtual signals are compared to the real-time data, and if the difference between the two signals exceeds a certain tolerance, corrective action to the data and (or) the gaging station can be taken. The ADAM software is automated so that, each morning, the real-time EDEN data are compared to the inferential sensor signals and digital reports highlighting potential erroneous real-time data are generated for appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
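The core of an inferential (virtual) sensor of the kind ADAM uses can be sketched as a regression from proximal stations plus a tolerance check. The data, model form, and tolerance below are illustrative; the actual ADAM models are empirical models calibrated on historical EDEN records.

```python
import numpy as np

# Synthetic "historical" record: the target station is, by construction, a
# linear combination of two proximal stations, standing in for real EDEN data.
rng = np.random.default_rng(0)
a = rng.normal(2.0, 0.3, 200)                # proximal station A water levels (m)
b = rng.normal(1.5, 0.3, 200)                # proximal station B water levels (m)
target = 0.6 * a + 0.4 * b + 0.05            # station of interest

# Fit the virtual-sensor model by least squares on the historical data
X = np.column_stack([a, b, np.ones_like(a)])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

def virtual_level(a_now, b_now):
    """Redundant 'virtual signal' for the target station."""
    return coef @ np.array([a_now, b_now, 1.0])

def needs_review(measured, a_now, b_now, tol=0.10):
    """Flag a field reading that disagrees with the virtual sensor by > tol."""
    return abs(measured - virtual_level(a_now, b_now)) > tol

good = needs_review(0.6 * 2.1 + 0.4 * 1.4 + 0.05, 2.1, 1.4)            # consistent reading
drifted = needs_review(0.6 * 2.1 + 0.4 * 1.4 + 0.05 + 0.5, 2.1, 1.4)   # transducer drift
```

When a reading is flagged, the virtual signal also supplies the estimate used to fill the gap until the field station is repaired.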
The effects of virtual experience on attitudes toward real brands.
Dobrowolski, Pawel; Pochwatko, Grzegorz; Skorko, Maciek; Bielecki, Maksymilian
2014-02-01
Although the commercial availability and implementation of virtual reality interfaces has seen rapid growth in recent years, little research has been conducted on the potential for virtual reality to affect consumer behavior. One unaddressed issue is how our real world attitudes are affected when we have a virtual experience with the target of those attitudes. This study compared participant (N=60) attitudes toward car brands before and after a virtual test drive of those cars was provided. Results indicated that attitudes toward test brands changed after experience with virtual representations of those brands. Furthermore, manipulation of the quality of this experience (in this case modification of driving difficulty) was reflected in the direction of attitude change. We discuss these results in the context of the associative-propositional evaluation model.
NASA Technical Reports Server (NTRS)
Brewster, L.; Johnston, A.; Howard, R.; Mitchell, J.; Cryan, S.
2007-01-01
The Exploration Systems Architecture defines missions that require rendezvous, proximity operations, and docking (RPOD) of two spacecraft both in Low Earth Orbit (LEO) and in Low Lunar Orbit (LLO). Uncrewed spacecraft must perform automated and/or autonomous rendezvous, proximity operations and docking operations (commonly known as AR&D). The crewed missions may also perform rendezvous and docking operations and may require different levels of automation and/or autonomy, and must provide the crew with relative navigation information for manual piloting. The capabilities of the RPOD sensors are critical to the success of the Exploration Program. NASA has the responsibility to determine whether the Crew Exploration Vehicle (CEV) contractor-proposed relative navigation sensor suite will meet the requirements. The relatively low technology readiness level of AR&D relative navigation sensors has been carried as one of the CEV Project's top risks. The AR&D Sensor Technology Project seeks to reduce the risk by the testing and analysis of selected relative navigation sensor technologies through hardware-in-the-loop testing and simulation. These activities will provide the CEV Project information to assess the relative navigation sensors' maturity as well as demonstrate test methods and capabilities. The first year of this project focused on a series of "pathfinder" testing tasks to develop the test plans, test facility requirements, trajectories, math model architecture, simulation platform, and processes that will be used to evaluate the Contractor-proposed sensors. Four candidate sensors were used in the first phase of the testing. The second phase of testing used four sensors simultaneously: two Marshall Space Flight Center (MSFC) Advanced Video Guidance Sensors (AVGS), a laser-based video sensor that uses retroreflectors attached to the target vehicle, and two commercial laser range finders.
The multi-sensor testing was conducted at MSFC's Flight Robotics Laboratory (FRL) using the FRL's 6-DOF gantry system, called the Dynamic Overhead Target System (DOTS). The target vehicle for "docking" in the laboratory was a mockup that was representative of the proposed CEV docking system, with added retroreflectors for the AVGS. The multi-sensor test configuration used 35 open-loop test trajectories covering three major objectives: (1) sensor characterization trajectories designed to test a wide range of performance parameters; (2) CEV-specific trajectories designed to test performance during CEV-like approach and departure profiles; and (3) sensor characterization tests designed for evaluating sensor performance under more extreme conditions as might be induced during a spacecraft failure or during contingency situations. This paper describes the test development, test facility, test preparations, test execution, and test results of the multi-sensor series of trajectories.
Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test
NASA Technical Reports Server (NTRS)
Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara
2016-01-01
The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. Using a modified version of Wald's sequential probability ratio test, we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98 percent relative to traditional virtual bumper safeguarding without speed control.
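A minimal sketch of the underlying sequential test, assuming each range-sensor return is a Bernoulli "hit" whose probability differs between the obstacle and clear hypotheses. All parameters are illustrative, and this is the classic Wald test rather than the authors' modified variant.

```python
import math

def sprt(observations, p0=0.1, p1=0.6, alpha=0.01, beta=0.01):
    """Wald's SPRT for H1 (obstacle, hit prob p1) vs H0 (clear, hit prob p0)."""
    upper = math.log((1 - beta) / alpha)      # cross this -> accept H1 (obstacle)
    lower = math.log(beta / (1 - alpha))      # cross this -> accept H0 (clear)
    llr = 0.0                                 # running log-likelihood ratio
    for hit in observations:
        if hit:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "obstacle"
        if llr <= lower:
            return "clear"
    return "undecided"                        # slow down and keep sampling

result_hits = sprt([1, 1, 1, 1])
result_miss = sprt([0, 0, 0, 0, 0, 0])
```

The "undecided" branch is where speed control enters: the vehicle slows to collect more returns until the evidence crosses one of the thresholds.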
A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods
2014-08-01
Approved for public release; distribution is unlimited. This experiment tests whether a virtual... A Thesis by BENJAMIN PETERS
Sakai, Hiromi; Nagano, Akinori; Seki, Keiko; Okahashi, Sayaka; Kojima, Maki; Luo, Zhiwei
2018-07-01
We developed a virtual reality test, namely a virtual shopping test (VST), to assess the cognitive function of Japanese people in a near-daily-life environment. In this test, participants were asked to execute shopping tasks using touch-panel operations in a "virtual shopping mall." We examined differences in VST performance among healthy participants of different ages and correlations between the VST and screening tests such as the Mini-Mental State Examination (MMSE) and Everyday Memory Checklist (EMC). We included 285 healthy participants between 20 and 86 years of age in seven age groups. Each VST index tended to decrease with advancing age, and differences among age groups were significant. Most VST indices had a significantly negative correlation with the MMSE and a significantly positive correlation with the EMC. The VST may be useful for assessing general cognitive decline; effects of age must be considered for proper interpretation of VST scores.
X-Ray Calibration Facility/Advanced Video Guidance Sensor Test
NASA Technical Reports Server (NTRS)
Johnston, N. A. S.; Howard, R. T.; Watson, D. W.
2004-01-01
The advanced video guidance sensor was tested in the X-Ray Calibration Facility at Marshall Space Flight Center to establish its performance in vacuum. Two sensors were tested, and a timeline for each is presented. The sensor and test facility are discussed briefly. A new test stand was also developed. A table establishing sensor bias and spot-size growth for several ranges is detailed, along with testing anomalies.
Rosenthal, Rachel; Hamel, Christian; Oertli, Daniel; Demartines, Nicolas; Gantert, Walter A
2010-08-01
The aim of the present study was to investigate whether trainees' performance on a virtual reality angled laparoscope navigation task correlates with scores obtained on a validated conventional test of spatial ability. 56 participants of a surgery workshop performed an angled laparoscope navigation task on the Xitact LS 500 virtual reality simulator. Performance parameters were correlated with the score of a validated paper-and-pencil test of spatial ability. Performance on the conventional spatial ability test significantly correlated with performance on the virtual reality task for overall task score (p < 0.001), task completion time (p < 0.001), and economy of movement (p = 0.035), but not for endoscope travel speed (p = 0.947). In conclusion, trainees' performance in a standardized virtual reality camera navigation task correlates with their innate spatial ability. This VR session holds potential to serve as an assessment tool for trainees.
Visual field examination method using virtual reality glasses compared with the Humphrey perimeter.
Tsapakis, Stylianos; Papaconstantinou, Dimitrios; Diagourtas, Andreas; Droutsas, Konstantinos; Andreanos, Konstantinos; Moschos, Marilita M; Brouzas, Dimitrios
2017-01-01
To present a visual field examination method using virtual reality glasses and evaluate its reliability by comparing the results with those of the Humphrey perimeter. Virtual reality glasses, a smartphone with a 6 inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in a random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. A high correlation coefficient (r=0.808, P<0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Visual field examination results using virtual reality glasses correlate highly with those of the Humphrey perimeter, suggesting the method may be suitable for clinical use.
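A fast-threshold staircase of the kind such software implements can be sketched as follows. The start level, step size, stopping rule, and dB convention here are simplifying assumptions for illustration, not the authors' exact algorithm.

```python
# Descending staircase: dim the stimulus in fixed dB steps while it is still
# seen; the first miss ends the run and the last-seen level is the estimate.

def staircase_threshold(sees, start=30, step=3, floor=0):
    """sees(level_db) -> True if the patient reports seeing the stimulus."""
    level, last_seen = start, None
    while level >= floor:
        if sees(level):
            last_seen = level
            level -= step            # seen: present a dimmer stimulus
        else:
            break                    # first miss: stop the staircase
    return last_seen

# Simulated patient who perceives any stimulus at or above 17 dB
estimate = staircase_threshold(lambda db: db >= 17)
```

Running this rule at each of the 52 test points yields the per-point sensitivity map that is then compared against the Humphrey result.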
Results of a massive experiment on virtual currency endowments and money demand.
Živić, Nenad; Andjelković, Igor; Özden, Tolga; Dekić, Milovan; Castronova, Edward
2017-01-01
We use a 575,000-subject, 28-day experiment to investigate monetary policy in a virtual setting. The experiment tests the effect of virtual currency endowments on player retention and virtual currency demand. An increase in endowments of a virtual currency should lower the demand for the currency in the short run. However, in the long run, we would expect money demand to rise in response to inflation in the virtual world. We test for this behavior in a virtual field experiment in the football management game Top11. 575,000 players were selected at random and allocated to different "shards" or versions of the world. The shards differed only in terms of the initial money endowment offered to new players. Money demand was observed for 28 days as players used real money to purchase additional virtual currency. The results indicate that player money purchases were significantly higher in the shards where higher endowments were given. This suggests that a positive change in the money supply in a virtual context leads to inflation and increased money demand, and does so much more quickly than in real-world economies. Differences between virtual and real currency behavior will become more interesting as virtual currency becomes a bigger part of the real economy.
Return to Flight: Crew Activities Resource Reel 1 of 2
NASA Technical Reports Server (NTRS)
2005-01-01
The crew of the STS-114 Discovery Mission is seen in various aspects of training for space flight. The crew activities include: 1) STS-114 Return to Flight Crew Photo Session; 2) Tile Repair Training on Precision Air Bearing Floor; 3) SAFER Tile Inspection Training in Virtual Reality Laboratory; 4) Guidance and Navigation Simulator Tile Survey Training; 5) Crew Inspects Orbital Boom and Sensor System (OBSS); 6) Bailout Training-Crew Compartment; 7) Emergency Egress Training-Crew Compartment Trainer (CCT); 8) Water Survival Training-Neutral Buoyancy Lab (NBL); 9) Ascent Training-Shuttle Motion Simulator; 10) External Tank Photo Training-Full Fuselage Trainer; 11) Rendezvous and Docking Training-Shuttle Engineering Simulator (SES) Dome; 12) Shuttle Robot Arm Training-SES Dome; 13) EVA Training-Virtual Reality Lab; 14) EVA Training-Neutral Buoyancy Lab; 15) EVA-2 Training-NBL; 16) EVA Tool Training-Partial Gravity Simulator; 17) Cure in Place Ablator Applicator (CIPAA) Training-Glove Vacuum Chamber; 18) Crew Visit to Merritt Island Launch Area (MILA); 19) Crew Inspection-Space Shuttle Discovery; and 20) Crew Inspection-External Tank and Orbital Boom and Sensor System (OBSS). The crew are then seen answering questions from the media at the Space Shuttle Landing Facility.
Performance Of The IEEE 802.15.4 Protocol As The Marker Of Augmented Reality In Museum
NASA Astrophysics Data System (ADS)
Kurniawan Saputro, Adi; Sumpeno, Surya; Hariadi, Mochamad
2018-04-01
A museum is a place to preserve historic objects and a center of historical education that introduces the nation's culture. Utilizing technology to make a museum part of a smart city is a challenge. The Internet of Things (IoT) is an advance in information and communication technology (ICT) that can be applied in the museum. Current ICT development is not only a transmission medium; Augmented Reality technology is also being developed. Augmented Reality technology currently places virtual objects into the real world using markers or images. In this study, we used radio signals to make virtual objects appear in the real world, using the IEEE 802.15.4 protocol in place of the Augmented Reality marker. RSSI and triangulation are used as a microlocation substitute for placing AR objects. The results show that the performance of a Wireless Sensor Network is adequate for data transmission in the museum: line-of-sight (LOS) tests at a distance of 15 meters with 1000 ms delay found a 1.4% error rate, and non-line-of-sight (NLOS) tests a 2.3% error rate. We conclude that IoT technology using wireless sensor network signals as a replacement for Augmented Reality markers can be used in museums.
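The RSSI-to-range and triangulation steps can be sketched with a log-distance path-loss model followed by a linearized three-anchor trilateration. The transmit power and path-loss exponent below are illustrative assumptions, not values from the study.

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.0):
    """Log-distance path-loss model: RSSI reading -> estimated range (m)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

def trilaterate(anchors, distances):
    """Position from three anchor ranges via the standard linearized system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    A = [[2 * (x2 - x1), 2 * (y2 - y1)],
         [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
         r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x = (b[0] * A[1][1] - b[1] * A[0][1]) / det
    y = (A[0][0] * b[1] - A[1][0] * b[0]) / det
    return x, y

# Hypothetical museum room: three anchors, visitor actually at (3, 4)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
est = trilaterate(anchors, dists)
d = rssi_to_distance(-60.0)                  # 10 m at these assumed parameters
```

In practice RSSI is noisy, so the estimated ranges (and hence the AR object position) carry the error rates the paper measures.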
Virtual Reality for Pediatric Sedation: A Randomized Controlled Trial Using Simulation.
Zaveri, Pavan P; Davis, Aisha B; O'Connell, Karen J; Willner, Emily; Aronson Schinasi, Dana A; Ottolini, Mary
2016-02-09
Team training for procedural sedation for pediatric residents has traditionally consisted of didactic presentations and simulated scenarios using high-fidelity mannequins. We assessed the effectiveness of a virtual reality module in teaching preparation for and management of sedation for procedures. After developing a virtual reality environment in Second Life® (Linden Lab, San Francisco, CA) where providers perform and recover patients from procedural sedation, we conducted a randomized controlled trial to assess the effectiveness of the virtual reality module versus a traditional web-based educational module. A 20 question pre- and post-test was administered to assess knowledge change. All subjects participated in a simulated pediatric procedural sedation scenario that was video recorded for review and assessed using a 32-point checklist. A brief survey elicited feedback on the virtual reality module and the simulation scenario. The median score on the assessment checklist was 75% for the intervention group and 70% for the control group (P = 0.32). For the knowledge tests, there was no statistically significant difference between the groups (P = 0.14). Users had excellent reviews of the virtual reality module and reported that the module added to their education. Pediatric residents performed similarly in simulation and on a knowledge test after a virtual reality module compared with a traditional web-based module on procedural sedation. Although users enjoyed the virtual reality experience, these results question the value virtual reality adds in improving the performance of trainees. Further inquiry is needed into how virtual reality provides true value in simulation-based education.
Modeling Environmental Impacts on Cognitive Performance for Artificially Intelligent Entities
2017-06-01
of the agent behavior model is presented in a military-relevant virtual game environment. We then outline a quantitative approach to test the agent behavior model within the virtual environment. Results show... [Figure: Game View of Hot Environment Condition Displaying Total "f" Cost for Each Searched Waypoint Node]
ERIC Educational Resources Information Center
Reid, Denise
2005-01-01
The Pediatric Volitional Questionnaire (PVQ) was used along with the Test of Playfulness (TOP) to assess 16 children with cerebral palsy who took part in a study of virtual reality play intervention. Both observational measures are designed to assess children as they are engaged in occupations in one or more environments. Virtual reality offers an…
A TEOM (tm) particulate monitor for comet dust, near Earth space, and planetary atmospheres
NASA Technical Reports Server (NTRS)
1988-01-01
Scientific missions to comets, near earth space, and planetary atmospheres require particulate and mass accumulation instrumentation for both scientific and navigation purposes. The Rupprecht & Patashnick tapered element oscillating microbalance can accurately measure both mass flux and mass distribution of particulates over a wide range of particle sizes and loadings. Individual particles of milligram size down to a few picograms can be resolved and counted, and the accumulation of smaller particles or molecular deposition can be accurately measured using the sensors perfected and toughened under this contract. No other sensor has the dynamic range or sensitivity attained by these picogram direct mass measurement sensors. The purpose of this contract was to develop and implement reliable and repeatable manufacturing methods; build and test prototype sensors; and outline a quality control program. A dust 'thrower' was to be designed and built, and used to verify performance. Characterization and improvement of the optical motion detection system and drive feedback circuitry was to be undertaken, with emphasis on reliability, low noise, and low power consumption. All the goals of the contract were met or exceeded. An automated glass puller was built and used to make repeatable tapered elements. Materials and assembly methods were standardized, and controllers and calibrated fixtures were developed and used in all phases of preparing, coating and assembling the sensors. Quality control and reliability resulted from the use of calibrated manufacturing equipment with measurable working parameters. Thermal and vibration testing of completed prototypes showed low temperature sensitivity and high vibration tolerance. An electrostatic dust thrower was used in vacuum to throw particles from 2 x 10(exp -6) g to 7 x 10(exp -12) g in size. Using long averaging times, particles as small as 0.7 to 4 x 10(exp -11) g were weighed to resolutions in the 5 to 9 x 10(exp -13) g range.
The drive circuit and optics systems were developed beyond what was anticipated in the contract, and are now virtually flight prototypes. There is already commercial interest in the developed capability of measuring picogram mass losses and gains. One area is contamination and outgassing research, both measuring picogram losses from samples and collecting products of outgassing.
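The direct mass measurement described above rests on the standard tapered-element relation: added mass lowers the element's resonant frequency, and the mass change follows from the frequency shift and a calibration constant. A minimal sketch of that relation (standard TEOM theory, not taken from this report; the constant `K0` and the frequencies below are hypothetical illustration values):

```python
import math

def teom_mass_change(f0_hz, f1_hz, k0):
    """Mass accumulated on a tapered-element oscillating microbalance.

    The element behaves as a simple harmonic oscillator, so deposited mass
    lowers the resonant frequency:  dm = K0 * (1/f1^2 - 1/f0^2),
    where K0 is a calibration (spring) constant in g*Hz^2.
    """
    return k0 * (1.0 / f1_hz**2 - 1.0 / f0_hz**2)

# Hypothetical calibration constant and a small frequency drop after loading.
k0 = 10_000.0            # g*Hz^2 (illustrative only)
f0, f1 = 250.0, 249.999  # Hz, before and after deposition
dm = teom_mass_change(f0, f1, k0)   # on the order of a microgram here
```

A drop of one millihertz at this hypothetical calibration resolves roughly a microgram; the picogram sensitivity reported above corresponds to proportionally smaller frequency shifts resolved by long averaging.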
Affordable and personalized lighting using inverse modeling and virtual sensors
NASA Astrophysics Data System (ADS)
Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney
2014-03-01
Wireless sensor networks (WSN) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are increasingly being integrated into state-of-the-art intelligent lighting systems. In the future these systems will enable participation of lighting loads as ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires ~60% fewer sensor deployments compared to current commercial systems. The reduction in sensor deployments has been achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piece-wise linear regression of indoor light distribution. This deterministic data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field desired for indoor lighting control. Using two weeks of daylight and artificial light training data acquired at the Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quads (quadrillion BTU) of energy nationwide.
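The piece-wise linear regression above can be sketched as fitting one linear model per regime (for example, binned by sky condition or solar position) and predicting a workstation's light level from a few remaining reference sensors. This is a simplified illustration of the idea, not the paper's implementation; the regimes, sensor counts, and synthetic data are assumptions:

```python
import numpy as np

def fit_piecewise(X, y, regime):
    """Least-squares fit of y ~ X (with intercept) within each regime label."""
    models = {}
    for r in np.unique(regime):
        m = regime == r
        A = np.column_stack([X[m], np.ones(m.sum())])   # design matrix + intercept
        coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
        models[r] = coef
    return models

def predict_piecewise(models, X, regime):
    """Predict with the linear model belonging to each sample's regime."""
    A = np.column_stack([X, np.ones(len(X))])
    return np.array([A[i] @ models[r] for i, r in enumerate(regime)])

# Synthetic demo: two regimes with different linear light responses.
rng = np.random.default_rng(0)
X = rng.uniform(0, 500, size=(200, 2))              # two reference photo-sensors (lux)
regime = (rng.uniform(size=200) > 0.5).astype(int)  # 0 = overcast, 1 = clear (assumed labels)
y = np.where(regime == 0, 0.4 * X[:, 0] + 50, 0.9 * X[:, 1] + 120)

models = fit_piecewise(X, y, regime)
pred = predict_piecewise(models, X, regime)
```

Because each regime here is exactly linear, the fit recovers the light field perfectly; real photo-sensor data would carry residual error, which is what the 80%-95% accuracy figure reflects.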
High sensitivity to multisensory conflicts in agoraphobia exhibited by virtual reality.
Viaud-Delmon, Isabelle; Warusfel, Olivier; Seguelas, Angeline; Rio, Emmanuel; Jouvent, Roland
2006-10-01
The primary aim of this study was to evaluate the effect of auditory feedback in a VR system planned for clinical use and to address the different factors that should be taken into account in building a bimodal virtual environment (VE). We conducted an experiment in which we assessed spatial performance in agoraphobic patients and normal subjects, comparing two kinds of VEs, visual alone (Vis) and auditory-visual (AVis), during separate sessions. Subjects were equipped with a head-mounted display coupled with an electromagnetic sensor system and immersed in a virtual town. Their task was to locate different landmarks and become familiar with the town. In the AVis condition subjects were equipped with the head-mounted display and headphones, which delivered a soundscape updated in real time according to their movement in the virtual town. While general performance remained comparable across the conditions, the reported feeling of immersion was more compelling in the AVis environment. However, patients exhibited more cybersickness symptoms in this condition. The results of this study point to the multisensory integration deficit of agoraphobic patients and underline the need for further research on multimodal VR systems for clinical use.
The mixed reality of things: emerging challenges for human-information interaction
NASA Astrophysics Data System (ADS)
Spicer, Ryan P.; Russell, Stephen M.; Rosenberg, Evan Suma
2017-05-01
Virtual and mixed reality technology has advanced tremendously over the past several years. This nascent medium has the potential to transform how people communicate over distance, train for unfamiliar tasks, operate in challenging environments, and how they visualize, interact, and make decisions based on complex data. At the same time, the marketplace has experienced a proliferation of network-connected devices and generalized sensors that are becoming increasingly accessible and ubiquitous. As the "Internet of Things" expands to encompass a predicted 50 billion connected devices by 2020, the volume and complexity of information generated in pervasive and virtualized environments will continue to grow exponentially. The convergence of these trends demands a theoretically grounded research agenda that can address emerging challenges for human-information interaction (HII). Virtual and mixed reality environments can provide controlled settings where HII phenomena can be observed and measured, new theories developed, and novel algorithms and interaction techniques evaluated. In this paper, we describe the intersection of pervasive computing with virtual and mixed reality, identify current research gaps and opportunities to advance the fundamental understanding of HII, and discuss implications for the design and development of cyber-human systems for both military and civilian use.
A Concept for Optimizing Behavioural Effectiveness & Efficiency
NASA Astrophysics Data System (ADS)
Barca, Jan Carlo; Rumantir, Grace; Li, Raymond
Both humans and machines exhibit strengths and weaknesses that can be enhanced by merging the two entities. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments in a more effective way than any system known today. To demonstrate this concept, we have developed a simulation in which a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third is an autonomous navigation system using machine intelligence embedded in the spider, which enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra-sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.
Melvin, Emilie; Cushing, Anna; Tam, Anne; Kitada, Ruri; Manice, Melissa
2017-01-01
Non-adherence to asthma daily controller medications is a common problem, reported to be responsible for 60% of asthma-related hospitalisations. The mean level of adherence for asthma medications is estimated to be as low as 22%. Therefore, objective measurements of adherence to medicine are necessary. This virtual observational study is designed to measure the usability of an electronic monitoring device platform that measures adherence. Understanding how patients use the BreatheSmart mobile technology at home is essential to assess its feasibility as a solution to improve medication adherence. We anticipate this approach can be applied to real-world environments as a cost-effective solution to improve medication adherence. This is a virtual 6-month observational study of 100 adults (≥18 years) with an asthma diagnosis who have been using inhaled corticosteroids for at least 3 months. Participants will be recruited in the USA through ad placements online. All participants receive wireless Bluetooth-enabled inhaler sensors that track medication usage and an mSpirometer(TM) capable of clinical-grade lung function measurements, and download the BreatheSmart mobile application, which transmits data to a secure server. All analyses are based on an intention-to-treat principle. Usability is assessed by patient questionnaires and question sessions. A simple paired t-test is used to assess significant change in Asthma Control Test score, quality of life (EuroQol-5D questionnaire) and lung function. No ethical or safety concerns pertain to the collection of these data. Results of this research are planned to be published as soon as available. Trial registration number: NCT03103880.
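The paired t-test named above compares each participant's score before and after the observation period against the hypothesis of no change. A minimal sketch of that analysis step (the scores below are made up for illustration; the study's actual data and software are not specified here):

```python
import math

def paired_t(before, after):
    """Return (t statistic, degrees of freedom) for paired samples."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    t = mean / math.sqrt(var / n)
    return t, n - 1

# Hypothetical Asthma Control Test scores for six participants.
before = [15, 17, 14, 16, 18, 13]
after = [19, 20, 18, 21, 22, 17]
t, df = paired_t(before, after)   # compare t against the t-distribution with df degrees of freedom
```

In practice one would read the p-value from the t-distribution (e.g. `scipy.stats.ttest_rel`); the hand-rolled version above just makes the computation explicit.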
Subjective visual vertical assessment with mobile virtual reality system.
Ulozienė, Ingrida; Totilienė, Milda; Paulauskas, Andrius; Blažauskas, Tomas; Marozas, Vaidotas; Kaski, Diego; Ulozas, Virgilijus
2017-01-01
The subjective visual vertical (SVV) is a measure of a subject's perceived verticality, and a sensitive test of vestibular dysfunction. Despite this, and consequent upon technical and logistical limitations, SVV has not entered mainstream clinical practice. The aim of the study was to develop a mobile virtual reality-based system for the SVV test, evaluate the suitability of different controllers, and assess the system's usability in practical settings. In this study, we describe a novel virtual reality-based system that has been developed to test SVV using integrated software and hardware, and report normative values across a healthy population. Participants wore a mobile virtual reality headset in order to observe a 3D stimulus presented across separate conditions - static, dynamic and an immersive real-world ("boat in the sea") SVV tests. The virtual reality environment was controlled by the tester using Bluetooth-connected controllers. Participants controlled the movement of a vertical arrow using either a gesture control armband or a general-purpose gamepad, to indicate perceived verticality. We wanted to compare two different methods for object control in the system, determine normal values and compare them with literature data, evaluate the developed system with the help of the system usability scale (SUS) questionnaire, and evaluate possible virtually induced dizziness with the help of a subjective visual analog scale. There were no statistically significant differences in SVV values during static, dynamic and virtual reality stimulus conditions obtained using the two different controllers, and the results are comparable to those previously reported in the literature using alternative methodologies. The SUS scores for the system were high, with a median of 82.5 for the Myo controller and of 95.0 for the Gamepad controller, representing a statistically significant difference between the two controllers (P<0.01). 
The median of virtual reality-induced dizziness for both devices was 0.7. The mobile virtual reality-based system for implementing the subjective visual vertical test is accurate and applicable in the clinical environment. The gamepad-based virtual object control method was preferred by the users. The tests were well tolerated, with low dizziness scores in the majority of patients. Copyright © 2018 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Sp. z o.o. All rights reserved.
A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.
Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis
2018-03-01
Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user's location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results show that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
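The Sun-position step above can be illustrated with a deliberately simplified calculation: an approximate solar declination from the day of year, and the Sun's elevation at local solar noon from the observer's latitude. A production AR pipeline would use a full ephemeris and the time of day as the paper describes; this is only a rough sketch under that simplification, and the latitude below is a made-up example:

```python
import math

def solar_declination_deg(day_of_year):
    """Approximate solar declination in degrees (cosine approximation)."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def noon_elevation_deg(latitude_deg, day_of_year):
    """Solar elevation above the horizon at local solar noon, degrees."""
    return 90.0 - abs(latitude_deg - solar_declination_deg(day_of_year))

decl = solar_declination_deg(172)     # near the June solstice: close to +23.44 deg
elev = noon_elevation_deg(37.4, 172)  # hypothetical mid-latitude site
```

Shadow direction then follows from solar azimuth and elevation at the actual time of day, with device orientation (gyroscope, compass, accelerometer) supplying the rotation between the world frame and the camera frame.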
Kinect-based virtual rehabilitation and evaluation system for upper limb disorders: A case study.
Ding, W L; Zheng, Y Z; Su, Y P; Li, X L
2018-04-19
To help patients with disabilities of the arm and shoulder recover the accuracy and stability of movements, a novel and simple virtual rehabilitation and evaluation system called the Kine-VRES system was developed using Microsoft Kinect. First, several movements and virtual tasks were designed to increase the coordination, control and speed of the arm movements. The movements of the patients were then captured using the Kinect sensor, and kinematics-based interaction and real-time feedback were integrated into the system to enhance the motivation and self-confidence of the patient. Finally, a quantitative evaluation method of upper limb movements was provided using the recorded kinematics during hand-to-hand movement. A preliminary study of this rehabilitation system indicates that the shoulder movements of two participants with ataxia became smoother after three weeks of training (one hour per day). This case study demonstrated the effectiveness of the designed system, which could be promising for the rehabilitation of patients with upper limb disorders.