Sample records for virtual sensor based

  1. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
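
    The abstract names a quaternion-based complementary filter but does not give its equations, so the following Python sketch is only a minimal, Mahony-style stand-in: gyroscope rates are integrated and the accelerometer's gravity direction corrects drift. The gain kp, the variable names and the omission of the magnetometer (yaw) correction are simplifying assumptions, not the authors' implementation.

      import numpy as np

      def quat_mult(q, r):
          """Hamilton product of quaternions given as [w, x, y, z]."""
          w0, x0, y0, z0 = q
          w1, x1, y1, z1 = r
          return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                           w0*x1 + x0*w1 + y0*z1 - z0*y1,
                           w0*y1 - x0*z1 + y0*w1 + z0*x1,
                           w0*z1 + x0*y1 - y0*x1 + z0*w1])

      def complementary_step(q, gyro, accel, dt, kp=1.0):
          """One filter update: correct the gyro with gravity, then integrate."""
          # Gravity direction in the body frame predicted by the current estimate
          w, x, y, z = q
          v_est = np.array([2*(x*z - w*y), 2*(w*x + y*z), w*w - x*x - y*y + z*z])
          v_meas = accel / np.linalg.norm(accel)
          # Misalignment between measured and estimated gravity drives the correction
          err = np.cross(v_meas, v_est)
          gyro_corr = gyro + kp * err
          # Quaternion kinematics: dq/dt = 0.5 * q * (0, omega)
          q = q + 0.5 * dt * quat_mult(q, np.concatenate(([0.0], gyro_corr)))
          return q / np.linalg.norm(q)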

  2. Virtual Sensors for Designing Irrigation Controllers in Greenhouses

    PubMed Central

    Sánchez, Jorge Antonio; Rodríguez, Francisco; Guzmán, José Luis; Arahal, Manuel R

    2012-01-01

    Monitoring the greenhouse transpiration for control purposes is currently a difficult task. The absence of affordable sensors that provide continuous transpiration measurements motivates the use of estimators. In the case of tomato crops, the availability of estimators allows the design of automatic fertirrigation (irrigation + fertilization) schemes in greenhouses, minimizing the dispensed water while fulfilling crop needs. This paper shows how system identification techniques can be applied to obtain nonlinear virtual sensors for estimating transpiration. The greenhouse used for this study is equipped with a microlysimeter, which allows one to continuously sample the transpiration values. While the microlysimeter is an advantageous piece of equipment for research, it is also expensive and requires maintenance. This paper presents the design and development of a virtual sensor to model the crop transpiration, hence avoiding the use of this kind of expensive sensor. The resulting virtual sensor is obtained by dynamical system identification techniques based on regressors taken from variables typically found in a greenhouse, such as global radiation and vapor pressure deficit. The virtual sensor is thus based on empirical data. In this paper, some effort has been made to eliminate some problems associated with grey-box models: advance phenomenon and overestimation. The results are tested with real data and compared with other approaches. Better results are obtained with the use of nonlinear Black-box virtual sensors. This sensor is based on global radiation and vapor pressure deficit (VPD) measurements. Predictive results for the three models are developed for comparative purposes. PMID:23202208
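
    As a rough illustration of the identification idea only (the paper fits nonlinear black-box models, not the linear one shown here), the Python sketch below builds exogenous regressors from lagged global radiation and VPD and fits a virtual transpiration sensor; the file name, column layout and lag count are hypothetical.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Hypothetical log: global radiation [W/m2], VPD [kPa] and microlysimeter
      # transpiration, sampled at a fixed rate during the identification campaign.
      rad, vpd, transp = np.loadtxt("greenhouse_log.csv", delimiter=",", unpack=True)

      def make_regressors(rad, vpd, y, n_lags=5):
          """Exogenous-only regressors, so the trained model can run without the
          microlysimeter that supplied y during identification."""
          X = [np.concatenate([rad[k - n_lags:k], vpd[k - n_lags:k]])
               for k in range(n_lags, len(y))]
          return np.array(X), y[n_lags:]

      X, y = make_regressors(rad, vpd, transp)
      virtual_sensor = LinearRegression().fit(X, y)
      print(virtual_sensor.predict(X[-1:]))   # latest transpiration estimate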

  3. Virtual Sensor Test Instrumentation

    NASA Technical Reports Server (NTRS)

    Wang, Roy

    2011-01-01

    Virtual Sensor Test Instrumentation is based on the concept of smart sensor technology for testing with intelligence needed to perform self-diagnosis of health, and to participate in a hierarchy of health determination at sensor, process, and system levels. A virtual sensor test instrumentation consists of five elements: (1) a common sensor interface, (2) microprocessor, (3) wireless interface, (4) signal conditioning and ADC/DAC (analog-to-digital conversion/digital-to-analog conversion), and (5) onboard EEPROM (electrically erasable programmable read-only memory) for metadata storage and executable software to create powerful, scalable, reconfigurable, and reliable embedded and distributed test instruments. In order to maximize efficient data conversion through the smart sensor node, plug-and-play functionality is required to interface with traditional sensors to enhance their identity and capabilities for data processing and communications. Virtual sensor test instrumentation can be accessible wirelessly via a Network Capable Application Processor (NCAP) or a Smart Transducer Interface Module (STIM) that may be managed under real-time rule engines for mission-critical applications. The transducer senses the physical quantity being measured and converts it into an electrical signal. The signal is fed to an A/D converter, and is ready for use by the processor to execute functional transformation based on the sensor characteristics stored in a Transducer Electronic Data Sheet (TEDS). Virtual sensor test instrumentation is built upon an open-system architecture with standardized protocol modules/stacks to interface with industry standards and commonly used software. One major benefit of deploying the virtual sensor test instrumentation is the ability, through a plug-and-play common interface, to convert raw sensor data in either analog or digital form to an IEEE 1451 standard-based smart sensor, which has instructions to program sensors for a wide variety of functions. The sensor data is processed in a distributed fashion across the network, providing a large pool of resources in real time to meet stringent latency requirements.

  4. Experimental Characterization of Microfabricated Virtual Impactor Efficiency

    EPA Science Inventory

    The Air-Microfluidics Group is developing a microelectromechanical systems-based direct reading particulate matter (PM) mass sensor. The sensor consists of two main components: a microfabricated virtual impactor (VI) and a PM mass sensor. The VI leverages particle inertia to sepa...

  5. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.

  6. Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung

    2018-03-05

    In order to properly control rehabilitation robotic devices, the measurement of interaction force and motion between patient and robot is an essential part. Usually, however, this is a complex task that requires the use of accurate sensors which increase the cost and the complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative of actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper limbs training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot using the mathematical model of the robotic device and measurement through low cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors has similar performance to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors to estimate interaction force and motion can be adopted to replace actual precise but normally high-priced sensors which are fundamental components for advanced control of rehabilitation robotic devices.
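
    The UHP model itself is not reproduced in the abstract, so the Python snippet below is only a generic, single-axis sketch of the idea: the interaction force is inferred from low-cost position measurements, the commanded actuator force and an assumed mass-damper-spring model. The parameter values and signal names are illustrative assumptions.

      import numpy as np

      def estimate_interaction_force(x, f_act, dt, m=2.0, b=5.0, k=150.0):
          """Virtual force sensor: infer the external force from measured positions
          x [m] and commanded actuator force f_act [N], using the assumed model
          m*a + b*v + k*x = f_act + f_ext."""
          v = np.gradient(x, dt)      # numerical velocity from cheap position sensors
          a = np.gradient(v, dt)      # numerical acceleration
          return m * a + b * v + k * x - f_act   # estimated interaction force f_ext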

  7. A sensor network based virtual beam-like structure method for fault diagnosis and monitoring of complex structures with Improved Bacterial Optimization

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-02-01

    This paper proposes a novel method for the fault diagnosis of complex structures based on an optimized virtual beam-like structure approach. A complex structure can be regarded as a combination of numerous virtual beam-like structures considering the vibration transmission path from vibration sources to each sensor. The structural 'virtual beam' consists of a sensor chain automatically obtained by an Improved Bacterial Optimization Algorithm (IBOA). The biologically inspired optimization method (i.e. IBOA) is proposed for solving the discrete optimization problem associated with the selection of the optimal virtual beam for fault diagnosis. This novel virtual beam-like structure approach requires little prior knowledge. Neither does it require stationary response data, nor is it confined to a specific structure design. It is easy to implement within a sensor network attached to the monitored structure. The proposed fault diagnosis method has been tested on the detection of loosening screws located at varying positions in a real satellite-like model. Compared with empirical methods, the proposed virtual beam-like structure method has proved to be very effective and more reliable for fault localization.

  8. Two-Time Scale Virtual Sensor Design for Vibration Observation of a Translational Flexible-Link Manipulator Based on Singular Perturbation and Differential Games

    PubMed Central

    Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng

    2016-01-01

    Effective feedback control requires information on all state variables of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM with an infinite number of sensors. Taking into account the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time scale virtual sensor, which includes a speed observer and a vibration observer, is designed to estimate the vibration signals of the TFM and their time derivatives; the speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time scale virtual sensor are optimized, which aims to minimize the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time scale virtual sensor. PMID:27801840

  9. VLSI Design of Trusted Virtual Sensors.

    PubMed

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
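
    A PWAR model partitions the input space into hyper-rectangles and applies a local affine law in each cell. Ignoring the paper's fixed-point hardware, AEGIS encryption and PUF, a Python sketch of how such a model is evaluated follows; the partition, coefficients and toy inputs are invented for illustration.

      import numpy as np

      def pwar_estimate(x, edges, coeffs):
          """Evaluate a PieceWise-Affine hyper-Rectangular model at input x.
          edges  : one sorted 1-D array of cell boundaries per input dimension
          coeffs : array indexed by cell, each entry holding [a_1, ..., a_n, b]"""
          cell = tuple(int(np.searchsorted(e, xi)) for e, xi in zip(edges, x))
          a_b = coeffs[cell]
          return float(a_b[:-1] @ x + a_b[-1])        # local affine law a.x + b

      # Toy 2-D example: 3 x 3 cells over, e.g., steering angle and vehicle speed
      edges = [np.array([-0.1, 0.1]), np.array([10.0, 20.0])]
      coeffs = np.zeros((3, 3, 3))                    # [a1, a2, b] in every cell
      coeffs[1, 2] = [0.5, 0.01, -0.2]
      print(pwar_estimate(np.array([0.0, 25.0]), edges, coeffs))   # yaw-rate estimate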

  10. VLSI Design of Trusted Virtual Sensors

    PubMed Central

    2018-01-01

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time). PMID:29370141

  11. A Virtual Sensor for Online Fault Detection of Multitooth-Tools

    PubMed Central

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross-validation, averaged 0.957 for true positives and 0.986 for true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases. PMID:22163766
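
    The abstract specifies a Bayesian classifier over just two variables but not its exact form, so the Python sketch below uses a Gaussian naive Bayes stand-in with invented numbers, purely to show the shape of such a virtual sensor.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      # Invented training rows: [feed-drive power, machining time] per workpiece,
      # labelled 0 = healthy multitooth tool, 1 = broken or worn insert.
      X = np.array([[11.2, 41.8], [11.5, 42.0], [11.3, 41.5],
                    [14.9, 47.3], [15.3, 48.1], [15.0, 47.6]])
      y = np.array([0, 0, 0, 1, 1, 1])

      clf = GaussianNB().fit(X, y)            # decision stage of the virtual sensor

      new_piece = np.array([[15.1, 47.9]])    # measurements for a new workpiece
      print(clf.predict(new_piece), clf.predict_proba(new_piece))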

  12. A virtual sensor for online fault detection of multitooth-tools.

    PubMed

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross-validation, averaged 0.957 for true positives and 0.986 for true negatives. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases.

  13. An Integrated FDD System for HVAC&R Based on Virtual Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun

    According to the U.S. Department of Energy, space heating, ventilation and air conditioning systems account for 40% of residential primary energy use and for 30% of primary energy use in commercial buildings. A study released by the Energy Information Administration indicated that packaged air conditioners are widely used in 46% of all commercial buildings in the U.S. This study indicates that the annual cooling energy consumption related to the packaged air conditioner is about 160 trillion Btus. Therefore, an automated FDD system that can automatically detect and diagnose faults and evaluate fault impacts has the potential for improving energy efficiency along with reducing service costs and comfort complaints. The primary bottlenecks to diagnostic implementation in the field are the high initial costs of additional sensors. To overcome these limitations, virtual sensors with low cost measurements and simple models are developed to estimate quantities that would be expensive and/or difficult to measure directly. The use of virtual sensors can reduce costs compared to the use of real sensors and provide additional information for economic assessment. The virtual sensor can be embedded in a permanently installed control or monitoring system, and continuous monitoring potentially leads to early detection of faults. The virtual sensors of individual equipment components can be integrated to estimate overall diagnostic information using the output of each virtual sensor.

  14. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.

    PubMed

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-09-25

    Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve the network scalability, which is an effective topology control approach. The existing virtual MIMO-based clustering schemes either fail to fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to enhance the cluster structure life. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head function and the optimization of cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity are still in the same order of magnitude.

  15. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks

    PubMed Central

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-01-01

    Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve the network scalability, which is an effective topology control approach. The existing virtual MIMO-based clustering schemes either fail to fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to enhance the cluster structure life. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head function and the optimization of cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity are still in the same order of magnitude. PMID:27681731

  16. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment compared to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
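
    Only the layer count is given in the abstract (two convolutional layers, one fully connected layer and an output layer), so the PyTorch sketch below fills in kernel sizes, channel widths and the window length with assumed values; it is a shape-compatible illustration, not the authors' network.

      import torch
      import torch.nn as nn

      class ResponseVirtualSensor(nn.Module):
          """Map a window of measured responses from the available sensors to the
          response time history at an unmeasured (or faulty) location."""
          def __init__(self, n_channels=4, window=256):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
                  nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
              )
              self.head = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(32 * window, 64), nn.ReLU(),   # fully connected layer
                  nn.Linear(64, window),                   # predicted response
              )

          def forward(self, x):                            # x: (batch, channels, window)
              return self.head(self.features(x))

      model = ResponseVirtualSensor()
      print(model(torch.randn(8, 4, 256)).shape)           # torch.Size([8, 256])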

  17. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    PubMed Central

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment compared to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  18. ROI-Orientated Sensor Correction Based on Virtual Steady Reimaging Model for Wide Swath High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.

    2017-09-01

    To meet the requirements of high-accuracy and high-speed processing of wide swath high resolution optical satellite imagery in emergency situations, in both ground and on-board processing systems, this paper proposes a ROI-orientated sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between panchromatic and multispectral images can be achieved well and that image distortion caused by satellite jitter can also be corrected efficiently.

  19. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119, NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm, discusses results on a comprehensive set of Shuttle missions, and describes the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
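
    The abstract does not name the ensemble method, so the Python sketch below uses bagged regression trees as a stand-in to show the two outputs the paper relies on: the ensemble mean as the virtual pressure estimate and the spread of member predictions as the uncertainty. The synthetic data are placeholders.

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.tree import DecisionTreeRegressor

      # Placeholder telemetry: other sensor readings and control signals (X),
      # and the subsystem pressure to be estimated (y), from nominal operation.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(5000, 12))
      y = 3.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=5000)

      ensemble = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(X, y)

      x_now = rng.normal(size=(1, 12))                  # new telemetry sample
      members = np.array([m.predict(x_now) for m in ensemble.estimators_])
      estimate, sigma = members.mean(), members.std()   # prediction and uncertainty
      # A residual |measured - estimate| that is large relative to sigma flags a fault.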

  20. Open Source Virtual Worlds and Low Cost Sensors for Physical Rehab of Patients with Chronic Diseases

    NASA Astrophysics Data System (ADS)

    Romero, Salvador J.; Fernandez-Luque, Luis; Sevillano, José L.; Vognild, Lars

    For patients with chronic diseases, exercise is a key part of rehab to deal better with their illness. Some of them do rehabilitation at home with telemedicine systems. However, keeping to their exercise program is challenging and many abandon the rehabilitation. We postulate that information technologies for socializing and serious games can encourage patients to keep doing physical exercise and rehab. In this paper we present Virtual Valley, a low cost telemedicine system for home exercising, based on open source virtual worlds and utilizing popular low cost motion controllers (e.g. Wii Remote) and medical sensors. Virtual Valley allows patients to socialize, learn, and play group-based serious games while exercising.

  1. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    PubMed Central

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents a detector of the active mode when the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow their output to be estimated. A methodology to test the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results. PMID:23447007

  2. Head-mounted active noise control system with virtual sensing technique

    NASA Astrophysics Data System (ADS)

    Miyazaki, Nobuhiro; Kajikawa, Yoshinobu

    2015-03-01

    In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.

  3. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    PubMed Central

    Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles

    2017-01-01

    The control of flexible link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make the estimation of the Tool Centre Point (TCP) position a challenging one. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of high computational cost integration methods. This work presents a novel approach based on a virtual sensor which can not only precisely estimate the deformation of the flexible links in control applications (less than 2% error), but also its derivatives (less than 6% error in velocity and 13% error in acceleration) according to simulation results. The validity of the proposed Virtual Sensor is tested in a Delta Robot, where the position of the TCP is estimated based on the Virtual Sensor measurements with less than a 0.03% of error in comparison with the flexible approach developed in ADAMS Multibody Software. PMID:28832510

  4. Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott

    2010-01-01

    This paper describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  5. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    PubMed

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

    Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and the visual data make WVSNs more complex than the conventional Wireless Sensor Network (WSN). The virtual backbone is a technique capable of constructing clusters. The version associated with the aggregation operation is also referred to as the virtual backbone tree. In most of the existing literature, the main focus is on the efficiency brought by the construction of clusters, while local-balance problems are generally neglected. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in the cluster-based construction problems. Moreover, the directional virtual backbone construction scheme is proposed by considering the local-balanced factor. Furthermore, the associated network coding mechanism is utilized to construct DVBDAS. Finally, both the theoretical analysis of the proposed DVBDAS and the simulations are given for evaluating the performance. The experimental results prove that the proposed DVBDAS achieves higher performance in terms of both the energy preservation and the network lifetime extension than the existing methods.

  6. Virtual Sensor Web Architecture

    NASA Astrophysics Data System (ADS)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs: intelligent, integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (COSEC framework) that is being extended to create VSICS.

  7. Performance analysis of cooperative virtual MIMO systems for wireless sensor networks.

    PubMed

    Rafique, Zimran; Seet, Boon-Chong; Al-Anbuky, Adnan

    2013-05-28

    Multi-Input Multi-Output (MIMO) techniques can be used to increase the data rate for a given bit error rate (BER) and transmission power. Due to the small form factor, energy and processing constraints of wireless sensor nodes, a cooperative Virtual MIMO as opposed to True MIMO system architecture is considered more feasible for wireless sensor network (WSN) applications. Virtual MIMO with Vertical-Bell Labs Layered Space-Time (V-BLAST) multiplexing architecture has been recently established to enhance WSN performance. In this paper, we further investigate the impact of different modulation techniques, and analyze for the first time, the performance of a cooperative Virtual MIMO system based on V-BLAST architecture with multi-carrier modulation techniques. Through analytical models and simulations using real hardware and environment settings, both communication and processing energy consumptions, BER, spectral efficiency, and total time delay of multiple cooperative nodes each with single antenna are evaluated. The results show that cooperative Virtual-MIMO with Binary Phase Shift Keying-Wavelet based Orthogonal Frequency Division Multiplexing (BPSK-WOFDM) modulation is a promising solution for future high data-rate and energy-efficient WSNs.

  8. Performance Analysis of Cooperative Virtual MIMO Systems for Wireless Sensor Networks

    PubMed Central

    Rafique, Zimran; Seet, Boon-Chong; Al-Anbuky, Adnan

    2013-01-01

    Multi-Input Multi-Output (MIMO) techniques can be used to increase the data rate for a given bit error rate (BER) and transmission power. Due to the small form factor, energy and processing constraints of wireless sensor nodes, a cooperative Virtual MIMO as opposed to True MIMO system architecture is considered more feasible for wireless sensor network (WSN) applications. Virtual MIMO with Vertical-Bell Labs Layered Space-Time (V-BLAST) multiplexing architecture has been recently established to enhance WSN performance. In this paper, we further investigate the impact of different modulation techniques, and analyze for the first time, the performance of a cooperative Virtual MIMO system based on V-BLAST architecture with multi-carrier modulation techniques. Through analytical models and simulations using real hardware and environment settings, both communication and processing energy consumptions, BER, spectral efficiency, and total time delay of multiple cooperative nodes each with single antenna are evaluated. The results show that cooperative Virtual-MIMO with Binary Phase Shift Keying-Wavelet based Orthogonal Frequency Division Multiplexing (BPSK-WOFDM) modulation is a promising solution for future high data-rate and energy-efficient WSNs. PMID:23760087

  9. An Intelligent Active Video Surveillance System Based on the Integration of Virtual Neural Sensors and BDI Agents

    NASA Astrophysics Data System (ADS)

    Gregorio, Massimo De

    In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantage of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs makes this approach both very light from a computational and hardware point of view, and rather robust in performance.

  10. Evaluation of Sensor Configurations for Robotic Surgical Instruments

    PubMed Central

    Gómez-de-Gabriel, Jesús M.; Harwin, William

    2015-01-01

    Designing surgical instruments for robotic-assisted minimally-invasive surgery (RAMIS) is challenging due to constraints on the number and type of sensors imposed by considerations such as space or the need for sterilization. A new method for evaluating the usability of virtual teleoperated surgical instruments based on virtual sensors is presented. This method uses virtual prototyping of the surgical instrument with a dual physical interaction, which allows testing of different sensor configurations in a real environment. Moreover, the proposed approach has been applied to the evaluation of prototypes of a two-finger grasper for lump detection by remote pinching. In this example, the usability of a set of five different sensor configurations, with a different number of force sensors, is evaluated in terms of quantitative and qualitative measures in clinical experiments with 23 volunteers. As a result, the smallest number of force sensors needed in the surgical instrument that ensures the usability of the device can be determined. The details of the experimental setup are also included. PMID:26516863

  11. Evaluation of Sensor Configurations for Robotic Surgical Instruments.

    PubMed

    Gómez-de-Gabriel, Jesús M; Harwin, William

    2015-10-27

    Designing surgical instruments for robotic-assisted minimally-invasive surgery (RAMIS) is challenging due to constraints on the number and type of sensors imposed by considerations such as space or the need for sterilization. A new method for evaluating the usability of virtual teleoperated surgical instruments based on virtual sensors is presented. This method uses virtual prototyping of the surgical instrument with a dual physical interaction, which allows testing of different sensor configurations in a real environment. Moreover, the proposed approach has been applied to the evaluation of prototypes of a two-finger grasper for lump detection by remote pinching. In this example, the usability of a set of five different sensor configurations, with a different number of force sensors, is evaluated in terms of quantitative and qualitative measures in clinical experiments with 23 volunteers. As a result, the smallest number of force sensors needed in the surgical instrument that ensures the usability of the device can be determined. The details of the experimental setup are also included.

  12. Minimizing Input-to-Output Latency in Virtual Environment

    NASA Technical Reports Server (NTRS)

    Adelstein, Bernard D.; Ellis, Stephen R.; Hill, Michael I.

    2009-01-01

    A method and apparatus were developed to minimize latency (time delay) in virtual environment (VE) and other discrete-time computer-based systems that require real-time display in response to sensor inputs. Latency in such systems is due to the sum of the finite time required for information processing and communication within and between sensors, software, and displays.

  13. Rational Design of QCM-D Virtual Sensor Arrays Based on Film Thickness, Viscoelasticity, and Harmonics for Vapor Discrimination.

    PubMed

    Speller, Nicholas C; Siraj, Noureen; Regmi, Bishnu P; Marzoughi, Hassan; Neal, Courtney; Warner, Isiah M

    2015-01-01

    Herein, we demonstrate an alternative strategy for creating QCM-based sensor arrays by use of a single sensor to provide multiple responses per analyte. The sensor, which simulates a virtual sensor array (VSA), was developed by depositing a thin film of ionic liquid, either 1-octyl-3-methylimidazolium bromide ([OMIm][Br]) or 1-octyl-3-methylimidazolium thiocyanate ([OMIm][SCN]), onto the surface of a QCM-D transducer. The sensor was exposed to 18 different organic vapors (alcohols, hydrocarbons, chlorohydrocarbons, nitriles) belonging to the same or different homologous series. The resulting frequency shifts (Δf) were measured at multiple harmonics and evaluated using principal component analysis (PCA) and discriminant analysis (DA) which revealed that analytes can be classified with extremely high accuracy. In almost all cases, the accuracy for identification of a member of the same class, that is, intraclass discrimination, was 100% as determined by use of quadratic discriminant analysis (QDA). Impressively, some VSAs allowed classification of all 18 analytes tested with nearly 100% accuracy. Such results underscore the importance of utilizing lesser exploited properties that influence signal transduction. Overall, these results demonstrate excellent potential of the virtual sensor array strategy for detection and discrimination of vapor phase analytes utilizing the QCM. To the best of our knowledge, this is the first report on QCM VSAs, as well as an experimental sensor array, that is based primarily on viscoelasticity, film thickness, and harmonics.
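
    The chemometric pipeline described (multi-harmonic frequency shifts reduced by PCA and classified by discriminant analysis) can be sketched in a few lines of Python; the file names and component count below are hypothetical.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.pipeline import make_pipeline

      # Hypothetical data: one row per vapor exposure, columns are frequency shifts
      # of the single IL-coated QCM-D sensor measured at several harmonics.
      X = np.load("qcm_harmonic_shifts.npy")      # shape (n_exposures, n_harmonics)
      y = np.load("vapor_labels.npy")             # analyte identity per exposure

      vsa = make_pipeline(PCA(n_components=3), QuadraticDiscriminantAnalysis())
      vsa.fit(X, y)
      print(vsa.predict(X[:5]))                   # classify vapor exposures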

  14. Compact and high resolution virtual mouse using lens array and light sensor

    NASA Astrophysics Data System (ADS)

    Qin, Zong; Chang, Yu-Cheng; Su, Yu-Jie; Huang, Yi-Pai; Shieh, Han-Ping David

    2016-06-01

    A virtual mouse based on an IR source, lens array and light sensor was designed and implemented. The optical architecture, including lens count, lens pitch, baseline length, sensor length, lens-sensor gap, focal length, etc., was carefully designed to achieve low detection error, high resolution and, simultaneously, compact system volume. System volume is 3.1 mm (thickness) × 4.5 mm (length) × 2, which is much smaller than that of a camera-based device. A relative detection error of 0.41 mm and a minimum resolution of 26 ppi were verified in experiments, so the device can replace a conventional touchpad/touchscreen. If the system thickness is eased to 20 mm, a resolution higher than 200 ppi can be achieved to replace a real mouse.

  15. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform

    PubMed Central

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-01-01

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform’s mathematical model taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument’s working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform. PMID:27869722

  16. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform.

    PubMed

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-11-18

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform's mathematical model taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument's working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform.

  17. Scientific Workflows and the Sensor Web for Virtual Environmental Observatories

    NASA Astrophysics Data System (ADS)

    Simonis, I.; Vahed, A.

    2008-12-01

    Virtual observatories mature from their original domain and become common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community. Here, virtual observatories provide universal access to the available astronomical data archives of space and ground-based observatories. Further on, as those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to become one of the major virtual observatories outside of the astronomical research community. Like the original observatory that consists of a number of telescopes, each observing a specific part of the wave spectrum and with a collection of astronomical instruments, the Sensor Web provides a multi-eyes perspective on the current, past, as well as future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, or natural disasters on local, regional, and global scales. The Sensor Web concept has been well established with ongoing global research and deployment of Sensor Web middleware and standards and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS). The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet at standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging new concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically would follow in exploration, discovery and ultimately, transformation of raw data to publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, sharing and integrating, to requesting additional observations. Multiple sites and organizations will participate on single platforms and scientists from different countries and organizations interact and contribute to large-scale research projects. Simultaneously, the data- and information overload becomes manageable, as multiple layers of abstraction will free scientists from dealing with underlying data-, processing or storage peculiarities. The vision is one of automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn would identify potentially related resources, schedule processing tasks and assemble all parts into workflows that may satisfy the query.

  18. Design of a lightweight, cost effective thimble-like sensor for haptic applications based on contact force sensors.

    PubMed

    Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael

    2011-01-01

    This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation.

  19. Design of a Lightweight, Cost Effective Thimble-Like Sensor for Haptic Applications Based on Contact Force Sensors

    PubMed Central

    Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael

    2011-01-01

    This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation. PMID:22247677

  20. a New ER Fluid Based Haptic Actuator System for Virtual Reality

    NASA Astrophysics Data System (ADS)

    Böse, H.; Baumann, M.; Monkman, G. J.; Egersdörfer, S.; Tunayar, A.; Freimuth, H.; Ermert, H.; Khaled, W.

    The concept and some steps in the development of a new actuator system which enables the haptic perception of mechanically inhomogeneous virtual objects are introduced. The system consists of a two-dimensional planar array of actuator elements containing an electrorheological (ER) fluid. When a user presses his fingers onto the surface of the actuator array, he perceives locally variable resistance forces generated by vertical pistons which slide in the ER fluid through the gaps between electrode pairs. The voltage in each actuator element can be individually controlled by a novel sophisticated switching technology based on optoelectric gallium arsenide elements. The haptic information which is represented at the actuator array can be transferred from a corresponding sensor system based on ultrasonic elastography. The combined sensor-actuator system may serve as a technology platform for various applications in virtual reality, like telemedicine where the information on the consistency of tissue of a real patient is detected by the sensor part and recorded by the actuator part at a remote location.

  1. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: part II--Optimization of structural sensor placement.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-04-01

    The work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each modal virtual sensing error was determined by the modal observability levels for both the structural sensing and the target acoustic virtual sensing; and further (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.
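
    The sensor-placement idea in this record can be illustrated with a small, hypothetical sketch: given candidate locations and mode-shape values for a set of target modes, greedily pick locations that keep the worst-case modal observability as high as possible. The mode-shape matrix, the observability score and all numbers below are invented for illustration and are not the paper's spectral-spatial metric.

```python
import numpy as np

# Hypothetical sketch: greedy sensor placement that maximizes the worst-case
# modal observability over a set of target structural modes.
# Phi[i, j] = (assumed) mode-shape amplitude of mode j at candidate location i.
rng = np.random.default_rng(0)
n_locations, n_modes = 50, 4
Phi = rng.normal(size=(n_locations, n_modes))        # placeholder mode shapes

def worst_mode_observability(selected, Phi):
    """Smallest per-mode observability (sum of squared mode-shape values at
    the selected locations) -- the quantity we try to keep large."""
    gram = (Phi[selected] ** 2).sum(axis=0)          # one value per mode
    return gram.min()

def greedy_placement(Phi, n_sensors):
    selected = []
    for _ in range(n_sensors):
        candidates = [i for i in range(Phi.shape[0]) if i not in selected]
        # Pick the location that most improves the worst-observed mode.
        best = max(candidates,
                   key=lambda i: worst_mode_observability(selected + [i], Phi))
        selected.append(best)
    return selected

sensors = greedy_placement(Phi, n_sensors=6)
print("chosen locations:", sensors)
print("worst-case modal observability:", worst_mode_observability(sensors, Phi))
```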

  2. Virtual sensors for robust on-line monitoring (OLM) and Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep

    Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration thus allowing recalibration to be safely deferred to a later time. The virtual sensor model uses a Gaussian process model to process input data from redundant and other nearby sensors. Predicted data includes uncertainty bounds including spatial association uncertainty and measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model has predicted correct sensor measurements and the associated error corresponding to a faulty sensor.
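
    A minimal sketch of this kind of virtual sensor, assuming a scikit-learn Gaussian process regressor trained on readings from redundant or nearby sensors; the flow-loop variable ranges and the linear relation used to generate the training data are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: readings from three healthy neighbouring sensors
# (columns) and the sensor we want a virtual replacement for (target).
rng = np.random.default_rng(1)
neighbours = rng.uniform(20.0, 80.0, size=(200, 3))
target = neighbours @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.5, 200)

# The white-noise kernel lets the GP model measurement noise explicitly.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(0.25),
                              normalize_y=True)
gp.fit(neighbours, target)

# When the physical sensor is flagged as faulty, substitute the GP prediction
# together with its uncertainty band until recalibration can be scheduled.
new_readings = rng.uniform(20.0, 80.0, size=(5, 3))
mean, std = gp.predict(new_readings, return_std=True)
for m, s in zip(mean, std):
    print(f"virtual sensor: {m:6.2f} +/- {1.96 * s:4.2f} (95% band)")
```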

  3. Virtual Sensor for Failure Detection, Identification and Recovery in the Transition Phase of a Morphing Aircraft

    PubMed Central

    Heredia, Guillermo; Ollero, Aníbal

    2010-01-01

    The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take-off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage, and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual sensor based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wings aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise, sampling characteristics, and turbulence and wind perturbations. PMID:22294922

  4. Virtual sensor for failure detection, identification and recovery in the transition phase of a morphing aircraft.

    PubMed

    Heredia, Guillermo; Ollero, Aníbal

    2010-01-01

    The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take-off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage, and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual sensor based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wings aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise, sampling characteristics, and turbulence and wind perturbations.
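
    The detection/recovery loop described in this record can be pictured with a generic, hypothetical sketch: compare each physical sensor against a virtual estimate of the same quantity, declare a fault when the residual stays large for several consecutive samples, and feed the virtual value to the controller instead. The threshold, window length and toy data are illustrative assumptions, not the HADA design.

```python
from collections import deque

def fdir_step(measured, estimated, residual_window, threshold):
    """One FDIR iteration: returns (value_to_use, fault_flag).

    measured        -- physical sensor reading
    estimated       -- virtual sensor estimate of the same quantity
    residual_window -- deque of recent residuals (persistence check)
    """
    residual_window.append(abs(measured - estimated))
    # Declare a fault only if the residual exceeds the threshold persistently,
    # so a single noise spike does not trigger a reconfiguration.
    fault = (len(residual_window) == residual_window.maxlen and
             all(r > threshold for r in residual_window))
    return (estimated if fault else measured), fault

# Toy usage: the physical sensor sticks at a constant value from sample 4 on.
window = deque(maxlen=5)
truth = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5]
measured = truth[:4] + [11.5] * 6                      # stuck sensor
for k, (m, est) in enumerate(zip(measured, truth)):
    used, fault = fdir_step(m, est, window, threshold=0.6)
    print(f"k={k} measured={m:5.1f} used={used:5.1f} fault={fault}")
```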

  5. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  6. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

    The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed synthesized, full motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IaaS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.

  7. Sensor-Augmented Virtual Labs: Using Physical Interactions with Science Simulations to Promote Understanding of Gas Behavior

    NASA Astrophysics Data System (ADS)

    Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.

    2016-02-01

    Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected physical and virtual experiences has the potential to promote connections among ideas. This paper explores the effect of augmenting a virtual lab with physical controls on high school chemistry students' understanding of gas laws. We compared students using the augmented virtual lab to students using a similar sensor-based physical lab with teacher-led discussions. Results demonstrate that students in the augmented virtual lab condition made significant gains from pretest to posttest and outperformed traditional students on some but not all concepts. Results provide insight into incorporating mixed-reality technologies into authentic classroom settings.

  8. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: robust virtual sensor design.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-03-01

    The work aimed to develop a robust virtual sensing design methodology for sensing and active control applications of vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic models of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the one dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
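
    The minimax combination step can be sketched with a small, hypothetical example. Assuming that, for each perturbed plant scenario, a scalar error measure is available for every Kalman sub-filter, and crudely approximating the combined error as the weighted sum of those measures, the optimal convex weights solve a small linear program; the error table and this approximation are illustrative only and do not reproduce the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical error table: err[i, j] = estimation-error measure of Kalman
# sub-filter j when the true (perturbed) plant is scenario i.
err = np.array([[0.8, 2.0, 1.5],
                [1.7, 0.6, 1.4],
                [1.2, 1.3, 0.7]])
n_scen, n_filt = err.shape

# Variables: w_1..w_m (convex weights) and t (worst-case error bound).
c = np.concatenate([np.zeros(n_filt), [1.0]])           # minimize t
A_ub = np.hstack([err, -np.ones((n_scen, 1))])          # err @ w - t <= 0
b_ub = np.zeros(n_scen)
A_eq = np.concatenate([np.ones(n_filt), [0.0]])[None]   # weights sum to 1
b_eq = [1.0]
bounds = [(0, 1)] * n_filt + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
weights, worst_case = res.x[:n_filt], res.x[-1]
print("convex weights :", np.round(weights, 3))
print("worst-case cost:", round(worst_case, 3))
```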

  9. Monitoring and Control Interface Based on Virtual Sensors

    PubMed Central

    Escobar, Ricardo F.; Adam-Medina, Manuel; García-Beltrán, Carlos D.; Olivares-Peregrino, Víctor H.; Juárez-Romero, David; Guerrero-Ramírez, Gerardo V.

    2014-01-01

    In this article, a toolbox based on a monitoring and control interface (MCI) is presented and applied to a heat exchanger. The MCI was programmed in order to realize sensor fault detection and isolation and fault tolerance using virtual sensors. The virtual sensors were designed from model-based high-gain observers. To develop the control task, different kinds of control laws were included in the monitoring and control interface. These control laws are PID, MPC and a non-linear model-based control law. The MCI helps to maintain the heat exchanger under operation, even if an outlet temperature sensor fault occurs; in the case of outlet temperature sensor failure, the MCI will display an alarm. The monitoring and control interface is used as a practical tool to support electronic engineering students with heat transfer and control concepts to be applied in a double-pipe heat exchanger pilot plant. The method aims to teach the students through the observation and manipulation of the main variables of the process and by the interaction with the monitoring and control interface (MCI) developed in LabVIEW©. The MCI provides the electronic engineering students with the knowledge of heat exchanger behavior, since the interface is provided with a thermodynamic model that approximates the temperatures and the physical properties of the fluid (density and heat capacity). An advantage of the interface is the easy manipulation of the actuator for an automatic or manual operation. Another advantage of the monitoring and control interface is that all algorithms can be manipulated and modified by the users. PMID:25365462
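
    The observer-based virtual sensor can be illustrated with a deliberately simple sketch: a first-order lumped model of the outlet temperature corrected by a high-gain output-injection term, whose estimate can stand in for the physical sensor if it fails. The model structure, parameters and noise level are invented and much simpler than the heat-exchanger model in the record.

```python
import numpy as np

# Hypothetical lumped model: dT/dt = -a*(T - T_in) + b*u.  The high-gain
# observer copies the model and corrects it with the measured output.
a, b, dt = 0.5, 0.2, 0.05
eps = 0.05                        # small eps -> high observer gain 1/eps

def plant(T, T_in, u):            # "true" process, unknown to the observer user
    return T + dt * (-a * (T - T_in) + b * u)

def observer(T_hat, y, T_in, u):
    # Model copy plus high-gain output-injection correction.
    return T_hat + dt * (-a * (T_hat - T_in) + b * u + (y - T_hat) / eps)

T, T_hat, T_in, u = 60.0, 40.0, 25.0, 30.0   # deliberately wrong initial estimate
for _ in range(200):
    y = T + np.random.normal(0, 0.1)         # noisy outlet-temperature sensor
    T_hat = observer(T_hat, y, T_in, u)
    T = plant(T, T_in, u)

print(f"true outlet T = {T:.2f}  virtual-sensor estimate = {T_hat:.2f}")
```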

  10. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments

    PubMed Central

    Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L.; de Carvalho, Carlos Giovanni N.; Mendes, Douglas Lopes de S.; Costa, Valney da Gama

    2018-01-01

    Virtual sensors provisioning is a central issue for sensor cloud middleware since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle users’ queries or applications. Recent works perform provisioning by clustering sensor nodes based on the correlation of measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors). Therefore, those works are not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the measurement correlations instead of the physical distance between nodes as in most works in the literature. Then, the second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to users’ queries while satisfying all query parameters and preserving the overall energy consumption. Results from initial experiments show that the approach significantly reduces the sensor cloud energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios. PMID:29495406

  11. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments.

    PubMed

    Lemos, Marcus Vinícius de S; Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L; de Carvalho, Carlos Giovanni N; Mendes, Douglas Lopes de S; Costa, Valney da Gama

    2018-02-26

    Virtual sensors provisioning is a central issue for sensor cloud middleware since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle users' queries or applications. Recent works perform provisioning by clustering sensor nodes based on the correlation of measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors). Therefore, those works are not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the measurement correlations instead of the physical distance between nodes as in most works in the literature. Then, the second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to users' queries while satisfying all query parameters and preserving the overall energy consumption. Results from initial experiments show that the approach significantly reduces the sensor cloud energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios.
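
    The clustering stage of such provisioning schemes can be sketched as follows: group nodes whose measurement histories are strongly correlated, then keep only one representative per cluster awake to answer a query. The synthetic data, the 1 - |correlation| distance and the clustering threshold below are illustrative assumptions and do not reproduce ACASIMv1 or the Ant Colony Optimization step.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical measurement history: rows = time samples, columns = sensor nodes.
rng = np.random.default_rng(2)
base = rng.normal(size=(500, 3))              # three latent phenomena
groups = np.repeat([0, 1, 2], 4)              # each of 12 nodes observes one of them
gains = rng.uniform(0.5, 1.5, size=12)
readings = base[:, groups] * gains + rng.normal(0, 0.3, size=(500, 12))

# Distance between nodes = 1 - |correlation| of their series, so strongly
# correlated (redundant) nodes fall into the same cluster.
corr = np.corrcoef(readings, rowvar=False)
dist = 1.0 - np.abs(corr)
np.fill_diagonal(dist, 0.0)
clusters = fcluster(linkage(squareform(dist, checks=False), method="average"),
                    t=0.3, criterion="distance")

# Provisioning sketch: answer a query with one representative node per cluster,
# leaving the remaining (redundant) nodes asleep to save energy.
representatives = {c: int(np.where(clusters == c)[0][0]) for c in set(clusters)}
print("cluster labels   :", clusters.tolist())
print("nodes kept awake :", sorted(representatives.values()))
```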

  12. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature employ pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification were presented comparing the usual classification techniques with the method of degraded signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered between 4% and 38% of the mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel with the TVARMA virtual sensor. This method recovered the classification accuracy after the degradations, reaching an average of only 5.7% below the accuracy obtained on the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization still require further development to broaden the clinical application of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior.

  13. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    Balbinot, Alexandre

    2018-01-01

    A few prosthetic control systems in the scientific literature employ pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification were presented comparing the usual classification techniques with the method of degraded signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered between 4% and 38% of the mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel with the TVARMA virtual sensor. This method recovered the classification accuracy after the degradations, reaching an average of only 5.7% below the accuracy obtained on the clean signal, that is, the original signal without contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization still require further development to broaden the clinical application of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994
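
    A much-simplified stand-in for this kind of channel-replacement virtual sensor is sketched below: when one sEMG channel is degraded, predict it from the remaining channels using an ordinary least-squares map fitted on clean calibration data. The synthetic data and the static regression are assumptions; the paper's TVARMA and TVK models are time-varying and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 8-channel calibration sEMG with correlated channels.
latent = rng.normal(size=(2000, 3))
mix = rng.uniform(0.3, 1.0, size=(3, 8))
clean = latent @ mix + rng.normal(0, 0.1, size=(2000, 8))
bad_ch = 5                                     # channel later contaminated

X = np.delete(clean, bad_ch, axis=1)           # predictors: the other channels
y = clean[:, bad_ch]
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

def virtual_channel(frame):
    """Reconstruct the degraded channel of one 8-channel frame."""
    x = np.delete(frame, bad_ch)
    return float(np.append(x, 1.0) @ coef)

# A contaminated frame (e.g. a saturated electrode) is repaired before it
# reaches the movement classifier.
frame = latent[100] @ mix + rng.normal(0, 0.1, size=8)
frame[bad_ch] = 5.0                            # saturation artifact
frame[bad_ch] = virtual_channel(frame)
print("repaired frame:", np.round(frame, 2))
```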

  14. Air-condition Control System of Weaving Workshop Based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Song, Jian

    An air-conditioning measurement and control system based on LabVIEW is proposed in order to effectively control the environmental targets in the weaving workshop. The system is built on virtual instrument technology, with the NI LabVIEW development platform adopted as its core. It is composed of an upper PC, central control nodes based on the CC2530, sensor nodes, sensor modules and actuating devices. A fuzzy control algorithm is employed to achieve accurate control of the temperature and humidity. A user-friendly man-machine interaction interface is designed, with virtual instrument technology at the core of the software. Experiments show that the measurement and control system runs stably and reliably and meets the functional requirements for controlling the weaving workshop.
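
    The fuzzy control step mentioned here can be illustrated with a toy Mamdani-style rule base that maps a temperature error to a cooling-valve command; the membership functions, rule outputs and weighted-average defuzzification are illustrative assumptions, not the system described in the record.

```python
def tri(x, a, b, c):
    """Triangular membership function with corner points a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_cooling(temp_error):
    """Map temperature error (measured - set point, deg C) to a valve command
    in [0, 1] using three rules and weighted-average defuzzification."""
    memberships = {
        "cold": tri(temp_error, -6.0, -3.0, 0.0),
        "ok":   tri(temp_error, -2.0,  0.0, 2.0),
        "hot":  tri(temp_error,  0.0,  3.0, 6.0),
    }
    rule_output = {"cold": 0.0, "ok": 0.4, "hot": 1.0}   # valve opening per rule
    num = sum(memberships[k] * rule_output[k] for k in memberships)
    den = sum(memberships.values())
    return num / den if den > 0 else 0.4

for e in (-4.0, -1.0, 0.0, 1.5, 4.0):
    print(f"error {e:+.1f} C -> cooling valve {fuzzy_cooling(e):.2f}")
```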

  15. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.

    PubMed

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2011-06-01

    This paper presents the design of a tele-robotic microsurgical platform for the development of cooperative and tele-operative control schemes, sensor based smart instruments, user interfaces and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained optimization based virtual fixture control to provide Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.

  16. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery

    PubMed Central

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2013-01-01

    This paper presents the design of a tele-robotic microsurgical platform for the development of cooperative and tele-operative control schemes, sensor based smart instruments, user interfaces and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained optimization based virtual fixture control to provide Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information. PMID:24398557

  17. Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2007-01-01

    This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and the secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability between integrated, intelligent sensor webs, both among terrestrial systems and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.

  18. A Survey on Virtualization of Wireless Sensor Networks

    PubMed Central

    Islam, Md. Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam

    2012-01-01

    Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications such as smart home automation, health-care and industrial automation. In these applications, multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over the specific WSN domains, communication barriers, conflicting goals and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security, and increase manageability. This paper surveys the novel approach of using the large scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges and opportunities of research in the field of sensor network virtualization, as well as to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization. PMID:22438759

  19. A survey on virtualization of Wireless Sensor Networks.

    PubMed

    Islam, Md Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam

    2012-01-01

    Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications such as smart home automation, health-care and industrial automation. In these applications, multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over the specific WSN domains, communication barriers, conflicting goals and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security, and increase manageability. This paper surveys the novel approach of using the large scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges and opportunities of research in the field of sensor network virtualization, as well as to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization.

  20. minimega v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crussell, Jonathan; Erickson, Jeremy; Fritz, David

    minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and

  1. Novel Virtual Environment for Alternative Treatment of Children with Cerebral Palsy

    PubMed Central

    de Oliveira, Juliana M.; Fernandes, Rafael Carneiro G.; Pinto, Cristtiano S.; Pinheiro, Plácido R.; Ribeiro, Sidarta

    2016-01-01

    Cerebral palsy is a severe condition usually caused by decreased brain oxygenation during pregnancy, at birth or soon after birth. Conventional treatments for cerebral palsy are often tiresome and expensive, leading patients to quit treatment. In this paper, we describe a virtual environment for patients to engage in a playful therapeutic game for neuropsychomotor rehabilitation, based on the experience of the occupational therapy program of the Nucleus for Integrated Medical Assistance (NAMI) at the University of Fortaleza, Brazil. Integration between patient and virtual environment occurs through the hand motion sensor “Leap Motion,” plus the electroencephalographic sensor “MindWave,” responsible for measuring attention levels during task execution. To evaluate the virtual environment, eight clinical experts on cerebral palsy were subjected to a questionnaire regarding the potential of the experimental virtual environment to promote cognitive and motor rehabilitation, as well as the potential of the treatment to enhance risks and/or negatively influence the patient's development. Based on the very positive appraisal of the experts, we propose that the experimental virtual environment is a promising alternative tool for the rehabilitation of children with cerebral palsy. PMID:27403154

  2. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
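
    The adaptive-threshold idea can be pictured with a small, hypothetical sketch for one virtual beam (a chain of sensors along an energy-transmission path): a per-sensor statistic is learned from baseline windows only, and sensors whose current statistic exceeds the learned band are flagged. The RMS statistic, the 3-sigma rule and the synthetic data are illustrative assumptions, not the paper's statistical tests or bacterial-based optimization.

```python
import numpy as np

rng = np.random.default_rng(4)
baseline = rng.normal(0.0, 1.0, size=(200, 6))     # healthy data, 6-sensor beam
current = rng.normal(0.0, 1.0, size=(20, 6))       # latest monitoring window
current[:, 3] += 2.5                               # injected fault near sensor 3

def rms(x, axis=0):
    return np.sqrt(np.mean(np.square(x), axis=axis))

# Adaptive threshold: statistic over many healthy windows of the same length.
win = 20
windows = baseline.reshape(-1, win, 6)             # 10 healthy windows
stat_ref = rms(windows, axis=1)                    # statistic per window, per sensor
threshold = stat_ref.mean(axis=0) + 3.0 * stat_ref.std(axis=0)

stat_now = rms(current)
flags = stat_now > threshold
print("per-sensor statistic:", np.round(stat_now, 2))
print("adaptive threshold  :", np.round(threshold, 2))
print("suspected fault near sensors:", np.where(flags)[0].tolist())
```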

  3. Virtual pyramid wavefront sensor for phase unwrapping.

    PubMed

    Akondi, Vyas; Vohnsen, Brian; Marcos, Susana

    2016-10-10

    Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.

  4. Virtual optical interfaces for the transportation industry

    NASA Astrophysics Data System (ADS)

    Hejmadi, Vic; Kress, Bernard

    2010-04-01

    We present a novel implementation of virtual optical interfaces for the transportation industry (automotive and avionics). This new implementation includes two functionalities in a single device: projection of a virtual interface and sensing of the position of the fingers on top of the virtual interface. Both functionalities are produced by diffraction of laser light. The device we are developing includes both functionalities in a compact package which has no optical elements to align since all of them are pre-aligned on a single glass wafer through optical lithography. The package contains a CMOS sensor whose diffractive objective lens is optimized for the projected interface color as well as for the IR finger position sensor based on structured illumination. Two versions are proposed: a version which senses the 2D position of the hand and a version which senses the hand position in 3D.

  5. Low-complexity piecewise-affine virtual sensors: theory and design

    NASA Astrophysics Data System (ADS)

    Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco

    2014-03-01

    This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
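
    A single-region (purely affine) special case of such a direct virtual sensor is easy to sketch: regress the unmeasured variable on lagged measured inputs and outputs, fitted by least squares on recorded data. The toy signals below are invented, and the paper's simplicial piecewise-affine partitioning and VHDL/FPGA generation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
N, lags = 1000, 3
u = rng.normal(size=N)                                         # measured input
y = np.convolve(u, [0.5, 0.3, 0.2], mode="same") + rng.normal(0, 0.05, N)
z = 0.8 * np.roll(y, 1) + 0.4 * np.roll(u, 1) + rng.normal(0, 0.05, N)  # unmeasured

def regressor(u, y, k, lags):
    # Lagged inputs and outputs plus a constant term (affine map).
    return np.concatenate([u[k - lags:k], y[k - lags:k], [1.0]])

K = range(lags, N)
X = np.array([regressor(u, y, k, lags) for k in K])
theta, *_ = np.linalg.lstsq(X, z[lags:], rcond=None)           # direct fit

k = 500
z_hat = regressor(u, y, k, lags) @ theta
print(f"true z({k}) = {z[k]:+.3f}   virtual-sensor estimate = {z_hat:+.3f}")
```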

  6. Enhanced Deployment Strategy for Role-Based Hierarchical Application Agents in Wireless Sensor Networks with Established Clusterheads

    ERIC Educational Resources Information Center

    Gendreau, Audrey

    2014-01-01

    Efficient self-organizing virtual clusterheads that supervise data collection based on their wireless connectivity, risk, and overhead costs are an important element of Wireless Sensor Networks (WSNs). This function is especially critical during deployment when system resources are allocated to a subsequent application. In the presented research,…

  7. Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images

    PubMed Central

    Izquierdo, Alberto; Suárez, Luis; Suárez, David

    2017-01-01

    Using arrays with digital MEMS (Micro-Electro-Mechanical System) microphones and FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows building systems with hundreds of sensors at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. This virtual array is implemented by changing the position of a physical array of 64 (8 × 8) microphones in a grid with 10 × 10 positions, using a 2D positioning system. This virtual array obtains an array spatial aperture of 1 × 1 m². Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array have been analyzed, since the beamforming algorithms must assume spherical waves, due to the large dimensions of the array in comparison with the distance between the target (a mannequin) and the array. Finally, acoustic images of the mannequin have been obtained for different frequency and range values, showing high angular resolutions and the possibility to identify different parts of the body of the mannequin. PMID:29295485
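
    The near-field focusing described here can be illustrated with a heavily downscaled sketch: a small virtual planar array, a single tone source close to the aperture, and delay-and-sum steering with exact spherical-wave delays. The 16 × 16 grid, signal model and geometry are invented stand-ins for the 80 × 80 virtual array in the record.

```python
import numpy as np

c, fs, f0 = 343.0, 50_000, 8_000           # speed of sound, sample rate, tone (Hz)
xs = np.linspace(-0.5, 0.5, 16)            # downscaled 16 x 16 grid over 1 m x 1 m
mics = np.array([(x, y, 0.0) for x in xs for y in xs])
source = np.array([0.10, -0.05, 1.0])      # target about 1 m in front of the array

t = np.arange(0, 0.01, 1 / fs)
dists = np.linalg.norm(mics - source, axis=1)
# Each microphone records the tone delayed by its own propagation time.
signals = np.sin(2 * np.pi * f0 * (t[None, :] - dists[:, None] / c))

def steered_power(focus):
    """Delay-and-sum output power when focusing the array at a 3-D point,
    using spherical-wave (near-field) delays rather than plane-wave ones."""
    delays = np.linalg.norm(mics - focus, axis=1) / c
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    aligned = [np.roll(s, -k) for s, k in zip(signals, shifts)]
    summed = np.mean(aligned, axis=0)
    return float(np.mean(summed ** 2))

print("power focused on the source   :", round(steered_power(source), 4))
print("power focused 20 cm off-target:",
      round(steered_power(source + [0.2, 0.0, 0.0]), 4))
```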

  8. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    PubMed Central

    Pouke, Matti; Häkkilä, Jonna

    2013-01-01

    Homecare systems for elderly people are becoming increasingly important due to both economic reasons as well as patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential especially due to the privacy preserving and simplified information presentation style, and secondly that simple representations and glancability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747

  9. Sensor Webs in Digital Earth

    NASA Astrophysics Data System (ADS)

    Heavner, M. J.; Fatland, D. R.; Moeller, H.; Hood, E.; Schultz, M.

    2007-12-01

    The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). From power systems and instrumentation through data management, visualization, education, and public outreach, SEAMONSTER is designed with modularity in mind. We are utilizing virtual earth infrastructures to enhance both sensor web management and data access. We will describe how the design philosophy of using open, modular components contributes to the exploration of different virtual earth environments. We will also describe the sensor web physical implementation and how the many components have corresponding virtual earth representations. This presentation will provide an example of the integration of sensor webs into a virtual earth. We suggest that IPY sensor networks and sensor webs may integrate into virtual earth systems and provide an IPY legacy easily accessible to both scientists and the public. SEAMONSTER utilizes geobrowsers for education and public outreach, sensor web management, data dissemination, and enabling collaboration. We generate near-real-time auto-updating geobrowser files of the data. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers have made this project possible.

  10. A Survey of Middleware for Sensor and Network Virtualization

    PubMed Central

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd.

    2014-01-01

    Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed in a particular application. However, the future of WSNs lies in the aggregation and allocation of resources, serving diverse applications. WSN virtualization by the middleware is an emerging concept that enables aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges like efficient decoupling of networks, devices and software. In this paper, we provide an overview of previous and current middleware designs for WSN virtualization, covering the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper also presents the proposed model, challenges and future opportunities for further research in the middleware designs for WSN virtualization. PMID:25615737

  11. A survey of middleware for sensor and network virtualization.

    PubMed

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd

    2014-12-12

    Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed in a particular application. However, the future of WSNs lies in the aggregation and allocation of resources, serving diverse applications. WSN virtualization by the middleware is an emerging concept that enables aggregation of multiple independent heterogeneous devices, networks, radios and software platforms, and enhances application development. WSN virtualization middleware can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges like efficient decoupling of networks, devices and software. In this paper, we provide an overview of previous and current middleware designs for WSN virtualization, covering the design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper also presents the proposed model, challenges and future opportunities for further research in the middleware designs for WSN virtualization.

  12. Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation

    PubMed Central

    Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo

    2018-01-01

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be very effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that the VG operated in real time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when switching from one sensor to the other. A video demonstrating the good performance of the VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications. PMID:29534448

  13. Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation.

    PubMed

    Placidi, Giuseppe; Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo

    2018-03-10

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be very effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensor approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that the VG operated in real time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when switching from one sensor to the other. A video demonstrating the good performance of the VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications.

  14. A Personal Inertial Navigation System Based on Multiple Distributed, Nine-Degrees-Of-Freedom, Inertial Measurement Units

    DTIC Science & Technology

    2016-12-01

    A personal inertial navigation approach using a quaternion-based complementary filter developed at the Naval Postgraduate School is developed, and the performance of a consumer-grade nine-degrees-of-freedom IMU is evaluated. Keywords: inertial measurement unit, complementary filter, gait phase detection, zero velocity update, MEMS, IMU, AHRS, GPS denied, distributed sensor, virtual sensor.
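
    The filter named in this record can be illustrated, in heavily simplified form, with a single-axis complementary filter that blends integrated gyroscope rate (good over short horizons) with accelerometer tilt (drift-free over long horizons). The thesis work is quaternion-based and adds zero-velocity updates; the scalar pitch example and synthetic IMU data below are assumptions for illustration only.

```python
import math, random

def accel_pitch(ax, az):
    return math.atan2(ax, az)              # tilt angle from gravity components

dt, alpha = 0.01, 0.98                     # 100 Hz IMU, gyro-weighted blend
pitch, true_pitch = 0.0, 0.3               # radians; filter starts from zero
for _ in range(1000):
    gyro = random.gauss(0.0, 0.02) + 0.01                 # rate noise plus bias
    ax = math.sin(true_pitch) + random.gauss(0.0, 0.05)   # gravity projections
    az = math.cos(true_pitch) + random.gauss(0.0, 0.05)
    # Complementary blend: integrate the gyro, pull slowly toward accel tilt.
    pitch = alpha * (pitch + gyro * dt) + (1 - alpha) * accel_pitch(ax, az)

print(f"true pitch {true_pitch:.3f} rad, estimated {pitch:.3f} rad")
```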

  15. Augmented reality visualization of deformable tubular structures for surgical simulation.

    PubMed

    Ferrari, Vincenzo; Viglialoro, Rosanna Maria; Nicoli, Paola; Cutolo, Fabrizio; Condino, Sara; Carbone, Marina; Siesto, Mentore; Ferrari, Mauro

    2016-06-01

    Surgical simulation based on augmented reality (AR), mixing the benefits of physical and virtual simulation, represents a step forward in surgical training. However, available systems are unable to update the virtual anatomy following deformations impressed on actual anatomy. A proof-of-concept solution is described providing AR visualization of hidden deformable tubular structures using nitinol tubes sensorized with electromagnetic sensors. This system was tested in vitro on a setup comprised of sensorized cystic, left and right hepatic, and proper hepatic arteries. In the trial session, the surgeon deformed the tubular structures with surgical forceps in 10 positions. The mean, standard deviation, and maximum misalignment between virtual and real arteries were 0.35, 0.22, and 0.99 mm, respectively. The alignment accuracy obtained demonstrates the feasibility of the approach, which can be adopted in advanced AR simulations, in particular as an aid to the identification and isolation of tubular structures. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Getting the point across: exploring the effects of dynamic virtual humans in an interactive museum exhibit on user perceptions.

    PubMed

    Rivera-Gutierrez, Diego; Ferdig, Rick; Li, Jian; Lok, Benjamin

    2014-04-01

    We have created “You, M.D.”, an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this user information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding of how different elements of the visual appearance of a virtual human affect user perceptions is required. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users’ reactions based on the user’s gender and body-mass index (BMI) when facing virtual humans with BMI either concordant or discordant with the user’s BMI. The results of the study indicate that concordance between the users’ BMI and the virtual human’s BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.

  17. Virtual Passive Controller for Robot Systems Using Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    This paper presents a control method based on virtual passive dynamic control that will stabilize a robot manipulator using joint torque sensors and a simple joint model. The method does not require joint position or velocity feedback for stabilization. The proposed control method is stable in the sense of Lyapunov. The control method was implemented on several joints of a laboratory robot. The controller showed good stability robustness to system parameter error and to the exclusion of nonlinear dynamic effects on the joints. The controller enhanced position tracking performance and, in the absence of position control, dissipated joint energy.

  18. Virtual-Lattice Based Intrusion Detection Algorithm over Actuator-Assisted Underwater Wireless Sensor Networks

    PubMed Central

    Yan, Jing; Li, Xiaolei; Luo, Xiaoyuan; Guan, Xinping

    2017-01-01

    Due to the lack of a physical line of defense, intrusion detection becomes one of the key issues in applications of underwater wireless sensor networks (UWSNs), especially when the confidentiality has prime importance. However, the resource-constrained property of UWSNs such as sparse deployment and energy constraint makes intrusion detection a challenging issue. This paper considers a virtual-lattice-based approach to the intrusion detection problem in UWSNs. Different from most existing works, the UWSNs consist of two kinds of nodes, i.e., sensor nodes (SNs), which cannot move autonomously, and actuator nodes (ANs), which can move autonomously according to the performance requirement. With the cooperation of SNs and ANs, the intruder detection probability is defined. Then, a virtual lattice-based monitor (VLM) algorithm is proposed to detect the intruder. In order to reduce the redundancy of communication links and improve detection probability, an optimal and coordinative lattice-based monitor patrolling (OCLMP) algorithm is further provided for UWSNs, wherein an equal price search strategy is given for ANs to find the shortest patrolling path. Under VLM and OCLMP algorithms, the detection probabilities are calculated, while the topology connectivity can be guaranteed. Finally, simulation results are presented to show that the proposed method in this paper can improve the detection accuracy and save the energy consumption compared with the conventional methods. PMID:28531127

  19. Sensor Webs as Virtual Data Systems for Earth Science

    NASA Astrophysics Data System (ADS)

    Moe, K. L.; Sherwood, R.

    2008-05-01

    The NASA Earth Science Technology Office established a 3-year Advanced Information Systems Technology (AIST) development program in late 2006 to explore the technical challenges associated with integrating sensors, sensor networks, data assimilation and modeling components into virtual data systems called "sensor webs". The AIST sensor web program was initiated in response to a renewed emphasis on the sensor web concepts. In 2004, NASA proposed an Earth science vision for a more robust Earth observing system, coupled with remote sensing data analysis tools and advances in Earth system models. The AIST program is conducting the research and developing components to explore the technology infrastructure that will enable the visionary goals. A working statement for a NASA Earth science sensor web vision is the following: On-demand sensing of a broad array of environmental and ecological phenomena across a wide range of spatial and temporal scales, from a heterogeneous suite of sensors both in-situ and in orbit. Sensor webs will be dynamically organized to collect data, extract information from it, accept input from other sensor / forecast / tasking systems, interact with the environment based on what they detect or are tasked to perform, and communicate observations and results in real time. The focus on sensor webs is to develop the technology and prototypes to demonstrate the evolving sensor web capabilities. There are 35 AIST projects ranging from 1 to 3 years in duration addressing various aspects of sensor webs involving space sensors such as Earth Observing-1, in situ sensor networks such as the southern California earthquake network, and various modeling and forecasting systems. Some of these projects build on proof-of-concept demonstrations of sensor web capabilities like the EO-1 rapid fire response initially implemented in 2003. Other projects simulate future sensor web configurations to evaluate the effectiveness of sensor-model interactions for producing improved science predictions. Still other projects are maturing technology to support autonomous operations, communications and system interoperability. This paper will highlight lessons learned by various projects during the first half of the AIST program. Several sensor web demonstrations have been implemented and resulting experience with evolving standards, such as the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) among others, will be featured. The role of sensor webs in support of the intergovernmental Group on Earth Observations' Global Earth Observation System of Systems (GEOSS) will also be discussed. The GEOSS vision is a distributed system of systems that builds on international components to supply observing and processing systems that are, in the whole, comprehensive, coordinated and sustained. Sensor web prototypes are under development to demonstrate how remote sensing satellite data, in situ sensor networks and decision support systems collaborate in applications of interest to GEO, such as flood monitoring. Furthermore, the international Committee on Earth Observation Satellites (CEOS) has stepped up to the challenge to provide the space-based systems component for GEOSS. CEOS has proposed "virtual constellations" to address emerging data gaps in environmental monitoring, avoid overlap among observing systems, and make maximum use of existing space and ground assets. 
Exploratory applications that support the objectives of virtual constellations will also be discussed as a future role for sensor webs.

  20. Virtual IED sensor at an rf-biased electrode in low-pressure plasma

    NASA Astrophysics Data System (ADS)

    Bogdanova, Maria; Lopaev, Dmitry; Zyryanov, Sergey; Rakhimov, Alexander

    2016-09-01

    The majority of present-day technologies resort to ion-assisted processes in rf low-pressure plasma. In order to control the process precisely, the energy distribution of ions (IED) bombarding the sample placed on the rf-biased electrode should be tracked. In this work the "Virtual IED sensor" concept is considered. The idea is to obtain the IED "virtually" from the plasma sheath model including a set of externally measurable discharge parameters. The applicability of the "Virtual IED sensor" concept was studied for dual-frequency asymmetric ICP and CCP discharges. The IED measurements were carried out in Ar and H2 plasmas in a wide range of conditions. The calculated IEDs were compared to those measured by the Retarded Field Energy Analyzer. To calibrate the "Virtual IED sensor", the ion flux was measured by the pulsed self-bias method and then compared to plasma density measurements by Langmuir and hairpin probes. It is shown that if there is a reliable calibration procedure, the "Virtual IED sensor" can be successfully realized on the basis of analytical and semianalytical plasma sheath models including measurable discharge parameters. This research is supported by Russian Science Foundation (RSF) Grant 14-12-01012.
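
    As a rough illustration of computing an IED from externally measurable parameters (this is not the authors' semianalytical sheath model), the sketch below assumes the low-frequency limit of a collisionless sheath with a sinusoidal sheath voltage; the DC and RF voltage values are placeholders that would come from the measured self-bias and RF amplitude.

    ```python
    # Minimal sketch (not the paper's sheath model): in the low-frequency limit,
    # ions cross the sheath quickly and gain the instantaneous sheath voltage,
    # so the IED follows the phase-averaged distribution of V_sheath(t).
    import numpy as np

    def virtual_ied(v_dc=100.0, v_rf=60.0, n_phase=200_000, bins=200):
        """Histogram of ion energies (eV) for a sinusoidal sheath voltage
        V(t) = v_dc + v_rf*sin(wt); v_dc and v_rf would come from measured
        discharge parameters (self-bias, rf amplitude) in a real virtual sensor."""
        phase = np.random.uniform(0.0, 2.0 * np.pi, n_phase)
        energy = v_dc + v_rf * np.sin(phase)           # eV per unit charge
        hist, edges = np.histogram(energy, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, hist                            # bimodal (saddle-shaped) IED

    if __name__ == "__main__":
        e, f = virtual_ied()
        print(f"IED peaks near {e[f.argmax()]:.1f} eV")
    ```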

  1. Avatar - a multi-sensory system for real time body position monitoring.

    PubMed

    Jovanov, E; Hanish, N; Courson, V; Stidham, J; Stinson, H; Webb, C; Denny, K

    2009-01-01

    Virtual reality and computer-assisted physical rehabilitation applications require unobtrusive and inexpensive real-time monitoring systems. Existing systems are usually complex and expensive and are based on infrared monitoring. In this paper we propose Avatar, a hybrid system consisting of off-the-shelf components and sensors. Absolute positioning of a few reference points is determined using infrared diodes on the subject's body and a set of Wii Remotes as optical sensors. Individual body segments are monitored by intelligent inertial sensor nodes (iSense). A network of inertial nodes is controlled by a master node that serves as a gateway for communication with a capture device. Each sensor node features a 3D accelerometer and a 2-axis gyroscope. The Avatar system is used to control avatars in Virtual Reality applications, but could also be used in a variety of augmented reality, gaming, and computer-assisted physical rehabilitation applications.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tricaud, Christophe; Ernst, Timothy C.; Zigan, James A.

    The disclosure provides a waste heat recovery system with a system and method for calculation of the net output torque from the waste heat recovery system. The calculation uses inputs from existing pressure and speed sensors to create a virtual pump torque sensor and a virtual expander torque sensor, and uses these sensors to provide an accurate net torque output from the WHR system.
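
    The abstract names only the inputs (existing pressure and speed sensors) and the outputs (virtual pump and expander torques). The sketch below uses an assumed positive-displacement torque relation, ΔP·D/(2π) scaled by an efficiency, with invented displacement and efficiency values; it omits the speed-dependent loss maps a real implementation would likely need.

    ```python
    # Hedged sketch: net WHR output torque from virtual pump/expander torque sensors.
    # Assumes positive-displacement machines where torque ~ dP * displacement / (2*pi);
    # the actual system may use calibrated maps rather than this simple relation.
    import math

    def virtual_torque(delta_p_pa: float, displacement_m3_per_rev: float,
                       efficiency_factor: float) -> float:
        """Shaft torque (N*m) inferred from a pressure differential and a
        per-revolution displacement; efficiency_factor lumps mechanical losses."""
        return delta_p_pa * displacement_m3_per_rev / (2.0 * math.pi) * efficiency_factor

    def net_output_torque(p_in_exp, p_out_exp, p_in_pump, p_out_pump,
                          d_exp=8e-5, d_pump=5e-5, eta_exp=0.7, eta_pump=0.85):
        """Virtual expander torque (produced) minus virtual pump torque (consumed)."""
        t_expander = virtual_torque(p_in_exp - p_out_exp, d_exp, eta_exp)
        t_pump = virtual_torque(p_out_pump - p_in_pump, d_pump, 1.0 / eta_pump)
        return t_expander - t_pump

    print(net_output_torque(2.2e6, 2.0e5, 1.5e5, 2.3e6))   # example numbers only
    ```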

  3. Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent

    Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program, for the development of OLM algorithms to use sensor outputs and, in combination with other available information, 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions: • Signal validation • Virtual sensing • Sensor response-time assessment. These algorithms incorporate, at their base, a Gaussian Process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods being evaluated in this work can compute confidence levels along with the predicted sensor responses, and as a result, may have the potential for compensating for sensor drift in real-time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data). Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.
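
    A minimal sketch of the virtual-sensing idea described above, using scikit-learn's Gaussian process regression on synthetic data: a failing sensor is predicted from two correlated healthy channels, and the posterior standard deviation provides the confidence level used for drift checks. The data and channel roles are invented for illustration.

    ```python
    # Minimal sketch of GP-based virtual sensing: predict a (possibly failed) sensor
    # from correlated healthy sensors, with a confidence band from the GP posterior.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 2))                  # two healthy plant channels
    y = 1.5 * X[:, 0] + 0.5 * np.sin(X[:, 1]) + rng.normal(0, 0.05, 200)  # target sensor

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1e-2),
                                  normalize_y=True).fit(X[:150], y[:150])

    pred, std = gp.predict(X[150:], return_std=True)       # virtual sensor + uncertainty
    residual = y[150:] - pred                               # drift check vs. physical sensor
    print("max |z-score| of residuals:", np.max(np.abs(residual / std)))
    ```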

  4. ROBUST ONLINE MONITORING FOR CALIBRATION ASSESSMENT OF TRANSMITTERS AND INSTRUMENTATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Tipireddy, Ramakrishna; Lerchen, Megan E.

    Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. Specifically, the next generation of OLM technology is expected to include newly developed advanced algorithms that improve monitoring of sensor/system performance and enable the use of plant data to derive information that currently cannot be measured. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this paper, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program, for the development of OLM algorithms to use sensor outputs and, in combination with other available information, 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions: • Signal validation – fault detection and selection of acceptance criteria • Virtual sensing – signal value prediction and acceptance criteria • Response-time assessment – fault detection and acceptance criteria selection. A GP-based uncertainty quantification (UQ) method, previously developed for UQ in OLM, was adapted for use in sensor-fault detection and virtual sensing. For signal validation, the various components of the OLM residual (which is computed using an AAKR model) were explicitly defined and modeled using a GP. Evaluation was conducted using flow loop data from multiple sources. Results using experimental data from laboratory-scale flow loops indicate that the approach, while capable of detecting sensor drift, may be incapable of discriminating between sensor drift and model inadequacy. This may be due to a simplification applied in the initial modeling, where the sensor degradation is assumed to be stationary. In the case of virtual sensors, the GP model was used in a predictive mode to estimate the correct sensor reading for sensors that may have failed. Results have indicated the viability of using this approach for virtual sensing. However, the GP model has proven to be computationally expensive, and so alternative algorithms for virtual sensing are being evaluated. Finally, automated approaches to performing noise analysis for extracting sensor response time were developed. Evaluation of this technique using laboratory-scale data indicates that it compares well with manual techniques previously used for noise analysis. Moreover, the automated and manual approaches for noise analysis also compare well with the current “gold standard”, hydraulic ramp testing, for response time monitoring. Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.

  5. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.

    PubMed

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-06-06

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that full-view area coverage can be approximately guaranteed, as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.
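
    As a sketch of the hexagonal virtual grid underlying the DPA (not the full algorithm, which also assigns camera orientations and uses the paper's theoretically optimal grid length), the snippet below only enumerates hexagon centers over a rectangular ROI for an assumed grid length d.

    ```python
    # Sketch: centers of a regular-hexagon virtual grid covering a rectangular ROI.
    # The grid length `d` (hexagon circumradius) is a free parameter here; the paper
    # derives its optimal value from the coverage constraints.
    import math

    def hex_grid_centers(width, height, d):
        """Yield (x, y) centers of flat-top regular hexagons (circumradius d)."""
        dx = 1.5 * d                  # horizontal spacing between columns
        dy = math.sqrt(3.0) * d       # vertical spacing between rows
        col = 0
        x = 0.0
        while x <= width:
            y = 0.0 if col % 2 == 0 else dy / 2.0   # offset every other column
            while y <= height:
                yield (x, y)
                y += dy
            x += dx
            col += 1

    centers = list(hex_grid_centers(100.0, 60.0, d=10.0))
    print(len(centers), "grid points, e.g.", centers[:3])
    ```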

  6. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    PubMed Central

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around the point. Next, we prove that full-view area coverage can be approximately guaranteed, as long as the regular hexagons determined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks in two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) in the deterministic implementation. To reduce the redundancy in random deployment, we come up with a local neighboring-optimal selection algorithm (LNSA) for achieving the full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304

  7. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task deduction component and automatic action planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality-based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations which facilitate the user's task of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate sensor information from sensors of different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture will be described comprehensively, its main building blocks will be discussed, and one realization that is built on an open-source real-time operating system will be presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications will be explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is only one example which will be described.

  8. Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)

    DTIC Science & Technology

    2017-01-01

    Abstract fragment: "… created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools …". Report contents: 1. Introduction; 2. The 3-D Sensor COP; 3. Virtual Sensor Placement; 4. Conclusions; 5. References.

  9. New virtual sonar and wireless sensor system concepts

    NASA Astrophysics Data System (ADS)

    Houston, B. H.; Bucaro, J. A.; Romano, A. J.

    2004-05-01

    Recently, exciting new sensor array concepts have been proposed which, if realized, could revolutionize how we approach surface-mounted acoustic sensor systems for underwater vehicles. Two such schemes are the so-called "virtual sonar", which is formulated around Helmholtz integral processing, and "wireless" systems, which transfer sensor information through radiated RF signals. The "virtual sonar" concept provides an interesting framework through which to combat the deleterious effects of the structure on surface-mounted sensor systems, including structure-borne vibration and variations in structure-backing impedance. The "wireless" concept would eliminate the necessity of a complex wiring or fiber-optic external network while minimizing vehicle penetrations. Such systems, however, would require a number of advances in sensor and RF waveguide technologies. In this presentation, we will discuss those sensor and sensor-related developments which are desired or required in order to make such new sensor system concepts practical, and we will present several underwater applications from the perspective of exploiting these new sonar concepts. [Work supported by ONR.]
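
    For reference, "Helmholtz integral processing" refers to evaluating a Kirchhoff-Helmholtz surface integral from surface-mounted sensor data; one common exterior form is shown below (sign conventions and interior/exterior variants differ between references, so treat this as a generic statement rather than the authors' exact formulation).

    ```latex
    % Exterior Kirchhoff-Helmholtz integral: the pressure at a field point x outside
    % the hull surface S follows from the surface pressure p and its normal
    % derivative, with the free-space Green's function G.
    \[
      p(\mathbf{x}) \;=\; \oint_{S} \left[ G(\mathbf{x},\mathbf{y})\,
          \frac{\partial p(\mathbf{y})}{\partial n}
          \;-\; p(\mathbf{y})\, \frac{\partial G(\mathbf{x},\mathbf{y})}{\partial n} \right]
          \mathrm{d}S(\mathbf{y}),
      \qquad
      G(\mathbf{x},\mathbf{y}) \;=\; \frac{e^{\,ik\lvert \mathbf{x}-\mathbf{y}\rvert}}
                                          {4\pi\,\lvert \mathbf{x}-\mathbf{y}\rvert}.
    \]
    ```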

  10. Developing movement recognition application with the use of Shimmer sensor and Microsoft Kinect sensor.

    PubMed

    Guzsvinecz, Tibor; Szucs, Veronika; Sik Lányi, Cecília

    2015-01-01

    Nowadays the development of virtual reality-based applications is one of the most dynamically growing areas. These applications have a wide user base, and more and more devices that provide several kinds of user interaction are available on the market. Devices that do not need to be held in the hand have potential in educational, entertainment and rehabilitation applications. The purpose of this paper is to examine the precision and efficiency of user interaction with such non-handheld devices in virtual reality-based applications. The first task of the developed application is to support the rehabilitation process of stroke patients in their homes. A newly developed application is introduced in this paper, which uses two popular devices, the Shimmer sensor and the Microsoft Kinect sensor. To identify and to validate the actions of the user, these sensors work together in parallel. For this purpose, the application can record an educational movement pattern, and the software then compares this pattern to the action of the user. The goal of the current research is to examine how large the differences in gesture recognition are, that is, how precisely the two sensors identify the predefined actions. This can affect the rehabilitation process of stroke patients and influence the efficiency of the rehabilitation. The application was developed in the C# programming language and uses the original Shimmer connection application as a base. With this application it is possible to teach five different movements each with the Shimmer and the Microsoft Kinect sensors, and the application can recognize these actions at any later time. The application uses a file-based database and its runtime memory to store the saved data so that the actions can be accessed more easily. The conclusion is that much more precise data were collected from the Microsoft Kinect sensor than from the Shimmer sensors.
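
    The abstract only states that a recorded educational pattern is compared with the user's action; dynamic time warping is one common choice for such a comparison and is used here purely as an assumed stand-in, on synthetic acceleration data.

    ```python
    # Hedged sketch: compare a recorded movement template to a live sensor stream
    # with dynamic time warping (an assumed similarity measure; the paper only says
    # the software "compares" the pattern to the user's action).
    import numpy as np

    def dtw_distance(template: np.ndarray, action: np.ndarray) -> float:
        """Classic O(n*m) DTW over multi-axis samples (rows = time, cols = axes)."""
        n, m = len(template), len(action)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(template[i - 1] - action[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    template = np.sin(np.linspace(0, 2 * np.pi, 50))[:, None]        # recorded pattern
    action = np.sin(np.linspace(0, 2 * np.pi, 65))[:, None] + 0.05   # slower, noisier repeat
    print("DTW distance:", round(dtw_distance(template, action), 3))
    ```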

  11. Automatic 3D virtual scenes modeling for multisensors simulation

    NASA Astrophysics Data System (ADS)

    Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu

    2006-05-01

    SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate 3D mock-ups and make them interoperable in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept to physical multi-sensor simulation research, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multispectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that creates 3D databases automatically, directly usable by the physical sensor simulation renderers of CHORALE. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensor physical extensions, fitted to the ray-tracing rendering of CHORALE for the infrared, electromagnetic and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is more efficient (rapid database generation) and more reliable (the database can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for automatic generation of the inner parts of buildings.

  12. Virtual Sensors in a Web 2.0 Digital Watershed

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Hill, D. J.; Marini, L.; Kooper, R.; Rodriguez, A.; Myers, J. D.

    2008-12-01

    The lack of rainfall data in many watersheds is one of the major barriers to modeling and studying many environmental and hydrological processes and to supporting decision making. There are simply not enough rain gages on the ground. To overcome this data scarcity issue, a Web 2.0 digital watershed was developed at NCSA (National Center for Supercomputing Applications), where users can point and click on a web-based Google Maps interface and create new precipitation virtual sensors at any location within the same coverage region as a NEXRAD station. A set of scientific workflows is implemented to perform spatial, temporal and thematic transformations of the near-real-time NEXRAD Level II data. Such workflows can be triggered by the users' actions and generate either rainfall-rate or rainfall-accumulation streaming data at a user-specified time interval. We will discuss some underlying components of this digital watershed, which consists of a semantic content management middleware, a semantically enhanced streaming data toolkit, virtual sensor management functionality, and a RESTful (REpresentational State Transfer) web service that can trigger the workflow execution. Such a loosely coupled architecture presents a generic framework for constructing a Web 2.0 style digital watershed. An implementation of this architecture for the Upper Illinois River Basin will be presented. We will also discuss the implications of the virtual sensor concept for the broad environmental observatory community and how this concept will help us move towards a participatory digital watershed.
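
    A hypothetical sketch of the point-and-click workflow trigger: the endpoint path, payload fields, and the trigger_workflow function are invented names, and Flask stands in for whatever framework the actual NCSA RESTful service uses.

    ```python
    # Illustrative sketch only: a REST endpoint that registers a virtual precipitation
    # sensor at a clicked lat/lon and would trigger a NEXRAD-processing workflow.
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    VIRTUAL_SENSORS = {}

    def trigger_workflow(sensor):
        """Placeholder for the spatial/temporal/thematic transformation workflow."""
        print(f"scheduling NEXRAD workflow for {sensor['id']} every {sensor['interval_s']} s")

    @app.route("/virtual-sensors", methods=["POST"])
    def create_virtual_sensor():
        body = request.get_json()
        sensor = {"id": f"vs-{len(VIRTUAL_SENSORS) + 1}",
                  "lat": body["lat"], "lon": body["lon"],
                  "product": body.get("product", "rainfall_rate"),
                  "interval_s": body.get("interval_s", 300)}
        VIRTUAL_SENSORS[sensor["id"]] = sensor
        trigger_workflow(sensor)
        return jsonify(sensor), 201

    if __name__ == "__main__":
        app.run(port=8080)
    ```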

  13. Tailoring gas sensor arrays via the design of short peptides sequences as binding elements.

    PubMed

    Mascini, Marcello; Pizzoni, Daniel; Perez, German; Chiarappa, Emilio; Di Natale, Corrado; Pittia, Paola; Compagnone, Dario

    2017-07-15

    A semi-combinatorial virtual approach was used to prepare peptide-based gas sensors with binding properties towards five different chemical classes (alcohols, aldehydes, esters, hydrocarbons and ketones). Molecular docking simulations were conducted for a complete tripeptide library (8000 elements) versus 58 volatile compounds belonging to those five chemical classes. By maximizing the differences between chemical classes, a subset of 120 tripeptides was extracted and used as scaffolds for generating a combinatorial library of 7912 tetrapeptides. This library was processed in an analogous way to the former. Five tetrapeptides (IHRI, KSDS, LGFD, TGKF and WHVS) were chosen, depending on their virtual affinity and cross-reactivity, for the experimental step. The five peptides were covalently bound to gold nanoparticles by adding a terminal cysteine to each tetrapeptide and deposited onto 20 MHz quartz crystal microbalances to construct the gas sensors. The behavior of the peptides after this chemical modification was simulated in the pH range used in the immobilization step. ΔF signals analyzed by principal component analysis matched the virtually screened data. The array was able to clearly discriminate the 13 volatile compounds tested based on the hydrophobicity and hydrophilicity of the molecules as well as their molecular weight. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Intelligent approach to prognostic enhancements of diagnostic systems

    NASA Astrophysics Data System (ADS)

    Vachtsevanos, George; Wang, Peng; Khiripet, Noppadon; Thakker, Ash; Galie, Thomas R.

    2001-07-01

    This paper introduces a novel methodology to prognostics based on a dynamic wavelet neural network construct and notions from the virtual sensor area. This research has been motivated and supported by the U.S. Navy's active interest in integrating advanced diagnostic and prognostic algorithms in existing Naval digital control and monitoring systems. A rudimentary diagnostic platform is assumed to be available providing timely information about incipient or impending failure conditions. We focus on the development of a prognostic algorithm capable of predicting accurately and reliably the remaining useful lifetime of a failing machine or component. The prognostic module consists of a virtual sensor and a dynamic wavelet neural network as the predictor. The virtual sensor employs process data to map real measurements into difficult-to-monitor fault quantities. The prognosticator uses a dynamic wavelet neural network as a nonlinear predictor. Means to manage uncertainty and performance metrics are suggested for comparison purposes. An interface to an available shipboard Integrated Condition Assessment System is described and applications to shipboard equipment are discussed. Typical results from pump failures are presented to illustrate the effectiveness of the methodology.
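
    The paper's predictor is a dynamic wavelet neural network; the sketch below substitutes a much simpler polynomial-trend extrapolation, purely to show where a virtual fault quantity feeds a remaining-useful-life estimate. The mapping, threshold, and data are invented.

    ```python
    # Simplified stand-in (not the paper's dynamic wavelet neural network):
    # 1) a "virtual sensor" maps process measurements to a hard-to-measure fault
    #    quantity; 2) a fitted trend is extrapolated to a failure threshold to
    #    estimate remaining useful life (RUL).
    import numpy as np

    def virtual_fault_quantity(vibration_rms, discharge_pressure):
        """Assumed mapping from measurable pump signals to a wear index in [0, 1]."""
        return 0.8 * vibration_rms + 0.2 * (1.0 - discharge_pressure)

    def estimate_rul(t, wear, threshold=1.0, order=2):
        """Fit a polynomial trend to the wear history and find when it crosses the
        failure threshold; returns RUL in the same time units as t."""
        coeffs = np.polyfit(t, wear, order)
        roots = np.roots(np.polyadd(coeffs, [-threshold]))
        future = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > t[-1]]
        return min(future) - t[-1] if future else float("inf")

    hours = np.arange(0, 200, 10.0)
    wear = virtual_fault_quantity(0.002 * hours, 1.0 - 0.001 * hours) \
           + np.random.default_rng(1).normal(0, 0.01, hours.size)
    print(f"estimated RUL: {estimate_rul(hours, wear):.0f} h")
    ```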

  15. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  16. Modular mechatronic system for stationary bicycles interfaced with virtual environment for rehabilitation.

    PubMed

    Ranky, Richard G; Sivak, Mark L; Lewis, Jeffrey A; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos

    2014-06-05

    Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges with implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirement for efficacious cycling; specifically recruitment of both extremities and exercising at a high intensity. In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called the Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition processes. The complete bicycle system includes: a) handlebars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate; and d) customized software for rehabilitation. Bench testing for the handle and pedal systems is presented for calibration of the sensors detecting force and angle. The modular mechatronic kit for exercise bicycles was tested in bench testing and human tests. Bench tests performed on the sensorized handlebars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the rider's lower extremities. The VRACK system, a modular virtual reality mechatronic bicycle rehabilitation system, was designed to convert most bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system was successful in demonstrating that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders.

  17. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem. In these solutions, the fill factor is assumed to be known. However, the fill factor is treated as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images of two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, as indicated by the low standard deviation of the fill factors estimated from each of the images for each camera. PMID:28335459

  18. Virtual sensor models for real-time applications

    NASA Astrophysics Data System (ADS)

    Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin

    2016-09-01

    Increased complexity and severity of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions the perception of the sensors is crucial; therefore, the sensors also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space-based method is capable of modeling various types of behavior. The modeling of the position estimation of an automotive radar system, including autocorrelations, is presented. To achieve real-time capability, an efficient implementation is also presented.
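
    The paper's fitted state-space model is not reproduced here; as a minimal stand-in, the sketch below adds first-order autoregressive (AR(1)) noise to ground-truth positions to illustrate what autocorrelated position-estimation error of a simulated radar sensor can look like in a driving-simulation loop.

    ```python
    # Minimal sketch (assumed AR(1) error, not the authors' fitted model): simulate
    # autocorrelated position-measurement error for a radar sensor model.
    import numpy as np

    def simulate_radar_positions(true_positions, rho=0.95, sigma=0.15, seed=0):
        """Add first-order autoregressive noise e_k = rho*e_{k-1} + w_k to the
        ground-truth positions; rho sets the autocorrelation, sigma the spread."""
        rng = np.random.default_rng(seed)
        err = np.zeros_like(true_positions)
        for k in range(1, len(true_positions)):
            err[k] = rho * err[k - 1] + rng.normal(0.0, sigma * np.sqrt(1 - rho**2),
                                                   size=true_positions.shape[1])
        return true_positions + err

    truth = np.column_stack([np.linspace(0, 50, 500), np.zeros(500)])  # straight path (m)
    measured = simulate_radar_positions(truth)
    print("lag-1 autocorrelation of x-error:",
          np.corrcoef((measured - truth)[:-1, 0], (measured - truth)[1:, 0])[0, 1])
    ```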

  19. The Design of a Chemical Virtual Instrument Based on LabVIEW for Determining Temperatures and Pressures.

    PubMed

    Wang, Wen-Bin; Li, Jang-Yuan; Wu, Qi-Jun

    2007-01-01

    A LabVIEW-based self-constructed chemical virtual instrument (VI) has been developed for determining temperatures and pressures. It can be put together easily and quickly by selecting hardware modules, such as the PCI-DAQ card or serial port method, different kinds of sensors, signal-conditioning circuits or finished chemical instruments, and software modules such as data acquisition, saving, and processing. The VI system provides individual and extremely flexible solutions for automatic measurements in physical chemistry research.

  20. The Design of a Chemical Virtual Instrument Based on LabVIEW for Determining Temperatures and Pressures

    PubMed Central

    Wang, Wen-Bin; Li, Jang-Yuan; Wu, Qi-Jun

    2007-01-01

    A LabVIEW-based self-constructed chemical virtual instrument (VI) has been developed for determining temperatures and pressures. It can be put together easily and quickly by selecting hardware modules, such as the PCI-DAQ card or serial port method, different kinds of sensors, signal-conditioning circuits or finished chemical instruments, and software modules such as data acquisition, saving, and processing. The VI system provides individual and extremely flexible solutions for automatic measurements in physical chemistry research. PMID:17671611

  1. Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.

    PubMed

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-02-21

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of the application of the laboratory in the inter-University Master of Systems Engineering and Automatic Control are presented.

  2. Virtual and Remote Robotic Laboratory Using EJS, MATLAB and Lab VIEW

    PubMed Central

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-01-01

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of the application of the laboratory in the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578

  3. Highly stretchable and wearable graphene strain sensors with controllable sensitivity for human motion monitoring.

    PubMed

    Park, Jung Jin; Hyun, Woo Jin; Mun, Sung Cik; Park, Yong Tae; Park, O Ok

    2015-03-25

    Because of their outstanding electrical and mechanical properties, graphene strain sensors have attracted extensive attention for electronic applications in virtual reality, robotics, medical diagnostics, and healthcare. Although several strain sensors based on graphene have been reported, the stretchability and sensitivity of these sensors remain limited, and also there is a pressing need to develop a practical fabrication process. This paper reports the fabrication and characterization of new types of graphene strain sensors based on stretchable yarns. Highly stretchable, sensitive, and wearable sensors are realized by a layer-by-layer assembly method that is simple, low-cost, scalable, and solution-processable. Because of the yarn structures, these sensors exhibit high stretchability (up to 150%) and versatility, and can detect both large- and small-scale human motions. For this study, wearable electronics are fabricated with implanted sensors that can monitor diverse human motions, including joint movement, phonation, swallowing, and breathing.

  4. The Evolution of Sonic Ecosystems

    NASA Astrophysics Data System (ADS)

    McCormack, Jon

    This chapter describes a novel type of artistic artificial life software environment. Agents that have the ability to make and listen to sound populate a synthetic world. An evolvable, rule-based classifier system drives agent behavior. Agents compete for limited resources in a virtual environment that is influenced by the presence and movement of people observing the system. Electronic sensors create a link between the real and virtual spaces, virtual agents evolve implicitly to try to maintain the interest of the human audience, whose presence provides them with life-sustaining food.

  5. Oversampling in virtual visual sensors as a means to recover higher modes of vibration

    NASA Astrophysics Data System (ADS)

    Shariati, Ali; Schumacher, Thomas

    2015-03-01

    Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information to calibrate finite element models. There are several types of physical sensors that can measure the response over a range of frequencies. For most of those sensors, however, accessibility, limitation of measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording, while having no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras, based on virtual visual sensors (VVS). In our initial study, where we worked with commercially available, inexpensive digital video cameras, we found that for multiple-degree-of-freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling, enabled by the use of high-end, high-frame-rate video cameras, allows recovering all three natural frequencies of a three-story lab-scale structure.
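
    A minimal sketch of the virtual visual sensor idea on synthetic data: a per-frame scalar (in practice the intensity of a pixel patch; here a stand-in signal) is transformed with an FFT and the strongest spectral peaks are read off as natural frequencies. The frame rate and mode frequencies are invented values.

    ```python
    # Sketch: a virtual visual sensor as one scalar per video frame, with natural
    # frequencies read from its spectrum. A real VVS would take `patch_series` from
    # the mean intensity of a pixel patch in each frame.
    import numpy as np

    def natural_frequencies(patch_series, fps, n_peaks=3):
        """Return the n strongest, well-separated spectral peak frequencies (Hz)."""
        sig = patch_series - np.mean(patch_series)
        spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
        freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
        candidates = freqs[np.argsort(spec)[::-1]]
        peaks = []
        for f in candidates:                       # skip bins adjacent to a chosen peak
            if all(abs(f - g) > 0.2 for g in peaks):
                peaks.append(f)
            if len(peaks) == n_peaks:
                break
        return sorted(peaks)

    fps, t = 240.0, np.arange(0, 10, 1 / 240.0)    # high frame rate (oversampling)
    series = (np.sin(2*np.pi*2.1*t) + 0.4*np.sin(2*np.pi*6.8*t) + 0.2*np.sin(2*np.pi*11.5*t))
    print(natural_frequencies(series, fps))        # ~[2.1, 6.8, 11.5] Hz
    ```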

  6. Virtual odors to transmit emotions in virtual agents

    NASA Astrophysics Data System (ADS)

    Delgado-Mata, Carlos; Aylett, Ruth

    2003-04-01

    In this paper we describe an emotional-behavioral architecture. The emotional engine sits at a higher layer than the behavior system and can alter behavior patterns. The engine is designed to simulate Emotionally-Intelligent Agents in a Virtual Environment, where each agent senses its own emotions and other creatures' emotions through a virtual smell sensor, senses obstacles and other moving creatures in the environment, and reacts to them. The architecture consists of an emotion engine, a behavior synthesis system, a motor layer and a library of sensors.

  7. Development of a Locomotion Interface for Portable Virtual Environment Systems Using an Inertial/Magnetic Sensor-Based System and a Ranging Measurement System

    DTIC Science & Technology

    2014-03-01


  8. Secure Autonomous Automated Scheduling (SAAS). Rev. 1.1

    NASA Technical Reports Server (NTRS)

    Walke, Jon G.; Dikeman, Larry; Sage, Stephen P.; Miller, Eric M.

    2010-01-01

    This report describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers, and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the UK-DMC, is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  9. An Architecture for Real-Time Interpretation and Visualization of Structural Sensor Data in a Laboratory Environment

    NASA Technical Reports Server (NTRS)

    Doggett, William; Vazquez, Sixto

    2000-01-01

    A visualization system is being developed out of the need to monitor, interpret, and make decisions based on the information from several thousand sensors during experimental testing to facilitate development and validation of structural health monitoring algorithms. As an added benefit, the system will enable complete real-time sensor assessment of complex test specimens. Complex structural specimens are routinely tested that have hundreds or thousands of sensors. During a test, it is impossible for a single researcher to effectively monitor all the sensors, and consequently interesting phenomena occur that are not recognized until post-test analysis. The ability to detect and alert the researcher to these unexpected phenomena as the test progresses will significantly enhance the understanding and utilization of complex test articles. Utilization is increased by the ability to halt a test when the health monitoring algorithm response is not satisfactory or when an unexpected phenomenon occurs, enabling focused investigation potentially through the installation of additional sensors. Often if the test continues, structural changes make it impossible to reproduce the conditions that exhibited the phenomena. The prohibitive time and costs associated with fabrication, sensoring, and subsequent testing of additional test articles generally make it impossible to further investigate the phenomena. A scalable architecture is described to address the complex computational demands of structural health monitoring algorithm development and laboratory experimental test monitoring. The researcher monitors the test using a photographic-quality 3D graphical model with actual sensor locations identified. In addition, researchers can quickly activate plots displaying time or load versus selected sensor response along with the expected values and predefined limits. The architecture has several key features. First, distributed dissimilar computers may be seamlessly integrated into the information flow. Second, virtual sensors may be defined that are complex functions of existing sensors or other virtual sensors. Virtual sensors represent a calculated value not directly measured by a particular physical instrument. They can be used, for example, to represent the maximum difference in a range of sensors or the calculated buckling load based on the current strains. Third, the architecture enables autonomous response to preconceived events, whereby the system can be configured to suspend or abort a test if a failure is detected in the load introduction system. Fourth, the architecture is designed to allow cooperative monitoring and control of the test progression from multiple stations both remote and local to the test system. To illustrate the architecture, a preliminary implementation is described monitoring the Stitched Composite Wing recently tested at LaRC.
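
    An illustrative sketch (with hypothetical channel names) of two features described above: virtual sensors defined as functions of physical or other virtual sensors, and an autonomous abort rule evaluated against a virtual sensor.

    ```python
    # Illustrative sketch of (1) virtual sensors as functions of existing sensors and
    # (2) an autonomous rule that flags an abort when a limit is exceeded.
    from typing import Callable, Dict

    PHYSICAL: Dict[str, float] = {}                 # latest readings, keyed by channel name
    VIRTUAL: Dict[str, Callable[[], float]] = {}

    def define_virtual(name: str, fn: Callable[[], float]) -> None:
        VIRTUAL[name] = fn

    def read(name: str) -> float:
        return VIRTUAL[name]() if name in VIRTUAL else PHYSICAL[name]

    # e.g. maximum spread across a range of strain gauges (hypothetical channel names)
    define_virtual("strain_spread",
                   lambda: max(PHYSICAL[f"sg{i}"] for i in range(1, 5))
                           - min(PHYSICAL[f"sg{i}"] for i in range(1, 5)))

    def check_abort(limit: float = 500.0) -> bool:
        """Autonomous response: abort if the virtual sensor exceeds its limit."""
        return read("strain_spread") > limit

    PHYSICAL.update({"sg1": 120.0, "sg2": 180.0, "sg3": 640.0, "sg4": 150.0})
    print("abort?", check_abort())
    ```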

  10. The application of smart sensor techniques to a solid-state array multispectral sensor

    NASA Technical Reports Server (NTRS)

    Mcfadin, L. W.

    1978-01-01

    The solid-state array spectroradiometer (SAS) developed at JSC for remote sensing applications is a multispectral sensor which has no moving parts, is virtually maintenance-free, and has the ability to provide data which requires a minimum of processing. The instrument is based on the 42 x 342 element charge injection device (CID) detector. This system allows the combination of spectral scanning and across-track spatial scanning along with its associated digitization electronics into a single detector.

  11. Medicine in long duration space exploration: the role of virtual reality and broad bandwidth telecommunications networks

    NASA Technical Reports Server (NTRS)

    Ross, M. D.

    2001-01-01

    Safety of astronauts during long-term space exploration is a priority for NASA. This paper describes efforts to produce Earth-based models for providing expert medical advice when unforeseen medical emergencies occur on spacecraft. These models are Virtual Collaborative Clinics that reach into remote sites using telecommunications and emerging stereo-imaging and sensor technologies. © 2001 Elsevier Science Ltd. All rights reserved.

  12. TinyONet: A Cache-Based Sensor Network Bridge Enabling Sensing Data Reusability and Customized Wireless Sensor Network Services

    PubMed Central

    Jung, Eui-Hyun; Park, Yong-Jin

    2008-01-01

    In recent years, a few protocol bridge research projects have been announced to enable a seamless integration of Wireless Sensor Networks (WSNs) with the TCP/IP network. These studies have ensured the transparent end-to-end communication between two network sides in the node-centric manner. Researchers expect this integration will trigger the development of various application domains. However, prior research projects have not fully explored some essential features for WSNs, especially the reusability of sensing data and the data-centric communication. To resolve these issues, we suggested a new protocol bridge system named TinyONet. In TinyONet, virtual sensors play roles as virtual counterparts of physical sensors and they dynamically group to form a functional entity called a Slice. Instead of direct interaction with individual physical sensors, each sensor application uses its own WSN service provided by Slices. If a new kind of service is required in TinyONet, the corresponding function can be dynamically added at runtime. Besides data-centric communication, it also supports node-centric communication and synchronous access. In order to show the effectiveness of the system, we implemented TinyONet on an embedded Linux machine and evaluated it with several experimental scenarios. PMID:27873968

  13. Virtual Deformation Control of the X-56A Model with Simulated Fiber Optic Sensors

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2014-01-01

    A robust control law design methodology is presented to stabilize the X-56A model and command its wing shape. The X-56A was purposely designed to experience flutter modes in its flight envelope. The methodology introduces three phases: the controller design phase, the modal filter design phase, and the reference signal design phase. A mu-optimal controller is designed and made robust to speed and parameter variations. A conversion technique is presented for generating sensor strain modes from sensor deformation mode shapes. The sensor modes are utilized for modal filtering and simulating fiber optic sensors for feedback to the controller. To generate appropriate virtual deformation reference signals, rigid-body corrections are introduced to the deformation mode shapes. After successful completion of the phases, virtual deformation control is demonstrated. The wing is deformed and it is shown that angle-of-attack changes occur which could potentially be used to an advantage. The X-56A program must demonstrate active flutter suppression. It is shown that the virtual deformation controller can achieve active flutter suppression on the X-56A simulation model.
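
    A sketch of the modal-filtering step with synthetic matrices (not X-56A data): given strain mode shapes at the fiber-optic sensor stations, the modal coordinates are recovered from strain readings by a least-squares fit.

    ```python
    # Sketch of modal filtering: with strain mode shapes Phi (sensors x modes),
    # strain readings eps ~ Phi @ q, so modal coordinates q are recovered by a
    # least-squares (pseudo-inverse) fit. Numbers are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_modes = 16, 3
    Phi = rng.normal(size=(n_sensors, n_modes))      # strain mode shapes at FOS stations
    q_true = np.array([0.8, -0.3, 0.1])              # true modal coordinates
    eps = Phi @ q_true + rng.normal(0, 0.01, n_sensors)   # simulated FOS strain readings

    q_hat, *_ = np.linalg.lstsq(Phi, eps, rcond=None)     # modal filter output
    print("estimated modal coordinates:", np.round(q_hat, 3))
    ```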

  14. Virtual Deformation Control of the X-56A Model with Simulated Fiber Optic Sensors

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander Wong

    2013-01-01

    A robust control law design methodology is presented to stabilize the X-56A model and command its wing shape. The X-56A was purposely designed to experience flutter modes in its flight envelope. The methodology introduces three phases: the controller design phase, the modal filter design phase, and the reference signal design phase. A mu-optimal controller is designed and made robust to speed and parameter variations. A conversion technique is presented for generating sensor strain modes from sensor deformation mode shapes. The sensor modes are utilized for modal filtering and simulating fiber optic sensors for feedback to the controller. To generate appropriate virtual deformation reference signals, rigid-body corrections are introduced to the deformation mode shapes. After successful completion of the phases, virtual deformation control is demonstrated. The wing is deformed and it is shown that angle-of-attack changes occur which could potentially be used to an advantage. The X-56A program must demonstrate active flutter suppression. It is shown that the virtual deformation controller can achieve active flutter suppression on the X-56A simulation model.

  15. An Improved Co-evolutionary Particle Swarm Optimization for Wireless Sensor Networks with Dynamic Deployment

    PubMed Central

    Wang, Xue; Wang, Sheng; Ma, Jun-Jie

    2007-01-01

    The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes will hinder the movement of mobile sensor nodes. Particle swarm optimization (PSO) is introduced as another dynamic deployment algorithm, but in this case the computation time required is the big bottleneck. This paper proposes a dynamic deployment algorithm which is named “virtual force directed co-evolutionary particle swarm optimization” (VFCPSO), since this algorithm combines the co-evolutionary particle swarm optimization (CPSO) with the VF algorithm, whereby the CPSO uses multiple swarms to optimize different components of the solution vectors for dynamic deployment cooperatively and the velocity of each particle is updated according to not only the historical local and global optimal solutions, but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.
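
    The abstract does not give the exact update rule or weights; the sketch below shows one plausible form of a velocity update that adds a virtual-force term to the standard PSO cognitive and social terms (the co-evolutionary, multi-swarm structure of VFCPSO is omitted).

    ```python
    # Hedged sketch of a virtual-force-augmented PSO velocity update (one plausible
    # form, not the paper's exact weighting).
    import numpy as np

    def vfcpso_velocity(v, x, p_best, g_best, virtual_force,
                        w=0.7, c1=1.5, c2=1.5, c3=0.5, seed=None):
        """One velocity update for a particle encoding a 2-D node position."""
        rng = np.random.default_rng(seed)
        r1, r2, r3 = rng.random(3)
        return (w * v
                + c1 * r1 * (p_best - x)          # cognitive term
                + c2 * r2 * (g_best - x)          # social term
                + c3 * r3 * virtual_force)        # attraction/repulsion from neighbor nodes

    v = np.zeros(2)
    x, p_best, g_best = np.array([3.0, 4.0]), np.array([3.5, 4.2]), np.array([5.0, 5.0])
    force = np.array([-0.4, 0.1])                 # net virtual force on this node
    print(vfcpso_velocity(v, x, p_best, g_best, force, seed=1))
    ```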

  16. Design of virtual three-dimensional instruments for sound control

    NASA Astrophysics Data System (ADS)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real- time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co- articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.

  17. Sensor Webs and Virtual Globes: Enabling Understanding of Changes in a partially Glaciated Watershed

    NASA Astrophysics Data System (ADS)

    Heavner, M.; Fatland, D. R.; Habermann, M.; Berner, L.; Hood, E.; Connor, C.; Galbraith, J.; Knuth, E.; O'Brien, W.

    2008-12-01

    The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). SEAMONSTER is operating in the partially glaciated Mendenhall and Lemon Creek watersheds, in the Juneau area, on the margins of the Juneau Icefield. These watersheds are studied both for (1) long-term monitoring of changes and (2) detection and analysis of transient events (such as glacier lake outburst floods). The heterogeneous sensors (meteorological, dual-frequency GPS, water quality, lake level, etc.), power and bandwidth constraints, and competing time scales of interest require autonomous reactivity of the sensor web. They also present challenges for operational management of the sensor web. The harsh conditions on the glaciers provide additional operating constraints. The tight integration of the sensor web and virtual globe technology enhances the project in multiple ways. We are utilizing virtual globe infrastructures to enhance both sensor web management and data access. SEAMONSTER utilizes virtual globes for education and public outreach, sensor web management, data dissemination, and enabling collaboration. Using a PostgreSQL database with GIS extensions coupled to GeoServer, we generate near-real-time, auto-updating geobrowser files of the data in multiple Open Geospatial Consortium (OGC) standard formats (e.g., KML, WCS). Additionally, embedding wiki pages in this database allows the development of a geospatially aware wiki describing the projects for better public outreach and education. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers and virtual globes have made this project possible.

  18. Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living

    PubMed Central

    Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.

    2015-01-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612

  19. Assessing upper extremity motor function in practice of virtual activities of daily living.

    PubMed

    Adams, Richard J; Lichter, Matthew D; Krepkovich, Eileen T; Ellington, Allison; White, Marga; Diamond, Paul T

    2015-03-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user's avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman's rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs.

  20. Integrating Flexible Sensor and Virtual Self-Organizing DC Grid Model With Cloud Computing for Blood Leakage Detection During Hemodialysis.

    PubMed

    Huang, Ping-Tzan; Jong, Tai-Lang; Li, Chien-Ming; Chen, Wei-Ling; Lin, Chia-Hung

    2017-08-01

    Blood leakage and blood loss are serious complications during hemodialysis. Hemodialysis survey reports show that these life-threatening events still occur and demand the attention of nephrology nurses and of patients themselves. When the venous needle and blood line are disconnected, it takes only a few minutes for an adult patient to lose over 40% of his or her blood, a blood loss sufficient to cause death. Therefore, we propose integrating a flexible sensor and a self-organizing algorithm to design a cloud computing-based warning device for blood leakage detection. The flexible sensor is fabricated via a screen-printing technique using metallic materials on a soft substrate in an array configuration. The self-organizing algorithm constructs a virtual direct-current grid-based alarm unit in an embedded system. This warning device is employed to identify blood leakage levels via a wireless network and cloud computing. It has been validated experimentally, and the experimental results suggest specifications for its commercial designs. The proposed model can also be implemented in an embedded system.

  1. A method to align the coordinate system of accelerometers to the axes of a human body: The depitch algorithm.

    PubMed

    Gietzelt, Matthias; Schnabel, Stephan; Wolf, Klaus-Hendrik; Büsching, Felix; Song, Bianying; Rust, Stefan; Marschollek, Michael

    2012-05-01

    One of the key problems in accelerometry-based gait analyses is that it may not be possible to attach an accelerometer to the lower trunk so that its axes are perfectly aligned with the axes of the subject. In this paper we present an algorithm designed to virtually align the axes of the accelerometer with the axes of the subject during walking sections. This algorithm is based on a physically reasonable approach and built for measurements in unsupervised settings, where the test persons apply the sensors themselves. For evaluation purposes we conducted a study with 6 healthy subjects and measured their gait with a manually aligned and a skewed accelerometer attached to the subject's lower trunk. After applying the algorithm, the intra-axis correlation between both sensors was on average 0.89±0.1 with a mean absolute error of 0.05g. We concluded that the algorithm was able to virtually adjust the skewed sensor node to the coordinate system of the subject. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
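
    The following numpy sketch illustrates one simple way to virtually re-align a skewed tri-axial accelerometer using the mean gravity direction; it illustrates the general idea only, not the authors' depitch algorithm, and the synthetic data and axis conventions are assumptions.

        # Minimal sketch (not the authors' depitch algorithm): re-align a skewed
        # tri-axial accelerometer so that the mean gravity vector observed during
        # quiet standing/walking maps onto the subject's vertical axis.
        import numpy as np

        def alignment_rotation(acc_samples, vertical=np.array([0.0, 0.0, 1.0])):
            """Rotation matrix mapping the measured mean gravity direction onto `vertical`."""
            g = acc_samples.mean(axis=0)
            a = g / np.linalg.norm(g)
            b = vertical / np.linalg.norm(vertical)
            v = np.cross(a, b)
            s, c = np.linalg.norm(v), np.dot(a, b)
            if s < 1e-9:          # already (anti)parallel; this sketch ignores the 180-degree case
                return np.eye(3)
            vx = np.array([[0, -v[2], v[1]],
                           [v[2], 0, -v[0]],
                           [-v[1], v[0], 0]])
            return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)   # Rodrigues' formula

        # Usage: rotate every sample of the skewed sensor into the body frame.
        acc = np.random.randn(500, 3) * 0.05 + np.array([0.2, 0.1, 0.97])  # synthetic data, in g
        R = alignment_rotation(acc)
        acc_aligned = acc @ R.T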

  2. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things.

    PubMed

    Yi, Meng; Chen, Qingkui; Xiong, Neal N

    2016-11-03

    This paper considers the distributed access and control problem of massive wireless sensor networks' data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination scheme between the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes in terms of enhancing the accessibility of service requests, reducing network delay, and achieving higher load-balancing capacity and resource utilization.

  3. Self-localization of wireless sensor networks using self-organizing maps

    NASA Astrophysics Data System (ADS)

    Ertin, Emre; Priddy, Kevin L.

    2005-03-01

    Recently there has been a renewed interest in the notion of deploying large numbers of networked sensors for applications ranging from environmental monitoring to surveillance. In a typical scenario a number of sensors are distributed in a region of interest. Each sensor is equipped with sensing, processing and communication capabilities. The information gathered from the sensors can be used to detect, track and classify objects of interest. For a number of applications the sensors' locations are crucial in interpreting the data collected from them. Scalability requirements dictate sensor nodes that are inexpensive devices without dedicated localization hardware such as GPS. Therefore the network has to rely on information collected within the network to self-localize. In the literature a number of algorithms have been proposed for network localization which use measurements informative of range, angle, or proximity between nodes. Recent work by Patwari and Hero relies on sensor data without explicit range estimates. The assumption is that the correlation structure in the data is a monotone function of the inter-sensor distances. In this paper we propose a new method based on unsupervised learning techniques to extract location information from the sensor data itself. We consider a grid of virtual nodes and fit the grid to the actual sensor network data using the method of self-organizing maps. Known sensor network geometry can then be used to rotate and scale the grid to a global coordinate system. Finally, we illustrate how the virtual nodes' location information can be used to track a target.
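
    A minimal self-organizing-map sketch of the fitting step is given below; the training loop, grid size and the use of inter-sensor correlation vectors as input "signatures" are illustrative assumptions rather than the authors' exact formulation.

        # Illustrative sketch only: fit a grid of virtual nodes to sensor
        # "signature" vectors with a self-organizing map, so that neighbouring
        # grid cells end up representing nearby sensors.
        import numpy as np

        rng = np.random.default_rng(0)

        def train_som(data, grid_w=8, grid_h=8, iters=2000, lr0=0.5, sigma0=3.0):
            n_features = data.shape[1]
            weights = rng.random((grid_w, grid_h, n_features))
            gx, gy = np.meshgrid(np.arange(grid_w), np.arange(grid_h), indexing="ij")
            for t in range(iters):
                lr = lr0 * np.exp(-t / iters)
                sigma = sigma0 * np.exp(-t / iters)
                x = data[rng.integers(len(data))]
                d = np.linalg.norm(weights - x, axis=2)           # distance to every unit
                bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
                h = np.exp(-((gx - bi) ** 2 + (gy - bj) ** 2) / (2 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)      # pull the neighbourhood toward x
            return weights

        # Each row is one sensor's signature (e.g., correlations with every other sensor).
        signatures = rng.random((50, 50))
        som = train_som(signatures)
        # Each sensor is then assigned the grid coordinates of its best-matching unit,
        # and the grid is rotated/scaled to a global frame using known anchor positions.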

  4. Virtual Instrumentation for a Fiber-Optics-Based Artificial Nerve

    NASA Technical Reports Server (NTRS)

    Lyons, Donald R.; Kyaw, Thet Mon; Griffin, DeVon (Technical Monitor)

    2001-01-01

    A LabVIEW-based computer interface for fiber-optic artificial nerves has been devised as a Master's thesis project. This project involves the use of outputs from wavelength-multiplexed optical fiber sensors (artificial nerves), which are capable of producing dense optical data outputs for physical measurements. The potential advantage of using optical fiber sensors for sensory function restoration is that well-defined WDM-modulated signals can be transmitted to and from the sensing region, allowing networked units to replace low-level nerve functions for persons desirous of "intelligent artificial limbs." Various FO sensors can be designed with high sensitivity and the ability to be interfaced with a wide range of devices, including miniature shielded electrical conversion units. Our Virtual Instrument (VI) interface software package was developed using LabVIEW (Laboratory Virtual Instrument Engineering Workbench). The virtual instrument has been configured to arrange and encode the data to develop an intelligent response in the form of encoded digitized signal outputs. The architectural layout of our nervous system is such that different touch stimuli from different artificial fiber-optic nerve points correspond to gratings of a distinct resonant wavelength and physical location along the optical fiber. Thus, when an automated, tunable diode laser scans the wavelength spectrum of the artificial nerve, it triggers responses that are encoded with different touch stimuli by way of wavelength shifts in the reflected Bragg resonances. The reflected light is detected and the resulting analog signal is fed into an ADC1 board and a DAQ card. Finally, the software has been written such that the experimenter is able to set the response range during data acquisition.

  5. Sea-Based Automated Launch and Recovery System Virtual Testbed

    DTIC Science & Technology

    2013-12-02

    The virtual testbed is integrated with an Extended Kalman Filter to study sensor fusion in a fixed-wing aircraft shipboard recovery scenario. Sensor and filter performance are graded both on pure estimation error and by examining the touchdown performance of the aircraft on the ship. The u, v, and w body-axis velocity components of the aircraft, together with the velocities applied to the extremities, are used to calculate estimated rotational rates.

  6. Modular mechatronic system for stationary bicycles interfaced with virtual environment for rehabilitation

    PubMed Central

    2014-01-01

    Background Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges with implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirement for efficacious cycling; specifically recruitment of both extremities and exercising at a high intensity. Methods In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially-available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition processes. The complete bicycle system includes: a) handle bars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate; and d) customized software for rehabilitation. Bench testing for the handle and pedal systems is presented for calibration of the sensors detecting force and angle. Results The modular mechatronic kit for exercise bicycles was evaluated in bench tests and human tests. Bench tests performed on the sensorized handle bars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the rider’s lower extremities. Conclusions The VRACK system, a modular virtual reality mechatronic bicycle rehabilitation system, was designed to convert most bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system was successful in demonstrating that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders. PMID:24902780

  7. Design of an Intelligent Front-End Signal Conditioning Circuit for IR Sensors

    NASA Astrophysics Data System (ADS)

    de Arcas, G.; Ruiz, M.; Lopez, J. M.; Gutierrez, R.; Villamayor, V.; Gomez, L.; Montojo, Mª. T.

    2008-02-01

    This paper presents the design of an intelligent front-end signal conditioning system for IR sensors. The system has been developed as an interface between a PbSe IR sensor matrix and a TMS320C67x digital signal processor. The system architecture ensures its scalability, so it can be used for sensors with different matrix sizes. It includes an integrator-based signal conditioning circuit, a data acquisition converter block, and an FPGA-based advanced control block that permits the inclusion of high-level image pre-processing routines, such as faulty pixel detection and sensor calibration, in the signal conditioning front-end. During the design phase, virtual instrumentation technologies proved to be a very valuable prototyping tool when choosing the best A/D converter type for the application. Development time was significantly reduced due to the use of this technology.

  8. Physical environment virtualization for human activities recognition

    NASA Astrophysics Data System (ADS)

    Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2015-05-01

    Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.

  9. Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.

    PubMed

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo

    2014-05-19

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
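
    As a rough illustration of such a virtual sensor, the sketch below trains a small recurrent network to map spindle-power sequences to roughness and wear estimates; the paper uses a Layer-Recurrent network, so the PyTorch model, synthetic data and hyperparameters here are assumptions of this sketch, not the authors' implementation.

        # Rough sketch of the idea (not the authors' MATLAB Layer-Recurrent model):
        # a small recurrent network maps a sequence of spindle-power samples to
        # estimates of surface roughness and wheel wear. Data below are synthetic.
        import torch
        import torch.nn as nn

        class VirtualGrindingSensor(nn.Module):
            def __init__(self, hidden=32):
                super().__init__()
                self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, 2)      # [surface roughness Ra, wheel wear]

            def forward(self, power_seq):             # power_seq: (batch, time, 1)
                out, _ = self.rnn(power_seq)
                return self.head(out[:, -1, :])       # estimate from the last time step

        model = VirtualGrindingSensor()
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        power = torch.randn(64, 100, 1)               # synthetic spindle-power windows
        targets = torch.randn(64, 2)                  # synthetic [Ra, wear] labels
        for _ in range(200):
            optimiser.zero_grad()
            loss = loss_fn(model(power), targets)
            loss.backward()
            optimiser.step()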

  10. Vehicle Lateral State Estimation Based on Measured Tyre Forces

    PubMed Central

    Tuononen, Ari J.

    2009-01-01

    Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535
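
    A minimal sketch of a force-driven lateral-state Kalman filter is shown below; the single-track model, parameter values and the use of a yaw-rate gyro as the correction measurement are assumptions made for illustration, not the paper's exact estimator.

        # Minimal sketch: a linear Kalman filter for the single-track (bicycle) model,
        # driven by measured front/rear lateral tyre forces and corrected with a
        # yaw-rate gyro. All vehicle parameters below are placeholders.
        import numpy as np

        m, Iz, a, b, u = 1500.0, 2500.0, 1.2, 1.4, 20.0   # mass, yaw inertia, axle distances, speed
        dt = 0.01

        # State x = [lateral velocity v_y, yaw rate r]; input = measured [F_yf, F_yr]
        A = np.eye(2) + dt * np.array([[0.0, -u],
                                       [0.0,  0.0]])
        B = dt * np.array([[1.0 / m, 1.0 / m],
                           [a / Iz, -b / Iz]])
        H = np.array([[0.0, 1.0]])                        # gyro measures yaw rate only
        Q = np.diag([1e-3, 1e-4])
        R = np.array([[1e-4]])

        x = np.zeros(2)
        P = np.eye(2)

        def kf_step(x, P, forces, gyro_r):
            # Predict with the force-driven single-track model
            x = A @ x + B @ forces
            P = A @ P @ A.T + Q
            # Correct with the yaw-rate measurement
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([gyro_r]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
            return x, P

        x, P = kf_step(x, P, forces=np.array([800.0, 600.0]), gyro_r=0.05)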

  11. Force Sensitive Handles and Capacitive Touch Sensor for Driving a Flexible Haptic-Based Immersive System

    PubMed Central

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-01-01

    In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680

  12. A source-attractor approach to network detection of radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Barry, M. L.; Grieme, M.

    Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike the localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
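
    The following sketch illustrates one possible reading of the source-attractor idea; the pulling rule, threshold and synthetic data are chosen purely for illustration and are not taken from the paper.

        # Illustrative interpretation only: each detector contributes a virtual point
        # that is pulled toward a candidate source in proportion to its excess count
        # rate; detection is declared when the pulled points cluster more tightly.
        import numpy as np

        def mean_pairwise_distance(points):
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            return d[np.triu_indices(len(points), k=1)].mean()

        def srd_score(detector_xy, counts, background, candidate_xy, gain=0.5):
            """Clustering increase of virtual points after pulling them toward the candidate."""
            excess = np.clip(counts - background, 0.0, None)
            pull = gain * excess / (excess.max() + 1e-9)          # 0..gain per detector
            shifted = detector_xy + pull[:, None] * (candidate_xy - detector_xy)
            return mean_pairwise_distance(detector_xy) - mean_pairwise_distance(shifted)

        rng = np.random.default_rng(1)
        detectors = rng.uniform(0, 100, size=(12, 2))
        bg = np.full(12, 50.0)
        counts = bg + 200.0 / (1.0 + np.linalg.norm(detectors - [40, 60], axis=1))  # synthetic source
        score = srd_score(detectors, counts, bg, candidate_xy=np.array([40.0, 60.0]))
        alarm = score > 5.0                                        # threshold chosen for illustration only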

  13. Force sensitive handles and capacitive touch sensor for driving a flexible haptic-based immersive system.

    PubMed

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-10-09

    In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape.

  14. A Self-Referenced Optical Intensity Sensor Network Using POFBGs for Biomedical Applications

    PubMed Central

    Moraleda, Alberto Tapetado; Montero, David Sánchez; Webb, David J.; García, Carmen Vázquez

    2014-01-01

    This work bridges the gap between the remote interrogation of multiple optical sensors and the advantages of using inherently biocompatible low-cost polymer optical fiber (POF)-based photonic sensing. A novel hybrid sensor network combining both silica fiber Bragg gratings (FBG) and polymer FBGs (POFBG) is analyzed. The topology is compatible with WDM networks so multiple remote sensors can be addressed providing high scalability. A central monitoring unit with virtual data processing is implemented, which could be remotely located up to units of km away. The feasibility of the proposed solution for potential medical environments and biomedical applications is shown. PMID:25615736

  15. A self-referenced optical intensity sensor network using POFBGs for biomedical applications.

    PubMed

    Tapetado Moraleda, Alberto; Sánchez Montero, David; Webb, David J; Vázquez García, Carmen

    2014-12-12

    This work bridges the gap between the remote interrogation of multiple optical sensors and the advantages of using inherently biocompatible low-cost polymer optical fiber (POF)-based photonic sensing. A novel hybrid sensor network combining both silica fiber Bragg gratings (FBG) and polymer FBGs (POFBG) is analyzed. The topology is compatible with WDM networks so multiple remote sensors can be addressed providing high scalability. A central monitoring unit with virtual data processing is implemented, which could be remotely located up to units of km away. The feasibility of the proposed solution for potential medical environments and biomedical applications is shown.

  16. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes an exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.

  17. Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite

    DTIC Science & Technology

    2016-09-01

    The work develops an aerial platform for subsequent visual sensor integration. Subject terms: autonomous system; quadrotors; direct method; inverse dynamics in the virtual domain (IDVD); ground control station; Global Positioning System (GPS); integer linear program (ILP); inertial navigation system (INS). The report covers the controller architecture and inverse dynamics in the virtual domain.

  18. Digital Signal Processing by Virtual Instrumentation of a MEMS Magnetic Field Sensor for Biomedical Applications

    PubMed Central

    Juárez-Aguirre, Raúl; Domínguez-Nicolás, Saúl M.; Manjarrez, Elías; Tapia, Jesús A.; Figueras, Eduard; Vázquez-Leal, Héctor; Aguilera-Cortés, Luz A.; Herrera-May, Agustín L.

    2013-01-01

    We present a signal processing system with virtual instrumentation of a MEMS sensor to detect magnetic flux density for biomedical applications. This system consists of a magnetic field sensor, electronic components implemented on a printed circuit board (PCB), a data acquisition (DAQ) card, and a virtual instrument. It allows the development of a semi-portable prototype with the capacity to filter small electromagnetic interference signals through digital signal processing. The virtual instrument includes an algorithm to implement different configurations of infinite impulse response (IIR) filters. The PCB contains a precision instrumentation amplifier, a demodulator, a low-pass filter (LPF) and a buffer with operational amplifier. The proposed prototype is used for real-time non-invasive monitoring of magnetic flux density in the thoracic cage of rats. The response of the rat respiratory magnetogram displays a similar behavior as the rat electromyogram (EMG). PMID:24196434
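
    As a generic illustration of the IIR filtering such a virtual instrument performs, the sketch below applies a low-pass Butterworth IIR filter to a synthetic, noise-contaminated signal; the filter order, cut-off and sampling rate are arbitrary assumptions, not the paper's settings.

        # Generic sketch of IIR filtering of a weak sensor signal (parameters assumed).
        import numpy as np
        from scipy import signal

        fs = 1000.0                                   # sampling rate, Hz (assumed)
        b, a = signal.butter(N=4, Wn=20.0, btype="low", fs=fs)   # 4th-order low-pass IIR

        t = np.arange(0, 2.0, 1.0 / fs)
        clean = 1e-6 * np.sin(2 * np.pi * 2.0 * t)    # slow "respiratory" component
        noisy = clean + 2e-7 * np.sin(2 * np.pi * 60.0 * t) + 1e-7 * np.random.randn(t.size)

        filtered = signal.filtfilt(b, a, noisy)       # zero-phase filtering of the sensor signal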

  19. Digital signal processing by virtual instrumentation of a MEMS magnetic field sensor for biomedical applications.

    PubMed

    Juárez-Aguirre, Raúl; Domínguez-Nicolás, Saúl M; Manjarrez, Elías; Tapia, Jesús A; Figueras, Eduard; Vázquez-Leal, Héctor; Aguilera-Cortés, Luz A; Herrera-May, Agustín L

    2013-11-05

    We present a signal processing system with virtual instrumentation of a MEMS sensor to detect magnetic flux density for biomedical applications. This system consists of a magnetic field sensor, electronic components implemented on a printed circuit board (PCB), a data acquisition (DAQ) card, and a virtual instrument. It allows the development of a semi-portable prototype with the capacity to filter small electromagnetic interference signals through digital signal processing. The virtual instrument includes an algorithm to implement different configurations of infinite impulse response (IIR) filters. The PCB contains a precision instrumentation amplifier, a demodulator, a low-pass filter (LPF) and a buffer with operational amplifier. The proposed prototype is used for real-time non-invasive monitoring of magnetic flux density in the thoracic cage of rats. The response of the rat respiratory magnetogram displays a similar behavior as the rat electromyogram (EMG).

  20. Virtual Induction Loops Based on Cooperative Vehicular Communications

    PubMed Central

    Gramaglia, Marco; Bernardos, Carlos J.; Calderon, Maria

    2013-01-01

    Induction loop detectors have become the most utilized sensors in traffic management systems. The gathered traffic data is used to improve traffic efficiency (i.e., warning users about congested areas or planning new infrastructures). Despite their usefulness, their deployment and maintenance costs are expensive. Vehicular networks are an emerging technology that can support novel strategies for ubiquitous and more cost-effective traffic data gathering. In this article, we propose and evaluate VIL (Virtual Induction Loop), a simple and lightweight traffic monitoring system based on cooperative vehicular communications. The proposed solution has been experimentally evaluated through simulation using real vehicular traces. PMID:23348033

  1. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites

    PubMed Central

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-01-01

    This paper attempts to construct a resilient sensor network model with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation, because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors or missing data during satellite operation. In the proposed model, the missing data of a sensor are interpolated from other, associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues a diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures or suspensions caused by hardware faults or other technical reasons. PMID:27092508
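
    A simplified sketch of the interpolation idea follows; the least-squares weighting of associated sensors and the synthetic data are assumptions made for illustration, not the paper's dynamic relational network.

        # Simplified sketch: when a sensor drops out, a "virtual sensor" predicts its
        # value from associated sensors using weights learned from historical data.
        import numpy as np

        def fit_virtual_sensor(history_others, history_target):
            """Least-squares weights (with bias) predicting the target sensor from the others."""
            X = np.column_stack([history_others, np.ones(len(history_others))])
            w, *_ = np.linalg.lstsq(X, history_target, rcond=None)
            return w

        def virtual_reading(current_others, w):
            return np.append(current_others, 1.0) @ w

        rng = np.random.default_rng(0)
        others = rng.normal(size=(500, 3))                          # three associated sensors
        target = others @ np.array([0.6, -0.2, 0.3]) + 0.1 + 0.05 * rng.normal(size=500)
        w = fit_virtual_sensor(others, target)
        estimate = virtual_reading(np.array([0.4, -1.0, 0.2]), w)   # used while the real sensor is down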

  2. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites.

    PubMed

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-04-15

    This paper attempts to construct a resilient sensor network model with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation, because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors or missing data during satellite operation. In the proposed model, the missing data of a sensor are interpolated from other, associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues a diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures or suspensions caused by hardware faults or other technical reasons.

  3. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things

    PubMed Central

    Yi, Meng; Chen, Qingkui; Xiong, Neal N.

    2016-01-01

    This paper considers the distributed access and control problem of massive wireless sensor networks’ data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination scheme between the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes in terms of enhancing the accessibility of service requests, reducing network delay, and achieving higher load-balancing capacity and resource utilization. PMID:27827878

  4. An Improved Method of Pose Estimation for Lighthouse Base Station Extension.

    PubMed

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-10-22

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, the cutting-edge space positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are flawed with respect to occlusion of moving targets; that is, they are unable to calculate poses with a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object with a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm can achieve precise positioning when only a few sensors detect the signal.
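
    The sketch below illustrates one way to pose the "unified dataset" idea as a single nonlinear least-squares problem over sweep-angle observations pooled from all base stations; the geometry, angle model and numbers are placeholders, and this is not HTC's or the authors' actual algorithm.

        # Sketch under stated assumptions: estimate the tracked object's pose from
        # pooled (base station, sensor) sweep-angle observations via least squares.
        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def predicted_angles(p_world, R_bs, t_bs):
            """Azimuth/elevation of a world point as seen from a base station (z forward)."""
            p = R_bs.T @ (p_world - t_bs)
            return np.array([np.arctan2(p[0], p[2]), np.arctan2(p[1], p[2])])

        def residuals(pose, obs, sensors_local):
            R_obj = Rotation.from_rotvec(pose[:3]).as_matrix()
            t_obj = pose[3:]
            res = []
            for (sensor_id, R_bs, t_bs, meas) in obs:    # one entry per (base station, sensor) hit
                p_world = R_obj @ sensors_local[sensor_id] + t_obj
                res.extend(predicted_angles(p_world, R_bs, t_bs) - meas)
            return np.asarray(res)

        # Known photodiode positions on the tracked object (object frame), placeholders:
        sensors_local = np.array([[0.05, 0.0, 0.0], [-0.05, 0.0, 0.0], [0.0, 0.05, 0.02]])
        # Whichever sensors each base station saw; three hits in total suffice here:
        obs = [
            (0, np.eye(3), np.array([0.0, 0.0, -2.0]), np.array([0.02, 0.01])),
            (1, np.eye(3), np.array([0.0, 0.0, -2.0]), np.array([-0.03, 0.01])),
            (2, Rotation.from_euler("y", 30, degrees=True).as_matrix(),
                np.array([1.5, 0.0, -1.5]), np.array([0.01, 0.02])),
        ]
        sol = least_squares(residuals, x0=np.zeros(6), args=(obs, sensors_local))
        pose_rotvec, pose_t = sol.x[:3], sol.x[3:]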

  5. An Improved Method of Pose Estimation for Lighthouse Base Station Extension

    PubMed Central

    Yang, Yi; Weng, Dongdong; Li, Dong; Xun, Hang

    2017-01-01

    In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, the cutting-edge space positioning technology. Although Lighthouse is superior in terms of accuracy, latency and refresh rate, its algorithms do not support base station expansion and are flawed with respect to occlusion of moving targets; that is, they are unable to calculate poses with a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases where occlusion is involved. Our algorithm calculates the pose of a given object with a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors detect a signal in total, no matter from which base station. To verify our algorithm, HTC official base stations and autonomously developed receivers are used for prototyping. The experimental results show that our pose calculation algorithm can achieve precise positioning when only a few sensors detect the signal. PMID:29065509

  6. A Plug-and-Play Human-Centered Virtual TEDS Architecture for the Web of Things.

    PubMed

    Hernández-Rojas, Dixys L; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Escudero, Carlos J

    2018-06-27

    This article presents a Virtual Transducer Electronic Data Sheet (VTEDS)-based framework for the development of intelligent sensor nodes with plug-and-play capabilities in order to contribute to the evolution of the Internet of Things (IoT) toward the Web of Things (WoT). It makes use of new lightweight protocols that allow sensors to self-describe, auto-calibrate, and auto-register. Such protocols enable the development of novel IoT solutions while guaranteeing low latency, low power consumption, and the required Quality of Service (QoS). Thanks to the developed human-centered tools, it is possible to dynamically configure and modify IoT device firmware, managing the active transducers and their communication protocols in an easy and intuitive way, without requiring any prior programming knowledge. In order to evaluate the performance of the system, it was tested with Bluetooth Low Energy (BLE) and Ethernet-based smart sensors in different scenarios. Specifically, user experience was quantified empirically (i.e., how fast the system shows collected data to a user was measured). The obtained results show that the proposed VTEDS architecture is very fast, with some smart sensors (located in Europe) able to self-register and self-configure in a remote cloud (in South America) in less than 3 s and to display data to remote users in less than 2 s.
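
    As a purely illustrative sketch of what a virtual TEDS self-description might look like, the snippet below builds and serializes such a record; the field names, values and helper function are hypothetical and not the paper's actual schema.

        # Hedged sketch: a hypothetical virtual-TEDS self-description that a smart
        # sensor could publish on registration so the platform can auto-configure it.
        import json

        virtual_teds = {
            "manufacturer_id": "0xACME",          # hypothetical identifiers
            "model": "TH-100",
            "serial": "000123",
            "transducers": [
                {
                    "channel": 0,
                    "type": "temperature",
                    "unit": "degC",
                    "range": [-40.0, 125.0],
                    "calibration": {"kind": "linear", "gain": 0.0625, "offset": -25.0},
                    "sample_rate_hz": 1.0,
                }
            ],
            "transport": {"protocol": "BLE", "service_uuid": "0000aaaa-..."},   # placeholder
        }

        payload = json.dumps(virtual_teds)        # sent to the registration endpoint or broker

        def raw_to_engineering(raw, cal):
            """Apply the linear calibration advertised in the virtual TEDS."""
            return cal["gain"] * raw + cal["offset"]

        print(raw_to_engineering(800, virtual_teds["transducers"][0]["calibration"]))  # 25.0 degC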

  7. A Prototype Land Information Sensor Web: Design, Implementation and Implication for the SMAP Mission

    NASA Astrophysics Data System (ADS)

    Su, H.; Houser, P.; Tian, Y.; Geiger, J. K.; Kumar, S. V.; Gates, L.

    2009-12-01

    Land Surface Model (LSM) predictions are regular in time and space, but these predictions are influenced by errors in model structure, input variables, parameters and inadequate treatment of sub-grid scale spatial variability. Consequently, LSM predictions are significantly improved through observation constraints made in a data assimilation framework. Several multi-sensor satellites are currently operating which provide multiple global observations of the land surface and its related near-atmospheric properties. However, these observations are not optimal for addressing current and future land surface environmental problems. To meet future earth system science challenges, NASA will develop constellations of smart satellites in sensor web configurations which provide timely on-demand data and analysis to users, and can be reconfigured based on the changing needs of science and available technology. A sensor web is more than a collection of satellite sensors; rather, it is a system composed of multiple platforms interconnected by a communication network for the purpose of performing specific observations and processing the data required to support specific science goals. Sensor webs can eclipse the value of disparate sensor components by reducing response time and increasing scientific value, especially when two-way interaction between the model and the sensor web is enabled. The prototype Land Information Sensor Web (LISW) study, sponsored by NASA, aims to integrate the Land Information System (LIS) in a sensor web framework which allows for optimal two-way information flow that enhances land surface modeling using sensor web observations and, in turn, allows sensor web reconfiguration to minimize overall system uncertainty. This prototype is based on a simulated interactive sensor web, which is then used to exercise and optimize the sensor web modeling interfaces. The Land Information Sensor Web Service-Oriented Architecture (LISW-SOA) has been developed, and it is the first sensor web framework developed especially for land surface studies. Synthetic experiments based on the LISW-SOA and the virtual sensor web provide a controlled environment in which to examine the end-to-end performance of the prototype, the impact of various sensor web design trade-offs, and the eventual value of sensor webs for a particular prediction or decision support. In this paper, the design and implementation of the LISW-SOA and its implications for the Soil Moisture Active and Passive (SMAP) mission are presented. Particular attention is focused on examining the relationship between the economic investment in a sensor web (space- and air-borne, ground-based) and the accuracy of the model-predicted soil moisture that can be achieved by using such sensor observations. The study of the virtual Land Information Sensor Web (LISW) is expected to provide necessary a priori knowledge for designing and deploying the next-generation Global Earth Observation System of Systems (GEOSS).

  8. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate the objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts the files to a maximum of 1020 objects and does not offer advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  9. Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process

    PubMed Central

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo

    2014-01-01

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055

  10. Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System.

    PubMed

    Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama

    2017-01-01

    Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of fall. Conventional balance training, though powerful, suffers from scarcity of trained therapists, frequent visits to clinics to get therapy, one-on-one therapy sessions, and monotony of repetitive exercise tasks. Thus, technology-assisted balance rehabilitation can be an alternative solution. Here, we chose virtual reality as a technology-based platform to develop motivating balance tasks. This platform was augmented with off-the-shelf available sensors such as Nintendo Wii balance board and Kinect to estimate one's center of mass (CoM). The virtual reality-based CoM-assisted balance tasks (Virtual CoMBaT) was designed to be adaptive to one's individualized weight-shifting capability quantified through CoM displacement. Participants were asked to interact with Virtual CoMBaT that offered tasks of varying challenge levels while adhering to ankle strategy for weight shifting. To facilitate the patients to use ankle strategy during weight-shifting, we designed a heel lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to contribute to improving one's overall performance in balance-related tasks belonging to different difficulty levels.
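
    A minimal sketch of the centre-of-pressure computation that such balance boards enable is shown below; the four-corner load-cell layout, naming and board dimensions are assumptions of this example rather than specifics from the paper.

        # Minimal sketch, assuming a standard four-load-cell balance-board layout:
        # centre of pressure (CoP), a proxy for medio-lateral / antero-posterior weight shift.
        def centre_of_pressure(tl, tr, bl, br, width=0.433, length=0.238):
            """CoP (x, y) in metres from the load-cell readings at the four corners."""
            total = tl + tr + bl + br
            if total <= 0:
                return 0.0, 0.0
            cop_x = (width / 2.0) * ((tr + br) - (tl + bl)) / total    # + = right
            cop_y = (length / 2.0) * ((tl + tr) - (bl + br)) / total   # + = forward
            return cop_x, cop_y

        # Example: more load on the right-hand cells shifts the CoP to the right.
        print(centre_of_pressure(tl=150.0, tr=250.0, bl=150.0, br=250.0))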

  11. Hardware Support for Malware Defense and End-to-End Trust

    DTIC Science & Technology

    2017-02-01

    The work addresses Internet of Things (IoT) sensors and actuators, mobile devices and servers, and cloud-based, stand-alone, and traditional mainframe platforms. The prototype developed demonstrated ... virtual machines. For mobile platforms, an architecture supporting separation of personalities on the same platform was developed and prototyped.

  12. Bluetooth-based distributed measurement system

    NASA Astrophysics Data System (ADS)

    Tang, Baoping; Chen, Zhuo; Wei, Yuguo; Qin, Xiaofeng

    2007-07-01

    A novel distributed wireless measurement system, which consists of a base station, wireless intelligent sensors, relay nodes, etc., is established by combining Bluetooth-based wireless transmission, virtual instrumentation, intelligent sensors, and networking. The intelligent sensors mounted on the equipment to be measured acquire various parameters, and the Bluetooth relay nodes modulate the acquired data and send them to the base station, where data analysis and processing are performed so that the operational condition of the equipment can be evaluated. The establishment of the distributed measurement system is discussed together with a measurement flow chart for the Bluetooth-based distributed measurement system, and the advantages and disadvantages of the system are analyzed at the end of the paper. The measurement system has been used successfully in the Daqing oilfield, China, for the measurement of parameters such as temperature, flow rate, and oil pressure at an electromotor-pump unit.

  13. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications

    PubMed Central

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-01-01

    Many researchers are devoting attention to the so-called “Internet of Things” (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user’s demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology. PMID:27548177

  14. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications.

    PubMed

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-08-19

    Many researchers are devoting attention to the so-called "Internet of Things" (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user's demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology.

  15. Robot Position Sensor Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow for continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer based methods are proposed to determine the position of the joint. The joint specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor. The joint specific method is not computationally complex and the position error is bounded. The system wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the position of sets of multiple joints. The system wide method requires fewer accelerometers than the joint specific method to make all joint position sensors fault tolerant but is more computationally complex and has lower convergence properties. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.

  16. Prediction of dynamic strains on a monopile offshore wind turbine using virtual sensors

    NASA Astrophysics Data System (ADS)

    Iliopoulos, A. N.; Weijtjens, W.; Van Hemelrijck, D.; Devriendt, C.

    2015-07-01

    The monitoring of the condition of an offshore wind turbine during its operational states offers the possibility of performing accurate assessments of the remaining lifetime as well as supporting maintenance decisions during its entire life. The efficacy of structural monitoring in the case of the offshore wind turbine, though, is undermined by practical limitations of the measurement system in terms of cost, weight and feasibility of sensor mounting (e.g. at mudline level, 30 m below the water level). This limitation is overcome by reconstructing the full-field response of the structure based on a limited number of measured accelerations and a calibrated Finite Element Model of the system. A modal decomposition and expansion approach is used for reconstructing the responses at all degrees of freedom of the finite element model. The paper will demonstrate the possibility to predict dynamic strains from acceleration measurements based on the aforementioned methodology. These virtual dynamic strains are then evaluated and validated against actual strain measurements obtained from a monitoring campaign on an offshore Vestas V90 3 MW wind turbine on a monopile foundation.
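
    The following numpy sketch outlines the modal decomposition and expansion step in its simplest form; mode-shape matrices, dimensions and data are synthetic placeholders, and the filtering and integration needed to go from accelerations to displacements are deliberately omitted.

        # Minimal sketch of modal decomposition and expansion: estimate modal
        # coordinates from measured DOFs, then expand to strain mode shapes at
        # unmeasured (virtual) strain locations. All numbers are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)

        n_modes = 4
        phi_meas = rng.normal(size=(6, n_modes))     # displacement mode shapes at the 6 measured DOFs
        psi_strain = rng.normal(size=(10, n_modes))  # strain mode shapes at 10 virtual strain locations

        # y_meas: (time, 6) responses at the measured DOFs (here: synthetic modal response + noise)
        q_true = rng.normal(size=(1000, n_modes))
        y_meas = q_true @ phi_meas.T + 0.01 * rng.normal(size=(1000, 6))

        # Decomposition: least-squares estimate of the modal coordinates
        q_hat = np.linalg.pinv(phi_meas) @ y_meas.T              # (n_modes, time)

        # Expansion: virtual dynamic strains at the unmeasured locations (e.g., mudline)
        strain_virtual = (psi_strain @ q_hat).T                  # (time, 10)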

  17. Virtual reality 3D headset based on DMD light modulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  18. Virtual Instrument for Emissions Measurement of Internal Combustion Engines

    PubMed Central

    Pérez, Armando; Montero, Gisela; Coronado, Marcos; García, Conrado; Pérez, Rubén

    2016-01-01

    Gas emissions measurement systems for internal combustion engines are nowadays subject to strict requirements and are expensive. For this reason, a virtual instrument was developed to measure the combustion emissions from an internal combustion diesel engine running on diesel-biodiesel mixtures. This software, called the virtual instrument for emissions measurement (VIEM), was developed on the LabVIEW 2010® virtual programming platform. VIEM works with sensors connected to a signal conditioning system, and a data acquisition system is used as an interface to a computer in order to measure and monitor in real time the emissions of O2, NO, CO, SO2, and CO2 gases. This paper shows the results of the VIEM programming, the integrated circuit diagrams used for the signal conditioning of the sensors, and the characterization of the O2, NO, CO, SO2, and CO2 sensors. VIEM is a low-cost instrument and is simple and easy to use. Besides, it is scalable, making it flexible and user-definable. PMID:27034893

  19. Sensing and Virtual Worlds - A Survey of Research Opportunities

    NASA Technical Reports Server (NTRS)

    Moore, Dana

    2012-01-01

    Virtual Worlds (VWs) have been used effectively in live and constructive military training. An area that remains fertile ground for exploration and a new vision involves integrating various traditional and now non-traditional sensors into virtual worlds. In this paper, we assert that the benefits of this integration are several. First, we maintain that virtual worlds offer improved sensor deployment planning through improved visualization and stimulation of the model, using geo-specific terrain and structure. Secondly, we assert that VWs enhance the mission rehearsal process, and that using a mix of live avatars, non-player characters, and live sensor feeds (e.g. real-time meteorology) can help visualization of the area of operations. Finally, tactical operations are improved via better collaboration and integration of real-world sensing capabilities, and in most situations, 3D VWs improve the state of the art over current "dots on a map" 2D geospatial visualization. However, several capability gaps preclude a fuller realization of this vision. In this paper, we identify many of these gaps and suggest research directions.

  20. Experimental Robot Position Sensor Fault Tolerance Using Accelerometers and Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. The proposed method uses joint torque sensors found in most existing advanced robot designs along with easily locatable, lightweight accelerometers to provide a joint position sensor fault recovery mode. This mode uses the torque sensors along with a virtual passive control law for stability and accelerometers for joint position information. Two methods for conversion from Cartesian acceleration to joint position based on robot kinematics, not integration, are presented. The fault tolerant control method was tested on several joints of a laboratory robot. The controllers performed well with noisy, biased data and a model with uncertain parameters.

  1. Structural health management of aerospace hotspots under fatigue loading

    NASA Astrophysics Data System (ADS)

    Soni, Sunilkumar

    Sustainability and life-cycle assessments of aerospace systems, such as aircraft structures and propulsion systems, represent growing challenges in engineering. Hence, there has been an increasing demand in using structural health monitoring (SHM) techniques for continuous monitoring of these systems in an effort to improve safety and reduce maintenance costs. The current research is part of an ongoing multidisciplinary effort to develop a robust SHM framework resulting in improved models for damage-state awareness and life prediction, and enhancing the capability of future aircraft systems. Lug joints, a typical structural hotspot, were chosen as the test article for the current study. The thesis focuses on integrated SHM techniques for damage detection and characterization in lug joints. Piezoelectric wafer sensors (PZTs) are used to generate guided Lamb waves as they can be easily used for onboard applications. Sensor placement in certain regions of a structural component is not feasible due to the inaccessibility of the area to be monitored. Therefore, a virtual sensing concept is introduced to acquire sensor data from finite element (FE) models. A full three-dimensional FE analysis of lug joints with piezoelectric transducers, accounting for piezoelectric-mechanical coupling, was performed in Abaqus and the sensor signals were simulated. These modeled sensors are called virtual sensors. A combination of real data from PZTs and virtual sensing data from FE analysis is used to monitor and detect fatigue damage in aluminum lug joints. Experiments were conducted on lug joints under fatigue loads and the sensor signals collected were used to validate the simulated sensor response. An optimal sensor placement methodology for lug joints is developed based on a detection theory framework to maximize the detection rate and minimize the false alarm rate. The placement technique is such that the sensor features can be directly correlated to damage. The technique accounts for a number of factors, such as actuation frequency and strength, minimum damage size, damage detection scheme, material damping, signal-to-noise ratio and sensing radius. Advanced information processing methodologies are discussed for damage diagnosis. A new, instantaneous approach for damage detection, localization and quantification is proposed for applications to practical problems associated with changes in reference states under different environmental and operational conditions. Such an approach improves feature extraction for state awareness, resulting in robust life prediction capabilities.

  2. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments.

    PubMed

    Rincon, J A; Poza-Lujan, Jose-Luis; Julian, V; Posadas-Yagüe, Juan-Luis; Carrascosa, C

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and a virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations allows the error between the simulated and the real system to be minimized.

  3. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments

    PubMed Central

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and a virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real-environment interactive elements (smart resource artifacts) in agent-based simulations allows the error between the simulated and the real system to be minimized. PMID:26926691

  4. Ubiquitous virtual private network: a solution for WSN seamless integration.

    PubMed

    Villa, David; Moya, Francisco; Villanueva, Félix Jesús; Aceña, Óscar; López, Juan Carlos

    2014-01-06

    Sensor networks are becoming an essential part of ubiquitous systems and applications. However, there are no well-defined protocols or mechanisms to access the sensor network from the enterprise information system. We consider this issue as a heterogeneous network interconnection problem, and as a result, the same concepts may be applied. Specifically, we propose the use of object-oriented middlewares to provide a virtual private network in which all involved elements (sensor nodes or computer applications) will be able to communicate as if all of them were in a single and uniform network.

  5. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, Donald C.

    1996-01-01

    A reactor protection system having four divisions, with quad-redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross-communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminate the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic.
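
    To make the two-level voting concrete, here is a hedged Python sketch of the scheme described above (a per-division trip decision followed by a 2-out-of-4 vote across divisions); treating a failed sensor as simply excluded is a simplification for illustration, not the patented logic.

    ```python
    def division_vote(trips):
        """Per-division decision: `trips` holds the four per-sensor trip
        flags seen by one chassis; sensors flagged as failed (None) are
        excluded, and at least two of the remaining channels must agree."""
        valid = [t for t in trips if t is not None]
        return sum(valid) >= 2

    def reactor_scram(divisional_decisions):
        """Final 2/4 hardware-panel vote across the four divisions."""
        return sum(divisional_decisions) >= 2

    # Example: one failed sensor plus two tripped channels still produces a
    # divisional scram signal, and two scramming divisions trip the reactor.
    div_a = division_vote([True, True, False, None])
    print(reactor_scram([div_a, True, False, False]))   # -> True
    ```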

  6. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, D.C.

    1996-12-17

    A reactor protection system is disclosed having four divisions, with quad-redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross-communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminate the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic. 16 figs.

  7. Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales

    NASA Astrophysics Data System (ADS)

    Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni

    2015-04-01

    If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this renders possible the concept of 'retrospective seismology' whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. Using the benefit of hindsight of earthquake location or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km [1]. Here we present the results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to include approximately those receivers that provide a stationary phase contribution to the interferometric integrals. We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer non-physical arrivals. Finally we reconstruct earthquake seismograms at sensors that were previously active but were subsequently removed before the earthquakes occurred; thus we create virtual earthquake seismograms at those sensors, truly retrospectively. Such SRI seismograms can be used to create a catalogue of new, virtual earthquake seismograms that are available to complement real earthquake data in future earthquake seismology studies. [1] Entwistle, E., Curtis, A., Galetti, E., Baptie, B., Meles, G., Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales, JGR, in press.
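
    By way of illustration only, the following Python fragment sketches a correlation-correlation style combination of backbone recordings with noise-derived inter-receiver propagators; the variable names, the Voronoi-cell weighting and the absence of any stationary-phase selection or normalization are assumptions, not the authors' processing chain.

    ```python
    import numpy as np

    def virtual_seismogram(event_records, noise_propagators, weights):
        """Schematic correlation-correlation SRI: each backbone recording of
        the earthquake is cross-correlated with the inter-receiver propagator
        (estimated beforehand from ambient noise) linking that backbone
        station to the target location, and contributions are summed over the
        array. All traces are assumed to share the same length and sampling.

        event_records     : dict station -> 1-D earthquake recording
        noise_propagators : dict station -> 1-D propagator to the target
        weights           : dict station -> Voronoi-cell area weight
        """
        out = None
        for sta, rec in event_records.items():
            contrib = weights[sta] * np.correlate(
                rec, noise_propagators[sta], mode="full")
            out = contrib if out is None else out + contrib
        return out
    ```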

  8. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning. PMID:23385416

  9. Creation of 3D multi-body orthodontic models by using independent imaging sensors.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-02-05

    In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  10. Core body temperature control by total liquid ventilation using a virtual lung temperature sensor.

    PubMed

    Nadeau, Mathieu; Micheau, Philippe; Robert, Raymond; Avoine, Olivier; Tissier, Renaud; Germim, Pamela Samanta; Vandamme, Jonathan; Praud, Jean-Paul; Walti, Herve

    2014-12-01

    In total liquid ventilation (TLV), the lungs are filled with a breathable liquid perfluorocarbon (PFC) while a liquid ventilator ensures proper gas exchange by renewal of a tidal volume of oxygenated and temperature-controlled PFC. Given the rapid changes in core body temperature generated by TLV using the lung as a heat exchanger, it is crucial to have accurate and reliable core body temperature monitoring and control. This study presents the design of a virtual lung temperature sensor to control core temperature. In the first step, the virtual sensor, using expired PFC to estimate lung temperature noninvasively, was validated both in vitro and in vivo. The virtual lung temperature was then used to rapidly and automatically control core temperature. Experiments were performed using the Inolivent-5.0 liquid ventilator with a feedback controller to modulate inspired PFC temperature, thereby controlling lung temperature. The in vivo experimental protocol was conducted on seven newborn lambs instrumented with temperature sensors at the femoral artery, pulmonary artery, oesophagus, right ear drum, and rectum. After stabilization in conventional mechanical ventilation, TLV was initiated with fast hypothermia induction, followed by slow posthypothermic rewarming for 1 h, then by fast rewarming to normothermia and finally a second fast hypothermia induction phase. Results showed that the virtual lung temperature was able to provide an accurate estimation of systemic arterial temperature. Results also demonstrate that TLV can precisely control core body temperature and compares favorably to extracorporeal circulation in terms of speed.
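
    A minimal sketch of the feedback idea described above is given below: the virtual lung temperature (estimated from expired PFC) is fed back to modulate the inspired PFC temperature. The PI structure, gains and limits are illustrative assumptions, not the values used on the Inolivent-5.0.

    ```python
    class CoreTempController:
        """Toy PI controller mapping the virtual lung temperature estimate to
        an inspired-PFC temperature command, clipped to a safe range."""

        def __init__(self, kp=3.0, ki=0.02, t_min=15.0, t_max=42.0):
            self.kp, self.ki = kp, ki
            self.t_min, self.t_max = t_min, t_max
            self.integral = 0.0

        def update(self, t_target, t_lung_virtual, dt):
            error = t_target - t_lung_virtual          # degrees C
            self.integral += error * dt
            t_inspired = t_target + self.kp * error + self.ki * self.integral
            return min(max(t_inspired, self.t_min), self.t_max)

    # Example step: target 33 C (hypothermia), current virtual estimate 37 C.
    ctrl = CoreTempController()
    print(ctrl.update(t_target=33.0, t_lung_virtual=37.0, dt=1.0))
    ```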

  11. A novel vibration structure for dynamic balancing measurement

    NASA Astrophysics Data System (ADS)

    Qin, Peng; Cai, Ping; Hu, Qinghan; Li, Yingxia

    2006-11-01

    Based on the concept of the instantaneous center of motion in theoretical mechanics, the paper presents a novel virtual vibration structure for high-precision dynamic balancing measurement. The structural features and the unbalance response characteristics of this vibration structure are analyzed in depth, and the relation between the real measuring system and the virtual one is expounded. Theoretical analysis indicates that the flexibly hinged integrated plate-spring sets hold a fixed vibration center, so that this vibration system achieves excellent plane separation. In addition, the sensors are mounted on the same longitudinal section, which eliminates the influence of phase error on the primary unbalance reduction ratio. Furthermore, performance changes in the sensors caused by environmental factors have less influence on the accuracy of the measurement. The result is a more accurate measurement with less need for a second correction run.

  12. Hybrid Feedforward-Feedback Noise Control Using Virtual Sensors

    NASA Technical Reports Server (NTRS)

    Bean, Jacob; Fuller, Chris; Schiller, Noah

    2016-01-01

    Several approaches to active noise control using virtual sensors are evaluated for eventual use in an active headrest. Specifically, adaptive feedforward, feedback, and hybrid control structures are compared. Each controller incorporates the traditional filtered-x least mean squares algorithm. The feedback controller is arranged in an internal model configuration to draw comparisons with standard feedforward control theory results. Simulation and experimental results are presented that illustrate each controller's ability to minimize the pressure at both physical and virtual microphone locations. The remote microphone technique is used to obtain pressure estimates at the virtual locations. It is shown that a hybrid controller offers performance benefits over the traditional feedforward and feedback controllers. Stability issues associated with feedback and hybrid controllers are also addressed. Experimental results show that 15-20 dB reduction in broadband disturbances can be achieved by minimizing the measured pressure, whereas 10-15 dB reduction is obtained when minimizing the estimated pressure at a virtual location.
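
    For readers unfamiliar with the filtered-x least mean squares algorithm named above, the following single-channel Python sketch shows its basic structure; the filter lengths, step size and the assumption of a perfect secondary-path model are illustrative, and the disturbance d may equally be an estimate at a virtual microphone obtained with the remote microphone technique.

    ```python
    import numpy as np

    def fxlms(x, d, s_hat, n_taps=64, mu=1e-3):
        """Minimal filtered-x LMS loop.

        x     : reference signal (1-D array)
        d     : disturbance at the (physical or virtual) error microphone
        s_hat : FIR model of the secondary path (actuator -> error sensor)
        Returns the residual error signal.
        """
        w = np.zeros(n_taps)                  # adaptive control filter
        x_hist = np.zeros(n_taps)             # reference history for control
        y_hist = np.zeros(len(s_hat))         # control-output history
        xs_hist = np.zeros(len(s_hat))        # reference history for filtering
        fx_hist = np.zeros(n_taps)            # filtered-reference history
        e = np.zeros(len(x))
        for n in range(len(x)):
            x_hist = np.roll(x_hist, 1); x_hist[0] = x[n]
            y = w @ x_hist                    # anti-noise command
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e[n] = d[n] + s_hat @ y_hist      # residual after secondary path
            xs_hist = np.roll(xs_hist, 1); xs_hist[0] = x[n]
            fx = s_hat @ xs_hist              # filtered reference sample
            fx_hist = np.roll(fx_hist, 1); fx_hist[0] = fx
            w -= mu * e[n] * fx_hist          # LMS weight update
        return e
    ```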

  13. Ubiquitous Virtual Private Network: A Solution for WSN Seamless Integration

    PubMed Central

    Villa, David; Moya, Francisco; Villanueva, Félix Jesús; Aceña, Óscar; López, Juan Carlos

    2014-01-01

    Sensor networks are becoming an essential part of ubiquitous systems and applications. However, there are no well-defined protocols or mechanisms to access the sensor network from the enterprise information system. We consider this issue as a heterogeneous network interconnection problem, and as a result, the same concepts may be applied. Specifically, we propose the use of object-oriented middlewares to provide a virtual private network in which all involved elements (sensor nodes or computer applications) will be able to communicate as if all of them were in a single and uniform network. PMID:24399154

  14. Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.

    PubMed

    Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L

    2015-07-01

    Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless assessment of surgical performance system for objective assessment of surgical skills has proven valid for assessing arthroscopic skills. The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  15. Hybrid architecture for building secure sensor networks

    NASA Astrophysics Data System (ADS)

    Owens, Ken R., Jr.; Watkins, Steve E.

    2012-04-01

    Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns for sensor networks. The first area is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes that is designed to communicate to low-cost cloud computing infrastructure in the network is the foundation to delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme, and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This function will primarily be accomplished through encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components to the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.

  16. Smart sensors and virtual physiology human approach as a basis of personalized therapies in diabetes mellitus.

    PubMed

    Fernández Peruchena, Carlos M; Prado-Velasco, Manuel

    2010-01-01

    Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, pushed by aging and changing lifestyles. Despite the huge resources dedicated to improving patients' quality of life and their mortality and morbidity rates, these outcomes are still very poor. In this work, DM pathology is reviewed from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, under the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide patients' information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer) is presented. This technology was designed for renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies.

  17. Smart Sensors and Virtual Physiology Human Approach as a Basis of Personalized Therapies in Diabetes Mellitus

    PubMed Central

    Fernández Peruchena, Carlos M; Prado-Velasco, Manuel

    2010-01-01

    Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, pushed by aging and changing lifestyles. Despite the huge resources dedicated to improving patients' quality of life and their mortality and morbidity rates, these outcomes are still very poor. In this work, DM pathology is reviewed from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, under the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide patients' information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer) is presented. This technology was designed for renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies. PMID:21625646

  18. Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System

    PubMed Central

    Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama

    2018-01-01

    Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of falls. Conventional balance training, though powerful, suffers from a scarcity of trained therapists, frequent visits to clinics to get therapy, one-on-one therapy sessions, and the monotony of repetitive exercise tasks. Thus, technology-assisted balance rehabilitation can be an alternative solution. Here, we chose virtual reality as a technology-based platform to develop motivating balance tasks. This platform was augmented with off-the-shelf sensors such as the Nintendo Wii balance board and Kinect to estimate one's center of mass (CoM). The virtual reality-based CoM-assisted balance tasks (Virtual CoMBaT) were designed to be adaptive to one's individualized weight-shifting capability, quantified through CoM displacement. Participants were asked to interact with Virtual CoMBaT, which offered tasks of varying challenge levels while adhering to the ankle strategy for weight shifting. To facilitate patients' use of the ankle strategy during weight shifting, we designed a heel-lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to contribute to improving one's overall performance in balance-related tasks belonging to different difficulty levels. PMID:29359128
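
    As a hedged illustration of how a balance board can feed such a task, the sketch below estimates the centre of pressure (commonly used as a proxy for the CoM displacement driving the exercises) from four corner load cells; the cell layout, board dimensions and sign conventions are assumptions, not the authors' implementation.

    ```python
    def center_of_pressure(tl, tr, bl, br, width=0.433, length=0.228):
        """Centre of pressure from four load-cell forces [N] at the top-left,
        top-right, bottom-left and bottom-right corners of a balance board.
        Returns (x, y) in metres relative to the board centre."""
        total = tl + tr + bl + br
        if total <= 0:
            return 0.0, 0.0                              # nobody on the board
        x = (width / 2) * ((tr + br) - (tl + bl)) / total   # medio-lateral
        y = (length / 2) * ((tl + tr) - (bl + br)) / total  # antero-posterior
        return x, y

    # Example: slightly more weight on the right side of the board.
    print(center_of_pressure(tl=180.0, tr=220.0, bl=190.0, br=210.0))
    ```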

  19. ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

  20. Sensor-based balance training with motion feedback in people with mild cognitive impairment.

    PubMed

    Schwenk, Michael; Sabbagh, Marwan; Lin, Ivy; Morgan, Pharah; Grewal, Gurtej S; Mohler, Jane; Coon, David W; Najafi, Bijan

    2016-01-01

    Some individuals with mild cognitive impairment (MCI) experience not only cognitive deficits but also a decline in motor function, including postural balance. This pilot study sought to estimate the feasibility, user experience, and effects of a novel sensor-based balance training program. Patients with amnestic MCI (mean age 78.2 yr) were randomized to an intervention group (IG, n = 12) or control group (CG, n = 10). The IG underwent balance training (4 wk, twice a week) that included weight shifting and virtual obstacle crossing. Real-time visual/audio lower-limb motion feedback was provided from wearable sensors. The CG received no training. User experience was measured by a questionnaire. Postintervention effects on balance (center of mass sway during standing with eyes open [EO] and eyes closed), gait (speed, variability), cognition, and fear of falling were measured. Eleven participants (92%) completed the training and expressed fun, safety, and helpfulness of sensor feedback. Sway (EO, p = 0.04) and fear of falling (p = 0.02) were reduced in the IG compared to the CG. Changes in other measures were nonsignificant. Results suggest that the sensor-based training paradigm is well accepted in the target population and beneficial for improving postural control. Future studies should evaluate the added value of the sensor-based training compared to traditional training.

  1. A Compact Energy Harvesting System for Outdoor Wireless Sensor Nodes Based on a Low-Cost In Situ Photovoltaic Panel Characterization-Modelling Unit

    PubMed Central

    Antolín, Diego; Calvo, Belén; Martínez, Pedro A.

    2017-01-01

    This paper presents a low-cost, high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units (characterization and modelling) are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell, with identical characteristics to those of the specific small solar cell installed on the sensor node, and which in addition allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions. PMID:28777330
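
    A minimal sketch of the estimation idea is shown below: a small multilayer perceptron maps locally measured conditions (for example irradiance and cell temperature) to an open-circuit-voltage estimate, which a fractional-Voc rule then converts to an operating voltage. The network size, inputs, fraction k and the placeholder (untrained) weights are assumptions for illustration only.

    ```python
    import numpy as np

    class TinyVocModel:
        """One-hidden-layer perceptron producing an open-circuit-voltage
        estimate from a feature vector of local measurements."""

        def __init__(self, w1, b1, w2, b2):
            self.w1, self.b1, self.w2, self.b2 = w1, b1, w2, b2

        def voc(self, features):
            h = np.tanh(self.w1 @ features + self.b1)   # hidden layer
            return float(self.w2 @ h + self.b2)         # estimated Voc [V]

    def operating_voltage(voc_estimate, k=0.76):
        """Fractional open-circuit-voltage rule used by VOC-type trackers:
        regulate the panel near k * Voc (k is an assumed constant)."""
        return k * voc_estimate

    # Placeholder weights stand in for values trained on in-situ data.
    rng = np.random.default_rng(0)
    model = TinyVocModel(rng.normal(size=(4, 2)), np.zeros(4),
                         rng.normal(size=4), 0.6)
    print(operating_voltage(model.voc(np.array([800.0, 25.0]))))
    ```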

  2. A Compact Energy Harvesting System for Outdoor Wireless Sensor Nodes Based on a Low-Cost In Situ Photovoltaic Panel Characterization-Modelling Unit.

    PubMed

    Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Martínez, Pedro A

    2017-08-04

    This paper presents a low-cost, high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units (characterization and modelling) are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell, with identical characteristics to those of the specific small solar cell installed on the sensor node, and which in addition allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions.

  3. Telemedicine, virtual reality, and surgery

    NASA Technical Reports Server (NTRS)

    Mccormack, Percival D.; Charles, Steve

    1994-01-01

    Two types of synthetic experience are covered: virtual reality (VR) and surgery, and telemedicine. The topics are presented in viewgraph form and include the following: geometric models; physiological sensors; surgical applications; virtual cadaver; VR surgical simulation; telesurgery; VR Surgical Trainer; abdominal surgery pilot study; advanced abdominal simulator; examples of telemedicine; and telemedicine spacebridge.

  4. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality

    PubMed Central

    Kim, Mingyu; Jeon, Changyu; Kim, Jinmo

    2017-01-01

    This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile senses at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then sends tactile senses (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed with the purpose of testing the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison to the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in the virtual reality. PMID:28513545

  5. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality.

    PubMed

    Kim, Mingyu; Jeon, Changyu; Kim, Jinmo

    2017-05-17

    This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile senses at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then sends tactile senses (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed with the purpose of testing the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison to the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in the virtual reality.

  6. In-home virtual reality videogame telerehabilitation in adolescents with hemiplegic cerebral palsy.

    PubMed

    Golomb, Meredith R; McDonald, Brenna C; Warden, Stuart J; Yonkman, Janell; Saykin, Andrew J; Shirley, Bridget; Huber, Meghan; Rabin, Bryan; Abdelbaky, Moustafa; Nwosu, Michelle E; Barkat-Masih, Monica; Burdea, Grigore C

    2010-01-01

    Golomb MR, McDonald BC, Warden SJ, Yonkman J, Saykin AJ, Shirley B, Huber M, Rabin B, AbdelBaky M, Nwosu ME, Barkat-Masih M, Burdea GC. In-home virtual reality videogame telerehabilitation in adolescents with hemiplegic cerebral palsy. To investigate whether in-home remotely monitored virtual reality videogame-based telerehabilitation in adolescents with hemiplegic cerebral palsy can improve hand function and forearm bone health, and demonstrate alterations in motor circuitry activation. A 3-month proof-of-concept pilot study. Virtual reality videogame-based rehabilitation systems were installed in the homes of 3 participants and networked via secure Internet connections to the collaborating engineering school and children's hospital. Adolescents (N=3) with severe hemiplegic cerebral palsy. Participants were asked to exercise the plegic hand 30 minutes a day, 5 days a week using a sensor glove fitted to the plegic hand and attached to a remotely monitored videogame console installed in their home. Games were custom developed, focused on finger movement, and included a screen avatar of the hand. Standardized occupational therapy assessments, remote assessment of finger range of motion (ROM) based on sensor glove readings, assessment of plegic forearm bone health with dual-energy x-ray absorptiometry (DXA) and peripheral quantitative computed tomography (pQCT), and functional magnetic resonance imaging (fMRI) of hand grip task. All 3 adolescents showed improved function of the plegic hand on occupational therapy testing, including increased ability to lift objects, and improved finger ROM based on remote measurements. The 2 adolescents who were most compliant showed improvements in radial bone mineral content and area in the plegic arm. For all 3 adolescents, fMRI during grip task contrasting the plegic and nonplegic hand showed expanded spatial extent of activation at posttreatment relative to baseline in brain motor circuitry (eg, primary motor cortex and cerebellum). Use of remotely monitored virtual reality videogame telerehabilitation appears to produce improved hand function and forearm bone health (as measured by DXA and pQCT) in adolescents with chronic disability who practice regularly. Improved hand function appears to be reflected in functional brain changes. Copyright (c) 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. VERDEX: A virtual environment demonstrator for remote driving applications

    NASA Technical Reports Server (NTRS)

    Stone, Robert J.

    1991-01-01

    One of the key areas of the National Advanced Robotics Centre's enabling technologies research program is that of the human system interface, phase 1 of which started in July 1989 and is currently addressing the potential of virtual environments to permit intuitive and natural interactions between a human operator and a remote robotic vehicle. The aim of the first 12 months of this program (to September, 1990) is to develop a virtual human-interface demonstrator for use later as a test bed for human factors experimentation. This presentation will describe the current state of development of the test bed, and will outline some human factors issues and problems for more general discussion. In brief, the virtual telepresence system for remote driving has been designed to take the following form. The human operator will be provided with a helmet-mounted stereo display assembly, facilities for speech recognition and synthesis (using the Marconi Macrospeak system), and a VPL DataGlove Model 2 unit. The vehicle to be used for the purposes of remote driving is a Cybermotion Navmaster K2A system, which will be equipped with a stereo camera and microphone pair, mounted on a motorized high-speed pan-and-tilt head incorporating a closed-loop laser ranging sensor for camera convergence control (currently under contractual development). It will be possible to relay information to and from the vehicle and sensory system via an umbilical or RF link. The aim is to develop an interactive audio-visual display system capable of presenting combined stereo TV pictures and virtual graphics windows, the latter featuring control representations appropriate for vehicle driving and interaction using a graphical 'hand,' slaved to the flex and tracking sensors of the DataGlove and an additional helmet-mounted Polhemus IsoTrack sensor. Developments planned for the virtual environment test bed include transfer of operator control between remote driving and remote manipulation, dexterous end effector integration, virtual force and tactile sensing (also the focus of a current ARRL contract, initially employing a 14-pneumatic bladder glove attachment), and sensor-driven world modeling for total virtual environment generation and operator-assistance in remote scene interrogation.

  8. Ant-Based Cyber Defense (also known as

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glenn Fink, PNNL

    2015-09-29

    ABCD is a four-level hierarchy with human supervisors at the top, a top-level agent called a Sergeant controlling each enclave, Sentinel agents located at each monitored host, and mobile Sensor agents that swarm through the enclaves to detect cyber malice and misconfigurations. The code comprises four parts: (1) the core agent framework, (2) the user interface and visualization, (3) test-range software to create a network of virtual machines including a simulated Internet and user and host activity emulation scripts, and (4) a test harness to allow the safe running of adversarial code within the framework of monitored virtual machines.

  9. Virtual environment assessment for laser-based vision surface profiling

    NASA Astrophysics Data System (ADS)

    ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.

    2015-03-01

    Oil and gas businesses have been raising the demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This certainly mandates a deviation from the commonly used surface measurement gauges, which are not only operator dependent, but also limited to discrete measurements along the weld. Due to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively adopted as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer towards robust quality control applications in a manufacturing environment.

  10. Fast in-situ tool inspection based on inverse fringe projection and compact sensor heads

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2016-11-01

    Inspection of machine elements is an important task in production processes in order to ensure the quality of produced parts and to gather feedback for the continuous improvement process. A new measuring system is presented, which is capable of performing the inspection of critical tool geometries, such as gearing elements, inside the forming machine. To meet the constraints on sensor head size and inspection time imposed by the limited space inside the machine and the cycle time of the process, the measuring device employs a combination of endoscopy techniques with the fringe projection principle. Compact gradient index lenses enable a compact design of the sensor head, which is connected to a CMOS camera and a flexible micro-mirror based projector via flexible fiber bundles. Using common fringe projection patterns, the system achieves measuring times of less than five seconds. To further reduce the time required for inspection, the generation of inverse fringe projection patterns has been implemented for the system. Inverse fringe projection speeds up the inspection process by employing object-adapted patterns, which enable the detection of geometry deviations in a single image. Two different approaches to generate object adapted patterns are presented. The first approach uses a reference measurement of a manufactured tool master to generate the inverse pattern. The second approach is based on a virtual master geometry in the form of a CAD file and a ray-tracing model of the measuring system. Virtual modeling of the measuring device and inspection setup allows for geometric tolerancing for free-form surfaces by the tool designer in the CAD-file. A new approach is presented, which uses virtual tolerance specifications and additional simulation steps to enable fast checking of metric tolerances. Following the description of the pattern generation process, the image processing steps required for inspection are demonstrated on captures of gearing geometries.

  11. "Virtual Feel" Capaciflectors

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    2006-01-01

    The term "virtual feel" denotes a type of capaciflector (an advanced capacitive proximity sensor) and a methodology for designing and using a sensor of this type to guide a robot in manipulating a tool (e.g., a wrench socket) into alignment with a mating fastener (e.g., a bolt head) or other electrically conductive object. A capaciflector includes at least one sensing electrode, excited with an alternating voltage, that puts out a signal indicative of the capacitance between that electrode and a proximal object.

  12. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  13. Simulation of Smart Home Activity Datasets

    PubMed Central

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-01-01

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation. PMID:26087371
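
    To illustrate what a model-based simulated-sensor generator of this kind might look like, the following Python sketch turns a scripted daily activity schedule into time-stamped virtual binary-sensor events; the sensor names, activity-to-sensor mapping and timing jitter are invented for the example, not taken from any surveyed system.

    ```python
    import random

    # Hypothetical mapping from activities to the virtual sensors they fire.
    ACTIVITY_SENSORS = {
        "prepare_breakfast": ["kitchen_motion", "fridge_door", "cupboard_door"],
        "watch_tv":          ["lounge_motion", "sofa_pressure"],
        "sleep":             ["bedroom_motion", "bed_pressure"],
    }

    def simulate_day(schedule, seed=0):
        """schedule: list of (start_minute, activity) tuples for one day.
        Returns a sorted list of (minute, sensor, value) events."""
        rng = random.Random(seed)
        events = []
        for start, activity in schedule:
            for sensor in ACTIVITY_SENSORS[activity]:
                t_on = start + rng.randint(0, 5)         # small timing jitter
                events.append((t_on, sensor, 1))
                events.append((t_on + rng.randint(1, 30), sensor, 0))
        return sorted(events)

    print(simulate_day([(420, "prepare_breakfast"), (1320, "sleep")]))
    ```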

  14. Simulation of Smart Home Activity Datasets.

    PubMed

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  15. High-fidelity simulation capability for virtual testing of seismic and acoustic sensors

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.

    2005-05-01

    This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
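
    The propagation component rests on finite-difference time-domain (FDTD) solvers; the toy one-dimensional acoustic example below shows the staggered leapfrog update at the heart of such schemes, with a single source and one "virtual sensor" cell. Grid sizes, the source pulse and the lossless homogeneous medium are illustrative simplifications of the 3-D, heterogeneous, massively parallel capability described above.

    ```python
    import numpy as np

    def fdtd_1d_acoustic(n_cells=400, n_steps=800, c=343.0, dx=1.0):
        """1-D acoustic FDTD on a staggered grid; returns the pressure time
        history recorded at one virtual sensor cell."""
        dt = 0.5 * dx / c                        # CFL-stable time step
        rho = 1.2                                # air density [kg/m^3]
        p = np.zeros(n_cells)                    # pressure at cell centres
        v = np.zeros(n_cells + 1)                # particle velocity at faces
        virtual_sensor = []
        for n in range(n_steps):
            v[1:-1] -= (dt / (rho * dx)) * (p[1:] - p[:-1])
            p -= (dt * rho * c**2 / dx) * (v[1:] - v[:-1])
            p[50] += np.exp(-((n * dt - 0.05) / 0.01) ** 2)   # Gaussian source
            virtual_sensor.append(p[300])
        return np.array(virtual_sensor)

    trace = fdtd_1d_acoustic()
    print(trace.max())    # the pulse arrives at the virtual sensor cell
    ```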

  16. Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2016-01-01

    Typically, a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of the fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel placed on the floor of a wind tunnel and subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast-response Pressure Sensitive Paint (PSP), which allowed time-resolved measurements of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, correct for spatial non-uniformities and lamp brightness variation, and finally convert fluctuating light intensity to fluctuating pressure. The latter involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence, and phase calculated via PSP and the dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, for which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step, only a selected number of pixels were chosen as "virtual sensors" and a correlation-length-based buffet calculation procedure was applied to determine the "modeled" force fluctuations. By progressively decreasing the number of virtual sensors, it was observed that the present calculation procedure was able to make a close estimate of the "true" unsteady forces from only four sensors. It is believed that the present work provides the first validation of the buffet calculation procedure, which has been used for the development of many space vehicles.
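
    The sketch below illustrates, under stated assumptions, the reference step of estimating the fluctuating-force spectrum by area-weighted integration of pressure cross-spectra over all sensing points. It is not the correlation-length model itself, and the sampling rate, sensor count, and pressure histories are synthetic stand-ins.

```python
# Hedged sketch: force spectrum from pressure cross-spectra,
#   S_FF(f) = sum_i sum_j A_i A_j S_{p_i p_j}(f)
import numpy as np
from scipy.signal import csd

fs = 10000.0                       # sampling rate [Hz] (illustrative)
n_sensors, n_samples = 16, 8192
rng = np.random.default_rng(0)
p = rng.standard_normal((n_sensors, n_samples))   # stand-in pressure histories [Pa]
area = np.full(n_sensors, 1e-4)                   # tributary area per sensor [m^2]

f, S00 = csd(p[0], p[0], fs=fs, nperseg=1024)
S_FF = np.zeros_like(S00, dtype=complex)
for i in range(n_sensors):
    for j in range(n_sensors):
        _, Sij = csd(p[i], p[j], fs=fs, nperseg=1024)
        S_FF += area[i] * area[j] * Sij

force_psd = S_FF.real                              # [N^2/Hz]; imaginary parts cancel
```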

  17. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but, prior to this, the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas; they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind them, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer to give the user the feeling that he is operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking would be the cursor on a computer screen moving in correspondence to the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, however, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvering in accurate situations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research, and benefited from the results.

  18. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising

    PubMed Central

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-01-01

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886

  19. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising.

    PubMed

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-05-16

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.
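
    A small illustrative sketch of the difference-coarray bookkeeping mentioned above follows: it forms a co-prime array, computes the virtual lags, and lists the holes that interpolation or matrix completion must fill. The co-prime parameters are illustrative, and the sketch does not implement the proposed MC denoising.

```python
# Difference coarray of a co-prime array and the "holes" left in it.
import numpy as np

def coprime_positions(M, N):
    """Standard co-prime array: 2M sensors at spacing N and N sensors at spacing M."""
    return np.unique(np.concatenate([N * np.arange(2 * M), M * np.arange(N)]))

def difference_coarray(positions):
    diffs = np.unique((positions[:, None] - positions[None, :]).ravel())
    full = np.arange(diffs.min(), diffs.max() + 1)
    holes = np.setdiff1d(full, diffs)
    return diffs, holes

if __name__ == "__main__":
    pos = coprime_positions(M=3, N=5)          # illustrative co-prime pair
    lags, holes = difference_coarray(pos)
    print("physical sensors:", pos)
    print("virtual lags    :", lags)
    print("holes to fill   :", holes)
```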

  20. Interacting With A Near Real-Time Urban Digital Watershed Using Emerging Geospatial Web Technologies

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Fazio, D. J.; Abdelzaher, T.; Minsker, B.

    2007-12-01

    The value of real-time hydrologic data dissemination including river stage, streamflow, and precipitation for operational stormwater management efforts is particularly high for communities where flash flooding is common and costly. Ideally, such data would be presented within a watershed-scale geospatial context to portray a holistic view of the watershed. Local hydrologic sensor networks usually lack comprehensive integration with sensor networks managed by other agencies sharing the same watershed due to administrative, political, but mostly technical barriers. Recent efforts on providing unified access to hydrological data have concentrated on creating new SOAP-based web services and common data formats (e.g. WaterML and the Observation Data Model) for users to access the data (e.g. HIS and HydroSeek). Geospatial Web technology including OGC sensor web enablement (SWE), GeoRSS, Geo tags, Geospatial browsers such as Google Earth and Microsoft Virtual Earth and other location-based service tools provides possibilities for us to interact with a digital watershed in near-real-time. OGC SWE proposes a revolutionary concept towards web-connected/controllable sensor networks. However, these efforts have not provided the capability to allow dynamic data integration/fusion among heterogeneous sources, data filtering and support for workflows or domain specific applications where both push and pull modes of retrieving data may be needed. We propose a lightweight integration framework by extending SWE with an open source Enterprise Service Bus (e.g., Mule) as a backbone component to dynamically transform, transport, and integrate both heterogeneous sensor data sources and simulation model outputs. We will report our progress on building such a framework, where multi-agency sensor data and hydro-model outputs (with map layers) will be integrated and disseminated in a geospatial browser (e.g. Microsoft Virtual Earth). This is a collaborative project among NCSA, the USGS Illinois Water Science Center, and the Computer Science Department at UIUC, funded by the Adaptive Environmental Infrastructure Sensing and Information Systems initiative at UIUC.

  1. A Movement-Assisted Deployment of Collaborating Autonomous Sensors for Indoor and Outdoor Environment Monitoring

    PubMed Central

    Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał

    2016-01-01

    Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper. PMID:27649186

  2. A Movement-Assisted Deployment of Collaborating Autonomous Sensors for Indoor and Outdoor Environment Monitoring.

    PubMed

    Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał

    2016-09-14

    Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper.
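
    A minimal sketch of a virtual-potential-function motion update is given below. It is an assumed, generic artificial-potential scheme (goal attraction plus short-range repulsion), not the authors' connectivity-maintaining algorithm, and all gains and positions are illustrative.

```python
# Assumed sketch of one virtual-force motion step for a set of mobile sensor nodes.
import numpy as np

def virtual_force_step(positions, goal, d0=10.0, k_att=0.05, k_rep=50.0, dt=1.0):
    """One motion update for all nodes; positions is an (n, 2) array of coordinates."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        force = k_att * (goal - p)                      # attraction to the goal region
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = np.linalg.norm(p - q)
            if 0.0 < d < d0:                            # repulsion preserves spacing/coverage
                force += k_rep * (1.0 / d - 1.0 / d0) * (p - q) / d**3
        new_positions[i] = p + dt * force
    return new_positions

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    nodes = rng.uniform(0, 5, size=(10, 2))             # initial cluster of nodes
    for _ in range(100):
        nodes = virtual_force_step(nodes, goal=np.array([25.0, 25.0]))
```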

  3. Virtualization of event sources in wireless sensor networks for the internet of things.

    PubMed

    Lucas Martínez, Néstor; Martínez, José-Fernán; Hernández Díaz, Vicente

    2014-12-01

    Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways that become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model.

  4. Virtual Sensors: Using Data Mining to Efficiently Estimate Spectra

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok; Oza, Nikunj; Stroeve, Julienne

    2004-01-01

    Detecting clouds within a satellite image is essential for retrieving surface geophysical parameters, such as albedo and temperature, from optical and thermal imagery because the retrieval methods tend to be valid for clear skies only. Thus, routine satellite data processing requires reliable automated cloud detection algorithms that are applicable to many surface types. Unfortunately, cloud detection over snow and ice is difficult due to the lack of spectral contrast between clouds and snow. Snow and clouds are both highly reflective in the visible wavelengths and often show little contrast in the thermal infrared. However, at 1.6 microns, the spectral signatures of snow and clouds differ enough to allow improved snow/ice/cloud discrimination. The recent Terra and Aqua Moderate Resolution Imaging Spectro-Radiometer (MODIS) sensors have a channel (channel 6) at 1.6 microns. Presently the most comprehensive, long-term information on surface albedo and temperature over snow- and ice-covered surfaces comes from the Advanced Very High Resolution Radiometer (AVHRR) sensor that has been providing imagery since July 1981. The earlier AVHRR sensors (e.g. AVHRR/2) did not, however, have a channel designed for discriminating clouds from snow, such as the 1.6 micron channel available on the more recent AVHRR/3 or the MODIS sensors. In the absence of the 1.6 micron channel, the AVHRR Polar Pathfinder (APP) product performs cloud detection using a combination of time-series analysis and multispectral threshold tests based on the satellite's measuring channels to produce a cloud mask. The method has been found to work reasonably well over sea ice, but not so well over the ice sheets. Thus, improving the cloud mask in the APP dataset would be extremely helpful toward increasing the accuracy of the albedo and temperature retrievals, as well as extending the time-series of albedo and temperature retrievals from the more recent sensors to the historical ones. In this work, we use data mining methods to construct a model of MODIS channel 6 as a function of other channels that are common to both MODIS and AVHRR. The idea is to use the model to generate the equivalent of MODIS channel 6 for AVHRR as a function of the AVHRR equivalents to MODIS channels. We call this a Virtual Sensor because it predicts unmeasured spectra. The goal is to use this virtual channel 6 to yield a cloud mask superior to what is currently used in APP. Our results show that several data mining methods such as multilayer perceptrons (MLPs), ensemble methods (e.g., bagging), and kernel methods (e.g., support vector machines) generate channel 6 for unseen MODIS images with high accuracy. Because the true channel 6 is not available for AVHRR images, we qualitatively assess the virtual channel 6 for several AVHRR images.
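
    The sketch below illustrates the virtual-sensor idea under stated assumptions: a multilayer perceptron is fitted to predict a missing channel (MODIS channel 6) from the channels shared with another sensor, and is then applied to new pixels. The arrays are synthetic stand-ins, and scikit-learn is assumed to be available.

```python
# Hedged sketch of a "virtual sensor" regression: shared channels -> channel 6.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.random((5000, 5))            # per-pixel features from shared channels
y_train = X_train @ rng.random(5)          # stand-in "channel 6" target values

virtual_channel6 = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
virtual_channel6.fit(X_train, y_train)

X_new = rng.random((1000, 5))              # pixels from a sensor lacking channel 6
ch6_estimate = virtual_channel6.predict(X_new)
```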

  5. Piezoelectric power generation for sensor applications: design of a battery-less wireless tire pressure sensor

    NASA Astrophysics Data System (ADS)

    Makki, Noaman; Pop-Iliev, Remon

    2011-06-01

    An in-wheel, wireless, and battery-less piezo-powered tire pressure sensor is developed. Where conventional battery-powered Tire Pressure Monitoring Systems (TPMS) are marred by limited battery life, TPMS based on power harvesting modules provide virtually unlimited sensor life. Furthermore, the elimination of a permanent energy reservoir simplifies the overall sensor design through the exclusion of extra circuitry required to sense vehicle motion and conserve precious battery capacity during vehicle idling periods. In this paper, two design solutions are presented: 1) very low-cost, highly flexible piezoceramic (PZT) bender elements bonded directly to the tire to generate the power required to run the sensor, and 2) a novel rim-mounted PZT harvesting unit that can be used to power pressure sensors incorporated into the valve stem, requiring minimal change to the presently used sensors. While both designs eliminate environmentally unfriendly batteries from the TPMS, they offer the advantages of being very low cost, service-free, and easily replaceable during tire repair and replacement.

  6. Optimal Deployment of Sensor Nodes Based on Performance Surface of Underwater Acoustic Communication

    PubMed Central

    Choi, Jee Woong

    2017-01-01

    The underwater acoustic sensor network (UWASN) is a system that exchanges data between numerous sensor nodes deployed in the sea. The UWASN uses an underwater acoustic communication technique to exchange data. Therefore, it is important to design a robust system that will function even in severely fluctuating underwater communication conditions, along with variations in the ocean environment. In this paper, a new algorithm to find the optimal deployment positions of underwater sensor nodes is proposed. The algorithm uses the communication performance surface, which is a map showing the underwater acoustic communication performance of a targeted area. A virtual force-particle swarm optimization algorithm is then used as an optimization technique to find the optimal deployment positions of the sensor nodes, using the performance surface information to estimate the communication radii of the sensor nodes in each generation. The algorithm is evaluated by comparing simulation results between two different seasons (summer and winter) for an area located off the eastern coast of Korea as the selected targeted area. PMID:29053569
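
    A generic particle swarm optimization sketch follows to illustrate how candidate node placements can be scored against a performance surface. It is not the authors' virtual force-particle swarm hybrid, and the surface, bounds, and PSO constants are illustrative.

```python
# Plain PSO sketch: place sensor nodes where a stand-in performance surface is high.
import numpy as np

rng = np.random.default_rng(0)

def performance(xy):
    """Stand-in performance surface: higher means better communication quality."""
    return np.exp(-((xy[..., 0] - 3.0) ** 2 + (xy[..., 1] - 2.0) ** 2) / 4.0)

def fitness(particle):
    """A particle encodes the 2-D positions of all nodes; score = mean performance."""
    return performance(particle.reshape(-1, 2)).mean()

n_particles, n_nodes, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5
x = rng.uniform(0, 6, size=(n_particles, n_nodes * 2))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 6)
    vals = np.array([fitness(p) for p in x])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmax()].copy()

print("best node positions:\n", gbest.reshape(-1, 2))
```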

  7. Virtual microphone sensing through vibro-acoustic modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    van de Walle, A.; Naets, F.; Desmet, W.

    2018-05-01

    This work proposes a virtual microphone methodology which enables full-field acoustic measurements for vibro-acoustic systems. The methodology employs a Kalman filtering framework in order to combine a reduced high-fidelity vibro-acoustic model with a structural excitation measurement and a small set of real microphone measurements on the system under investigation. By employing model order reduction techniques, a high-order finite element model can be converted into a much smaller model which preserves the desired accuracy and maintains the main physical properties of the original model. Due to the low order of the reduced-order model, it can be effectively employed in a Kalman filter. The proposed methodology is validated experimentally on a strongly coupled vibro-acoustic system. The virtual sensor vastly improves the accuracy with respect to regular forward simulation. The virtual sensor also makes it possible to recreate the full sound field of the system, which is very difficult or impossible to do through classical measurements.
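
    A compact sketch of the estimation idea is given below: a Kalman filter propagates a (reduced-order) state-space model driven by the measured excitation, corrects it with a few real microphone readings, and evaluates the state at an unmeasured location. The matrices are small random stand-ins rather than a real reduced vibro-acoustic model.

```python
# Hedged Kalman-filter sketch for a virtual microphone on a reduced state-space model.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 6, 1, 2                      # states, excitation inputs, real microphones
A = 0.95 * np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stand-in reduced dynamics
B = rng.standard_normal((n, m))        # excitation input matrix
C = rng.standard_normal((r, n))        # maps state to the real microphones
C_virt = rng.standard_normal((1, n))   # maps state to a virtual microphone location
Q, R = 1e-4 * np.eye(n), 1e-2 * np.eye(r)

x, P = np.zeros(n), np.eye(n)
for k in range(1000):
    u = np.array([np.sin(0.05 * k)])                 # measured structural excitation
    y = C @ x + 0.1 * rng.standard_normal(r)         # stand-in microphone readings
    # predict with the reduced model
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    # correct with the real microphones
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x = x + K @ (y - C @ x)
    P = (np.eye(n) - K @ C) @ P
    p_virtual = (C_virt @ x).item()                  # virtual microphone estimate
```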

  8. Wavelets and Elman Neural Networks for monitoring environmental variables

    NASA Astrophysics Data System (ADS)

    Ciarlini, Patrizia; Maniscalco, Umberto

    2008-11-01

    An application in cultural heritage is introduced. Wavelet decomposition and Elman Neural Networks acting as virtual sensors are jointly used to estimate physical and chemical measurements at specific locations of a monument. Virtual sensors, suitably trained and tested, can substitute for real sensors in monitoring the monument surface quality, whereas the real ones would have to be installed for a long time and at high cost. Applying wavelet decomposition to the environmental data series allows the underlying low-frequency temporal structure to be treated separately. Consequently, separate Elman Neural Networks can be trained for the high- and low-frequency components, improving network convergence during training and measurement accuracy during operation.
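
    The following sketch illustrates the preprocessing step under stated assumptions: a wavelet decomposition splits an environmental series into low-frequency (approximation) and high-frequency (detail) parts that could feed separate networks. The wavelet, level, and data are illustrative, and PyWavelets (pywt) is assumed to be available.

```python
# Hedged sketch: split a time series into low/high-frequency parts with wavelets.
import numpy as np
import pywt

t = np.linspace(0, 10, 2048)
series = np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.random.randn(t.size)  # stand-in data

level = 4
coeffs = pywt.wavedec(series, "db4", level=level)

# Low-frequency component: keep only the approximation coefficients
low = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
# High-frequency component: keep only the detail coefficients
high = pywt.waverec([np.zeros_like(coeffs[0])] + list(coeffs[1:]), "db4")

low, high = low[: series.size], high[: series.size]   # trim possible padding
```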

  9. Insects modify their behaviour depending on the feedback sensor used when walking on a trackball in virtual reality.

    PubMed

    Taylor, Gavin J; Paulk, Angelique C; Pearson, Thomas W J; Moore, Richard J D; Stacey, Jacqui A; Ball, David; van Swinderen, Bruno; Srinivasan, Mandyam V

    2015-10-01

    When using virtual-reality paradigms to study animal behaviour, careful attention must be paid to how the animal's actions are detected. This is particularly relevant in closed-loop experiments where the animal interacts with a stimulus. Many different sensor types have been used to measure aspects of behaviour, and although some sensors may be more accurate than others, few studies have examined whether, and how, such differences affect an animal's behaviour in a closed-loop experiment. To investigate this issue, we conducted experiments with tethered honeybees walking on an air-supported trackball and fixating a visual object in closed-loop. Bees walked faster and along straighter paths when the motion of the trackball was measured in the classical fashion - using optical motion sensors repurposed from computer mice - than when measured more accurately using a computer vision algorithm called 'FicTrac'. When computer mouse sensors were used to measure bees' behaviour, the bees modified their behaviour and achieved improved control of the stimulus. This behavioural change appears to be a response to a systematic error in the computer mouse sensor that reduces the sensitivity of this sensor system under certain conditions. Although the large perceived inertia and mass of the trackball relative to the honeybee is a limitation of tethered walking paradigms, observing differences depending on the sensor system used to measure bee behaviour was not expected. This study suggests that bees are capable of fine-tuning their motor control to improve the outcome of the task they are performing. Further, our findings show that caution is required when designing virtual-reality experiments, as animals can potentially respond to the artificial scenario in unexpected and unintended ways. © 2015. Published by The Company of Biologists Ltd.

  10. Terrain Model Registration for Single Cycle Instrument Placement

    NASA Technical Reports Server (NTRS)

    Deans, Matthew; Kunz, Clay; Sargent, Randy; Pedersen, Liam

    2003-01-01

    This paper presents an efficient and robust method for registration of terrain models created using stereo vision on a planetary rover. Our approach projects two surface models into a virtual depth map, rendering the models as they would be seen from a single range sensor. Correspondence is established based on which points project to the same location in the virtual range sensor. A robust norm of the deviations in observed depth is used as the objective function, and the algorithm searches for the rigid transformation which minimizes the norm. An initial coarse search is done using rover pose information from odometry and orientation sensing. A fine search is done using Levenberg-Marquardt. Our method enables a planetary rover to keep track of designated science targets as it moves, and to hand off targets from one set of stereo cameras to another. These capabilities are essential for the rover to autonomously approach a science target and place an instrument in contact in a single command cycle.
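
    A hedged, simplified sketch of the registration idea follows: a rigid transform is estimated by minimizing a robust (Huber) norm of point residuals. Unlike the paper, correspondences are taken by index rather than through projection into a virtual depth map, and SciPy's robust trust-region solver stands in for Levenberg-Marquardt.

```python
# Simplified robust rigid registration sketch (assumed correspondences).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, src, dst):
    """params = [rx, ry, rz, tx, ty, tz]; returns per-coordinate residuals."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return ((src @ R.T + t) - dst).ravel()

rng = np.random.default_rng(0)
src = rng.uniform(-5, 5, size=(200, 3))                         # terrain model A points
true_R = Rotation.from_rotvec([0.05, -0.02, 0.1]).as_matrix()
dst = src @ true_R.T + np.array([0.4, -0.2, 0.1])               # terrain model B points
dst += 0.01 * rng.standard_normal(dst.shape)                    # stand-in sensor noise

x0 = np.zeros(6)                                                # coarse initial guess
sol = least_squares(residuals, x0, args=(src, dst), loss="huber", f_scale=0.05)
print("estimated rotation vector:", sol.x[:3], "translation:", sol.x[3:])
```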

  11. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation

    PubMed Central

    2011-01-01

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, & 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054

  12. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation.

    PubMed

    Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo

    2011-07-26

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, & 3) that contains a 'Kinoogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.

  13. Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.

    PubMed

    Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian

    2015-10-05

    The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability in fringe spacing and in achieving PS without the need for mechanically moving parts. However, a small amount of detector or scatter noise could affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. The rewrapping of the Fourier-reconstructed wavefronts resulted in phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method is shown in comparison with the Fourier unwrapping method in the presence of noise. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.

  14. Kinect-based virtual rehabilitation and evaluation system for upper limb disorders: A case study.

    PubMed

    Ding, W L; Zheng, Y Z; Su, Y P; Li, X L

    2018-04-19

    To help patients with disabilities of the arm and shoulder recover the accuracy and stability of movements, a novel and simple virtual rehabilitation and evaluation system called the Kine-VRES system was developed using Microsoft Kinect. First, several movements and virtual tasks were designed to increase the coordination, control and speed of the arm movements. The movements of the patients were then captured using the Kinect sensor, and kinematics-based interaction and real-time feedback were integrated into the system to enhance the motivation and self-confidence of the patient. Finally, a quantitative evaluation method of upper limb movements was provided using the recorded kinematics during hand-to-hand movement. A preliminary study of this rehabilitation system indicates that the shoulder movements of two participants with ataxia became smoother after three weeks of training (one hour per day). This case study demonstrated the effectiveness of the designed system, which could be promising for the rehabilitation of patients with upper limb disorders.

  15. Virtual Proprioception for eccentric training.

    PubMed

    LeMoyne, Robert; Mastroianni, Timothy

    2017-07-01

    Wireless inertial sensors enable quantified feedback, which can be applied to evaluate the efficacy of therapy and rehabilitation. In particular, eccentric training promotes a beneficial rehabilitation and strength training strategy. Virtual Proprioception for eccentric training applies real-time feedback from a wireless gyroscope platform enabled through a software application for a smartphone. Virtual Proprioception for eccentric training is applied to the eccentric phase of biceps brachii strength training and contrasted with a biceps brachii strength training scenario without feedback. During the operation of Virtual Proprioception for eccentric training, the intent is not to exceed a prescribed gyroscope signal threshold based on the real-time presentation of the gyroscope signal, in order to promote the eccentric aspect of the strength training endeavor. The experimental trial data are transmitted wirelessly over the Internet as an email attachment for remote post-processing. A feature set is derived from the gyroscope signal for machine learning classification of the two scenarios of Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback. Considerable classification accuracy is achieved through the application of a multilayer perceptron neural network for distinguishing between the Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback.

  16. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks

    PubMed Central

    Valdivieso Caraguay, Ángel Leonardo; García Villalba, Luis Javier

    2017-01-01

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors. PMID:28362346

  17. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks.

    PubMed

    Caraguay, Ángel Leonardo Valdivieso; Villalba, Luis Javier García

    2017-03-31

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors.

  18. Nonlinear bias compensation of ZiYuan-3 satellite imagery with cubic splines

    NASA Astrophysics Data System (ADS)

    Cao, Jinshan; Fu, Jianhong; Yuan, Xiuxiao; Gong, Jianya

    2017-11-01

    Like many high-resolution satellites such as the ALOS, MOMS-2P, QuickBird, and ZiYuan1-02C satellites, the ZiYuan-3 satellite suffers from different levels of attitude oscillations. As a result of such oscillations, the rational polynomial coefficients (RPCs) obtained using a terrain-independent scenario often have nonlinear biases. In the sensor orientation of ZiYuan-3 imagery based on a rational function model (RFM), these nonlinear biases cannot be effectively compensated by an affine transformation. The sensor orientation accuracy is thereby worse than expected. In order to eliminate the influence of attitude oscillations on the RFM-based sensor orientation, a feasible nonlinear bias compensation approach for ZiYuan-3 imagery with cubic splines is proposed. In this approach, no actual ground control points (GCPs) are required to determine the cubic splines. First, the RPCs are calculated using a three-dimensional virtual control grid generated based on a physical sensor model. Second, one cubic spline is used to model the residual errors of the virtual control points in the row direction and another cubic spline is used to model the residual errors in the column direction. Then, the estimated cubic splines are used to compensate the nonlinear biases in the RPCs. Finally, the affine transformation parameters are used to compensate the residual biases in the RPCs. Three ZiYuan-3 images were tested. The experimental results showed that before the nonlinear bias compensation, the residual errors of the independent check points were nonlinearly biased. Even if the number of GCPs used to determine the affine transformation parameters was increased from 4 to 16, these nonlinear biases could not be effectively compensated. After the nonlinear bias compensation with the estimated cubic splines, the influence of the attitude oscillations could be eliminated. The RFM-based sensor orientation accuracies of the three ZiYuan-3 images reached 0.981 pixels, 0.890 pixels, and 1.093 pixels, which were respectively 42.1%, 48.3%, and 54.8% better than those achieved before the nonlinear bias compensation.
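
    The sketch below illustrates the compensation idea under stated assumptions: a cubic smoothing spline is fitted to the residual errors as a function of image row and then subtracted from RFM predictions; a second spline would be fitted in the column direction in the same way. The residual data and smoothing factor are illustrative.

```python
# Hedged sketch: spline-based compensation of nonlinear (oscillation-induced) biases.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
rows = np.linspace(0, 24000, 400)                       # image row coordinates
# stand-in nonlinear residuals of (virtual) control points in the row direction [pixels]
residual_row = 0.8 * np.sin(2 * np.pi * rows / 6000.0) + 0.05 * rng.standard_normal(rows.size)

# cubic smoothing spline (k=3); s controls the smoothness of the fitted bias model
bias_model = UnivariateSpline(rows, residual_row, k=3, s=len(rows) * 0.05**2)

def compensate(row_pred, row_coord):
    """Remove the modelled nonlinear bias from an RFM-predicted row coordinate."""
    return row_pred - bias_model(row_coord)
```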

  19. Virtual reality as a method for evaluation and therapy after traumatic hand surgery.

    PubMed

    Nica, Adriana Sarah; Brailescu, Consuela Monica; Scarlet, Rodica Gabriela

    2013-01-01

    In the last decade, Virtual Reality has seen continuous development for medical purposes, and there are a lot of devices based on the classic "cyberglove" concept that are used as new therapeutic methods for upper limb pathology, especially neurologic problems [1;2;3]. One of the VR devices is Pablo (Tyromotion), which has very sensitive sensors that can measure hand grip strength and pinch force, as well as the ROM (range of motion) for all the joints of the upper limb (shoulder, elbow, wrist), and offers interactive games based on the Virtual Reality concept with application in occupational therapy programs. We used Pablo in our study on patients with hand surgery as an objective tool for assessment and as an additional therapeutic method to the classic Rehabilitation program [4;5]. The results of the study showed that Pablo represents a modern option for evaluating hand deficits and dysfunctions: objective measurements replace classic goniometry and dynamometry, a computerized patient database allows parameters to be monitored during the recovery program, and the interactive therapeutic modules provide better muscular and neuro-cognitive feedback.

  20. Virtual environment application with partial gravity simulation

    NASA Technical Reports Server (NTRS)

    Ray, David M.; Vanchau, Michael N.

    1994-01-01

    To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial and microgravity. A partial gravity simulator (Pogo), which uses pneumatic suspension, is being studied for use in virtual reality training. Pogo maintains a constant partial gravity simulation with a variation of simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo, including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs which drive Pogo's sensors and the data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.

  1. Reliability modelling and analysis of thermal MEMS

    NASA Astrophysics Data System (ADS)

    Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.

    2006-04-01

    This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour in specific use conditions and applications. The methodology is demonstrated on a U-shaped micro electro-thermal actuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MATLAB and VHDL-AMS. A best-practice FMEA (Failure Mode and Effect Analysis) is applied to the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified faults into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of cycles of operation and specific operational conditions.

  2. A New Continent of Ideas

    NASA Technical Reports Server (NTRS)

    1990-01-01

    While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator is moving his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped, full-body garment that greatly increases the sphere of performance for virtual reality simulations.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Autonomic Intelligent Cyber Sensor (AICS) provides cyber security and industrial network state awareness for Ethernet based control network implementations. The AICS utilizes collaborative mechanisms based on Autonomic Research and a Service Oriented Architecture (SOA) to: 1) identify anomalous network traffic; 2) discover network entity information; 3) deploy deceptive virtual hosts; and 4) implement self-configuring modules. AICS achieves these goals by dynamically reacting to the industrial human-digital ecosystem in which it resides. Information is transported internally and externally on a standards based, flexible two-level communication structure.

  4. The Virtual Tablet: Virtual Reality as a Control System

    NASA Technical Reports Server (NTRS)

    Chronister, Andrew

    2016-01-01

    In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.

  5. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    PubMed

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge at Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  6. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata

    PubMed Central

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-01

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge at Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included. PMID:29337877

  7. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increase in demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low-level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale is detailed and a prototype is examined. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible cloud computing resource allocation are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that the container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  8. Ultrasonic imaging of material flaws exploiting multipath information

    NASA Astrophysics Data System (ADS)

    Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.

    2011-05-01

    In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.

  9. Virtualization of Event Sources in Wireless Sensor Networks for the Internet of Things

    PubMed Central

    Martínez, Néstor Lucas; Martínez, José-Fernán; Díaz, Vicente Hernández

    2014-01-01

    Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways that become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model. PMID:25470489

  10. Practical design and evaluation methods of omnidirectional vision sensors

    NASA Astrophysics Data System (ADS)

    Ohte, Akira; Tsuzuki, Osamu

    2012-01-01

    A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.

  11. Fast Markerless Tracking for Augmented Reality in Planar Environment

    NASA Astrophysics Data System (ADS)

    Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim

    2015-12-01

    Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between real and virtual beings. Currently reported methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors as a camera pose predictor. To align the augmentation with camera motion, the tracking method substitutes feature-based camera estimation with a combination of inertial sensors and a complementary filter to provide a more dynamic response. The proposed method managed to track unknown environments with faster processing time compared to available feature-based approaches. Moreover, the proposed method can sustain its estimation in situations where feature-based tracking loses its track. The collaborative sensor tracking performed the task at about 22.97 FPS, up to five times faster than the feature-based tracking method used for comparison. Therefore, the proposed method can be used to track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.
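
    A minimal complementary-filter sketch follows to illustrate the inertial fusion referenced above: the gyroscope integrates fast orientation changes while the accelerometer provides a slow, drift-free tilt reference. The blend factor and sample data are illustrative, and this is not the authors' full hybrid tracker.

```python
# Hedged complementary-filter sketch fusing gyroscope and accelerometer data.
import math

def complementary_filter(gyro_rates, accel_samples, dt=0.01, alpha=0.98):
    """Estimate roll angle [rad] from gyro rate (rad/s) and accel (ax, ay, az in g)."""
    roll = 0.0
    estimates = []
    for rate, (ax, ay, az) in zip(gyro_rates, accel_samples):
        roll_acc = math.atan2(ay, az)                  # tilt reference from gravity
        roll = alpha * (roll + rate * dt) + (1.0 - alpha) * roll_acc
        estimates.append(roll)
    return estimates

# usage with stand-in data: constant 0.1 rad/s rotation, gravity mostly along z
gyro = [0.1] * 100
accel = [(0.0, math.sin(0.1 * k * 0.01), math.cos(0.1 * k * 0.01)) for k in range(100)]
roll_track = complementary_filter(gyro, accel)
```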

  12. A distributed geo-routing algorithm for wireless sensor networks.

    PubMed

    Joshi, Gyanendra Prasad; Kim, Sung Won

    2009-01-01

    Geographic wireless sensor networks use position information for greedy routing. Greedy routing works well in dense networks, whereas in sparse networks it may fail and require a recovery algorithm. Recovery algorithms help the packet get out of the communication void. However, these algorithms are generally costly for resource-constrained position-based wireless sensor networks (WSNs). In this paper, we propose a void avoidance algorithm (VAA), a novel idea based on upgrading virtual distance. VAA allows wireless sensor nodes to remove all stuck nodes by transforming the routing graph and forwarding packets using only greedy routing. In VAA, a stuck node upgrades its virtual distance until it finds a next-hop node that is closer to the destination than itself. VAA guarantees packet delivery if there is a topologically valid path. Further, it is completely distributed, responds immediately to node failures or topology changes, and does not require planarization of the network. NS-2 is used to evaluate the performance and correctness of VAA, and we compare its performance to other protocols. Simulations show that our proposed algorithm consumes less energy, finds efficient paths, and incurs substantially less control overhead.
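
    The sketch below illustrates the virtual-distance idea in Python: a greedy forwarder that, when stuck in a void, repeatedly raises its own virtual distance until some neighbor appears closer to the sink. The topology, step size and node names are invented for illustration and are not the authors' implementation.

        import math

        positions = {'A': (0, 0), 'B': (2, 0), 'C': (2, 3), 'S': (6, 0)}   # 'S' is the sink
        neighbors = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B', 'S'], 'S': ['C']}
        virtual = {n: math.dist(p, positions['S']) for n, p in positions.items()}

        def next_hop(node, step=1.0):
            """Greedy choice; a stuck node upgrades its virtual distance until a neighbor is closer."""
            while True:
                closer = [n for n in neighbors[node] if virtual[n] < virtual[node]]
                if closer:
                    return min(closer, key=lambda n: virtual[n])
                virtual[node] += step            # the VAA-style distance upgrade

        node, path = 'A', ['A']
        while node != 'S':
            node = next_hop(node)
            path.append(node)
        print(path)    # ['A', 'B', 'C', 'S'] despite the void at B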

  13. Long wave infrared cavity-enhanced sensors using quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Taubman, Matthew S.; Scott, David C.; Myers, Tanya L.; Cannon, Bret D.

    2005-11-01

    Quantum cascade lasers (QCLs) are becoming well known as convenient and stable semiconductor laser sources operating in the mid- to long-wave infrared, and are able to be fabricated to operate virtually anywhere in the 3.5 to 25 micron region. This makes them an ideal choice for infrared chemical sensing, a topic of great interest at present, spanning at least three critical areas: national security, environmental monitoring and protection, and the early diagnosis of disease through breath analysis. There are many different laser-based spectroscopic chemical sensor architectures in use today, from simple direct detection through to more complex and highly sensitive systems. Many current sensor needs can be met by combining QCLs and appropriate sensor architectures, those needs ranging from UAV-mounted surveillance systems, through to larger ultra-sensitive systems for airport security. In this paper we provide an overview of various laser-based spectroscopic sensing techniques, pointing out advantages and disadvantages of each. As part of this process, we include our own results and observations for techniques under development at PNNL. We also present the latest performance of our ultra-quiet QCL control electronics now being commercialized, and explore how using optimized supporting electronics enables increased sensor performance and decreased sensor footprint for given applications.

  14. Temporally coherent 4D video segmentation for teleconferencing

    NASA Astrophysics Data System (ADS)

    Ehmann, Jana; Guleryuz, Onur G.

    2013-09-01

    We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.

  15. Magnetosensitive e-skins with directional perception for augmented reality

    PubMed Central

    Cañón Bermúdez, Gilbert Santiago; Karnaushenko, Dmitriy D.; Karnaushenko, Daniil; Lebanov, Ana; Bischoff, Lothar; Kaltenbrunner, Martin; Fassbender, Jürgen; Schmidt, Oliver G.; Makarov, Denys

    2018-01-01

    Electronic skins equipped with artificial receptors are able to extend our perception beyond the modalities that have naturally evolved. These synthetic receptors offer complementary information on our surroundings and endow us with novel means of manipulating physical or even virtual objects. We realize highly compliant magnetosensitive skins with directional perception that enable magnetic cognition, body position tracking, and touchless object manipulation. Transfer printing of eight high-performance spin valve sensors arranged into two Wheatstone bridges onto 1.7-μm-thick polyimide foils ensures mechanical imperceptibility. This represents a new class of interactive devices that extract information from the surroundings through magnetic tags. We demonstrate this concept in augmented reality systems with virtual knob-turning functions and the operation of virtual dialing pads, based on the interaction with magnetic fields. This technology will enable a cornucopia of applications from navigation, motion tracking in robotics, regenerative medicine, and sports and gaming to interaction in supplemented reality. PMID:29376121

  16. Planar maneuvering control of underwater snake robots using virtual holonomic constraints.

    PubMed

    Kohl, Anna M; Kelasidi, Eleni; Mohammadi, Alireza; Maggiore, Manfredi; Pettersen, Kristin Y

    2016-11-24

    This paper investigates the problem of planar maneuvering control for bio-inspired underwater snake robots that are exposed to unknown ocean currents. The control objective is to make a neutrally buoyant snake robot which is subject to hydrodynamic forces and ocean currents converge to a desired planar path and traverse the path with a desired velocity. The proposed feedback control strategy enforces virtual constraints which encode biologically inspired gaits on the snake robot configuration. The virtual constraints, parametrized by states of dynamic compensators, are used to regulate the orientation and forward speed of the snake robot. A two-state ocean current observer based on relative velocity sensors is proposed. It enables the robot to follow the path in the presence of unknown constant ocean currents. The efficacy of the proposed control algorithm for several biologically inspired gaits is verified both in simulations for different path geometries and in experiments.
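
    The gaits that such virtual constraints typically encode follow a lateral-undulation reference for each joint; the short sketch below evaluates a reference of this common form with illustrative parameter values, which are assumptions rather than the paper's gait parameters.

        import math

        def gait_reference(i, t, alpha=0.5, omega=1.2, delta=0.6, phi0=0.0):
            """Reference angle for joint i at time t: alpha*sin(omega*t + (i-1)*delta) + phi0."""
            return alpha * math.sin(omega * t + (i - 1) * delta) + phi0

        print([round(gait_reference(i, 0.0), 3) for i in range(1, 6)])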

  17. Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.

    PubMed

    Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-01-01

    This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.

  18. The Virtual Environment for Rapid Prototyping of the Intelligent Environment

    PubMed Central

    Bouzouane, Abdenour; Gaboury, Sébastien

    2017-01-01

    Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants’ behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs. PMID:29112175

  19. The Virtual Environment for Rapid Prototyping of the Intelligent Environment.

    PubMed

    Francillette, Yannick; Boucher, Eric; Bouzouane, Abdenour; Gaboury, Sébastien

    2017-11-07

    Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants' behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs.

  20. A New User Interface for On-Demand Customizable Data Products for Sensors in a SensorWeb

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don

    2011-01-01

    A SensorWeb is a set of sensors, which can consist of ground, airborne and space-based sensors interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products; and (4) enables automated delivery of the data products to the user's desktop. A recent addition to the SensorWeb Toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading and execution of new algorithms for processing sensor data. The web service along with the user interface follows the Open Geospatial Consortium (OGC) standard called Web Coverage Processing Service (WCPS). This presentation will detail the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to Amazon online shopping.

  1. Hyperspectral target detection analysis of a cluttered scene from a virtual airborne sensor platform using MuSES

    NASA Astrophysics Data System (ADS)

    Packard, Corey D.; Viola, Timothy S.; Klein, Mark D.

    2017-10-01

    The ability to predict spectral electro-optical (EO) signatures for various targets against realistic, cluttered backgrounds is paramount for rigorous signature evaluation. Knowledge of background and target signatures, including plumes, is essential for a variety of scientific and defense-related applications including contrast analysis, camouflage development, automatic target recognition (ATR) algorithm development and scene material classification. The capability to simulate any desired mission scenario with forecast or historical weather is a tremendous asset for defense agencies, serving as a complement to (or substitute for) target and background signature measurement campaigns. In this paper, a systematic process for the physical temperature and visible-through-infrared radiance prediction of several diverse targets in a cluttered natural environment scene is presented. The ability of a virtual airborne sensor platform to detect and differentiate targets from a cluttered background, from a variety of sensor perspectives and across numerous wavelengths in differing atmospheric conditions, is considered. The process described utilizes the thermal and radiance simulation software MuSES and provides a repeatable, accurate approach for analyzing wavelength-dependent background and target (including plume) signatures in multiple band-integrated wavebands (multispectral) or hyperspectrally. The engineering workflow required to combine 3D geometric descriptions, thermal material properties, natural weather boundary conditions, all modes of heat transfer and spectral surface properties is summarized. This procedure includes geometric scene creation, material and optical property attribution, and transient physical temperature prediction. Radiance renderings, based on ray-tracing and the Sandford-Robertson BRDF model, are coupled with MODTRAN for the inclusion of atmospheric effects. This virtual hyperspectral/multispectral radiance prediction methodology has been extensively validated and provides a flexible process for signature evaluation and algorithm development.

  2. Real-Time Mapping: Contemporary Challenges and the Internet of Things as the Way Forward

    NASA Astrophysics Data System (ADS)

    Bęcek, Kazimierz

    2016-12-01

    The Internet of Things (IoT) is an emerging technology that was conceived in 1999. The key components of the IoT are intelligent sensors, which represent objects of interest. The adjective `intelligent' is used here in the information gathering sense, not the psychological sense. Some 30 billion sensors that `know' the current status of objects they represent are already connected to the Internet. Various studies indicate that the number of installed sensors will reach 212 billion by 2020. Various scenarios of IoT projects show sensors being able to exchange data with the network as well as between themselves. In this contribution, we discuss the possibility of deploying the IoT in cartography for real-time mapping. A real-time map is prepared using data harvested through querying sensors representing geographical objects, and the concept of a virtual sensor for abstract objects, such as a land parcel, is presented. A virtual sensor may exist as a data record in the cloud. Sensors are identified by an Internet Protocol address (IP address), which implies that geographical objects through their sensors would also have an IP address. This contribution is an updated version of a conference paper presented by the author during the International Federation of Surveyors 2014 Congress in Kuala Lumpur. The author hopes that the use of the IoT for real-time mapping will be considered by the mapmaking community.

  3. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  4. A Miniature System for Separating Aerosol Particles and Measuring Mass Concentrations

    PubMed Central

    Liang, Dao; Shih, Wen-Pin; Chen, Chuin-Shan; Dai, Chi-An

    2010-01-01

    We designed and fabricated a new sensing system which consists of two virtual impactors and two quartz-crystal microbalance (QCM) sensors for measuring particle mass concentration and size distribution. The virtual impactors utilized different inertial forces of particles in air flow to classify different particle sizes. They were designed to classify particle diameter, d, into three different ranges: d < 2.28 μm, 2.28 μm ≤ d ≤ 3.20 μm, d > 3.20 μm. The QCM sensors were coated with a hydrogel, which was found to be a reliable adhesive for capturing aerosol particles. The QCM sensor coated with hydrogel was used to measure the mass loading of particles by utilizing its characteristic of resonant frequency shift. An integrated system has been demonstrated. PMID:22319317
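
    The mass loading is commonly related to the measured resonant frequency shift through the Sauerbrey equation, given here as standard background rather than the authors' exact calibration:

        \Delta f = -\frac{2 f_0^{2}}{A \sqrt{\rho_q \mu_q}}\, \Delta m

    where f_0 is the fundamental resonant frequency of the crystal, A the active electrode area, \rho_q and \mu_q the density and shear modulus of quartz, and \Delta m the deposited mass.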

  5. Advanced Networks in Dental Rich Online MEDiA (ANDROMEDA)

    NASA Astrophysics Data System (ADS)

    Elson, Bruce; Reynolds, Patricia; Amini, Ardavan; Burke, Ezra; Chapman, Craig

    There is growing demand for dental education and training, not only in terms of knowledge but also skills. This demand is driven by continuing professional development requirements in the more developed economies, personnel shortages and skills differences across the European Union (EU) accession states, and more generally in the developing world. There is an excellent opportunity for the EU to meet this demand by developing an innovative online flexible learning platform (FLP). Current clinical online systems are restricted to the delivery of general, knowledge-based training with no easy method of personalization or delivery of skill-based training. The PHANTOM project, headed by Kings College London, is developing haptic-based virtual reality training systems for clinical dental training. ANDROMEDA seeks to build on this and establish a Flexible Learning Platform that can integrate the haptic- and sensor-based training with rich media knowledge transfer, whilst using sophisticated technologies including service-oriented architecture (SOA), Semantic Web technologies, knowledge-based engineering, business intelligence (BI) and virtual worlds for personalization.

  6. Front end design of smartphone-based mobile health

    NASA Astrophysics Data System (ADS)

    Zhang, Changfan; He, Lingsong; Gao, Zhiqiang; Ling, Cong; Du, Jianhao

    2015-02-01

    Mobile health has become a new trend all over the world with the rapid development of intelligent terminals and the mobile internet. It can help patients monitor their health at home and is convenient for doctors to diagnose remotely. Smartphone-based mobile health has big advantages in cost and data sharing. Its front-end design mainly focuses on two points: one is the implementation of medical sensors aimed at measuring various kinds of medical signals; the other is the acquisition of those signals from the sensors by the smart phone. In this paper, both aspects are discussed. First, the medical sensor implementation draws on mature measurement solutions, with an ECG (electrocardiograph) sensor design taken as an example; using integrated chips can simplify the design. Second, typical data acquisition architectures for smart phones, namely Bluetooth- and MIC (microphone)-based architectures, are compared. The Bluetooth architecture requires an acquisition card, whereas the MIC design uses the smart phone's sound card instead. Smartphone-based virtual instrument app design corresponding to the above acquisition architectures is also discussed. In experiments, the Bluetooth and MIC architectures were used to acquire blood pressure and ECG data, respectively. The results showed that the Bluetooth design can guarantee high accuracy during the acquisition and transmission process, and the MIC design is competitive because of its low cost and convenience.

  7. Design and Development of Card-Sized Virtual Keyboard Using Permanent Magnets and Hall Sensors

    NASA Astrophysics Data System (ADS)

    Demachi, Kazuyuki; Ohyama, Makoto; Kanemoto, Yoshiki; Masaie, Issei

    This paper proposes a method to identify which keys are typed by human fingers fitted with small permanent magnets. Hall sensors arrayed within a credit-card-sized area sense the distribution of the magnetic field produced by the key-typing movements of the fingers, as if a keyboard existed, and the signal is analyzed using a genetic algorithm or a neural network algorithm to identify the typed keys. By this method, the keyboard can be miniaturized to credit card size (54 mm × 85 mm). We call this system `The virtual keyboard system'.
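
    As a toy illustration of how a key can be inferred from the sensor array, the sketch below classifies a reading against stored field-pattern templates with a nearest-centroid rule; the sensor count, template values and sample reading are invented and merely stand in for the genetic-algorithm or neural-network analysis described in the paper.

        import numpy as np

        templates = {                       # key -> expected field pattern over 8 Hall sensors (made up)
            'A': np.array([0.9, 0.7, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0]),
            'K': np.array([0.0, 0.0, 0.1, 0.3, 0.8, 0.9, 0.3, 0.1]),
        }

        def classify(reading):
            """Return the key whose template is closest to the measured pattern."""
            return min(templates, key=lambda k: np.linalg.norm(reading - templates[k]))

        sample = np.array([0.85, 0.65, 0.25, 0.05, 0.0, 0.05, 0.0, 0.0])
        print(classify(sample))             # -> 'A'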

  8. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
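
    The fusion step described above feeds the DGPS/Vision-derived information into an Extended Kalman Filter; the sketch below shows one generic EKF predict/update cycle for context. The state, models and noise matrices are placeholders, not the authors' filter design.

        import numpy as np

        def ekf_step(x, P, u, z, f, F, h, H, Q, R):
            """One EKF cycle: propagate state x with motion model f, then correct with measurement z."""
            x_pred = f(x, u)                          # predicted state
            F_k = F(x, u)
            P_pred = F_k @ P @ F_k.T + Q              # predicted covariance
            H_k = H(x_pred)
            y = z - h(x_pred)                         # innovation
            S = H_k @ P_pred @ H_k.T + R
            K = P_pred @ H_k.T @ np.linalg.inv(S)     # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
            return x_new, P_new

        # Tiny 1-D demo with a constant-state model and a single pseudo-measurement.
        f = lambda x, u: x
        F = lambda x, u: np.eye(1)
        h = lambda x: x
        H = lambda x: np.eye(1)
        x, P = np.array([0.0]), np.eye(1)
        x, P = ekf_step(x, P, None, np.array([1.0]), f, F, h, H, 0.01 * np.eye(1), 0.1 * np.eye(1))
        print(x, P)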

  9. Omics approaches to individual variation: modeling networks and the virtual patient.

    PubMed

    Lehrach, Hans

    2016-09-01

    Every human is unique. We differ in our genomes, environment, behavior, disease history, and past and current medical treatment-a complex catalog of differences that often leads to variations in the way each of us responds to a particular therapy. We argue here that true personalization of drug therapies will rely on "virtual patient" models based on a detailed characterization of the individual patient by molecular, imaging, and sensor techniques. The models will be based, wherever possible, on the molecular mechanisms of disease processes and drug action but can also expand to hybrid models including statistics/machine learning/artificial intelligence-based elements trained on available data to address therapeutic areas or therapies for which insufficient information on mechanisms is available. Depending on the disease, its mechanisms, and the therapy, virtual patient models can be implemented at a fairly high level of abstraction, with molecular models representing cells, cell types, or organs relevant to the clinical question, interacting not only with each other but also the environment. In the future, "virtual patient/in-silico self" models may not only become a central element of our health care system, reducing otherwise unavoidable mistakes and unnecessary costs, but also act as "guardian angels" accompanying us through life to protect us against dangers and to help us to deal intelligently with our own health and wellness.

  10. Omics approaches to individual variation: modeling networks and the virtual patient

    PubMed Central

    Lehrach, Hans

    2016-01-01

    Every human is unique. We differ in our genomes, environment, behavior, disease history, and past and current medical treatment—a complex catalog of differences that often leads to variations in the way each of us responds to a particular therapy. We argue here that true personalization of drug therapies will rely on “virtual patient” models based on a detailed characterization of the individual patient by molecular, imaging, and sensor techniques. The models will be based, wherever possible, on the molecular mechanisms of disease processes and drug action but can also expand to hybrid models including statistics/machine learning/artificial intelligence-based elements trained on available data to address therapeutic areas or therapies for which insufficient information on mechanisms is available. Depending on the disease, its mechanisms, and the therapy, virtual patient models can be implemented at a fairly high level of abstraction, with molecular models representing cells, cell types, or organs relevant to the clinical question, interacting not only with each other but also the environment. In the future, “virtual patient/in-silico self” models may not only become a central element of our health care system, reducing otherwise unavoidable mistakes and unnecessary costs, but also act as “guardian angels” accompanying us through life to protect us against dangers and to help us to deal intelligently with our own health and wellness. PMID:27757060

  11. Virtual reality 3D headset based on DMD light modulators

    NASA Astrophysics Data System (ADS)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMD). Current methods for presenting information for virtual reality are focused on either polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or miniature LCD or LED displays often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micro mirrors delivering 720p resolution displays in a small form-factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design concept is described in which light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina, resulting in a virtual retinal display.

  12. Using Amazon Web Services (AWS) to enable real-time, remote sensing of biophysical and anthropogenic conditions in green infrastructure systems in Philadelphia, an ultra-urban application of the Internet of Things (IoT)

    NASA Astrophysics Data System (ADS)

    Montalto, F. A.; Yu, Z.; Soldner, K.; Israel, A.; Fritch, M.; Kim, Y.; White, S.

    2017-12-01

    Urban stormwater utilities are increasingly using decentralized "green" infrastructure (GI) systems to capture stormwater and achieve compliance with regulations. Because environmental conditions and design vary by GSI facility, monitoring of GSI systems under a range of conditions is essential. Conventional monitoring efforts can be costly because in-field data logging requires high data transmission rates. The Internet of Things (IoT) can be used to more cost-effectively collect, store, and publish GSI monitoring data. Using 3G mobile networks, a cloud-based database was built on an Amazon Web Services (AWS) EC2 virtual machine to store and publish data collected with environmental sensors deployed in the field. This database can store multi-dimensional time series data, as well as photos and other observations logged by citizen scientists through a public engagement mobile app, via a new Application Programming Interface (API). Also on the AWS EC2 virtual machine, a real-time QAQC flagging algorithm was developed to validate the sensor data streams.
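
    A real-time QAQC flag of the kind mentioned can be as simple as range and spike checks applied to each incoming reading; the sketch below is a hypothetical example, with thresholds and values chosen only for illustration rather than taken from the deployed system.

        def qaqc_flag(value, prev_value, lo=-5.0, hi=60.0, max_step=10.0):
            """Flag one incoming sensor reading (e.g. a water-level or temperature sample)."""
            if value is None:
                return 'MISSING'
            if not (lo <= value <= hi):
                return 'OUT_OF_RANGE'
            if prev_value is not None and abs(value - prev_value) > max_step:
                return 'SPIKE'
            return 'OK'

        readings = [12.1, 12.3, 55.2, 12.4, None, 80.0]
        flags, prev = [], None
        for r in readings:
            flags.append(qaqc_flag(r, prev))
            prev = r if r is not None else prev
        print(flags)    # ['OK', 'OK', 'SPIKE', 'SPIKE', 'MISSING', 'OUT_OF_RANGE']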

  13. A miniature disposable radio (MiDR) for unattended ground sensor systems (UGSS) and munitions

    NASA Astrophysics Data System (ADS)

    Wells, Jeffrey S.; Wurth, Timothy J.

    2004-09-01

    Unattended and tactical sensors are used by the U.S. Army's Future Combat Systems (FCS) and Objective Force Warrior (OFW) to detect and identify enemy targets on the battlefield. The radios being developed as part of the Networked Sensors for the Objective Force (NSOF) are too costly and too large to deploy in missions requiring throw-away hardware. A low-cost miniature radio is required to satisfy the communication needs for unmanned sensor and munitions systems that are deployed in a disposable manner. A low cost miniature disposable communications suite is leveraged using the commercial off-the-shelf market and employing a miniature universal frequency conversion architecture. Employing the technology of universal frequency architecture in a commercially available communication unit delivers a robust disposable transceiver that can operate at virtually any frequency. A low-cost RF communication radio has applicability in the commercial, homeland defense, military, and other government markets. Specific uses include perimeter monitoring, infrastructure defense, unattended ground sensors, tactical sensors, and border patrol. This paper describes a low-cost radio architecture to meet the requirements of throw-away radios that can be easily modified or tuned to virtually any operating frequency required for the specific mission.

  14. Skills based evaluation of alternative input methods to command a semi-autonomous electric wheelchair.

    PubMed

    Rojas, Mario; Ponce, Pedro; Molina, Arturo

    2016-08-01

    This paper presents the evaluation, under standardized metrics, of alternative input methods to steer and maneuver a semi-autonomous electric wheelchair. The Human-Machine Interface (HMI), which includes a virtual joystick, head movements and speech recognition controls, was designed to facilitate mobility skills for severely disabled people. Thirteen tasks common to all wheelchair users were attempted five times each, controlling the wheelchair with the virtual joystick and with the hands-free interfaces, in different areas for disabled and non-disabled people. Even though the prototype has an intelligent navigation control based on fuzzy logic and ultrasonic sensors, the evaluation was done without assistance. The scored values showed that both the head-movement control and the virtual joystick have similar capabilities, 92.3% and 100%, respectively. However, the 54.6% capacity score obtained for the speech control interface indicates the need for navigation assistance to accomplish some of the goals. Furthermore, the evaluation time indicates which skills require more user training with the interface, as well as specifications to improve the total performance of the wheelchair.

  15. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials

    PubMed Central

    2014-01-01

    Background People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve speed of the tactile BCI system. Methods Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball-paradigm. Results Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed. Conclusion We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses feasibility of tactile ERPs for BCI based wheelchair control. PMID:24428900

  16. Colonoscope navigation system using colonoscope tracking method based on line registration

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kondo, Hiroaki; Kitasaka, Takayuki; Furukawa, Kazuhiro; Miyahara, Ryoji; Hirooka, Yoshiki; Goto, Hidemi; Navab, Nassir; Mori, Kensaku

    2014-03-01

    This paper presents a new colonoscope navigation system. CT colonography is utilized for colon diagnosis based on CT images. If polyps are found during CT colonography, colonoscopic polypectomy can be performed to remove them. While performing a colonoscopic examination, a physician controls the colonoscope based on his/her experience. Inexperienced physicians may cause complications, such as colon perforation, during colonoscopic examinations. To reduce complications, a navigation system for the colonoscope during colonoscopic examinations is necessary. We propose a colonoscope navigation system that includes a new colonoscope tracking method. This method obtains a colon centerline from a CT volume of a patient. A curved line (colonoscope line) representing the shape of the colonoscope inserted into the colon is obtained by using electromagnetic sensors. A coordinate system registration process that employs the ICP algorithm is performed to register the CT and sensor coordinate systems. The colon centerline and colonoscope line are registered by using a line registration method. The position of the colonoscope tip in the colon is obtained from the line registration result. Our colonoscope navigation system displays virtual colonoscopic views generated from the CT volumes. The viewpoint of the virtual colonoscopic view is a point on the centerline that corresponds to the colonoscope tip. Experimental results using a colon phantom showed that the proposed colonoscope tracking method can track the colonoscope tip with small tracking errors.
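
    As background for the line-registration step, the sketch below implements a generic point-to-point ICP alignment of a sensed line with a centerline using nearest-neighbour pairing and an SVD-based rigid fit; it is a standard ICP, not the authors' algorithm, and the toy curve and transform are assumptions.

        import numpy as np

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
            cs, cd = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, dst, iters=30):
            cur = src.copy()
            for _ in range(iters):
                nn = dst[np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)]
                R, t = best_rigid_transform(cur, nn)
                cur = cur @ R.T + t
            return cur

        # Toy check: recover a rotated and translated copy of a curved line.
        theta = np.linspace(0, np.pi, 50)
        centerline = np.stack([np.cos(theta), np.sin(theta), theta], axis=1)
        a = 0.4
        Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
        sensed = centerline @ Rz.T + np.array([0.3, -0.2, 0.1])
        aligned = icp(sensed, centerline)
        print(np.abs(aligned - centerline).max())   # residual; small if the alignment converged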

  17. Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing

    PubMed Central

    Invitto, Sara; Faggiano, Chiara; Sammarco, Silvia; De Luca, Valerio; De Paolis, Lucio T.

    2016-01-01

    In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user’s hand and fingers, which are reproduced on a computer screen by the proper software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex and chosen among university students. The subjects took part in motor imagery training and an immersive affordance condition (a virtual training with Leap Motion and a haptic training with real objects). After each training session the subjects performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which handle visual sensory processing; in contrast, latencies decreased in the frontal lobe, where the brain is mainly activated for attention and action planning. PMID:26999151

  18. Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing.

    PubMed

    Invitto, Sara; Faggiano, Chiara; Sammarco, Silvia; De Luca, Valerio; De Paolis, Lucio T

    2016-03-18

    In this work, the perception of affordances was analysed in terms of cognitive neuroscience during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user's hand and fingers, which are reproduced on a computer screen by the proper software applications. For our experiment, we employed a sample of 10 subjects matched by age and sex and chosen among university students. The subjects took part in motor imagery training and an immersive affordance condition (a virtual training with Leap Motion and a haptic training with real objects). After each training session the subjects performed a recognition task, in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training. During the Leap Motion session, latencies increased in the occipital lobes, which handle visual sensory processing; in contrast, latencies decreased in the frontal lobe, where the brain is mainly activated for attention and action planning.

  19. The U.S. Air Force Transformation Flight Plan

    DTIC Science & Technology

    2003-11-01

    at Buckley Air Force Base, Colorado. Reserve Associate and Active Associate units have proven that this concept works and benefits the Active and...munitions manufactured from nano-particles, whose virtually all-surface structure yields unprecedented "burn-rates" (extreme explosiveness), promise far...systems for a common operating system, and a suite of remotely operated sensors, weapons, and robotics. Also included are a group of non-lethal weapon

  20. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    PubMed Central

    Zubair, Suleiman; Fisal, Norsheila

    2014-01-01

    The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential is yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, the avoidance of routing hot spots, and stability. The validation results, which closely follow the simulation results, show that the developed scheme makes greater advancement toward the sink than the usual route-selection decisions of the relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results show the enhanced reliability, lower latency and energy efficiency of the presented scheme. PMID:24854362

  1. Design and implementation of visual-haptic assistive control system for virtual rehabilitation exercise and teleoperation manipulation.

    PubMed

    Veras, Eduardo J; De Laurentis, Kathryn J; Dubey, Rajiv

    2008-01-01

    This paper describes the design and implementation of a control system that integrates visual and haptic information to give assistive force feedback through a haptic controller (Omni Phantom) to the user. A sensor-based assistive function and velocity scaling program provides force feedback that helps the user complete trajectory-following exercises for rehabilitation purposes. The system also incorporates a PUMA robot for teleoperation; a camera and a laser range finder, controlled in real time by a PC, were integrated into the system to help the user define the intended path to the selected target. The real-time force feedback from the remote robot to the haptic controller is made possible by using effective multithreading programming strategies in the control system design and by novel sensor integration. The sensor-based assistive function concept applied to teleoperation, as well as shared control, enhances the motion range and manipulation capabilities of users executing rehabilitation exercises such as trajectory following along a sensor-defined path. The system is modularly designed to allow for integration of different master devices and sensors. Furthermore, because this real-time system is versatile, the haptic component can be used separately from the telerobotic component; in other words, one can use the haptic device for rehabilitation purposes in cases where assistance is needed to perform tasks (e.g., stroke rehab) and also for teleoperation with force feedback and sensor assistance in either supervisory or automatic modes.

  2. Automatic detection and visualisation of MEG ripple oscillations in epilepsy.

    PubMed

    van Klink, Nicole; van Rosmalen, Frank; Nenonen, Jukka; Burnos, Sergey; Helle, Liisa; Taulu, Samu; Furlong, Paul Lawrence; Zijlmans, Maeike; Hillebrand, Arjan

    2017-01-01

    High frequency oscillations (HFOs, 80-500 Hz) in invasive EEG are a biomarker for the epileptic focus. Ripples (80-250 Hz) have also been identified in non-invasive MEG, yet detection is impeded by noise, their low occurrence rates, and the workload of visual analysis. We propose a method that identifies ripples in MEG through noise reduction, beamforming and automatic detection with minimal user effort. We analysed 15 min of presurgical resting-state interictal MEG data of 25 patients with epilepsy. The MEG signal-to-noise ratio was improved by using a cross-validation signal space separation method, and by calculating ~2400 beamformer-based virtual sensors in the grey matter. Ripples in these sensors were automatically detected by an algorithm optimized for MEG. A small subset of the identified ripples was visually checked. Ripple locations were compared with MEG spike dipole locations and the resection area if available. Running the automatic detection algorithm resulted in on average 905 ripples per patient, of which on average 148 ripples were visually reviewed. Reviewing took approximately 5 min per patient, and identified ripples in 16 out of 25 patients. In 14 patients the ripple locations showed good or moderate concordance with the MEG spikes. For six out of eight patients who had surgery, the ripple locations showed concordance with the resection area: 4/5 with good outcome and 2/3 with poor outcome. Automatic ripple detection in beamformer-based virtual sensors is a feasible non-invasive tool for the identification of ripples in MEG. Our method requires minimal user effort and is easily applicable in a clinical setting.
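
    A minimal version of such an automatic detector, band-pass filtering a single virtual-sensor trace to the ripple band and thresholding its envelope, is sketched below; the band edges, threshold and minimum duration are illustrative assumptions and not the published detection parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def detect_ripples(x, fs, band=(80, 250), thresh_sd=3.0, min_dur=0.02):
            """Return (start, end) times, in seconds, of supra-threshold ripple-band events."""
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
            env = np.abs(hilbert(filtfilt(b, a, x)))            # ripple-band envelope
            above = env > env.mean() + thresh_sd * env.std()
            events, start = [], None
            for i, flag in enumerate(above):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if (i - start) / fs >= min_dur:
                        events.append((start / fs, i / fs))
                    start = None
            return events

        fs = 1000.0
        t = np.arange(0, 10, 1 / fs)
        sig = np.random.randn(t.size)
        sig[5000:5060] += 5 * np.sin(2 * np.pi * 120 * t[5000:5060])   # injected test ripple
        print(detect_ripples(sig, fs))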

  3. Software as a service approach to sensor simulation software deployment

    NASA Astrophysics Data System (ADS)

    Webster, Steven; Miller, Gordon; Mayott, Gregory

    2012-05-01

    Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations that require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision and Electronic Sensors Directorate (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, the enabled and managed system of simulations yields a durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and give the domain community the benefit of immediate deployment of lessons learned.

  4. Development of Virtual Resource Based IoT Proxy for Bridging Heterogeneous Web Services in IoT Networks.

    PubMed

    Jin, Wenquan; Kim, DoHyeun

    2018-05-26

    The Internet of Things comprises heterogeneous devices, applications, and platforms using multiple communication technologies to connect to the Internet for providing seamless services ubiquitously. With the requirement of developing Internet of Things products, many protocols, program libraries, frameworks, and standard specifications have been proposed. Therefore, providing a consistent interface to access services from those environments is difficult. Moreover, bridging existing web services to sensor and actuator networks is also important for providing Internet of Things services in various industry domains. In this paper, an Internet of Things proxy is proposed that is based on virtual resources to bridge heterogeneous web services from the Internet to the Internet of Things network. The proxy enables clients to have transparent access to Internet of Things devices and web services in the network. The proxy comprises a server and a client that forward messages between different communication environments using the virtual resources, with the server facing the message sender and the client facing the message receiver. We design the proxy for the Open Connectivity Foundation network, where the virtual resources are discovered by the clients as Open Connectivity Foundation resources. The virtual resources represent the resources which expose services in the Internet by web service providers. Although the services are provided by web service providers from the Internet, the client can access them using the consistent communication protocol of the Open Connectivity Foundation network. For discovering the resources needed to access services, the client also uses the consistent discovery interface to discover the Open Connectivity Foundation devices and virtual resources.
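
    The bridging idea can be pictured as a proxy that exposes a locally discoverable "virtual resource" and forwards requests to the corresponding web service. The sketch below uses plain HTTP and an invented path-to-URL mapping purely for illustration; the actual proxy described in the paper operates inside the Open Connectivity Foundation protocol stack.

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        VIRTUAL_RESOURCES = {                      # local virtual-resource path -> upstream web service (hypothetical)
            '/vr/temperature': 'http://example.com/api/temperature',
        }

        class ProxyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                upstream = VIRTUAL_RESOURCES.get(self.path)
                if upstream is None:
                    self.send_error(404, 'unknown virtual resource')
                    return
                with urlopen(upstream) as resp:    # forward the request to the web service
                    body = resp.read()
                self.send_response(200)
                self.end_headers()
                self.wfile.write(body)

        if __name__ == '__main__':
            HTTPServer(('localhost', 8080), ProxyHandler).serve_forever()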

  5. Intelligent Elements for ISHM

    NASA Technical Reports Server (NTRS)

    Schmalzel, John L.; Morris, Jon; Turowski, Mark; Figueroa, Fernando; Oostdyk, Rebecca

    2008-01-01

    There are a number of architecture models for implementing Integrated Systems Health Management (ISHM) capabilities, for example, approaches based on the OSA-CBM and OSA-EAI models, or specific architectures developed in response to local needs. NASA's John C. Stennis Space Center (SSC) has developed one such version of an extensible architecture in support of rocket engine testing that integrates a palette of functions in order to achieve an ISHM capability. Among the functional capabilities that are supported by the framework are: prognostic models, anomaly detection, a database of supporting health information, root cause analysis, intelligent elements, and integrated awareness. This paper focuses on the role that intelligent elements can play in ISHM architectures. We define an intelligent element as a smart element with sufficient computing capacity to support anomaly detection or other algorithms in support of ISHM functions. A smart element has the capabilities of supporting networked implementations of IEEE 1451.x smart sensor and actuator protocols. The ISHM group at SSC has been actively developing intelligent elements in conjunction with several partners at other Centers, universities, and companies as part of our ISHM approach for better supporting rocket engine testing. We have developed several implementations. Among the key features of these intelligent sensors is support for IEEE 1451.1 and incorporation of a suite of algorithms for determination of sensor health. Regardless of the potential advantages that can be achieved using intelligent sensors, existing large-scale systems are still based on conventional sensors and data acquisition systems. In order to bring the benefits of intelligent sensors to these environments, we have also developed virtual implementations of intelligent sensors.

  6. Sensor supervision and multiagent commanding by means of projective virtual reality

    NASA Astrophysics Data System (ADS)

    Rossmann, Juergen

    1998-10-01

    When autonomous systems with multiple agents are considered, conventional control and supervision technologies are often inadequate, because the available information is often presented in a way that effectively overwhelms the user with displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities with new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed and finally led to the new ideas and the general framework for Projective Virtual Reality. The key idea of this new approach to an intuitively operable man-machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the changes, and have an action-planning component automatically generate task descriptions for the agents involved, so that actions carried out by the user in the virtual world are projected into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach splits the job between task deduction in the VR and task `projection' onto the physical automation components by the automatic action-planning component. Besides describing the realized projective virtual reality system, the paper also describes in detail the metaphors and visualization aids used to present different types of (e.g. sensor) information in an intuitively comprehensible manner.

  7. Workflow-Oriented Cyberinfrastructure for Sensor Data Analytics

    NASA Astrophysics Data System (ADS)

    Orcutt, J. A.; Rajasekar, A.; Moore, R. W.; Vernon, F.

    2015-12-01

    Sensor streams comprise an increasingly large part of Earth Science data. Analytics based on sensor data require an easy way to perform operations such as acquisition, conversion to physical units, metadata linking, sensor fusion, analysis and visualization on distributed sensor streams. Furthermore, embedding real-time sensor data into scientific workflows is of growing interest. We have implemented a scalable networked architecture that can be used to dynamically access packets of data in a stream from multiple sensors, and perform synthesis and analysis across a distributed network. Our system is based on the integrated Rule Oriented Data System (irods.org), which accesses sensor data from the Antelope Real Time Data System (brtt.com), and provides virtualized access to collections of data streams. We integrate real-time data streaming from different sources, collected for different purposes, on different time and spatial scales, and sensed by different methods. iRODS, noted for its policy-oriented data management, brings to sensor processing features and facilities such as single sign-on, third-party access control lists (ACLs), location transparency, logical resource naming, and server-side modeling capabilities while reducing the burden on sensor network operators. Rich integrated metadata support also makes it straightforward to discover data streams of interest and maintain data provenance. The workflow support in iRODS readily integrates sensor processing into any analytical pipeline. The system is developed as part of the NSF-funded Datanet Federation Consortium (datafed.org). APIs for selecting, opening, reaping and closing sensor streams are provided, along with other helper functions to associate metadata and convert sensor packets into NetCDF and JSON formats. Near real-time sensor data including seismic sensors, environmental sensors, LIDAR and video streams are available through this interface. A system for archiving sensor data and metadata in NetCDF format has been implemented and will be demonstrated at AGU.

  8. Integrated Sensor Architecture (ISA) for Live Virtual Constructive (LVC) Environments

    DTIC Science & Technology

    2014-03-01

    connect, publish their needs and capabilities, and interact with other systems even on disadvantaged networks. Within the ISA project, three levels of...constructive, disadvantaged network, sensor 1. INTRODUCTION In 2003 the Networked Sensors for the Future Force (NSFF) Advanced Technology Demonstration...While this combination is less optimal over disadvantaged networks, and we do not recommend it there, TCP and TLS perform adequately over networks with

  9. Reconfigurable routing protocol for free space optical sensor networks.

    PubMed

    Xie, Rong; Yang, Won-Hyuk; Kim, Young-Chon

    2012-01-01

    Recently, free space optical sensor networks (FSOSNs), which are based on free space optics (FSO) instead of radio frequency (RF), have gained increasing visibility over traditional wireless sensor networks (WSNs) due to their advantages such as larger capacity, higher security, and lower cost. However, the performance of FSOSNs is restricted by the requirement of a direct line-of-sight (LOS) path between each sender and receiver pair. Once a node dies of energy depletion, the network would probably suffer a dramatic decrease in connectivity, resulting in a huge loss of data packets. Thus, this paper proposes a reconfigurable routing protocol (RRP) to overcome this problem by dynamically reconfiguring the network's virtual topology. The RRP works in three phases: (1) virtual topology construction, (2) routing establishment, and (3) reconfigurable routing. When data transmission begins, the data packets are first routed through the shortest-hop paths. Then a reconfiguration is initiated by any node whose residual energy falls below a threshold. Nodes affected by this dying node are classified into two types, namely maintenance nodes and adjustment nodes, and they are reconfigured according to their types. An energy model is designed to evaluate the performance of RRP through OPNET simulation. Our simulation results indicate that the RRP achieves better performance than the simple-link protocol and a direct reconfiguration scheme in terms of connectivity, network lifetime, packet delivery ratio and the number of living nodes.

  10. Can a virtual reality assessment of fine motor skill predict successful central line insertion?

    PubMed

    Mohamadipanah, Hossein; Parthiban, Chembian; Nathwani, Jay; Rutherford, Drew; DiMarco, Shannon; Pugh, Carla

    2016-10-01

    Due to the increased use of peripherally inserted central catheter lines, central line placements are performed less frequently. The aim of this study is to evaluate whether a virtual reality (VR)-based assessment of fine motor skills can be used as a valid and objective assessment of central line skills. Surgical residents (N = 43) from 7 general surgery programs performed a subclavian central line in a simulated setting. Then, they participated in a force discrimination task in a VR environment. Hand movements from the subclavian central line simulation were tracked by electromagnetic sensors. Gross movements as monitored by the electromagnetic sensors were compared with the fine motor metrics calculated from the force discrimination tasks in the VR environment. Long periods of inactivity (idle time) during needle insertion and lack of smooth movements, as detected by the electromagnetic sensors, showed a significant correlation with poor force discrimination in the VR environment. Long needle insertion times also correlated with poor force discrimination performance in the VR environment. This study shows that force discrimination in a defined VR environment correlates with needle insertion time, idle time, and hand smoothness when performing subclavian central line placement. Fine motor force discrimination may serve as a valid and objective assessment of the skills required for successful needle insertion when placing central lines. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Mechanics of finger-tip electronics

    NASA Astrophysics Data System (ADS)

    Su, Yewang; Li, Rui; Cheng, Huanyu; Ying, Ming; Bonifas, Andrew P.; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang

    2013-10-01

    Tactile sensors and electrotactile stimulators can provide important links between humans and virtual environments, through the sensation of touch. Soft materials, such as low-modulus silicones, are attractive as platforms and support matrices for arrays of sensors and actuators that laminate directly onto the fingertips. Analytic models for the mechanics of three dimensional, form-fitting finger cuffs based on such designs are presented here, along with quantitative validation using the finite element method. The results indicate that the maximum strains in the silicone and the embedded devices are inversely proportional to the square root of the radius of curvature of the cuff. These and other findings can be useful in formulating designs for these and related classes of body-worn, three dimensional devices.
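
    The reported scaling can be restated compactly; here R denotes the radius of curvature of the finger cuff and the left-hand side the peak strain in the silicone or the embedded devices:

```latex
% Schematic restatement of the scaling reported above (not a derivation)
\varepsilon_{\max} \propto \frac{1}{\sqrt{R}}
```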

  12. Sensor-Augmented Virtual Labs: Using Physical Interactions with Science Simulations to Promote Understanding of Gas Behavior

    ERIC Educational Resources Information Center

    Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.

    2016-01-01

    Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected…

  13. A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring

    PubMed Central

    Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio

    2016-01-01

    This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State of the art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regards to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505

  14. A real-time photogrammetric algorithm for sensor and synthetic image fusion with application to aviation combined vision

    NASA Astrophysics Data System (ADS)

    Lebedev, M. A.; Stepaniants, D. G.; Komarov, D. V.; Vygolov, O. V.; Vizilter, Yu. V.; Zheltov, S. Yu.

    2014-08-01

    The paper addresses a promising visualization concept related to combination of sensor and synthetic images in order to enhance situation awareness of a pilot during an aircraft landing. A real-time algorithm for a fusion of a sensor image, acquired by an onboard camera, and a synthetic 3D image of the external view, generated in an onboard computer, is proposed. The pixel correspondence between the sensor and the synthetic images is obtained by an exterior orientation of a "virtual" camera using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, which idea is to project the edge map onto a horizontal plane in the object space (the runway plane) and then to calculate intensity projections of edge pixels on different directions of intensity gradient. The performed experiments on simulated images show that on a base glide path the algorithm provides image fusion with pixel accuracy, even in the case of significant navigation errors.

  15. Development of a smart home simulator for use as a heuristic tool for management of sensor distribution.

    PubMed

    Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming

    2009-01-01

    Smart Homes offer potential solutions for various forms of independent living for the elderly. The assistive and protective environment afforded by smart homes offers a safe, relatively inexpensive, dependable and viable alternative for vulnerable inhabitants. Nevertheless, the success of a smart home rests upon the quality of information its decision support system receives, and this in turn places great importance on the issue of correct sensor deployment. In this article we present a software tool that has been developed to address the elusive issue of sensor distribution within smart homes. Details of the tool will be presented and it will be shown how it can be used to emulate any real world environment whereby virtual sensor distributions can be rapidly implemented and assessed without the requirement for physical deployment for evaluation. As such, this approach offers the potential of tailoring sensor distributions to the specific needs of a patient in a non-invasive manner. The heuristics-based tool presented here has been developed as the first part of a three-stage project.

  16. Virtual DRI dataset development

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Teaney, Brian P.; May, Christopher; Maurer, Tana; Nelson, Michael B.; Pham, Justin R.

    2017-05-01

    The U.S. Army RDECOM CERDEC NVESD MSD's target acquisition models have been used for many years by the military analysis community for sensor design, trade studies, and field performance prediction. This paper analyzes the results of perception tests performed to compare the results of a field DRI (Detection, Recognition, and Identification) test performed in 2009 to current Soldier performance viewing the same imagery in a laboratory environment and simulated imagery of the same data set. The purpose of the experiment is to build a robust data set for use in the virtual prototyping of infrared sensors. This data set will provide a strong foundation relating model predictions, field DRI results, and simulated imagery.

  17. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.

  18. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  19. Structural Damage Detection Using Virtual Passive Controllers

    NASA Technical Reports Server (NTRS)

    Lew, Jiann-Shiun; Juang, Jer-Nan

    2001-01-01

    This paper presents novel approaches for structural damage detection which use virtual passive controllers attached to structures, where passive controllers are energy-dissipative devices and thus guarantee closed-loop stability. The use of the identified parameters of various closed-loop systems can solve the problem that reliable identified parameters, such as natural frequencies of the open-loop system, may not provide enough information for damage detection. Only a small number of sensors are required for the proposed approaches. The identified natural frequencies, which are generally much less sensitive to noise and more reliable than other identified parameters, are used for damage detection. Two damage detection techniques are presented. One technique is based on structures with direct output feedback controllers, while the other technique uses second-order dynamic feedback controllers. A least-squares technique, which is based on the sensitivity of natural frequencies to damage variables, is used for accurately identifying the damage variables.
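
    The least-squares step can be sketched as follows, under the assumption that a sensitivity matrix S maps small changes in the damage variables to shifts in the identified closed-loop natural frequencies; the matrices below are random placeholders, not identified structural data.

```python
# Sketch of the sensitivity-based least-squares identification step.
# S and delta_f are placeholders; in practice S comes from the sensitivity of the
# identified closed-loop natural frequencies to each damage variable.
import numpy as np

rng = np.random.default_rng(0)

n_freqs, n_damage_vars = 12, 4                        # many closed-loop frequencies, few damage variables
S = rng.normal(size=(n_freqs, n_damage_vars))          # sensitivity matrix d(frequency)/d(damage)
true_damage = np.array([0.0, 0.15, 0.0, 0.30])          # assumed "true" stiffness reductions
delta_f = S @ true_damage + 0.01 * rng.normal(size=n_freqs)  # noisy frequency shifts

# Solve delta_f ~= S @ delta_d in the least-squares sense.
delta_d, residuals, rank, _ = np.linalg.lstsq(S, delta_f, rcond=None)
print("estimated damage variables:", np.round(delta_d, 3))
```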

  20. A haptic sensor-actor-system based on ultrasound elastography and electrorheological fluids for virtual reality applications in medicine.

    PubMed

    Khaled, W; Ermert, H; Bruhns, O; Boese, H; Baumann, M; Monkman, G J; Egersdoerfer, S; Meier, A; Klein, D; Freimuth, H

    2003-01-01

    Mechanical properties of biological tissue represent important diagnostic information and are of histological relevance (hard lesions, "nodes" in organs: tumors; calcifications in vessels: arteriosclerosis). The problem is that such information is usually obtained by digital palpation only, which is limited with respect to sensitivity. It requires intuitive assessment and does not allow quantitative documentation. A suitable sensor is required for quantitative detection of mechanical tissue properties. On the other hand, there is also some need for a realistic mechanical display of those tissue properties. Suitable actuator arrays with high spatial resolution and real-time capabilities are required, operating in a haptic sensor-actuator system with different applications. The sensor system uses real-time ultrasonic elastography, whereas the tactile actuator is based on electrorheological fluids. Due to their small size, the actuator array elements have to be manufactured by micro-mechanical production methods. In order to supply the actuator elements with individual high voltages, a sophisticated switching and control concept has been designed. This haptic system has the potential of inducing substantial forces in real time, using a compact, lightweight mechanism which can be applied to numerous areas including intraoperative navigation, telemedicine, teaching, space and telecommunication.

  1. Network-Capable Application Process and Wireless Intelligent Sensors for ISHM

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Wang, Ray

    2011-01-01

    Intelligent sensor technology and systems are increasingly becoming attractive means to serve as frameworks for intelligent rocket test facilities with embedded intelligent sensor elements, distributed data acquisition elements, and onboard data acquisition elements. Networked intelligent processors enable users and systems integrators to automatically configure their measurement automation systems for analog sensors. NASA and leading sensor vendors are working together to apply the IEEE 1451 standard for adding plug-and-play capabilities for wireless analog transducers through the use of a Transducer Electronic Data Sheet (TEDS) in order to simplify sensor setup, use, and maintenance, to automatically obtain calibration data, and to eliminate manual data entry and error. A TEDS contains the critical information needed by an instrument or measurement system to identify, characterize, interface, and properly use the signal from an analog sensor. A TEDS is deployed for a sensor in one of two ways. First, the TEDS can reside in embedded, nonvolatile memory (typically flash memory) within the intelligent processor. Second, a virtual TEDS can exist as a separate file, downloadable from the Internet. This concept of virtual TEDS extends the benefits of the standardized TEDS to legacy sensors and applications where the embedded memory is not available. An HTML-based user interface provides a visual tool to interface with those distributed sensors that a TEDS is associated with, to automate the sensor management process. Implementing and deploying the IEEE 1451.1-based Network-Capable Application Process (NCAP) can achieve support for intelligent process in Integrated Systems Health Management (ISHM) for the purpose of monitoring, detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, mitigation to maintain operability, and integrated awareness of system health by the operator. It can also support local data collection and storage. This invention enables wide-area sensing and employs numerous globally distributed sensing devices that observe the physical world through the existing sensor network. This innovation enables distributed storage, distributed processing, distributed intelligence, and the availability of DiaK (Data, Information, and Knowledge) to any element as needed. It also enables the simultaneous execution of multiple processes, and represents models that contribute to the determination of the condition and health of each element in the system. The NCAP (intelligent process) can configure data-collection and filtering processes in reaction to sensed data, allowing it to decide when and how to adapt collection and processing with regard to sophisticated analysis of data derived from multiple sensors. The user will be able to view the sensing device network as a single unit that supports a high-level query language. Each query would be able to operate over data collected from across the global sensor network just as a search query encompasses millions of Web pages. The sensor web can preserve ubiquitous information access between the querier and the queried data. Pervasive monitoring of the physical world raises significant data and privacy concerns. This innovation enables different authorities to control portions of the sensing infrastructure, and sensor service authors may wish to compose services across authority boundaries.

  2. An interactive VR system based on full-body tracking and gesture recognition

    NASA Astrophysics Data System (ADS)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions using sensors like Leap Motion to recognize users' gestures in order to interact in a more natural way, but navigation in these systems is still a problem, because they fail to map actual walking to virtual walking when only a partial body of the user is represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movement of the tracked user. The movements of the feet can be detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the users' hands compared to traditional navigation with a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize user gestures, such as swiping, pressing and manipulating virtual objects. Combining full-body tracking and gesture recognition using Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.

  3. Analysis of the Assignment Scheduling Capability for Unmanned Aerial Vehicles (ASC-U) Simulation Tool

    DTIC Science & Technology

    2006-06-01

    dynamic programming approach known as a “rolling horizon” approach. This method accounts for state transitions within the simulation rather than modeling ... model is based on the framework developed for Dynamic Allocation of Fires and Sensors used to evaluate factors associated with networking assets in the...of UAVs required by all types of maneuver and support brigades. (Witsken, 2004) The Modeling , Virtual Environments, and Simulations Institute

  4. Guaranteeing Spoof-Resilient Multi-Robot Networks

    DTIC Science & Technology

    2016-02-12

    key-distribution. Our core contribution is a novel algorithm implemented on commercial Wi-Fi radios that can "sense" spoofers using the physics of...encrypted key exchange, but rather a commercial Wi-Fi card and software to implement our solution. Our virtual sensor leverages the rich physical...cheap commodity Wi-Fi radios, unlike hardware-based solutions [46, 48]. (3) It is robust to client mobility and power-scaling attacks. Finally, our

  5. Determining Spinal Posture for Encumbered Airmen in Crewstations Using the Luna Positioning Sensor

    DTIC Science & Technology

    to characterize design-relevant body size and shape variation as it applies to our service personnel. Of particular interest is cockpit accommodation...confidence in virtual assessments. For this effort, the Luna, Inc. fiber optic positioning sensor was evaluated to determine the utility of this

  6. A task scheduler framework for self-powered wireless sensors.

    PubMed

    Nordman, Mikael M

    2003-10-01

    The cost and inconvenience of cabling is a factor limiting widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making development of wireless sensors feasible. However, for these sensors energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered as lasting a lifetime, thus saving the user from concerns related to energy management. The problem is, however, the unpredictability and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is handled reliably without violating the priority scheme that has been assigned to it.
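
    A compact sketch of the scheduling idea, assuming per-priority virtual queues and one possible release policy: a task is dispatched only when the harvested-energy budget covers its cost, and lower-priority tasks are never run ahead of a waiting higher-priority task. The class names, energy model and thresholds are invented for illustration and are not the paper's state machine.

```python
# Illustrative energy-aware scheduler with per-priority virtual queues.
# The Task/Scheduler names and the energy accounting are invented for this sketch.
from collections import deque
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    name: str
    cost: float                               # energy units needed to execute once
    action: Callable[[], None] = lambda: None

    def run(self):
        self.action()


class EnergyAwareScheduler:
    def __init__(self, priorities=(0, 1, 2)):      # 0 = highest priority
        self.queues = {p: deque() for p in priorities}
        self.energy = 0.0                           # current harvested-energy budget

    def harvest(self, amount):
        """Credit energy reported by the harvesting front end (e.g., a solar panel)."""
        self.energy += amount

    def submit(self, task, priority):
        self.queues[priority].append(task)

    def run_once(self):
        """Run the head of the highest-priority non-empty queue if its cost is covered."""
        for priority in sorted(self.queues):
            queue = self.queues[priority]
            if queue:
                if queue[0].cost <= self.energy:
                    task = queue.popleft()
                    self.energy -= task.cost
                    task.run()
                    return task
                return None   # not enough energy yet; wait rather than skip priorities
        return None


sched = EnergyAwareScheduler()
sched.submit(Task("report", cost=0.5), priority=0)
sched.submit(Task("housekeeping", cost=0.1), priority=2)
sched.harvest(0.6)
print(sched.run_once())       # the high-priority "report" task runs first
```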

  7. Magnetic sensor technology for detecting mines, UXO, and other concealed security threats

    NASA Astrophysics Data System (ADS)

    Czipott, Peter V.; Iwanowski, Mark D.

    1997-01-01

    Magnetic sensors have been the sensor of choice in the detection and classification of buried mines and unexploded ordnance (UXO), both on land and underwater. Quantum Magnetics (QM), together with its research partner IBM, has developed a variety of advanced, very high sensitivity superconducting and room-temperature magnetic sensors to meet military needs. This work has led to the development and utilization of a three-sensor gradiometer (TSG) patented by IBM, which can not only detect, but also localize, mines and ordnance. QM is also working with IBM and the U.S. Navy to develop an advanced superconducting gradiometer for buried underwater mine detection. Detecting and classifying buried non-metallic mines is virtually impossible with existing magnetic sensors. To solve this problem, Quantum Magnetics, building on work by the Naval Research Laboratory (NRL), is pioneering the development of quadrupole resonance (QR) methods which can be used to detect the explosive material directly. Based on recent laboratory work done at QM and previous work done in the U.S., Russia and the United Kingdom, we are confident that QR can be effectively applied to the non-metallic mine identification problem.

  8. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318

  9. Virtual Collaboration: Advantages and Disadvantages in the Planning and Execution of Operations in the Information Age

    DTIC Science & Technology

    2004-02-09

    ...warfare is not one system; it is a system of systems from sensors to information flow. In analyzing the specific advantages and disadvantages of one of... (Naval War College, Newport, R.I.)

  10. The benefits of soft sensor and multi-rate control for the implementation of Wireless Networked Control Systems.

    PubMed

    Mansano, Raul K; Godoy, Eduardo P; Porto, Arthur J V

    2014-12-18

    Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are the reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of the wireless transmission, since the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCS is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is generating virtual data allowing an actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown the soft sensor is a solution to the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor potential for implementing energy efficient WNCS through the battery saving of industrial wireless sensors.
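
    The soft-sensor idea can be sketched with an assumed discrete first-order process model that produces virtual samples at the fast control rate and is re-anchored whenever a real wireless measurement arrives. The model parameters and rates below are illustrative, not those identified in the paper.

```python
# Sketch of a model-based soft sensor: an assumed discrete first-order model fills
# in virtual measurements between slow wireless samples.

MODEL_A, MODEL_B = 0.95, 0.05   # assumed discrete-time first-order plant: y+ = A*y + B*u


class SoftSensor:
    def __init__(self, initial_output=0.0):
        self.estimate = initial_output

    def correct(self, measured_output):
        """Re-anchor the model whenever a real (slow) wireless sample arrives."""
        self.estimate = measured_output

    def predict(self, control_input):
        """Produce a virtual sample at the fast control rate between real samples."""
        self.estimate = MODEL_A * self.estimate + MODEL_B * control_input
        return self.estimate


if __name__ == "__main__":
    sensor = SoftSensor()
    for step in range(20):
        if step % 10 == 0:               # wireless sensor reports only every 10 steps
            sensor.correct(measured_output=1.0)
        y_virtual = sensor.predict(control_input=0.8)
        print(step, round(y_virtual, 3))
```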

  11. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment

    NASA Astrophysics Data System (ADS)

    Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella

    In this paper, we propose a novel approach of using interactive virtual environment technology in Vision Restoration Therapy for visual field loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds promise for expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures, and can last up to six months before any modest improvements can be seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.

  12. Intelligent Control and Health Monitoring. Chapter 3

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Kumar, Aditya; Mathews, H. Kirk; Rosenfeld, Taylor; Rybarik, Pavol; Viassolo, Daniel E.

    2009-01-01

    Advanced model-based control architecture overcomes the limitations of state-of-the-art engine control and provides the potential for virtual sensors, for example for thrust and stall margin. "Tracking filters" are used to adapt the control parameters to actual conditions and to individual engines. For health monitoring, standalone monitoring units will be used for on-board analysis to determine the general engine health and detect and isolate sudden faults. Adaptive models open up the possibility of adapting the control logic to maintain desired performance in the presence of engine degradation or to accommodate any faults. Improved and new sensors are required to allow sensing at stations within the engine gas path that are currently not instrumented due in part to the harsh conditions including high operating temperatures and to allow additional monitoring of vibration, mass flows and energy properties, exhaust gas composition, and gas path debris. The environmental and performance requirements for these sensors are summarized.

  13. Teleautonomous guidance for mobile robots

    NASA Technical Reports Server (NTRS)

    Borenstein, J.; Koren, Y.

    1990-01-01

    Teleautonomous guidance (TG), a technique for the remote guidance of fast mobile robots, has been developed and implemented. With TG, the mobile robot follows the general direction prescribed by an operator. However, if the robot encounters an obstacle, it autonomously avoids collision with that obstacle while trying to match the prescribed direction as closely as possible. This type of shared control is completely transparent and transfers control between teleoperation and autonomous obstacle avoidance gradually. TG allows the operator to steer vehicles and robots at high speeds and in cluttered environments, even without visual contact. TG is based on the virtual force field (VFF) method, which was developed earlier for autonomous obstacle avoidance. The VFF method is especially suited to the accommodation of inaccurate sensor data (such as that produced by ultrasonic sensors) and sensor fusion, and allows the mobile robot to travel quickly without stopping for obstacles.
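
    A minimal sketch of the virtual force field idea, assuming a toy list of occupied cells in place of the certainty grid built from ultrasonic readings: occupied cells repel the robot with forces that decay with distance, the target attracts it, and the commanded heading follows the resultant force. Gains are illustrative.

```python
# Minimal virtual force field (VFF) sketch: repulsive forces from occupied grid
# cells plus an attractive force toward the target. Gains and grid are illustrative.
import math

REPULSIVE_GAIN = 1.0
ATTRACTIVE_GAIN = 0.5


def vff_heading(robot, target, occupied_cells):
    """Return the steering heading (radians) from summed virtual forces."""
    fx = ATTRACTIVE_GAIN * (target[0] - robot[0])
    fy = ATTRACTIVE_GAIN * (target[1] - robot[1])
    for cx, cy in occupied_cells:                     # cells flagged by range sensors
        dx, dy = robot[0] - cx, robot[1] - cy
        dist = math.hypot(dx, dy)
        if dist < 1e-6:
            continue
        push = REPULSIVE_GAIN / (dist ** 2)           # repulsion decays with distance
        fx += push * dx / dist
        fy += push * dy / dist
    return math.atan2(fy, fx)


print(vff_heading(robot=(0.0, 0.0), target=(5.0, 0.0), occupied_cells=[(2.0, 0.5)]))
```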

  14. Magnetically-refreshable receptor platform structures for reusable nano-biosensor chips

    NASA Astrophysics Data System (ADS)

    Yoo, Haneul; Lee, Dong Jun; Cho, Dong-guk; Park, Juhun; Nam, Ki Wan; Tak Cho, Young; Park, Jae Yeol; Chen, Xing; Hong, Seunghun

    2016-01-01

    We developed a magnetically-refreshable receptor platform structure which can be integrated with versatile nano-biosensor structures to build reusable nano-biosensor chips. This structure allows one to easily remove used receptor molecules from a biosensor surface and reuse the biosensor for repeated sensing operations. Using this structure, we demonstrated reusable immunofluorescence biosensors. Significantly, since our method allows one to place receptor molecules very close to a nano-biosensor surface, it can be utilized to build reusable carbon nanotube transistor-based biosensors which require receptor molecules within a Debye length from the sensor surface. Furthermore, we show that a single sensor chip can be utilized to detect two different target molecules simply by replacing receptor molecules using our method. Since this method does not rely on any chemical reaction to refresh sensor chips, it can be utilized for versatile biosensor structures and virtually any receptor molecular species.

  15. Learning a detection map for a network of unattended ground sensors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Hung D.; Koch, Mark William

    2010-03-01

    We have developed algorithms to automatically learn a detection map of a deployed sensor field for a virtual presence and extended defense (VPED) system without a priori knowledge of the local terrain. The VPED system is an unattended network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has the ability to detect and classify moving targets at a limited range. By using a network of pods we can form a virtual perimeter with each pod responsible for a certain section of the perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus, a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being installed at a site by a mobile deployment unit (MDU). The MDU will wear a GPS unit, so the system not only knows when it can detect the MDU, but also the MDU's location. In this paper, we demonstrate how to handle anisotropic sensor configurations, geography, and soil conditions.
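
    The map-learning step can be sketched as simple per-cell counting, assuming each MDU walk-through record pairs a GPS-derived grid cell with whether any pod detected the MDU there; the grid size and sample records below are placeholders, not the fielded algorithm.

```python
# Sketch of learning an empirical detection map from MDU walk-through data.
# Each record pairs the MDU's GPS-derived grid cell with whether a pod detected it.
import numpy as np

GRID_SHAPE = (50, 50)             # placeholder grid laid over the deployment site

trials = np.zeros(GRID_SHAPE)      # times the MDU occupied each cell
hits = np.zeros(GRID_SHAPE)        # times the network detected it there


def update(cell, detected):
    trials[cell] += 1
    if detected:
        hits[cell] += 1


def detection_map():
    """Empirical probability of detection per cell (NaN where never visited)."""
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(trials > 0, hits / trials, np.nan)


# Example: a few walk-through samples
update((10, 12), detected=True)
update((10, 12), detected=False)
update((30, 5), detected=True)
print(np.nanmax(detection_map()))
```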

  16. Learning Kinematic Constraints in Laparoscopic Surgery

    PubMed Central

    Huang, Felix C.; Mussa-Ivaldi, Ferdinando A.; Pugh, Carla M.; Patton, James L.

    2012-01-01

    To better understand how kinematic variables impact learning in surgical training, we devised an interactive environment for simulated laparoscopic maneuvers, using either 1) mechanical constraints typical of a surgical “box-trainer” or 2) virtual constraints in which free hand movements control virtual tool motion. During training, the virtual tool responded to the absolute position in space (Position-Based) or the orientation (Orientation-Based) of a hand-held sensor. Volunteers were further assigned to different sequences of target distances (Near-Far-Near or Far-Near-Far). Training with the Orientation-Based constraint enabled much lower path error and shorter movement times during training, which suggests that tool motion that simply mirrors joint motion is easier to learn. When evaluated in physically constrained (physical box-trainer) conditions, each group exhibited improved performance from training. However, Position-Based training enabled greater reductions in movement error relative to Orientation-Based (mean difference: 14.0 percent; CI: 0.7, 28.6). Furthermore, the Near-Far-Near schedule allowed a greater decrease in task time relative to the Far-Near-Far sequence (mean −13.5 percent, CI: −19.5, −7.5). Training that focused on shallow tool insertion (near targets) might promote more efficient movement strategies by emphasizing the curvature of tool motion. In addition, our findings suggest that an understanding of absolute tool position is critical to coping with mechanical interactions between the tool and trocar. PMID:23293709

  17. Machine learning-based assessment tool for imbalance and vestibular dysfunction with virtual reality rehabilitation system.

    PubMed

    Yeh, Shih-Ching; Huang, Ming-Chun; Wang, Pa-Chun; Fang, Te-Yung; Su, Mu-Chun; Tsai, Po-Yi; Rizzo, Albert

    2014-10-01

    Dizziness is a major consequence of imbalance and vestibular dysfunction. Compared to surgery and drug treatments, balance training is non-invasive and more desirable. However, training exercises are usually tedious, and existing assessment tools are insufficient for diagnosing a patient's severity rapidly. An interactive virtual reality (VR) game-based rehabilitation program that adopts Cawthorne-Cooksey exercises, together with a sensor-based measuring system, is introduced. To verify the therapeutic effect, a clinical experiment with 48 patients and 36 normal subjects was conducted. Quantified balance indices were measured and analyzed by statistical tools and a Support Vector Machine (SVM) classifier. In terms of balance indices, patients who completed the training process showed progress, and the difference between normal subjects and patients is evident. Further analysis with the SVM classifier shows that recognizing the differences between patients and normal subjects is feasible, and these results can be used to evaluate patients' severity and make rapid assessments. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
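
    The classification step can be sketched with an off-the-shelf SVM trained on quantified balance indices; the feature matrix below is random stand-in data, not the clinical measurements, and the pipeline shown is only one reasonable configuration.

```python
# Sketch of the SVM step on quantified balance indices (stand-in data only).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_patients = rng.normal(loc=1.0, size=(48, 6))    # 6 hypothetical balance indices per subject
X_controls = rng.normal(loc=0.0, size=(36, 6))
X = np.vstack([X_patients, X_controls])
y = np.array([1] * 48 + [0] * 36)                 # 1 = patient, 0 = normal subject

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```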

  18. A Context-Aware Method for Authentically Simulating Outdoors Shadows for Mobile Augmented Reality.

    PubMed

    Barreira, Joao; Bessa, Maximino; Barbosa, Luis; Magalhaes, Luis

    2018-03-01

    Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data. The method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated based on the user location and time of day, with the relative rotational differences estimated from a gyroscope, compass and accelerometer. The results illustrated that our method can generate visually credible AR scenes with consistent shadows rendered from recovered illumination.
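
    The Sun-position step can be approximated with the standard declination and hour-angle formulas; the sketch below ignores the equation of time and atmospheric refraction, so it is only a rough stand-in for the inputs the skylight model would use.

```python
# Rough solar elevation/azimuth from latitude, day of year and local solar time.
# Ignores the equation of time and refraction; adequate only as an illustration.
import math


def sun_position(latitude_deg, day_of_year, solar_time_hours):
    lat = math.radians(latitude_deg)
    # Cooper's approximation of the solar declination
    decl = math.radians(23.44) * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = math.radians(15.0 * (solar_time_hours - 12.0))

    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elevation = math.asin(sin_elev)

    cos_az = (math.sin(decl) - sin_elev * math.sin(lat)) / (math.cos(elevation) * math.cos(lat))
    azimuth = math.acos(max(-1.0, min(1.0, cos_az)))   # measured clockwise from north
    if hour_angle > 0:                                  # afternoon: Sun is west of north
        azimuth = 2 * math.pi - azimuth
    return math.degrees(elevation), math.degrees(azimuth)


print(sun_position(latitude_deg=41.3, day_of_year=172, solar_time_hours=15.0))
```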

  19. Enhancing patient freedom in rehabilitation robotics using gaze-based intention detection.

    PubMed

    Novak, Domen; Riener, Robert

    2013-06-01

    Several design strategies for rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.

  20. Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin

    NASA Astrophysics Data System (ADS)

    Downey, Austin; Laflamme, Simon; Ubertini, Filippo

    2016-12-01

    The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, in structural health monitoring for instance, such signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm’s accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) to improve on the boundary condition assumptions. The system’s boundary conditions are enforced using unidirectional RSGs and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to sensors’ layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces.

  1. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    NASA Astrophysics Data System (ADS)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigation. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment that is presented herein requires a see-through video head-mounted display (HMD) for visualization, whereas the user's navigation movement in the real world is tracked with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper will deal with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There are, however, some software and implementation issues, which are discussed in the paper.

  2. Autonomous Satellite Operations Via Secure Virtual Mission Operations Center

    NASA Technical Reports Server (NTRS)

    Miller, Eric; Paulsen, Phillip E.; Pasciuto, Michael

    2011-01-01

    The science community is interested in improving their ability to respond to rapidly evolving, transient phenomena via autonomous rapid reconfiguration, which derives from the ability to assemble separate but collaborating sensors and data forecasting systems to meet a broad range of research and application needs. Current satellite systems typically require human intervention to respond to triggers from dissimilar sensor systems. Additionally, satellite ground services often need to be coordinated days or weeks in advance. Finally, the boundaries between the various sensor systems that make up such a Sensor Web are defined by such things as link delay and connectivity, data and error rate asymmetry, data reliability, quality of service provisions, and trust, complicating autonomous operations. Over the past ten years, researchers from the NASA Glenn Research Center (GRC), General Dynamics, Surrey Satellite Technology Limited (SSTL), Cisco, Universal Space Networks (USN), the U.S. Geological Survey (USGS), the Naval Research Laboratory, the DoD Operationally Responsive Space (ORS) Office, and others have worked collaboratively to develop a virtual mission operations capability. Called VMOC (Virtual Mission Operations Center), this new capability allows cross-system queuing of dissimilar mission unique systems through the use of a common security scheme and published application programming interfaces (APIs). Collaborative VMOC demonstrations over the last several years have supported the standardization of spacecraft to ground interfaces needed to reduce costs, maximize space effects to the user, and allow the generation of new tactics, techniques and procedures that lead to responsive space employment.

  3. Sensor data fusion for textured reconstruction and virtual representation of alpine scenes

    NASA Astrophysics Data System (ADS)

    Häufel, Gisela; Bulatov, Dimitri; Solbrig, Peter

    2017-10-01

    The concept of remote sensing is to provide information about a wide area without making physical contact with this area. If, in addition to satellite imagery, images and videos taken by drones provide more up-to-date data at a higher resolution, or accurate vector data are downloadable from the Internet, one speaks of sensor data fusion. The concept of sensor data fusion is relevant for many applications, such as virtual tourism, automatic navigation, hazard assessment, etc. In this work, we describe sensor data fusion aiming to create a semantic 3D model of an extremely interesting yet challenging dataset: an alpine region in Southern Germany. A particular challenge of this work is that rock faces including overhangs are present in the input airborne laser point cloud. The proposed procedure for identification and reconstruction of overhangs from point clouds comprises four steps: point cloud preparation, filtering out vegetation, mesh generation, and texturing. Further object types are extracted in several interesting subsections of the dataset: building models with textures from UAV (Unmanned Aerial Vehicle) videos, hills reconstructed as generic surfaces and textured by the orthophoto, individual trees detected by the watershed algorithm, as well as the vector data for roads retrieved from openly available shapefiles and GPS-device tracks. We pursue geo-specific reconstruction by assigning texture and width to roads of several pre-determined types and modeling isolated trees and rocks using commercial software. For visualization and simulation of the area, we have chosen the simulation system Virtual Battlespace 3 (VBS3). It becomes clear that the proposed concept of sensor data fusion allows a coarse reconstruction of a large scene and, at the same time, an accurate and up-to-date representation of its relevant subsections, in which simulation can take place.

  4. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have been used in many fields such as education, medical services, entertainment, art, and digital archiving, because of advances in computational power, and the demand for creating photorealistic virtual models with higher realism is increasing. In the computer vision field, a number of techniques have been developed for creating virtual models by observing real objects. In this paper, we propose a method for creating photorealistic virtual models by using a laser range sensor and a polarization-based image capture system. We capture the range and color images of the object, which is rotated on a rotary table. By using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner. As a result, we can make a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then, reflectance parameters of each reflection component are estimated separately. In separation of reflection components, we use a polarization filter. This approach enables estimation of reflectance properties of real objects whose surfaces show specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.

  5. A lightweight sensor network management system design

    USGS Publications Warehouse

    Yuan, F.; Song, W.-Z.; Peterson, N.; Peng, Y.; Wang, L.; Shirazi, B.; LaHusen, R.

    2008-01-01

    In this paper, we propose a lightweight and transparent management framework for TinyOS sensor networks, called L-SNMS, which minimizes the overhead of management functions, including memory usage overhead, network traffic overhead, and integration overhead. We accomplish this by making L-SNMS virtually transparent to other applications hence requiring minimal integration. The proposed L-SNMS framework has been successfully tested on various sensor node platforms, including TelosB, MICAz and IMote2. ?? 2008 IEEE.

  6. Balance rehabilitation: promoting the role of virtual reality in patients with diabetic peripheral neuropathy.

    PubMed

    Grewal, Gurtej S; Sayeed, Rashad; Schwenk, Michael; Bharara, Manish; Menzies, Robert; Talal, Talal K; Armstrong, David G; Najafi, Bijan

    2013-01-01

    Individuals with diabetic peripheral neuropathy frequently experience concomitant impaired proprioception and postural instability. Conventional exercise training has been demonstrated to be effective in improving balance but does not incorporate visual feedback targeting joint perception, which is an integral mechanism that helps compensate for impaired proprioception in diabetic peripheral neuropathy. This prospective cohort study recruited 29 participants (mean ± SD: age, 57 ± 10 years; body mass index [calculated as weight in kilograms divided by height in meters squared], 26.9 ± 3.1). Participants satisfying the inclusion criteria performed predefined ankle exercises through reaching tasks, with visual feedback from the ankle joint projected on a screen. Ankle motion in the mediolateral and anteroposterior directions was captured using wearable sensors attached to the participant's shank. Improvements in postural stability were quantified by measuring center of mass sway area and the reciprocal compensatory index before and after training using validated body-worn sensor technology. Findings revealed a significant reduction in center of mass sway after training (mean, 22%; P = .02). A higher postural stability deficit (high body sway) at baseline was associated with higher training gains in postural balance (reduction in center of mass sway) (r = -0.52, P < .05). In addition, significant improvement was observed in postural coordination between the ankle and hip joints (mean, 10.4%; P = .04). The present research implemented a novel balance rehabilitation strategy based on virtual reality technology. The method included wearable sensors and an interactive user interface for real-time visual feedback based on ankle joint motion, similar to a video gaming environment, for compensating impaired joint proprioception. These findings support that visual feedback generated from the ankle joint coupled with motor learning may be effective in improving postural stability in patients with diabetic peripheral neuropathy.

  7. An Internet of Things based physiological signal monitoring and receiving system for virtual enhanced health care network.

    PubMed

    Rajan, J Pandia; Rajan, S Edward

    2018-01-01

    Designing a wireless physiological signal monitoring system with secure data communication for health care is an important and dynamic process. We propose a signal monitoring system using NI myRIO connected with a wireless body sensor network through a multi-channel signal acquisition method. After server-side validation of the signal, the data held on the local server are updated in the cloud. An Internet of Things (IoT) architecture is used to give healthcare service providers mobile and fast access to patient data. This research work proposes a novel architecture for a wireless physiological signal monitoring system delivering ubiquitous healthcare services through a virtual Internet of Things. We show an improvement in access and in real-time dynamic monitoring of physiological signals for this remote monitoring system using the virtual Internet of Things approach. This remote monitoring and access system is evaluated against conventional approaches. The proposed system is envisioned as a modern smart healthcare system with high utility and user-friendliness in clinical applications. We claim that the proposed scheme significantly improves the accuracy of the remote monitoring system compared to other wireless communication methods in clinical systems.

  8. Robust controller designs for second-order dynamic system: A virtual passive approach

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1990-01-01

    A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.

  9. Virtual Simulation Capability for Deployable Force Protection Analysis (VSCDFP) FY 15 Plan

    DTIC Science & Technology

    2014-07-30

    Unmanned Aircraft Systems (SUAS) outfitted with a baseline two-axis steerable “Infini-spin” electro-optic/infrared (EO/IR) sensor payload. The current...Payload (EPRP) enhanced sensor system to the Puma SUAS will be beneficial for Soldiers executing RCP mission sets. • Develop the RCP EPRP Concept of

  10. A Cluster-Based Architecture to Structure the Topology of Parallel Wireless Sensor Networks

    PubMed Central

    Lloret, Jaime; Garcia, Miguel; Bri, Diana; Diaz, Juan R.

    2009-01-01

    A wireless sensor network is a self-configuring network of mobile nodes connected by wireless links where the nodes have limited capacity and energy. In many cases, the application environment requires the design of an exclusive network topology for a particular case. Cluster-based network developments and proposals in existence have been designed to build a network for just one type of node, where all nodes can communicate with any other nodes in their coverage area. Let us suppose a set of clusters of sensor nodes where each cluster is formed by different types of nodes (e.g., they could be classified by the sensed parameter using different transmitting interfaces, by the node profile, or by the type of device: laptops, PDAs, sensors, etc.) and exclusive networks, as virtual networks, are needed for the same type of sensed data, the same type of devices, or even the same type of profiles. In this paper, we propose an algorithm that is able to structure the topology of different wireless sensor networks so that they coexist in the same environment. It allows control and management of the topology of each network. The architecture operation and the protocol messages are described. Measurements from a real test bench show that the designed protocol has low bandwidth consumption and demonstrate the viability and scalability of the proposed architecture. Our cluster-based algorithm is compared with other algorithms reported in the literature in terms of architecture and protocol measurements. PMID:22303185

  11. Collaborative Localization and Location Verification in WSNs

    PubMed Central

    Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang

    2015-01-01

    Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed by considering the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming to solve the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948
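
    The virtual force idea can be sketched as an iterative relaxation: each anchor pulls or pushes the position estimate along the line joining it to the node, in proportion to the mismatch between the measured range and the range implied by the current estimate. The snippet below is a simplified illustration with assumed anchor positions and range readings; it is not the paper's exact update rule.

        import numpy as np

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # assumed anchor positions
        ranges = np.array([7.1, 7.2, 7.0])                          # assumed (noisy) range measurements

        pos = np.array([1.0, 1.0])       # initial guess for the unknown node
        step = 0.1
        for _ in range(200):
            diffs = pos - anchors
            dists = np.linalg.norm(diffs, axis=1)
            # Virtual force from each anchor: proportional to the range error,
            # directed along the anchor-to-node line.
            force = ((ranges - dists)[:, None] * diffs / dists[:, None]).sum(axis=0)
            pos = pos + step * force

        print("estimated position:", np.round(pos, 2))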

  12. The mixed reality of things: emerging challenges for human-information interaction

    NASA Astrophysics Data System (ADS)

    Spicer, Ryan P.; Russell, Stephen M.; Rosenberg, Evan Suma

    2017-05-01

    Virtual and mixed reality technology has advanced tremendously over the past several years. This nascent medium has the potential to transform how people communicate over distance, train for unfamiliar tasks, operate in challenging environments, and how they visualize, interact, and make decisions based on complex data. At the same time, the marketplace has experienced a proliferation of network-connected devices and generalized sensors that are becoming increasingly accessible and ubiquitous. As the "Internet of Things" expands to encompass a predicted 50 billion connected devices by 2020, the volume and complexity of information generated in pervasive and virtualized environments will continue to grow exponentially. The convergence of these trends demands a theoretically grounded research agenda that can address emerging challenges for human-information interaction (HII). Virtual and mixed reality environments can provide controlled settings where HII phenomena can be observed and measured, new theories developed, and novel algorithms and interaction techniques evaluated. In this paper, we describe the intersection of pervasive computing with virtual and mixed reality, identify current research gaps and opportunities to advance the fundamental understanding of HII, and discuss implications for the design and development of cyber-human systems for both military and civilian use.

  13. Secure, Autonomous, Intelligent Controller for Integrating Distributed Emergency Response Satellite Operations

    NASA Astrophysics Data System (ADS)

    Ivancic, W. D.; Paulsen, P. E.; Miller, E. M.; Sage, S. P.

    This report describes a Secure, Autonomous, and Intelligent Controller for Integrating Distributed Emergency Response Satellite Operations. It includes a description of current improvements to existing Virtual Mission Operations Center technology being used by the US Department of Defense and originally developed under NASA funding. The report also highlights a technology demonstration performed in partnership with the United States Geological Survey Earth Resources Observation and Science (EROS) Center using DigitalGlobe® satellites to obtain space-based sensor data.

  14. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    PubMed

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method combining high-speed single-point diamond milling and precision compression molding for the fabrication of discontinuous freeform microlens arrays is proposed. Compared with slow-tool-servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven to be accurate in detecting an infrared wavefront by both experiments and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economical and precise optical fabrication technology for high-volume production of infrared optics.

  15. In-vehicle group activity modeling and simulation in sensor-based virtual environment

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen

    2016-05-01

    Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with physical attributes and appearances comparable and linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial expressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capabilities to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.

  16. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model

    PubMed Central

    Wu, Jian-Xing; Huang, Ping-Tzan; Li, Chien-Ming

    2018-01-01

    Blood leakage and blood loss are serious life-threatening complications that can occur during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidity and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500–700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level on both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. Animal experimental results (pig blood) demonstrate the feasibility. PMID:29515815
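
    A heteroassociative memory of the kind mentioned above can be sketched as a correlation (outer-product) matrix that maps a sensor pattern to a stored risk pattern. The bipolar patterns and thresholded recall below are a generic textbook-style illustration under assumed pattern sizes, not the authors' trained model.

        import numpy as np

        # Assumed bipolar (+1/-1) training pairs: photocell pattern -> risk-level code.
        X = np.array([[ 1, -1,  1, -1],    # "dry" sensor pattern
                      [-1,  1, -1,  1]])   # "blood detected" sensor pattern
        Y = np.array([[ 1, -1],            # low-risk code
                      [-1,  1]])           # high-risk code

        W = Y.T @ X                         # outer-product (Hebbian) learning

        def recall(x):
            # Thresholded recall of the associated risk pattern.
            return np.sign(W @ x)

        print(recall(np.array([-1, 1, -1, 1])))   # -> [-1  1], i.e., high risk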

  17. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    PubMed

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications that can occur during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidity and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level on both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. Animal experimental results (pig blood) demonstrate the feasibility.

  18. Development of low cost and accurate homemade sensor system based on Surface Plasmon Resonance (SPR)

    NASA Astrophysics Data System (ADS)

    Laksono, F. D.; Supardianningsih; Arifin, M.; Abraha, K.

    2018-04-01

    In this paper, we develop a homemade, computerized sensor system based on Surface Plasmon Resonance (SPR). The developed system consists of a mechanical instrument, a laser power sensor, and a user interface. The mechanical design, which uses anti-backlash gears, enhances the angular resolution of the incidence angle to 0.01°. The laser detector acquisition system and the stepper motor controller use an Arduino Uno, which is easy to program, flexible, and low cost. Furthermore, we employed a LabVIEW user interface as the virtual instrument to facilitate sample measurement and to record the data directly in digital form. Test results using a gold-deposited half-cylinder prism showed a Total Internal Reflection (TIR) angle of 41.34° ± 0.01° and an SPR angle of 44.20° ± 0.01°. The results demonstrated that the developed system reduces the measurement duration and the data recording errors caused by human error, and that the system's measurements are repeatable and accurate.

  19. Gyro Drift Correction for An Indirect Kalman Filter Based Sensor Fusion Driver.

    PubMed

    Lee, Chan-Gun; Dao, Nhu-Ngoc; Jang, Seonmin; Kim, Deokhwan; Kim, Yonghun; Cho, Sungrae

    2016-06-11

    Sensor fusion techniques have made a significant contribution to the success of the recently emerging mobile applications era because a variety of mobile applications operate based on multi-sensing information from the surrounding environment, such as navigation systems, fitness trackers, interactive virtual reality games, etc. For these applications, the accuracy of sensing information plays an important role to improve the user experience (UX) quality, especially with gyroscopes and accelerometers. Therefore, in this paper, we proposed a novel mechanism to resolve the gyro drift problem, which negatively affects the accuracy of orientation computations in the indirect Kalman filter based sensor fusion. Our mechanism focuses on addressing the issues of external feedback loops and non-gyro error elements contained in the state vectors of an indirect Kalman filter. Moreover, the mechanism is implemented in the device-driver layer, providing lower process latency and transparency capabilities for the upper applications. These advances are relevant to millions of legacy applications since utilizing our mechanism does not require the existing applications to be re-programmed. The experimental results show that the root mean square errors (RMSE) before and after applying our mechanism are significantly reduced from 6.3 × 10⁻¹ to 5.3 × 10⁻⁷, respectively.
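
    As a highly simplified stand-in for the indirect Kalman filter described above, the sketch below fuses a drifting gyroscope with an accelerometer-derived tilt angle using a complementary filter and a slowly adapted bias estimate. All constants and the one-axis setup are illustrative assumptions, not the paper's driver-level implementation.

        import math

        ALPHA = 0.98        # complementary-filter weight (assumed)
        BIAS_GAIN = 0.001   # slow bias-learning rate (assumed)

        angle, gyro_bias = 0.0, 0.0

        def fuse(gyro_rate, accel_x, accel_z, dt):
            """One filter step for a single (pitch) axis."""
            global angle, gyro_bias
            accel_angle = math.atan2(accel_x, accel_z)          # gravity-referenced tilt
            gyro_angle = angle + (gyro_rate - gyro_bias) * dt   # integrate de-biased rate
            angle = ALPHA * gyro_angle + (1 - ALPHA) * accel_angle
            # Whatever residual the accelerometer keeps correcting is attributed to bias.
            gyro_bias += BIAS_GAIN * (gyro_angle - accel_angle)
            return angle

        # Example: stationary device whose gyro reports a constant 0.02 rad/s drift.
        for _ in range(20000):
            fuse(gyro_rate=0.02, accel_x=0.0, accel_z=9.81, dt=0.01)
        print(f"estimated bias ~ {gyro_bias:.3f} rad/s")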

  20. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    NASA Astrophysics Data System (ADS)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction via hand and finger movements is achieved through hand shape recognition. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.

  1. Virtual reality and telepresence for military medicine.

    PubMed

    Satava, R M

    1995-03-01

    The profound changes brought about by technology in the past few decades are leading to a total revolution in medicine. The advanced technologies of telepresence and virtual reality are but two of the manifestations emerging from our new information age; now all of medicine can be empowered because of this digital technology. The leading edge is on the digital battlefield, where an entire new concept in military medicine is evolving. Using remote sensors, intelligent systems, telepresence surgery and virtual reality surgical simulations, combat casualty care is prepared for the 21st century.

  2. Virtual groups for patient WBAN monitoring in medical environments.

    PubMed

    Ivanov, Stepan; Foley, Christopher; Balasubramaniam, Sasitharan; Botvich, Dmitri

    2012-11-01

    Wireless body area networks (WBAN) provide a tremendous opportunity for remote health monitoring. However, engineering WBAN health monitoring systems encounters a number of challenges, including efficient extraction of WBAN monitoring information, dynamically fine-tuning the monitoring process to suit the quality of data, and translating the high-level requirements of medical officers into low-level sensor reconfiguration. This paper addresses these challenges by proposing an architecture that allows virtual groups to be formed between the devices of patients, nurses, and doctors in order to enable remote analysis of WBAN data. Group formation and modification are performed with respect to patients' conditions and medical officers' requirements, which can easily be adjusted through high-level policies. We also propose a new metric, called the Quality of Health Monitoring, which allows medical officers to provide feedback on the quality of the WBAN data received. The WBAN data gathered are transmitted to the virtual group members through an underlying environmental sensor network. The proposed approach is evaluated through a series of simulations.

  3. Rapid-response Sensor Networks Leveraging Open Standards and the Internet of Things

    NASA Astrophysics Data System (ADS)

    Bermudez, L. E.; Lieberman, J. E.; Lewis, L.; Botts, M.; Liang, S.

    2016-12-01

    New sensor technologies provide an unparalleled capability to collect large numbers of diverse observations about the world around us. Networks of such sensors are especially effective for capturing and analyzing unexpected, fast-moving events if they can be deployed with a minimum of time, effort, and cost. A rapid-response sensing and processing capability is extremely important in quickly unfolding events, not only to collect data for future research but also to support response efforts that may be needed by providing up-to-date knowledge of the situation. A recent pilot activity coordinated by the Open Geospatial Consortium combined Sensor Web Enablement (SWE) standards with Internet of Things (IoT) practices to understand better how to set up rapid-response sensor networks in comparable event situations involving accidents or disasters. The networks included weather and environmental sensors, georeferenced UAV and PTZ imagery collectors, and observations from "citizen sensors", as well as virtual observations generated by predictive models. A key feature of each "SWE-IoT" network was one or more Sensor Hubs that connected local, often proprietary sensor device protocols to a common set of standard SWE data types and standard Web interfaces on an IP-based internetwork. This IoT approach provided direct, common, interoperable access to all sensor readings from anywhere on the internetwork of sensors, Hubs, and applications. Sensor Hubs also supported an automated discovery protocol in which activated Hubs registered themselves with a canonical catalog service. As each sensor (wireless or wired) was activated within range of an authorized Hub, it registered itself with that Hub, which in turn registered the sensor and its capabilities with the catalog. Sensor Hub functions were implemented in a range of component types, from personal devices such as smartphones and Raspberry Pis to full cloud-based sensor services platforms. Connected into a network "constellation", the Hubs also enabled reliable exchange and persistence of sensor data in constrained communications environments. Pilot results are being documented in public OGC engineering reports and are feeding into improved standards to support SWE-IoT networks for a range of domains and applications.

  4. The Benefits of Soft Sensor and Multi-Rate Control for the Implementation of Wireless Networked Control Systems

    PubMed Central

    Mansano, Raul K.; Godoy, Eduardo P.; Porto, Arthur J. V.

    2014-01-01

    Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed-loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are reconfigurability, easy commissioning, and the possibility of installation in places where cabling is impossible. Despite these advantages, two main problems must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of wireless transmission, since the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCSs is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime, and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is to generate virtual data that allows the process to be actuated faster than the maximum sampling period available for the wireless sensor. Experimental results have shown that the soft sensor is a solution to the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor's potential for implementing energy-efficient WNCSs through the battery saving of industrial wireless sensors. PMID:25529208
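
    The sketch below illustrates one way such a model-based soft sensor can run: a simple first-order model, identified beforehand, predicts the process output at the fast control rate and is re-synchronized whenever a (slow) wireless measurement arrives. The model coefficients, rates, and controller are assumed for illustration and are not taken from the paper.

        # Assumed first-order ARX model identified offline: y[k+1] = a*y[k] + b*u[k]
        a, b = 0.95, 0.05
        FAST_DT, SLOW_RATIO = 0.1, 10    # control every 0.1 s; wireless sample every 1.0 s

        y_hat, u = 20.0, 0.0             # soft-sensor state and control input

        def wireless_measurement(k):
            # Placeholder for the sparse sensor reading; None means "no packet this step".
            return 20.0 + 0.01 * k if k % SLOW_RATIO == 0 else None

        for k in range(100):
            meas = wireless_measurement(k)
            if meas is not None:
                y_hat = meas                 # re-synchronize on every real sample
            # Fast-rate control law acting on the virtual (predicted) output.
            u = 0.5 * (22.0 - y_hat)         # assumed proportional controller, setpoint 22
            y_hat = a * y_hat + b * u        # model-based prediction to the next fast step

        print(f"virtual output after 10 s: {y_hat:.2f}")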

  5. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: the individual task performed by each partner, and communication between partners. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is the viewpoint from behind the user's own avatar, for smooth communication. The other is the avatar's-eye viewpoint, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for both 3D shape evaluation and communication. The system basically consists of a PC, an HMD, and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users can restrict nonverbal communication. We therefore compensate for the loss of the partner's avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. A sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual space and in real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  6. The Language of Glove: Wireless gesture decoder with low-power and stretchable hybrid electronics.

    PubMed

    O'Connor, Timothy F; Fach, Matthew E; Miller, Rachel; Root, Samuel E; Mercier, Patrick P; Lipomi, Darren J

    2017-01-01

    This communication describes a glove capable of wirelessly translating the American Sign Language (ASL) alphabet into text displayable on a computer or smartphone. The key components of the device are strain sensors comprising a piezoresistive composite of carbon particles embedded in a fluoroelastomer. These sensors are integrated with a wearable electronic module consisting of digitizers, a microcontroller, and a Bluetooth radio. Finite-element analysis predicts a peak strain on the sensors of 5% when the knuckles are fully bent. Fatigue studies suggest that the sensors successfully detect the articulation of the knuckles even when bent to their maximal degree 1,000 times. In concert with an accelerometer and pressure sensors, the glove is able to translate all 26 letters of the ASL alphabet. Lastly, data taken from the glove are used to control a virtual hand; this application suggests new ways in which stretchable and wearable electronics can enable humans to interface with virtual environments. Critically, this system was constructed of components costing less than $100 and did not require chemical synthesis or access to a cleanroom. It can thus be used as a test bed for materials scientists to evaluate the performance of new materials and flexible and stretchable hybrid electronics.

  7. The Language of Glove: Wireless gesture decoder with low-power and stretchable hybrid electronics

    PubMed Central

    O’Connor, Timothy F.; Fach, Matthew E.; Miller, Rachel; Root, Samuel E.; Mercier, Patrick P.

    2017-01-01

    This communication describes a glove capable of wirelessly translating the American Sign Language (ASL) alphabet into text displayable on a computer or smartphone. The key components of the device are strain sensors comprising a piezoresistive composite of carbon particles embedded in a fluoroelastomer. These sensors are integrated with a wearable electronic module consisting of digitizers, a microcontroller, and a Bluetooth radio. Finite-element analysis predicts a peak strain on the sensors of 5% when the knuckles are fully bent. Fatigue studies suggest that the sensors successfully detect the articulation of the knuckles even when bent to their maximal degree 1,000 times. In concert with an accelerometer and pressure sensors, the glove is able to translate all 26 letters of the ASL alphabet. Lastly, data taken from the glove are used to control a virtual hand; this application suggests new ways in which stretchable and wearable electronics can enable humans to interface with virtual environments. Critically, this system was constructed of components costing less than $100 and did not require chemical synthesis or access to a cleanroom. It can thus be used as a test bed for materials scientists to evaluate the performance of new materials and flexible and stretchable hybrid electronics. PMID:28700603

  8. A Non-Invasive Multichannel Hybrid Fiber-Optic Sensor System for Vital Sign Monitoring

    PubMed Central

    Fajkus, Marcel; Nedoma, Jan; Martinek, Radek; Vasinek, Vladimir; Nazeran, Homer; Siska, Petr

    2017-01-01

    In this article, we briefly describe the design, construction, and functional verification of a hybrid multichannel fiber-optic sensor system for basic vital sign monitoring. This sensor uses a novel non-invasive measurement probe based on the fiber Bragg grating (FBG). The probe is composed of two FBGs encapsulated inside a polydimethylsiloxane polymer (PDMS). The PDMS is non-reactive to human skin and resistant to electromagnetic waves, UV absorption, and radiation. We emphasize the construction of the probe to be specifically used for basic vital sign monitoring such as body temperature, respiratory rate and heart rate. The proposed sensor system can continuously process incoming signals from up to 128 individuals. We first present the overall design of this novel multichannel sensor and then elaborate on how it has the potential to simplify vital sign monitoring and consequently improve the comfort level of patients in long-term health care facilities, hospitals and clinics. The reference ECG signal was acquired with the use of standard gel electrodes fixed to the monitored person’s chest using a real-time monitoring system for ECG signals with virtual instrumentation. The outcomes of these experiments have unambiguously proved the functionality of the sensor system and will be used to inform our future research in this fast developing and emerging field. PMID:28075341

  9. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed from sets of ground control points in 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element in the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
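
    The depth-dependent lookup can be pictured as choosing, per pixel, the homography whose calibration depth is closest to the measured ToF depth and then warping the pixel into the RGB image. The snippet below is a schematic illustration with made-up homographies and depth bins, not the calibration procedure of the paper.

        import numpy as np

        # Assumed lookup table: calibration depth (m) -> 3x3 homography ToF -> RGB.
        H_LUT = {
            1.0: np.array([[1.02, 0.00, 5.0], [0.00, 1.02, 3.0], [0.0, 0.0, 1.0]]),
            2.0: np.array([[1.01, 0.00, 2.5], [0.00, 1.01, 1.5], [0.0, 0.0, 1.0]]),
        }

        def tof_to_rgb(u, v, depth):
            """Map a ToF pixel (u, v) at a given depth to RGB image coordinates."""
            nearest = min(H_LUT, key=lambda d: abs(d - depth))   # closest depth bin
            x = H_LUT[nearest] @ np.array([u, v, 1.0])
            return x[:2] / x[2]

        print(tof_to_rgb(80, 60, depth=1.4))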

  10. DNA Encoding Training Using 3D Gesture Interaction.

    PubMed

    Nicola, Stelian; Handrea, Flavia-Laura; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara

    2017-01-01

    The work described in this paper summarizes the development process and presents the results of a human genetics training application for studying the 20 amino acids encoded by three-nucleotide combinations (codons) of DNA, targeting mainly medical and bioinformatics students. Currently, applications that use human gestures recognized by the Leap Motion sensor are employed for controlling molecules, for learning the Mendeleev (periodic) table, and for visualizing animated reactions of specific molecules with water. The novelty of the current application lies in using the Leap Motion sensor with newly created gestures for application control and in a tag-based algorithm for each amino acid, which depends on the position in 3D virtual space of the four DNA nucleotides and on their type. The team proposes a 3D application based on the Unity editor and the Leap Motion sensor, where the user is free to form different combinations of the 20 amino acids. The results confirm that this new way of studying medicine/biochemistry, using the Leap Motion sensor for handling amino acids, is suitable for students. The application is original and interactive, and users can create their own amino acid structures in a 3D-like environment, which they could not do otherwise using traditional pen and paper.

  11. Affordable and personalized lighting using inverse modeling and virtual sensors

    NASA Astrophysics Data System (ADS)

    Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney

    2014-03-01

    Wireless sensor networks (WSN) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are being increasingly integrated into state-of-the-art intelligent lighting systems. In the future, these systems will enable the participation of lighting loads as ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires approximately 60% fewer sensor deployments compared to current commercial systems. The reduction in sensor deployments has been achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piecewise linear regression of the indoor light distribution. This deterministic data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field desired for indoor lighting control. Using two weeks of daylight and artificial light training data acquired at the Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quads (quadrillion BTU) of energy nationwide.
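
    A virtual photo-sensor of this kind boils down to a regression from a few physical measurements (and, here, a solar feature) to the illuminance at an unmonitored workstation. The least-squares sketch below assumes made-up training data and a single linear piece; the actual model in the paper is piecewise and conditioned on sky conditions and solar position.

        import numpy as np

        # Assumed training data: [physical sensor 1 (lux), sensor 2 (lux), sun elevation (deg)]
        X = np.array([[300.0, 120.0, 10.0],
                      [450.0, 200.0, 25.0],
                      [600.0, 280.0, 40.0],
                      [520.0, 230.0, 35.0]])
        y = np.array([210.0, 330.0, 460.0, 400.0])   # measured illuminance at the workstation

        A = np.hstack([X, np.ones((len(X), 1))])      # add an intercept column
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

        def virtual_sensor(s1, s2, sun_elev):
            """Predict workstation illuminance from the remaining physical sensors."""
            return np.array([s1, s2, sun_elev, 1.0]) @ coeffs

        print(f"predicted illuminance: {virtual_sensor(500.0, 220.0, 30.0):.0f} lux")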

  12. Material recognition based on thermal cues: Mechanisms and applications.

    PubMed

    Ho, Hsin-Ni

    2018-01-01

    Some materials feel colder to the touch than others, and we can use this difference in perceived coldness for material recognition. This review focuses on the mechanisms underlying material recognition based on thermal cues. It provides an overview of the physical, perceptual, and cognitive processes involved in material recognition. It also describes engineering domains in which material recognition based on thermal cues has been applied. These include haptic interfaces that seek to reproduce the sensations associated with contact in virtual environments and tactile sensors that aim at automatic material recognition. The review concludes by considering the contributions of this line of research in both science and engineering.

  13. Material recognition based on thermal cues: Mechanisms and applications

    PubMed Central

    Ho, Hsin-Ni

    2018-01-01

    Some materials feel colder to the touch than others, and we can use this difference in perceived coldness for material recognition. This review focuses on the mechanisms underlying material recognition based on thermal cues. It provides an overview of the physical, perceptual, and cognitive processes involved in material recognition. It also describes engineering domains in which material recognition based on thermal cues has been applied. These include haptic interfaces that seek to reproduce the sensations associated with contact in virtual environments and tactile sensors that aim at automatic material recognition. The review concludes by considering the contributions of this line of research in both science and engineering. PMID:29687043

  14. Secure, Autonomous, Intelligent Controller for Integrating Distributed Emergency Response Satellite Operations

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Paulsen, Phillip E.; Miller, Eric M.; Sage, Steen P.

    2013-01-01

    This report describes a Secure, Autonomous, and Intelligent Controller for Integrating Distributed Emergency Response Satellite Operations. It includes a description of current improvements to existing Virtual Mission Operations Center technology being used by the US Department of Defense and originally developed under NASA funding. The report also highlights a technology demonstration performed in partnership with the United States Geological Survey Earth Resources Observation and Science (EROS) Center using DigitalGlobe® satellites to obtain space-based sensor data.

  15. Interreality: A New Paradigm for E-health.

    PubMed

    Riva, Giuseppe

    2009-01-01

    "Interreality" is a personalized immersive e-therapy whose main novelty is a hybrid, closed-loop empowering experience bridging physical and virtual worlds. The main feature of interreality is a twofold link between the virtual and the real world: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through: (1) 3D Shared Virtual Worlds: role-playing experiences in which one or more users interact with one another within a 3D world; (2) Bio and Activity Sensors (From the Real to the Virtual World): They are used to track the emotional/health/activity status of the user and to influence his/her experience in the virtual world (aspect, activity and access); (3) Mobile Internet Appliances (From the Virtual to the Real One): In interreality, the social and individual user activity in the virtual world has a direct link with the users' life through a mobile phone/digital assistant. The different technologies that are involved in the interreality vision and its clinical rationale are addressed and discussed.

  16. Autonomous Sensors for Large Scale Data Collection

    NASA Astrophysics Data System (ADS)

    Noto, J.; Kerr, R.; Riccobono, J.; Kapali, S.; Migliozzi, M. A.; Goenka, C.

    2017-12-01

    Presented here is a novel implementation of a "Doppler imager" that remotely measures winds and temperatures of the neutral background atmosphere at ionospheric altitudes of 87-300 km and possibly above. It incorporates recent optical manufacturing developments, modern network awareness, and machine learning techniques for intelligent self-monitoring and data classification. This system achieves cost savings in manufacturing, deployment, and lifetime operating costs. Deployed in both ground- and space-based modalities, this cost-disruptive technology will allow computer models of ionospheric variability and other space weather models to operate with higher precision. Other sensors can be folded into the data collection and analysis architecture easily, creating autonomous virtual observatories. A prototype version of this sensor has recently been deployed in Trivandrum, India, for the Indian Government. This Doppler imager is capable of operation even within the restricted CubeSat environment. The CubeSat bus offers a very challenging environment, even for small instruments. The tight SWaP (size, weight, and power) constraints and the challenging thermal environment demand the development of a new generation of instruments; the Doppler imager presented is well suited to this environment. Concurrent with this CubeSat development is the development and construction of ground-based arrays of inexpensive sensors using the proposed technology. This instrument could be flown inexpensively on one or more CubeSats to provide valuable data to space weather forecasters and ionospheric scientists. Arrays of magnetometers have been deployed for the last 20 years [Alabi, 2005]. Other examples of ground-based arrays include an array of white-light all-sky imagers (THEMIS) deployed across Canada [Donovan et al., 2006], ocean sensors on buoys [McPhaden et al., 2010], and arrays of seismic sensors [Schweitzer et al., 2002]. A comparable array of Doppler imagers can be constructed and deployed on the ground to complement the CubeSat data.

  17. Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation.

    PubMed

    Lin, Chi-Ying; Tsai, Chia-Min; Shih, Pei-Cheng; Wu, Hsiao-Ching

    2015-01-01

    Almost all stroke patients experience a certain degree of fine motor impairment, and impeded finger movement may limit activities of daily life. Thus, to improve the quality of life of stroke patients, designing an efficient training device for fine motor rehabilitation is crucial. This study aimed to develop a novel fine motor training glove that integrates a virtual-reality-based interactive environment with vibrotactile feedback for more effective poststroke hand rehabilitation. The proposed haptic rehabilitation device is equipped with small DC vibration motors for vibrotactile feedback stimulation and piezoresistive thin-film force sensors for motor function evaluation. Two virtual-reality-based games, "gopher hitting" and "musical note hitting", were developed as the haptic interface. Following the designed rehabilitation program, patients intuitively push with and exercise their fingers to improve finger isolation function. Preliminary tests were conducted to assess the feasibility of the developed haptic rehabilitation system and to identify design concerns regarding practical use in future clinical testing.

  18. Sensors and Algorithms for an Unmanned Surf-Zone Robot

    DTIC Science & Technology

    2015-12-01

    [Excerpt] Contents include "Data Fusion and Filtering" and "Virtual Potential Field (VPF) Path Planning". ...iron effects are clearly seen: soft-iron de-calibration (sphere distortion) was caused by proximity of circuit boards. Offset of the center of the... information to perform global tasks such as path planning, sensor and actuator commands, external communications, etc. Python3 is used as the primary

  19. Design of a small laser ceilometer and visibility measuring device for helicopter landing sites

    NASA Astrophysics Data System (ADS)

    Streicher, Jurgen; Werner, Christian; Dittel, Walter

    2004-01-01

    Hardware development for remote sensing costs a great deal of time and money. A virtual instrument based on software modules was developed to optimise a small visibility and cloud base height sensor. Visibility is the parameter describing the turbidity of the atmosphere. It can be determined either as a mean value over a path measured by a transmissometer, or for each point of the atmosphere, as with the backscattered intensity of a range-resolved lidar measurement. A standard ceilometer detects the altitude of clouds by using the runtime of the laser pulse and the increase in the intensity of the backscattered light when it hits the boundary of a cloud. This corresponds to hard-target range finding, but with more sensitive detection. In the case of cloud coverage, the output of a standard ceilometer is the altitude of one or more cloud layers. Commercial cloud sensors are specified to track cloud altitude at rather large distances (100 m up to 10 km) and are therefore big and expensive. A virtual instrument was used to calculate the system parameters for a small system for heliports at hospitals and for landing platforms under visual flight rules (VFR). Helicopter pilots need information about cloud altitude (base not below 500 feet) and/or the visibility conditions (visual range not lower than 600 m) at the designated landing point. Private pilots also need this information when approaching a non-commercial airport. Both values can be measured automatically with the developed small and compact prototype, which is the size of a shoebox and available at a reasonable price.
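
    Cloud-base detection from a ceilometer return can be reduced to finding the first range bin where the backscatter rises sharply above the clear-air background. The thresholding sketch below uses a synthetic profile and arbitrary thresholds; it only illustrates the principle, not the instrument's actual signal processing.

        import numpy as np

        RANGE_RES_M = 15.0                      # assumed range-bin size
        profile = np.full(200, 0.05)            # synthetic clear-air backscatter
        profile[80:90] = 2.0                    # strong return from a cloud layer

        background = np.median(profile[:40])    # estimate clear-air level near the ground
        jump = np.diff(profile)
        candidates = np.where((jump > 0.5) & (profile[1:] > 10 * background))[0]

        if candidates.size:
            cloud_base_m = (candidates[0] + 1) * RANGE_RES_M
            print(f"cloud base at ~{cloud_base_m:.0f} m")
        else:
            print("no cloud detected")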

  20. Intelligent Sensors: Strategies for an Integrated Systems Approach

    NASA Technical Reports Server (NTRS)

    Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando

    2005-01-01

    This paper proposes the development of intelligent sensors as an integrated systems approach, i.e., one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols, and evolutionary methodologies that allow them to improve with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators, or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).

  1. Strain Sensors with Adjustable Sensitivity by Tailoring the Microstructure of Graphene Aerogel/PDMS Nanocomposites.

    PubMed

    Wu, Shuying; Ladani, Raj B; Zhang, Jin; Ghorbani, Kamran; Zhang, Xuehua; Mouritz, Adrian P; Kinloch, Anthony J; Wang, Chun H

    2016-09-21

    Strain sensors with a high elastic limit and high sensitivity are required to meet the rising demand for wearable electronics. Here, we present the fabrication of highly sensitive strain sensors based on nanocomposites consisting of graphene aerogel (GA) and polydimethylsiloxane (PDMS), with the primary focus being to tune the sensitivity of the sensors by tailoring the cellular microstructure through control of the manufacturing processes. The resultant nanocomposite sensors exhibit a high sensitivity, with a gauge factor of up to approximately 61.3. Of significant importance is that the sensitivity of the strain sensors can be readily altered by changing the concentration of the precursor (i.e., an aqueous dispersion of graphene oxide) and the freezing temperature used to process the GA. The results reveal that these two parameters control the cell size and cell-wall thickness of the resultant GA, which may be correlated with the observed variations in the sensitivities of the strain sensors. The higher the concentration of graphene oxide, the lower the sensitivity of the resultant nanocomposite strain sensor. As the freezing temperature is increased from -196 °C, the sensitivity rises, reaching a maximum value of 61.3 at -50 °C, and then decreases as the freezing temperature is raised further to -20 °C. Furthermore, the strain sensors offer excellent durability and stability, with their piezoresistivities remaining virtually unchanged even after 10,000 cycles of high-strain loading-unloading. These novel findings pave the way to custom-designed strain sensors with a desirable piezoresistive behavior.
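
    For context, the gauge factor quoted above is simply the relative resistance change divided by the applied strain. The short computation below uses made-up resistance readings to show how such a value would be extracted from a tensile test.

        def gauge_factor(r0_ohm, r_ohm, strain):
            """GF = (delta R / R0) / strain."""
            return (r_ohm - r0_ohm) / r0_ohm / strain

        # Assumed readings: 1.000 kOhm at rest, 1.613 kOhm at 1% strain -> GF = 61.3
        print(gauge_factor(1000.0, 1613.0, 0.01))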

  2. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro

    2018-01-01

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of the different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations that allows a new embedded agent model to be deployed on computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state, and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock has been developed. PMID:29301310

  3. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle.

    PubMed

    Barriuso, Alberto L; Villarrubia González, Gabriel; De Paz, Juan F; Lozano, Álvaro; Bajo, Javier

    2018-01-02

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of the different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations that allows a new embedded agent model to be deployed on computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state, and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock has been developed.

  4. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion

    PubMed Central

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-01-01

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time. PMID:28475145

  5. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    PubMed

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.

  6. "torino 1911" Project: a Contribution of a Slam-Based Survey to Extensive 3d Heritage Modeling

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Della Coletta, C.; Sammartano, G.; Spanò, A.; Spreafico, A.

    2018-05-01

    In the framework of the digital documentation of complex environments, advanced Geomatics research offers integrated solutions and multi-sensor strategies for the accurate 3D reconstruction of stratified structures and articulated volumes in the heritage domain. The use of handheld devices for rapid mapping, both image- and range-based, can help produce easy-to-use, easily navigable 3D models for documentation projects. These types of reality-based models, with their tailored, integrated geometric and radiometric content, can support valorisation and communication projects including virtual reconstructions, interactive navigation settings, and immersive reality for dissemination purposes and for evoking past places and atmospheres. This research is situated within the "Torino 1911" project, led by the University of San Diego (California) in cooperation with PoliTo. The entire project is conceived for the multi-scale reconstruction of the real and no-longer-existing structures in the whole park space of more than 400,000 m2, for a virtual and immersive visualization of the Turin 1911 International "Fabulous Exposition" event held in the Valentino Park. In particular, in the presented research, a 3D metric documentation workflow is proposed and validated in order to exploit the potential of LiDAR mapping by a handheld SLAM-based device, the ZEB REVO Real Time instrument by GeoSLAM (2017 release), instead of consolidated TLS systems. Starting from these kinds of models, the crucial aspects of the trajectories' role in the 3D reconstruction and of the radiometric content from imaging approaches are considered, specifically through the compared use of common DSLR cameras and portable sensors.

  7. Virtual reality: a reality for future military pilotage?

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.

    2009-05-01

    Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
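
    The megapixel figures quoted above follow from the rule of thumb that 20/20 acuity resolves about one arcminute, i.e., roughly 60 pixels per degree. The short calculation below reproduces numbers of that order for an assumed 120° x 100° per-eye HMD field of view and an assumed 360° x 115° surround display; the exact fields of view are illustrative assumptions, not figures from the paper.

        PX_PER_DEG = 60  # ~1 arcmin per pixel for 20/20 visual acuity

        def megapixels(fov_h_deg, fov_v_deg):
            return (fov_h_deg * PX_PER_DEG) * (fov_v_deg * PX_PER_DEG) / 1e6

        print(f"HMD, 120 x 100 deg:  {megapixels(120, 100):.0f} MP")   # ~43 MP
        print(f"CAVE, 360 x 115 deg: {megapixels(360, 115):.0f} MP")   # ~149 MP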

  8. An instrumented glove for grasp specification in virtual-reality-based point-and-direct telerobotics.

    PubMed

    Yun, M H; Cannon, D; Freivalds, A; Thomas, G

    1997-10-01

    Hand posture and force, which define aspects of the way an object is grasped, are features of robotic manipulation. A means for specifying these grasping "flavors" has been developed that uses an instrumented glove equipped with joint and force sensors. The new grasp specification system will be used at the Pennsylvania State University (Penn State) in a Virtual Reality based Point-and-Direct (VR-PAD) robotics implementation. Here, an operator gives directives to a robot in the same natural way that one human may direct another. Phrases such as "put that there" cause the robot to define a grasping strategy and motion strategy to complete the task on its own. In the VR-PAD concept, pointing is done using virtual tools such that an operator can appear to graphically grasp real items in live video. Rather than requiring full duplication of forces and kinesthetic movement throughout a task, as is required in manual telemanipulation, hand posture and force are now specified only once. The grasp parameters then become object flavors. The robot maintains the specified force and hand posture flavors for an object throughout the task in handling the real workpiece or item of interest. In the Computer Integrated Manufacturing (CIM) Laboratory at Penn State, hand posture and force data were collected for manipulating bricks and other items that require varying amounts of force at multiple pressure points. The feasibility of measuring desired grasp characteristics was demonstrated for a modified Cyberglove impregnated with Force-Sensitive Resistor (FSR) pressure sensors in the fingertips. A joint/force model relating the parameters of finger articulation and pressure to various lifting tasks was validated for the instrumented "wired" glove. Operators using such a modified glove may ultimately be able to configure robot grasping tasks in environments involving hazardous waste remediation, flexible manufacturing, space operations, and other flexible robotics applications. In each case, the VR-PAD approach will finesse the computational and delay problems of real-time multiple-degree-of-freedom force feedback telemanipulation.
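
    Conceptually, a grasp "flavor" is a one-time snapshot of joint postures and fingertip forces that the robot then reuses for the whole task. The sketch below captures such a snapshot from hypothetical glove readings; the sensor names, fields, and value ranges are assumptions for illustration, not the instrumented glove's actual data format.

        from dataclasses import dataclass, field
        from typing import Dict

        @dataclass
        class GraspFlavor:
            """One-shot grasp specification attached to an object as a 'flavor'."""
            joint_angles_deg: Dict[str, float] = field(default_factory=dict)
            fingertip_forces_n: Dict[str, float] = field(default_factory=dict)

        def capture_flavor(glove_sample: dict) -> GraspFlavor:
            # Record the posture/force state once, while the operator demonstrates the grasp.
            return GraspFlavor(
                joint_angles_deg=glove_sample["joints"],
                fingertip_forces_n=glove_sample["fsr"],
            )

        # Hypothetical glove reading taken during a single demonstration.
        sample = {
            "joints": {"index_mcp": 45.0, "index_pip": 60.0, "thumb_cmc": 30.0},
            "fsr": {"index_tip": 2.5, "thumb_tip": 3.0},   # newtons
        }
        brick_flavor = capture_flavor(sample)
        print(brick_flavor)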

  9. Dynamic shared state maintenance in distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix George

    Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system real-time behavior and scalability. (Abstract shortened by UMI.)
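
    A minimal sketch of a quaternion-based consistency metric of the kind mentioned above: the angular difference between the orientations of the same shared entity, as held at two remote sites, can quantify how far the dynamic shared state has drifted. This is an illustrative reconstruction assuming a (w, x, y, z) quaternion layout, not the dissertation's assessment method.

        import numpy as np

        def quat_angle_error(q_site_a, q_site_b):
            """Angular difference (radians) between two unit quaternions
            describing the same shared entity at two remote sites."""
            qa = np.asarray(q_site_a, dtype=float)
            qb = np.asarray(q_site_b, dtype=float)
            qa /= np.linalg.norm(qa)
            qb /= np.linalg.norm(qb)
            # |dot| handles the double cover (q and -q represent the same rotation)
            dot = abs(np.dot(qa, qb))
            return 2.0 * np.arccos(min(1.0, dot))

        # Example: two sites report slightly different orientations of one entity
        err = quat_angle_error([1, 0, 0, 0], [0.9998, 0.02, 0, 0])
        print(np.degrees(err))  # roughly 2.3 degrees of drift between the sites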

  10. An ultrahigh-accuracy Miniature Dew Point Sensor based on an Integrated Photonics Platform.

    PubMed

    Tao, Jifang; Luo, Yu; Wang, Li; Cai, Hong; Sun, Tao; Song, Junfeng; Liu, Hui; Gu, Yuandong

    2016-07-15

    The dew point is the temperature at which vapour begins to condense out of the gaseous phase. The deterministic relationship between the dew point and humidity is the basis for the industry-standard "chilled-mirror" dew point hygrometers used for highly accurate humidity measurements, which are essential for a broad range of industrial and metrological applications. However, these instruments have several limitations, such as high cost, large size and slow response. In this report, we demonstrate a compact, integrated photonic dew point sensor (DPS) that features high accuracy, a small footprint, and fast response. The fundamental component of this DPS is a partially exposed photonic micro-ring resonator, which serves two functions simultaneously: 1) sensing the condensed water droplets via evanescent fields and 2) functioning as a highly accurate, in situ temperature sensor based on the thermo-optic effect (TOE). This device virtually eliminates most of the temperature-related errors that affect conventional "chilled-mirror" hygrometers. Moreover, this DPS outperforms conventional "chilled-mirror" hygrometers with respect to size, cost and response time, paving the way for on-chip dew point detection and extension to applications for which the conventional technology is unsuitable because of size, cost, and other constraints.

  11. An ultrahigh-accuracy Miniature Dew Point Sensor based on an Integrated Photonics Platform

    NASA Astrophysics Data System (ADS)

    Tao, Jifang; Luo, Yu; Wang, Li; Cai, Hong; Sun, Tao; Song, Junfeng; Liu, Hui; Gu, Yuandong

    2016-07-01

    The dew point is the temperature at which vapour begins to condense out of the gaseous phase. The deterministic relationship between the dew point and humidity is the basis for the industry-standard “chilled-mirror” dew point hygrometers used for highly accurate humidity measurements, which are essential for a broad range of industrial and metrological applications. However, these instruments have several limitations, such as high cost, large size and slow response. In this report, we demonstrate a compact, integrated photonic dew point sensor (DPS) that features high accuracy, a small footprint, and fast response. The fundamental component of this DPS is a partially exposed photonic micro-ring resonator, which serves two functions simultaneously: 1) sensing the condensed water droplets via evanescent fields and 2) functioning as a highly accurate, in situ temperature sensor based on the thermo-optic effect (TOE). This device virtually eliminates most of the temperature-related errors that affect conventional “chilled-mirror” hygrometers. Moreover, this DPS outperforms conventional “chilled-mirror” hygrometers with respect to size, cost and response time, paving the way for on-chip dew point detection and extension to applications for which the conventional technology is unsuitable because of size, cost, and other constraints.

  12. Integrating soft sensor systems using conductive thread

    NASA Astrophysics Data System (ADS)

    Teng, Lijun; Jeronimo, Karina; Wei, Tianqi; Nemitz, Markus P.; Lyu, Geng; Stokes, Adam A.

    2018-05-01

    We are part of a growing community of researchers who are developing a new class of soft machines. By using mechanically soft materials (MPa modulus) we can design systems which overcome the bulk-mechanical mismatches between soft biological systems and hard engineered components. To develop fully integrated soft machines—which include power, communications, and control sub-systems—the research community requires methods for interconnecting between soft and hard electronics. Sensors based upon eutectic gallium alloys in microfluidic channels can be used to measure normal and strain forces, but integrating these sensors into systems of heterogeneous Young’s modulus is difficult due to the complexity of finding a material which is electrically conductive, mechanically flexible, and stable over prolonged periods of time. Many existing gallium-based liquid alloy sensors are not mechanically or electrically robust, and have poor stability over time. We present the design and fabrication of a high-resolution pressure-sensor soft system that can transduce normal force into a digital output. In this soft system, which is built on a monolithic silicone substrate, a galinstan-based microfluidic pressure sensor is integrated with a flexible printed circuit board. We used conductive thread as the interconnect and found that this method alleviates problems arising due to the mechanical mismatch between conventional metal wires and soft or liquid materials. Conductive thread is low-cost, it is readily wetted by the liquid metal, it produces little bending moment into the microfluidic channel, and it can be connected directly onto the copper bond-pads of the flexible printed circuit board. We built a bridge-system to provide stable readings from the galinstan pressure sensor. This system gives linear measurement results between 500 and 3500 Pa of applied pressure. We anticipate that integrated systems of this type will find utility in soft-robotic systems as used for wearable technologies like virtual reality, or in soft-medical devices such as exoskeletal rehabilitation robots.

  13. Sensor Network Infrastructure for a Home Care Monitoring System

    PubMed Central

    Palumbo, Filippo; Ullberg, Jonas; Štimec, Ales; Furfari, Francesco; Karlsson, Lars; Coradeschi, Silvia

    2014-01-01

    This paper presents the sensor network infrastructure for a home care system that allows long-term monitoring of physiological data and everyday activities. The aim of the proposed system is to allow the elderly to live longer in their homes without compromising safety, while ensuring the detection of health problems. The system offers the possibility of a virtual visit via a teleoperated robot. During the visit, physiological data and activities occurring during a period of time can be discussed. These data are collected from physiological sensors (e.g., temperature, blood pressure, glucose) and environmental sensors (e.g., motion, bed/chair occupancy, electrical usage). The system can also give alarms if sudden problems occur, like a fall, and warnings based on more long-term trends, such as when a deterioration of health is detected. It has been implemented and tested in a test environment and has been deployed in six real homes for a year-long evaluation. The key contribution of the paper is the presentation of an implemented system for ambient assisted living (AAL) tested in a real environment, combining the acquisition of sensor data, a flexible and adaptable middleware compliant with the OSGi standard, and a context recognition application. The system has been developed in a European project called GiraffPlus. PMID:24573309

  14. Smart Multi-Level Tool for Remote Patient Monitoring Based on a Wireless Sensor Network and Mobile Augmented Reality

    PubMed Central

    González, Fernando Cornelio Jiménez; Villegas, Osslan Osiris Vergara; Ramírez, Dulce Esperanza Torres; Sánchez, Vianey Guadalupe Cruz; Domínguez, Humberto Ochoa

    2014-01-01

    Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabView, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes to stabilize the temperature sensor to detect hyperthermia, hypothermia or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia. PMID:25230306

  15. Sensor network infrastructure for a home care monitoring system.

    PubMed

    Palumbo, Filippo; Ullberg, Jonas; Stimec, Ales; Furfari, Francesco; Karlsson, Lars; Coradeschi, Silvia

    2014-02-25

    This paper presents the sensor network infrastructure for a home care system that allows long-term monitoring of physiological data and everyday activities. The aim of the proposed system is to allow the elderly to live longer in their homes without compromising safety, while ensuring the detection of health problems. The system offers the possibility of a virtual visit via a teleoperated robot. During the visit, physiological data and activities occurring during a period of time can be discussed. These data are collected from physiological sensors (e.g., temperature, blood pressure, glucose) and environmental sensors (e.g., motion, bed/chair occupancy, electrical usage). The system can also give alarms if sudden problems occur, like a fall, and warnings based on more long-term trends, such as when a deterioration of health is detected. It has been implemented and tested in a test environment and has been deployed in six real homes for a year-long evaluation. The key contribution of the paper is the presentation of an implemented system for ambient assisted living (AAL) tested in a real environment, combining the acquisition of sensor data, a flexible and adaptable middleware compliant with the OSGi standard, and a context recognition application. The system has been developed in a European project called GiraffPlus.

  16. Smart multi-level tool for remote patient monitoring based on a wireless sensor network and mobile augmented reality.

    PubMed

    González, Fernando Cornelio Jiménez; Villegas, Osslan Osiris Vergara; Ramírez, Dulce Esperanza Torres; Sánchez, Vianey Guadalupe Cruz; Domínguez, Humberto Ochoa

    2014-09-16

    Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabView, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes to stabilize the temperature sensor to detect hyperthermia, hypothermia or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia.

  17. Eglin virtual range database for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Talele, Sunjay E.; Pickard, J. W., Jr.; Owens, Monte A.; Foster, Joseph; Watson, John S.; Amick, Mary Amenda; Anthony, Kenneth

    1998-07-01

    Realistic backgrounds are necessary to support high fidelity hardware-in-the-loop testing. Advanced avionics and weapon system sensors are driving the requirement for higher resolution imagery. The model-test-model philosophy being promoted by the T&E community is resulting in the need for backgrounds that are realistic or virtual representations of actual test areas. Combined, these requirements led to a major upgrade of the terrain database used for hardware-in-the-loop testing at the Guided Weapons Evaluation Facility (GWEF) at Eglin Air Force Base, Florida. This paper will describe the process used to generate the high-resolution (1-foot) database of ten sites totaling over 20 square kilometers of the Eglin range. This process involved generating digital elevation maps from stereo aerial imagery and classifying ground cover materials using their spectral content. These databases were then optimized for real-time operation at 90 Hz.

  18. Dynamic concision for three-dimensional reconstruction of human organ built with virtual reality modelling language (VRML).

    PubMed

    Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun

    2005-07-01

    This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using virtual reality modelling language (VRML) and JavaScript language, with a focus on how to realize the dynamic concision of a 3D medical model with script nodes and sensor nodes in VRML. The 3D reconstruction and concision of body internal organs can be built with such high quality that they are better than those obtained from the traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirement for the medical observation of 3D reconstruction and have a promising prospect in the field of medical imaging.

  19. Dynamic concision for three-dimensional reconstruction of human organ built with virtual reality modelling language (VRML)*

    PubMed Central

    Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun

    2005-01-01

    This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using virtual reality modelling language (VRML) and JavaScript language, with a focus on how to realize the dynamic concision of a 3D medical model with script nodes and sensor nodes in VRML. The 3D reconstruction and concision of body internal organs can be built with such high quality that they are better than those obtained from the traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirement for the medical observation of 3D reconstruction and have a promising prospect in the field of medical imaging. PMID:15973760

  20. Development and user evaluation of a virtual rehabilitation system for wobble board balance training.

    PubMed

    Fitzgerald, Diarmaid; Trakarnratanakul, Nanthana; Dunne, Lucy; Smyth, Barry; Caulfield, Brian

    2008-01-01

    We have developed a prototype virtual reality-based balance training system using a single inertial orientation sensor attached to the upper surface of a wobble board. This input device has been interfaced with Neverball, an open source computer game, to create the balance training platform. Users can exercise with the system by standing on the wobble board and tilting it in different directions to control an on-screen environment. We have also developed a customized instruction manual to use when setting up the system. To evaluate the usability of our prototype system we undertook a user evaluation study with twelve healthy novice participants. Participants were required to assemble the system using an instruction manual and then perform balance exercises with the system. Following this period of exercise, VRUSE, a usability evaluation questionnaire, was completed by participants. Results indicated a high level of usability in all categories evaluated.
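
    As a rough illustration of the control mapping described above, the wobble-board tilt (roll and pitch from the inertial orientation sensor) can be turned into a normalized game input; the axis convention, dead zone and gain below are assumptions for illustration, not the project's actual Neverball interface.

        def board_tilt_to_game_input(roll_deg, pitch_deg, max_tilt_deg=15.0, deadzone_deg=1.0):
            """Map wobble-board roll/pitch from an inertial orientation sensor
            to a normalized (-1..1) x/y tilt command for an on-screen game."""
            def scale(angle):
                if abs(angle) < deadzone_deg:                  # ignore small wobble
                    return 0.0
                clipped = max(-max_tilt_deg, min(max_tilt_deg, angle))
                return clipped / max_tilt_deg
            return scale(roll_deg), scale(pitch_deg)

        print(board_tilt_to_game_input(7.5, -20.0))            # (0.5, -1.0)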

  1. Development of inferential sensors for real-time quality control of water-level data for the Everglades Depth Estimation Network

    USGS Publications Warehouse

    Daamen, Ruby C.; Edwin A. Roehl, Jr.; Conrads, Paul

    2010-01-01

    A technology often used for industrial applications is the “inferential sensor.” Rather than installing a redundant sensor to measure a process, such as an additional water-level gage, an inferential sensor, or virtual sensor, is developed that estimates the process measured by the physical sensor. The advantage of an inferential sensor is that it provides a redundant signal to the sensor in the field but without exposure to environmental threats. In the event that a gage does malfunction, the inferential sensor provides an estimate for the period of missing data. The inferential sensor also can be used in the quality assurance and quality control of the data. Inferential sensors for gages in the EDEN network are currently (2010) under development. The inferential sensors will be automated so that the real-time EDEN data will continuously be compared to the inferential sensor signal and digital reports of the status of the real-time data will be sent periodically to the appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
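
    A minimal sketch of such an inferential (virtual) sensor, assuming the target gage can be regressed on neighboring gages; the illustrative data, the plain linear model and the tolerance are assumptions, not the EDEN implementation.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Historical water levels: columns are neighboring gages, y is the target gage
        X_hist = np.array([[2.1, 1.8, 2.4],
                           [2.0, 1.7, 2.3],
                           [1.9, 1.6, 2.2],
                           [2.2, 1.9, 2.5]])
        y_hist = np.array([2.05, 1.95, 1.85, 2.15])

        virtual_gage = LinearRegression().fit(X_hist, y_hist)

        # Real-time QA/QC: compare the physical reading against the virtual sensor
        x_now, physical_reading = np.array([[2.05, 1.75, 2.35]]), 2.60
        estimate = virtual_gage.predict(x_now)[0]
        if abs(physical_reading - estimate) > 0.15:    # tolerance in feet (assumed)
            print(f"flag gage: measured {physical_reading}, estimated {estimate:.2f}")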

  2. Molecular Rift: Virtual Reality for Drug Designers.

    PubMed

    Norrby, Magnus; Grebner, Christoph; Eriksson, Joakim; Boström, Jonas

    2015-11-23

    Recent advances in interaction design have created new ways to use computers. One example is the ability to create enhanced 3D environments that simulate physical presence in the real world--a virtual reality. This is relevant to drug discovery since molecular models are frequently used to obtain deeper understandings of, say, ligand-protein complexes. We have developed a tool (Molecular Rift), which creates a virtual reality environment steered with hand movements. Oculus Rift, a head-mounted display, is used to create the virtual settings. The program is controlled by gesture-recognition, using the gaming sensor MS Kinect v2, eliminating the need for standard input devices. The Open Babel toolkit was integrated to provide access to powerful cheminformatics functions. Molecular Rift was developed with a focus on usability, including iterative test-group evaluations. We conclude with reflections on virtual reality's future capabilities in chemistry and education. Molecular Rift is open source and can be downloaded from GitHub.

  3. Cordless hand-held optical 3D sensor

    NASA Astrophysics Data System (ADS)

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase correlation based fringe projection technique will be presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, whereby both are battery powered. The data transfer to a base station will be done via WLAN. This gives the possibility to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. In the measurement procedure the sensor will be hand-held by the user, illuminating the object with a sequence of less than 10 fringe patterns, within a time below 200 ms. This short sequence duration was achieved by a new approach, which combines the epipolar constraint with robust phase correlation utilizing a pre-calibrated sensor head, containing two cameras and a digital fringe projector. Furthermore, the system can be utilized to acquire the all-around shape of objects by using the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. This way no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approx. 100 mm up to 400 mm in diameter. The mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology and criminology, which will be shown in the paper.

  4. Practical Use of Operation Data in the Process Industry

    NASA Astrophysics Data System (ADS)

    Kano, Manabu

    This paper aims to reveal real problems in the process industry and introduce recent developments to solve such problems from the viewpoint of effective use of operation data. Two topics are discussed: virtual sensors and process control. First, in order to clarify the present state and problems, a part of our recent questionnaire survey of process control is quoted. It is emphasized that maintenance is a key issue not only for soft-sensors but also for controllers. Then, new techniques are explained. The first one is correlation-based just-in-time modeling (CoJIT), which can realize higher prediction performance than conventional methods and simplify model maintenance. The second is extended fictitious reference iterative tuning (E-FRIT), which can realize data-driven PID control parameter tuning without process modeling. The great usefulness of these techniques is demonstrated through their industrial applications.
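
    A generic just-in-time soft-sensor sketch in the spirit of the approach described above; the sample-selection rule here is a simple Euclidean-distance neighborhood, not the published correlation-based (CoJIT) criterion, and the variables are illustrative.

        import numpy as np

        def jit_predict(X_hist, y_hist, x_query, n_local=20):
            """Just-in-time soft sensor: pick the historical samples most
            similar to the query and fit a local linear model on the fly."""
            d = np.linalg.norm(X_hist - x_query, axis=1)
            idx = np.argsort(d)[:n_local]                 # most similar samples
            Xl = np.c_[np.ones(len(idx)), X_hist[idx]]    # local design matrix
            coef, *_ = np.linalg.lstsq(Xl, y_hist[idx], rcond=None)
            return np.concatenate(([1.0], x_query)) @ coef

        # Example: predict product quality from two process variables
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))
        y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.05, size=200)
        print(jit_predict(X, y, np.array([0.4, -0.2])))   # close to 1.5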

  5. Intelligent neonatal monitoring based on a virtual thermal sensor

    PubMed Central

    2014-01-01

    Background Temperature measurement is a vital part of daily neonatal care. Accurate measurements are important for detecting deviations from normal values for both optimal incubator and radiant warmer functioning. The purpose of monitoring the temperature is to maintain the infant in a thermoneutral environmental zone. This physiological zone is defined as the narrow range of environmental temperatures in which the infant maintains a normal body temperature without increasing his or her metabolic rate and thus oxygen consumption. Although the temperature measurement gold standard is the skin electrode, infrared thermography (IRT) should be considered as an effortless and reliable tool for measuring and mapping human skin temperature distribution and assisting in the assessment of thermoregulatory reflexes. Methods Body surface temperature was recorded under several clinical conditions using an infrared thermography imaging technique. Temperature distributions were recorded as real-time video, which was analyzed to evaluate mean skin temperatures. Emissivity variations were considered for optimal neonatal IRT correction, for which the compensation vector was overlaid on the tracking algorithm to improve the temperature reading. Finally, a tracking algorithm was designed for active follow-up of the defined region of interest over a neonate’s geometry. Results The outcomes obtained from the thermal virtual sensor demonstrate its ability to accurately track different geometric profiles and shapes over the external anatomy of a neonate. Only a small percentage of the motion detection attempts failed to fit tracking scenarios due to the lack of a properly matching matrix for the ROI profile over the neonate’s body surface. Conclusions This paper presents the design and implementation of a virtual temperature sensing application that can assist neonatologists in interpreting a neonate’s skin temperature patterns. Regarding the surface temperature, the influence of different environmental conditions inside the incubator has been confirmed. PMID:24580961

  6. Intelligent neonatal monitoring based on a virtual thermal sensor.

    PubMed

    Abbas, Abbas K; Leonhardt, Steffen

    2014-03-02

    Temperature measurement is a vital part of daily neonatal care. Accurate measurements are important for detecting deviations from normal values for both optimal incubator and radiant warmer functioning. The purpose of monitoring the temperature is to maintain the infant in a thermoneutral environmental zone. This physiological zone is defined as the narrow range of environmental temperatures in which the infant maintains a normal body temperature without increasing his or her metabolic rate and thus oxygen consumption. Although the temperature measurement gold standard is the skin electrode, infrared thermography (IRT) should be considered as an effortless and reliable tool for measuring and mapping human skin temperature distribution and assisting in the assessment of thermoregulatory reflexes. Body surface temperature was recorded under several clinical conditions using an infrared thermography imaging technique. Temperature distributions were recorded as real-time video, which was analyzed to evaluate mean skin temperatures. Emissivity variations were considered for optimal neonatal IRT correction, for which the compensation vector was overlaid on the tracking algorithm to improve the temperature reading. Finally, a tracking algorithm was designed for active follow-up of the defined region of interest over a neonate's geometry. The outcomes obtained from the thermal virtual sensor demonstrate its ability to accurately track different geometric profiles and shapes over the external anatomy of a neonate. Only a small percentage of the motion detection attempts failed to fit tracking scenarios due to the lack of a properly matching matrix for the ROI profile over the neonate's body surface. This paper presents the design and implementation of a virtual temperature sensing application that can assist neonatologists in interpreting a neonate's skin temperature patterns. Regarding the surface temperature, the influence of different environmental conditions inside the incubator has been confirmed.

  7. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Machining status monitoring with multiple sensors can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control is normally used to distinguish abnormal fluctuations from normal fluctuations through statistical methods. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of integrating and fusing them are introduced. Then an approach that integrates multi-sensor status monitoring and statistical process control, based on artificial intelligence, internet, and database techniques, is brought forward. Based on virtual instrument techniques, the authors developed the machining quality assurance system MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and the AE signal information of the wheel dressing process, the cause of machining quality fluctuation has been identified. The experimental results indicate that the approach is suitable for status monitoring and analysis of the machining process.
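
    A minimal sketch of the statistical-process-control side of such an integration: control limits are learned from an in-control baseline, points outside the limits are flagged, and a flag is cross-checked against a sensor-based status alarm (for example, an acoustic emission alarm from wheel dressing). The limits, thresholds and fusion rule are illustrative assumptions, not the MoniSysOnline implementation.

        import numpy as np

        def spc_limits(baseline, k=3.0):
            """Shewhart-style individuals chart limits from an in-control baseline."""
            mu, sigma = np.mean(baseline), np.std(baseline, ddof=1)
            return mu - k * sigma, mu + k * sigma

        baseline = np.array([10.1, 10.0, 9.9, 10.2, 10.0, 10.1, 9.8, 10.05])
        lo, hi = spc_limits(baseline)

        new_quality = [10.1, 10.5, 12.3]         # incoming quality measurements
        spc_flags = [i for i, q in enumerate(new_quality) if not lo <= q <= hi]
        ae_flags = [1, 2]                        # samples where the AE monitor also alarmed

        # Simple fusion rule: flagged by both SPC and the sensor-based monitor
        print(sorted(set(spc_flags) & set(ae_flags)))   # -> [1, 2]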

  8. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    PubMed Central

    Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.

    2014-01-01

    The Model Based Design (MBD) approach is a popular trend to speed up application development of embedded systems, which uses high-level abstractions to capture functional requirements in an executable manner, and which automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging very promising application area for embedded systems. However, there is a lack of tools in this area, which would allow an application developer to model a WSN application by using high level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083

  9. Visualizing vascular structures in virtual environments

    NASA Astrophysics Data System (ADS)

    Wischgoll, Thomas

    2013-01-01

    In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.

  10. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking has become an essential part of entertainment, medical, sports, education, and industrial applications with the development of 3D virtual reality. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks, and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors and link the data to a 3D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  11. Study on Impact Acoustic—Visual Sensor-Based Sorting of ELV Plastic Materials

    PubMed Central

    Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu

    2017-01-01

    This paper concentrates on a study of a novel multi-sensor aided method by using acoustic and visual sensors for detection, recognition and separation of End-of-Life vehicles’ (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the problems still remaining results from black and dark dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting by using impact resonant acoustic emissions (AEs) and laser triangulation scanning was introduced. A pilot sorting system which consists of a 3-dimensional visual sensor and an acoustic sensor was also established; two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of visual data as well as acoustic signals were realized by virtual instruments. Impact acoustic features were recognized by using FFT based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to their respective modified materials. The probability of scrap material recognition rate, i.e., the theoretical sorting efficiency, between PP and PP-EPDM could reach about 50%, and between ABS and ABS-PC it could reach about 75% with diameters ranging from 14 mm to 23 mm, and with exclusion of abnormal impacts, the actual separation rates were 39.2% for PP, 41.4% for PP/EPDM scraps as well as 62.4% for ABS, and 70.8% for ABS/PC scraps. Within the diameter range of 8-13 mm, only 25% of PP and 27% of PP/EPDM scraps, as well as 43% of ABS, and 47% of ABS/PC scraps were finally separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials; it is an effective method for ASR reduction and recycling. PMID:28594341
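
    A compact sketch of the FFT-based power-spectral-density feature step described above; the band edges, windowing, sampling rate and nearest-centroid decision rule are illustrative assumptions, not the authors' processing chain.

        import numpy as np

        def band_features(signal, fs, bands=((0, 2e3), (2e3, 6e3), (6e3, 12e3))):
            """Power-spectral-density energy in a few frequency bands,
            used as a fingerprint of an impact acoustic emission."""
            spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
            psd = np.abs(spectrum) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            feats = [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
            return np.array(feats) / (np.sum(feats) + 1e-12)   # normalize

        def classify(feats, centroids):
            """Nearest-centroid decision among material classes (e.g., PP vs. ABS)."""
            return min(centroids, key=lambda m: np.linalg.norm(feats - centroids[m]))

        # Centroids would be learned beforehand from labelled scraps (values illustrative)
        centroids = {"PP": np.array([0.6, 0.3, 0.1]), "ABS": np.array([0.2, 0.5, 0.3])}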

  12. Study on Impact Acoustic-Visual Sensor-Based Sorting of ELV Plastic Materials.

    PubMed

    Huang, Jiu; Tian, Chuyuan; Ren, Jingwei; Bian, Zhengfu

    2017-06-08

    This paper concentrates on a study of a novel multi-sensor aided method by using acoustic and visual sensors for detection, recognition and separation of End-of-Life vehicles' (ELVs) plastic materials, in order to optimize the recycling rate of automotive shredder residues (ASRs). Sensor-based sorting technologies have been utilized for material recycling for the last two decades. One of the problems still remaining results from black and dark dyed plastics, which are very difficult to recognize using visual sensors. In this paper a new multi-sensor technology for black plastic recognition and sorting by using impact resonant acoustic emissions (AEs) and laser triangulation scanning was introduced. A pilot sorting system which consists of a 3-dimensional visual sensor and an acoustic sensor was also established; two kinds of commonly used vehicle plastics, polypropylene (PP) and acrylonitrile-butadiene-styrene (ABS), and two kinds of modified vehicle plastics, polypropylene/ethylene-propylene-diene-monomer (PP-EPDM) and acrylonitrile-butadiene-styrene/polycarbonate (ABS-PC), were tested. In this study the geometrical features of tested plastic scraps were measured by the visual sensor, and their corresponding impact acoustic emission (AE) signals were acquired by the acoustic sensor. The signal processing and feature extraction of visual data as well as acoustic signals were realized by virtual instruments. Impact acoustic features were recognized by using FFT based power spectral density analysis. The results show that the characteristics of the tested PP and ABS plastics were totally different, but similar to their respective modified materials. The probability of scrap material recognition rate, i.e., the theoretical sorting efficiency, between PP and PP-EPDM could reach about 50%, and between ABS and ABS-PC it could reach about 75% with diameters ranging from 14 mm to 23 mm, and with exclusion of abnormal impacts, the actual separation rates were 39.2% for PP, 41.4% for PP/EPDM scraps as well as 62.4% for ABS, and 70.8% for ABS/PC scraps. Within the diameter range of 8-13 mm, only 25% of PP and 27% of PP/EPDM scraps, as well as 43% of ABS, and 47% of ABS/PC scraps were finally separated. This research proposes a new approach for sensor-aided automatic recognition and sorting of black plastic materials; it is an effective method for ASR reduction and recycling.

  13. Pose and Wind Estimation for Autonomous Parafoils

    DTIC Science & Technology

    2014-09-01

    The method used is a nonlinear estimator that combines the visual sensor measurements with those of an inertial measurement unit (IMU).

  14. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
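
    A minimal planar sketch of the angulation positioning step mentioned above: each base station measures a bearing toward the wearable sensor, and the sensor position is recovered as the least-squares intersection of the bearing rays. The 2D simplification and the example coordinates are assumptions; the paper itself works with 2D DOAs and three cooperating base stations.

        import numpy as np

        def locate_by_angulation(bs_positions, azimuths_rad):
            """Least-squares intersection of bearing rays from base stations (2D)."""
            A, b = [], []
            for (x0, y0), az in zip(bs_positions, azimuths_rad):
                # A line through (x0, y0) with direction (cos az, sin az) satisfies
                # sin(az) * x - cos(az) * y = sin(az) * x0 - cos(az) * y0
                A.append([np.sin(az), -np.cos(az)])
                b.append(np.sin(az) * x0 - np.cos(az) * y0)
            pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return pos

        # BS at the origin sees the sensor at 45 deg; BS at (10, 0) sees it at 135 deg
        print(locate_by_angulation([(0, 0), (10, 0)], np.radians([45, 135])))  # ~[5, 5]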

  15. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  16. A Framework for Analyzing the Whole Body Surface Area from a Single View

    PubMed Central

    Doretto, Gianfranco; Adjeroh, Donald

    2017-01-01

    We present a virtual reality (VR) framework for the analysis of whole human body surface area. Usual methods for determining the whole body surface area (WBSA) are based on well-known formulae, characterized by large errors when the subject is obese, or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, age, gender). However, synthetic data might differ from a real scenario, typical of the physician’s clinic. For this reason we develop a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject is standing in front of a camera, or may assume a different pose with respect to the camera. We use this newly designed environment to analyze the whole body surface area (WBSA). In particular, we show that we can obtain accurate WBSA estimations with just one view, virtually enabling the possibility to use inexpensive depth sensors (e.g., the Kinect) for large scale quantification of the WBSA from a single view 3D map. PMID:28045895
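
    A sketch of how a visible surface area can be read off a single-view depth map: back-project pixels with the camera intrinsics, triangulate adjacent pixels inside the body mask, and sum the triangle areas (a symmetry-based doubling would then approximate the WBSA). The intrinsics, the mask and the doubling heuristic are assumptions for illustration, not the paper's pipeline.

        import numpy as np

        def visible_surface_area(depth, mask, fx, fy, cx, cy):
            """Sum triangle areas of the mesh formed by adjacent depth pixels (m^2)."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            X = (u - cx) * depth / fx          # back-project to 3D camera coordinates
            Y = (v - cy) * depth / fy
            P = np.dstack([X, Y, depth])
            area = 0.0
            for i in range(h - 1):
                for j in range(w - 1):
                    if mask[i:i + 2, j:j + 2].all():
                        a, b, c, d = P[i, j], P[i, j + 1], P[i + 1, j], P[i + 1, j + 1]
                        area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
                        area += 0.5 * np.linalg.norm(np.cross(b - d, c - d))
            return area

        # wbsa_estimate = 2.0 * visible_surface_area(depth, body_mask, fx, fy, cx, cy)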

  17. Embry-Riddle Aeronautical University multispectral sensor and data fusion laboratory: a model for distributed research and education

    NASA Astrophysics Data System (ADS)

    McMullen, Sonya A. H.; Henderson, Troy; Ison, David

    2017-05-01

    The miniaturization of unmanned systems and spacecraft, as well as computing and sensor technologies, has opened new opportunities in the areas of remote sensing and multi-sensor data fusion for a variety of applications. Remote sensing and data fusion historically have been the purview of large government organizations, such as the Department of Defense (DoD), National Aeronautics and Space Administration (NASA), and National Geospatial-Intelligence Agency (NGA) due to the high cost and complexity of developing, fielding, and operating such systems. However, miniaturized computers with high capacity processing capabilities, small and affordable sensors, and emerging, commercially available platforms such as UAS and CubeSats to carry such sensors, have allowed for a vast range of novel applications. In order to leverage these developments, Embry-Riddle Aeronautical University (ERAU) has developed an advanced sensor and data fusion laboratory to research component capabilities and their employment on a wide range of autonomous, robotic, and transportation systems. This lab is unique in several ways; for example, it provides a traditional campus laboratory for students and faculty to model and test sensors in a range of scenarios, process multi-sensor data sets (both simulated and experimental), and analyze results. Moreover, it allows for "virtual" modeling, testing, and teaching capabilities reaching beyond the physical confines of the facility for use among ERAU Worldwide students and faculty located around the globe. Although other institutions such as Georgia Institute of Technology, Lockheed Martin, University of Dayton, and University of Central Florida have optical sensor laboratories, the ERAU virtual concept is the first such lab to expand to multispectral sensors and data fusion, while focusing on the data collection and data products and not on the manufacturing aspect. Further, the initiative is a unique effort among Embry-Riddle faculty to develop multi-disciplinary, cross-campus research to facilitate faculty- and student-driven research. Specifically, the ERAU Worldwide Campus, with locations across the globe and delivering curricula online, will be leveraged to provide novel approaches to remote sensor experimentation and simulation. The purpose of this paper and presentation is to present this new laboratory, research, education, and collaboration process.

  18. A virtual robot to model the use of regenerated legs in a web-building spider.

    PubMed

    Krink; Vollrath

    1999-01-01

    The garden cross orb-spider, Araneus diadematus, shows behavioural responses to leg loss and regeneration that are reflected in the geometry of the web's capture spiral. We created a virtual spider robot that mimicked the web construction behaviour of thus handicapped real spiders. We used this approach to test the correctness and consistency of hypotheses about orb web construction. The behaviour of our virtual robot was implemented in a rule-based system supervising behaviour patterns that communicated with the robot's sensors and motors. Our first model failed by building the typical web of a nonhandicapped spider, and this failure led to new observations on real spiders. We realized that in addition to leg position, leg posture could also be of importance. The implementation of this new hypothesis greatly improved the results of our simulation of a handicapped spider. Now simulated webs, like the real webs of handicapped spiders, had significantly more gaps in successive spiral turns compared with webs of nonhandicapped spiders. Moreover, webs built by the improved virtual spiders intercepted prey as well as the digitized real webs. However, the main factors that affected web interception frequency were prey size, size of capture area and individual variance; having a regenerated leg, surprisingly, was relatively unimportant for this trait. Copyright 1999 The Association for the Study of Animal Behaviour.

  19. GreenVMAS: Virtual Organization Based Platform for Heating Greenhouses Using Waste Energy from Power Plants.

    PubMed

    González-Briones, Alfonso; Chamoso, Pablo; Yoe, Hyun; Corchado, Juan M

    2018-03-14

    The gradual depletion of energy resources makes it necessary to optimize their use and to reuse them. Although great advances have already been made in optimizing energy generation processes, many of these processes generate energy that inevitably gets wasted. Clear examples of this are nuclear, thermal and coal power plants, which lose a large amount of energy that could otherwise be used for different purposes, such as heating greenhouses. The role of GreenVMAS is to maintain the required temperature level in greenhouses by using the waste energy generated by power plants. It incorporates a case-based reasoning system, virtual organizations and algorithms for data analysis and for efficient interaction with sensors and actuators. The system is context aware and scalable, as it incorporates an artificial neural network; this means that it can operate correctly even if the number and characteristics of the greenhouses participating in the case study change. The architecture was evaluated empirically and the results show that the user's energy bill is greatly reduced with the implemented system.

  20. GreenVMAS: Virtual Organization Based Platform for Heating Greenhouses Using Waste Energy from Power Plants

    PubMed Central

    Yoe, Hyun

    2018-01-01

    The gradual depletion of energy resources makes it necessary to optimize their use and to reuse them. Although great advances have already been made in optimizing energy generation processes, many of these processes generate energy that inevitably gets wasted. Clear examples of this are nuclear, thermal and coal power plants, which lose a large amount of energy that could otherwise be used for different purposes, such as heating greenhouses. The role of GreenVMAS is to maintain the required temperature level in greenhouses by using the waste energy generated by power plants. It incorporates a case-based reasoning system, virtual organizations and algorithms for data analysis and for efficient interaction with sensors and actuators. The system is context aware and scalable, as it incorporates an artificial neural network; this means that it can operate correctly even if the number and characteristics of the greenhouses participating in the case study change. The architecture was evaluated empirically and the results show that the user’s energy bill is greatly reduced with the implemented system. PMID:29538351

  1. Interreality in the management of psychological stress: a clinical scenario.

    PubMed

    Riva, Giuseppe; Raspelli, Simona; Pallavicini, Federica; Grassi, Alessandra; Algeri, Davide; Wiederhold, Brenda K; Gaggioli, Andrea

    2010-01-01

    The term "psychological stress" describes a situation in which a subject perceives that environmental demands tax or exceed his or her adaptive capacity. According to the Cochrane Database of Systematic Reviews, the best validated approach covering both stress management and stress treatment is the Cognitive Behavioral (CBT) approach. We aim to design, develop and test an advanced ICT based solution for the assessment and treatment of psychological stress that is able to improve the actual CBT approach. To reach this goal we will use the "interreality" paradigm integrating assessment and treatment within a hybrid environment, that creates a bridge between the physical and virtual worlds. Our claim is that bridging virtual experiences (fully controlled by the therapist, used to learn coping skills and emotional regulation) with real experiences (allowing both the identification of any critical stressors and the assessment of what has been learned) using advanced technologies (virtual worlds, advanced sensors and PDA/mobile phones) is the best way to address the above limitations. To illustrate the proposed concept, a clinical scenario is also presented and discussed: Paola, a 45 years old nurse, with a mother affected by progressive senile dementia.

  2. Localization of Ferromagnetic Target with Three Magnetic Sensors in the Movement Considering Angular Rotation

    PubMed Central

    Gao, Xiang; Yan, Shenggang; Li, Bin

    2017-01-01

    Magnetic detection techniques have been widely used in many fields, such as virtual reality, surgical robotics systems, and so on. A large number of methods have been developed to obtain the position of a ferromagnetic target. However, the angular rotation of the target relative to the sensor is rarely studied. In this paper, a new method is proposed for localizing a moving object, determining both its position and rotation angle with three magnetic sensors. Trajectory localization estimates were obtained in simulations for both collinear and noncollinear arrangements of the three magnetic sensors, and experimental results demonstrated that the position and rotation angle of a ferromagnetic target having roll, pitch or yaw in its movement could be calculated accurately and effectively with three noncollinear vector sensors. PMID:28892006

  3. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50m-500m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as microthruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for Heliophysics dual spacecraft PFF imaging mission concept.

  4. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50 m to 500 m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as micro-thruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for a Heliophysics dual spacecraft PFF imaging mission concept.

  5. Medipix2 based CdTe microprobe for dental imaging

    NASA Astrophysics Data System (ADS)

    Vykydal, Z.; Fauler, A.; Fiederle, M.; Jakubek, J.; Svestkova, M.; Zwerger, A.

    2011-12-01

    Medical imaging devices and techniques are demanded to provide high resolution and low dose images of samples or patients. Hybrid semiconductor single photon counting devices, together with suitable sensor materials and advanced techniques of image reconstruction, fulfil these requirements. In particular cases, such as the direct observation of dental implants, the size of the imaging device itself also plays a critical role. This work presents a comparison of 2D radiographs of a tooth provided by a standard commercial dental imaging system (Gendex 765DC X-ray tube with VisualiX scintillation detector) and two Medipix2 USB Lite detectors, one equipped with a Si sensor (300 μm thick) and one with a CdTe sensor (1 mm thick). The single photon counting capability of the Medipix2 device allows a virtually unlimited dynamic range of the images and thus increases the contrast significantly. The dimensions of the whole USB Lite device are only 15 mm × 60 mm, of which 25% consists of the sensitive area. A detector of this compact size can be used directly inside the patient's mouth.

  6. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from their randomly set initial positions toward their true positions, which correspond to the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
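
    To make the spring analogy concrete, the following minimal sketch moves a single blind node under virtual spring forces whose rest lengths are the measured ranges to known-position neighbors. The gain, the unit-mass damping scheme, and the toy geometry are illustrative assumptions, not the paper's LASM formulation or its three patches.

      # Minimal spring-model sketch of the LASM idea: a blind node moves under
      # Hooke's-law forces toward positions consistent with measured ranges.
      import numpy as np

      def lasm_estimate(neighbors, measured_dists, x0, k=0.5, steps=200):
          """neighbors: (n, 2) known positions; measured_dists: (n,) ranges;
          x0: random initial guess for the blind node position."""
          x = np.asarray(x0, dtype=float)
          for _ in range(steps):
              force = np.zeros(2)
              for p, d in zip(neighbors, measured_dists):
                  delta = x - p
                  dist = np.linalg.norm(delta) + 1e-12
                  force += -k * (dist - d) * (delta / dist)   # spring stretch force
              x += force            # unit mass, unit time step, heavily damped
          return x

      anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
      true_pos = np.array([4.0, 3.0])
      ranges = np.linalg.norm(anchors - true_pos, axis=1)
      print(lasm_estimate(anchors, ranges, x0=[9.0, 9.0]))   # converges near (4, 3)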

  7. Ubiquitous health in practice: the interreality paradigm.

    PubMed

    Gaggioli, Andrea; Raspelli, Simona; Grassi, Alessandra; Pallavicini, Federica; Cipresso, Pietro; Wiederhold, Brenda K; Riva, Giuseppe

    2011-01-01

    In this paper we introduce a new ubiquitous computing paradigm for behavioral health care: "Interreality". Interreality integrates assessment and treatment within a hybrid environment that creates a bridge between the physical and virtual worlds. Our claim is that bridging virtual experiences (fully controlled by the therapist, used to learn coping skills and emotional regulation) with real experiences (allowing both the identification of any critical stressors and the assessment of what has been learned) using advanced technologies (virtual worlds, advanced sensors and PDA/mobile phones) may improve existing psychological treatment. To illustrate the proposed concept, a clinical scenario is also presented and discussed: Daniela, a 40-year-old teacher whose mother is affected by Alzheimer's disease.

  8. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  9. Application of intelligent sensors in the integrated systems health monitoring of a rocket test stand

    NASA Astrophysics Data System (ADS)

    Mahajan, Ajay; Chitikeshi, Sanjeevi; Utterbach, Lucas; Bandhil, Pavan; Figueroa, Fernando

    2006-05-01

    This paper describes the application of intelligent sensors in Integrated Systems Health Monitoring (ISHM) as applied to a rocket test stand. The development of intelligent sensors is attempted as an integrated system approach, i.e., one treats the sensors as a complete system with its own physical transducer, A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements associated with the rocket test stands. These smart elements can be sensors, actuators or other devices. Though the immediate application is the monitoring of the rocket test stands, the technology should be generally applicable to the ISHM vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).

  10. Peptide secondary structure modulates single-walled carbon nanotube fluorescence as a chaperone sensor for nitroaromatics

    PubMed Central

    Heller, Daniel A.; Pratt, George W.; Zhang, Jingqing; Nair, Nitish; Hansborough, Adam J.; Boghossian, Ardemis A.; Reuel, Nigel F.; Barone, Paul W.; Strano, Michael S.

    2011-01-01

    A class of peptides from the bombolitin family, not previously identified for nitroaromatic recognition, allows near-infrared fluorescent single-walled carbon nanotubes to transduce specific changes in their conformation. In response to the binding of specific nitroaromatic species, such peptide–nanotube complexes form a virtual "chaperone sensor," which reports modulation of the peptide secondary structure via changes in the single-walled carbon nanotubes' near-infrared photoluminescence. A split-channel microscope constructed to image quantized spectral wavelength shifts in real time, in response to nitroaromatic adsorption, results in the first single-nanotube imaging of solvatochromic events. The described indirect detection mechanism, as well as an additional exciton quenching-based optical nitroaromatic detection method, illustrates that functionalization of the carbon nanotube surface can result in completely unique sites for recognition, resolvable at the single-molecule level. PMID:21555544

  11. Performance analysis of routing protocols for IoT

    NASA Astrophysics Data System (ADS)

    Manda, Sridhar; Nalini, N.

    2018-04-01

    The Internet of Things (IoT) is an interdisciplinary collection of technologies used to achieve an effective integration of physical and digital things. With IoT, physical things can have personal virtual identities and participate in distributed computing. Realizing IoT requires sensors suited to the sector in which IoT is deployed; in the healthcare domain, for instance, IoT must integrate with wearable sensors worn by patients. Because sensor devices produce huge amounts of data, often called big data, efficient routing protocols must be in place. For wireless systems, several protocols already exist, such as OLSR, DSR and AODV; this paper also considers the Trust-based routing protocol for low-power and lossy systems (TRPL) for IoT. These are widely used wireless routing protocols. As IoT adoption grows, it is essential to investigate these routing protocols and evaluate their performance in terms of throughput, end-to-end delay, and routing overhead. Such performance insights can help in making well-informed decisions when integrating wireless networks with IoT. In this paper, we analyzed different routing protocols and compared their performance. AODV was found to perform better than the other routing protocols mentioned above.

  12. Curvature-Based Environment Description for Robot Navigation Using Laser Range Sensors

    PubMed Central

    Vázquez-Martín, Ricardo; Núñez, Pedro; Bandera, Antonio; Sandoval, Francisco

    2009-01-01

    This work proposes a new feature detection and description approach for mobile robot navigation using 2D laser range sensors. The whole process consists of two main modules: a sensor data segmentation module and a feature detection and characterization module. The segmentation module is divided into two consecutive stages: first, the segmentation stage divides the laser scan into clusters of consecutive range readings using a distance-based criterion. Then, the second stage estimates the curvature function associated with each cluster and uses it to split the cluster into a set of straight-line and curve segments. The curvature is calculated using a triangle-area representation where, contrary to previous approaches, the triangle side lengths at each range reading are adapted to the local variations of the laser scan, removing noise without missing relevant points. This representation is invariant to translation and rotation and is also robust against noise, so it provides the same segmentation results even when the scene is perceived from different viewpoints. The segmentation results are then used to characterize the environment using line and curve segments, real and virtual corners, and edges. Real scan data collected from different environments using different platforms are used in the experiments to evaluate the proposed environment description algorithm. PMID:22461732
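
    The triangle-area idea can be sketched in a few lines: for each reading, curvature is taken from the circumscribed circle of the triangle formed with two flanking points. The fixed symmetric window below is a simplification (the paper adapts the triangle side lengths to local scan variations), and the function name is illustrative.

      # Toy triangle-area curvature estimate for one 2D laser scan cluster.
      import numpy as np

      def triangle_area_curvature(points, w=3):
          """points: (n, 2) Cartesian scan points; w: fixed half-window (assumed)."""
          n = len(points)
          curv = np.zeros(n)
          for i in range(w, n - w):
              p0, p1, p2 = points[i - w], points[i], points[i + w]
              v1, v2 = p2 - p0, p1 - p0
              area = 0.5 * (v1[0] * v2[1] - v1[1] * v2[0])   # signed triangle area
              a = np.linalg.norm(p1 - p0)
              b = np.linalg.norm(p2 - p1)
              c = np.linalg.norm(p2 - p0)
              curv[i] = 4.0 * area / (a * b * c + 1e-12)     # curvature of circumcircle
          return curv

      theta = np.linspace(0.0, np.pi / 2.0, 50)
      arc = np.c_[2.0 * np.cos(theta), 2.0 * np.sin(theta)]    # radius-2 arc
      print(np.round(triangle_area_curvature(arc)[10:15], 3))  # magnitude ~0.5 = 1/r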

  13. An ultrahigh-accuracy Miniature Dew Point Sensor based on an Integrated Photonics Platform

    PubMed Central

    Tao, Jifang; Luo, Yu; Wang, Li; Cai, Hong; Sun, Tao; Song, Junfeng; Liu, Hui; Gu, Yuandong

    2016-01-01

    The dew point is the temperature at which vapour begins to condense out of the gaseous phase. The deterministic relationship between the dew point and humidity is the basis for the industry-standard “chilled-mirror” dew point hygrometers used for highly accurate humidity measurements, which are essential for a broad range of industrial and metrological applications. However, these instruments have several limitations, such as high cost, large size and slow response. In this report, we demonstrate a compact, integrated photonic dew point sensor (DPS) that features high accuracy, a small footprint, and fast response. The fundamental component of this DPS is a partially exposed photonic micro-ring resonator, which serves two functions simultaneously: 1) sensing the condensed water droplets via evanescent fields and 2) functioning as a highly accurate, in situ temperature sensor based on the thermo-optic effect (TOE). This device virtually eliminates most of the temperature-related errors that affect conventional “chilled-mirror” hygrometers. Moreover, this DPS outperforms conventional “chilled-mirror” hygrometers with respect to size, cost and response time, paving the way for on-chip dew point detection and extension to applications for which the conventional technology is unsuitable because of size, cost, and other constraints. PMID:27417734

  14. Network-based collaborative research environment LDRD final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, B.R.; McDonald, M.J.

    1997-09-01

    The Virtual Collaborative Environment (VCE) and Distributed Collaborative Workbench (DCW) are new technologies that make it possible for diverse users to synthesize and share mechatronic, sensor, and information resources. Using these technologies, university researchers, manufacturers, design firms, and others can directly access and reconfigure systems located throughout the world. The architecture for implementing VCE and DCW has been developed based on the proposed National Information Infrastructure or Information Highway and a tool kit of Sandia-developed software. Further enhancements to the VCE and DCW technologies will facilitate access to other mechatronic resources. This report describes characteristics of VCE and DCW and also includes background information about the evolution of these technologies.

  15. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, the limitation has been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distribution but typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated the optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link between the virtual scene and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The detailed operational constraints, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique are presented in detail.

  16. SOA approach to battle command: simulation interoperability

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Self, Mid; Miller, Gordon J.; McDonnell, Joseph S.

    2010-04-01

    NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieve seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct a SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing Battle Command systems. The demonstration will leverage RDECOM MATREX simulation tools and TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components that will be discussed. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (sensor or a sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will be used to automatically discover the existence of a named mission task, and will interpret, translate and transmit a mission command for the blue force unit(s) described in a mission order. JC3IEDM data objects, and software development kit (SDK), will be utilized as the basic data object definition for implemented web services.

  17. Virtual Reality Simulation of the International Space Welding Experiment

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted-display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.

  18. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    PubMed

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
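
    A heavily simplified sketch of the "marching" step is given below: a single straight utility track is propagated from one scan cross-section (scs) to the next with a two-state linear Kalman filter and corrected by the hypothesized detection on each new scs. The state vector, models and noise values are illustrative assumptions, not the paper's multi-sensor EKF formulation or its association rules.

      # Simplified marching of one utility track across scan cross-sections.
      import numpy as np

      def march_track(detections, dx=1.0, q=0.01, r=0.05):
          """detections: lateral position of the hypothesized detection on each scs."""
          x = np.array([detections[0], 0.0])       # [lateral position, slope]
          P = np.eye(2)
          F = np.array([[1.0, dx], [0.0, 1.0]])    # constant-slope marching model
          H = np.array([[1.0, 0.0]])
          Q, R = q * np.eye(2), np.array([[r]])
          track = [x[0]]
          for z in detections[1:]:
              x = F @ x                            # predict to the next scs
              P = F @ P @ F.T + Q
              y = z - H @ x                        # innovation from the detection
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + (K @ y).ravel()
              P = (np.eye(2) - K @ H) @ P
              track.append(x[0])
          return np.array(track)

      scs_detections = [0.00, 0.12, 0.18, 0.33, 0.41, 0.48]   # noisy straight pipe
      print(np.round(march_track(scs_detections), 3))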

  19. Estimating Three-Dimensional Orientation of Human Body Parts by Inertial/Magnetic Sensing

    PubMed Central

    Sabatini, Angelo Maria

    2011-01-01

    User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics, virtual reality, where they can also be applied for real-time tracking of the orientation of human body parts in the three-dimensional (3D) space. Although they are a promising choice as wearable sensors under many respects, the inertial and magnetic sensors currently in use offer measuring performance that are critical in order to achieve and maintain accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation. PMID:22319365

  20. Estimating three-dimensional orientation of human body parts by inertial/magnetic sensing.

    PubMed

    Sabatini, Angelo Maria

    2011-01-01

    User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics, virtual reality, where they can also be applied for real-time tracking of the orientation of human body parts in the three-dimensional (3D) space. Although they are a promising choice as wearable sensors under many respects, the inertial and magnetic sensors currently in use offer measuring performance that are critical in order to achieve and maintain accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation.
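
    As a minimal, hedged illustration of the sensor-fusion principle surveyed in this review, the sketch below fuses a single gyro axis with the accelerometer-derived tilt using a complementary gain; a full implementation would use quaternions and magnetometer heading as the paper discusses, and the gain and example data are assumptions.

      # One-axis complementary fusion: integrate the gyro for responsiveness,
      # pull slowly toward the accelerometer tilt to cancel drift.
      import math

      def complementary_tilt(gyro_rates, accels, dt=0.01, alpha=0.98):
          """gyro_rates: angular rates [rad/s]; accels: matching (ax, az) samples."""
          angle = math.atan2(accels[0][0], accels[0][1])   # initialize from gravity
          for w, (ax, az) in zip(gyro_rates, accels):
              gyro_angle = angle + w * dt                  # fast, drifting path
              accel_angle = math.atan2(ax, az)             # slow, drift-free path
              angle = alpha * gyro_angle + (1.0 - alpha) * accel_angle
          return angle

      # Stationary sensor tilted ~10 degrees, gyro with a 0.02 rad/s bias
      g, n = 9.81, 500
      rates = [0.02] * n
      accels = [(g * math.sin(math.radians(10)), g * math.cos(math.radians(10)))] * n
      print(math.degrees(complementary_tilt(rates, accels)))   # stays near 10 deg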

  1. Kansei Biosensor and IT Society

    NASA Astrophysics Data System (ADS)

    Toko, Kiyoshi

    A taste sensor with global selectivity is composed of several kinds of lipid/polymer membranes for transforming information of taste substances into electric signal. The sensor output shows different patterns for chemical substances which have different taste qualities such as saltiness and sourness. Taste interactions such as suppression effect, which occurs between bitterness and sweetness, can be detected and quantified using the taste sensor. The taste and also smell of foodstuffs such as beer, coffee, mineral water, soup and milk can be discussed quantitatively. The taste sensor provides the objective scale for the human sensory expression. Multi-modal communication becomes possible using a taste/smell recognition microchip, which produces virtual taste. We are now standing at the beginning of a new age of communication using digitized taste.

  2. 3DUI assisted lower and upper member therapy.

    PubMed

    Uribe-Quevedo, Alvaro; Perez-Gutierrez, Byron

    2012-01-01

    3DUIs are becoming very popular among researchers, developers and users as they allow more immersive and interactive experiences by taking advantage of human dexterity. The features offered by these interfaces outside the gaming environment have allowed the development of applications in the medical area by enhancing the user experience and aiding the therapy process in controlled and monitored environments. Using mainstream videogame 3DUIs based on inertial and image sensors available in the market, this work presents the development of a virtual environment and its navigation through captured lower member gestures for assisting motion during therapy.

  3. Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.

    PubMed

    La, Hung Manh; Sheng, Weihua

    2013-04-01

    In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, the weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, the average consensus filter is used to allow each sensor node to find a confidence of the estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted average. Second, we develop the distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors know the information of the leader. Experimental results are provided to demonstrate our proposed algorithms.
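
    The first phase described above can be sketched as a toy weighted average consensus iteration over a small sensor graph, shown below; the ring topology, blending weights and noise level are illustrative assumptions rather than the paper's filter design.

      # Toy weighted average consensus: each node blends its own noisy
      # measurement, its current estimate and its neighbors' estimates.
      import numpy as np

      rng = np.random.default_rng(0)
      true_value, n = 20.0, 6                      # field value at this location
      measurements = true_value + rng.normal(0.0, 1.0, n)
      neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring graph

      estimates = measurements.copy()
      for _ in range(50):
          new = estimates.copy()
          for i in range(n):
              nbr_mean = np.mean([estimates[j] for j in neighbors[i]])
              new[i] = 0.2 * measurements[i] + 0.4 * estimates[i] + 0.4 * nbr_mean
          estimates = new

      print(np.round(estimates, 2))   # estimates cluster near the measurement average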

  4. Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks

    PubMed Central

    Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki

    2018-01-01

    Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes. PMID:29642483

  5. Drawing Inspiration from Human Brain Networks: Construction of Interconnected Virtual Networks.

    PubMed

    Murakami, Masaya; Kominami, Daichi; Leibnitz, Kenji; Murata, Masayuki

    2018-04-08

    Virtualization of wireless sensor networks (WSN) is widely considered as a foundational block of edge/fog computing, which is a key technology that can help realize next-generation Internet of things (IoT) networks. In such scenarios, multiple IoT devices and service modules will be virtually deployed and interconnected over the Internet. Moreover, application services are expected to be more sophisticated and complex, thereby increasing the number of modifications required for the construction of network topologies. Therefore, it is imperative to establish a method for constructing a virtualized WSN (VWSN) topology that achieves low latency on information transmission and high resilience against network failures, while keeping the topological construction cost low. In this study, we draw inspiration from inter-modular connectivity in human brain networks, which achieves high performance when dealing with large-scale networks composed of a large number of modules (i.e., regions) and nodes (i.e., neurons). We propose a method for assigning inter-modular links based on a connectivity model observed in the cerebral cortex of the brain, known as the exponential distance rule (EDR) model. We then choose endpoint nodes of these links by controlling inter-modular assortativity, which characterizes the topological connectivity of brain networks. We test our proposed methods using simulation experiments. The results show that the proposed method based on the EDR model can construct a VWSN topology with an optimal combination of communication efficiency, robustness, and construction cost. Regarding the selection of endpoint nodes for the inter-modular links, the results also show that high assortativity enhances the robustness and communication efficiency because of the existence of inter-modular links of two high-degree nodes.
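
    The EDR-inspired link assignment can be illustrated compactly, as in the sketch below: the probability of wiring two modules together decays exponentially with the distance between their centroids. The decay length, module layout and link budget are assumptions; the paper additionally controls assortativity when choosing the endpoint nodes of each link.

      # EDR-style inter-modular link assignment for a virtualized WSN.
      import numpy as np

      rng = np.random.default_rng(1)
      centroids = rng.uniform(0.0, 100.0, size=(8, 2))   # 8 module centroids
      lam, n_links = 30.0, 10                            # decay length, link budget

      pairs, weights = [], []
      for i in range(len(centroids)):
          for j in range(i + 1, len(centroids)):
              d = np.linalg.norm(centroids[i] - centroids[j])
              pairs.append((i, j))
              weights.append(np.exp(-d / lam))           # EDR connection weight

      probs = np.array(weights) / np.sum(weights)
      chosen = rng.choice(len(pairs), size=n_links, replace=False, p=probs)
      inter_modular_links = [pairs[k] for k in chosen]
      print(inter_modular_links)   # mostly short-range links, with a few long ones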

  6. Towards an integrated strategy for monitoring wetland inundation with virtual constellations of optical and radar satellites

    NASA Astrophysics Data System (ADS)

    DeVries, B.; Huang, W.; Huang, C.; Jones, J. W.; Lang, M. W.; Creed, I. F.; Carroll, M.

    2017-12-01

    The function of wetlandscapes in hydrological and biogeochemical cycles is largely governed by surface inundation, with small wetlands that experience periodic inundation playing a disproportionately large role in these processes. However, the spatial distribution and temporal dynamics of inundation in these wetland systems are still poorly understood, resulting in large uncertainties in global water, carbon and greenhouse gas budgets. Satellite imagery provides synoptic and repeat views of the Earth's surface and presents opportunities to fill this knowledge gap. Despite the proliferation of Earth Observation satellite missions in the past decade, no single satellite sensor can simultaneously provide the spatial and temporal detail needed to adequately characterize inundation in small, dynamic wetland systems. Surface water data products must therefore integrate observations from multiple satellite sensors in order to address this objective, requiring the development of improved and coordinated algorithms to generate consistent estimates of surface inundation. We present a suite of algorithms designed to detect surface inundation in wetlands using data from a virtual constellation of optical and radar sensors comprising the Landsat and Sentinel missions (DeVries et al., 2017). Both optical and radar algorithms were able to detect inundation in wetlands without the need for external training data, allowing for high-efficiency monitoring of wetland inundation at large spatial and temporal scales. Applying these algorithms across a gradient of wetlands in North America, preliminary findings suggest that while these fully automated algorithms can detect wetland inundation at higher spatial and temporal resolutions than currently available surface water data products, limitations specific to the satellite sensors and their acquisition strategies are responsible for uncertainties in inundation estimates. Further research is needed to investigate strategies for integrating optical and radar data from virtual constellations, with a focus on reducing uncertainties, maximizing spatial and temporal detail, and establishing consistent records of wetland inundation over time. The findings and conclusions in this article do not necessarily represent the views of the U.S. Government.
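
    A toy per-pixel sketch of combining optical and radar inundation flags from such a virtual constellation is given below. The water index, thresholds and the simple OR-fusion rule are assumptions for illustration; the cited algorithms are self-trained per scene and considerably more involved.

      # Per-pixel fusion of an optical water index and a SAR backscatter test.
      import numpy as np

      def optical_water(green, nir, threshold=0.0):
          ndwi = (green - nir) / (green + nir + 1e-9)    # normalized water index
          return ndwi > threshold

      def radar_water(sigma0_db, threshold=-17.0):
          return sigma0_db < threshold                   # smooth water is dark in SAR

      def fuse_inundation(green, nir, sigma0_db):
          valid = ~(np.isnan(green) | np.isnan(nir))
          opt = np.zeros_like(sigma0_db, dtype=bool)
          opt[valid] = optical_water(green[valid], nir[valid])
          # Cloudy/missing optical pixels fall back to the radar decision alone.
          return opt | radar_water(sigma0_db)

      green = np.array([0.12, 0.04, np.nan])
      nir = np.array([0.03, 0.20, np.nan])
      sigma0 = np.array([-21.0, -9.0, -19.5])
      print(fuse_inundation(green, nir, sigma0))   # [ True False  True]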

  7. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  8. Interactive balance training integrating sensor-based visual feedback of movement performance: a pilot study in older adults.

    PubMed

    Schwenk, Michael; Grewal, Gurtej S; Honarvar, Bahareh; Schwenk, Stefanie; Mohler, Jane; Khalsa, Dharma S; Najafi, Bijan

    2014-12-13

    Wearable sensor technology can accurately measure body motion and provide incentive feedback during exercising. The aim of this pilot study was to evaluate the effectiveness and user experience of a balance training program in older adults integrating data from wearable sensors into a human-computer interface designed for interactive training. Senior living community residents (mean age 84.6) with confirmed fall risk were randomized to an intervention (IG, n = 17) or control group (CG, n = 16). The IG underwent 4 weeks (twice a week) of balance training including weight shifting and virtual obstacle crossing tasks with visual/auditory real-time joint movement feedback using wearable sensors. The CG received no intervention. Outcome measures included changes in center of mass (CoM) sway, ankle and hip joint sway measured during eyes open (EO) and eyes closed (EC) balance test at baseline and post-intervention. Ankle-hip postural coordination was quantified by a reciprocal compensatory index (RCI). Physical performance was quantified by the Alternate-Step-Test (AST), Timed-up-and-go (TUG), and gait assessment. User experience was measured by a standardized questionnaire. After the intervention sway of CoM, hip, and ankle were reduced in the IG compared to the CG during both EO and EC condition (p = .007-.042). Improvement was obtained for AST (p = .037), TUG (p = .024), fast gait speed (p = .010), but not normal gait speed (p = .264). Effect sizes were moderate for all outcomes. RCI did not change significantly. Users expressed a positive training experience including fun, safety, and helpfulness of sensor-feedback. Results of this proof-of-concept study suggest that older adults at risk of falling can benefit from the balance training program. Study findings may help to inform future exercise interventions integrating wearable sensors for guided game-based training in home- and community environments. Future studies should evaluate the added value of the proposed sensor-based training paradigm compared to traditional balance training programs and commercial exergames. http://www.clinicaltrials.gov NCT02043834.

  9. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    PubMed Central

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. An importance equal to that of physical miniaturization has the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renders, and rapid prototyping tools. In this paper, following an overview of the state-of-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  10. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation.

    PubMed

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a "sensor fusion" approach. An importance equal to that of physical miniaturization has the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renders, and rapid prototyping tools. In this paper, following an overview of the state-of-art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

  11. Providing a virtual tour of a glacial watershed

    NASA Astrophysics Data System (ADS)

    Berner, L.; Habermann, M.; Hood, E.; Fatland, R.; Heavner, M.; Knuth, E.

    2007-12-01

    SEAMONSTER, a NASA-funded sensor web project, is the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research. SEAMONSTER leverages existing open-source software and is an implementation of existing sensor web technologies intended to act as a sensor web testbed, an educational tool, a scientific resource, and a public resource. The primary focus area of the initial SEAMONSTER deployment is the Lemon Creek watershed, which includes the Lemon Creek Glacier studied as part of the 1957-58 IPY. This presentation describes our year-one efforts to maximize education and public outreach activities of SEAMONSTER. During the first summer, 37 sensors were deployed throughout two partially glaciated watersheds and facilitated data acquisition in temperate rain forest, alpine, lacustrine, and glacial environments. Understanding these environments is important for public understanding of climate change. These environments are geographically isolated, limiting public access to, and understanding of, such locales. In an effort to inform the general public and primary educators about the basic processes occurring in these unique natural systems, we are developing an interactive website. This web portal will supplement and enhance environmental science primary education by providing educators and students with interactive access to basic information from the glaciological, hydrological, and meteorological systems we are studying. In addition, we are developing an interactive virtual tour of the Lemon Creek Glacier and its watershed. This effort will include Google Earth as a means of real-time data visualization and will take advantage of time-lapse movies, photographs, maps, and satellite imagery to promote an understanding of these unique natural systems and the role of sensor webs in education.

  12. Inertial Head-Tracker Sensor Fusion by a Complementary Separate-Bias Kalman Filter

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric

    1996-01-01

    Current virtual environment and teleoperator applications are hampered by the need for an accurate, quick-responding head-tracking system with a large working volume. Gyroscopic orientation sensors can overcome problems with jitter, latency, interference, line-of-sight obscurations, and limited range, but suffer from slow drift. Gravimetric inclinometers can detect attitude without drifting, but are slow and sensitive to transverse accelerations. This paper describes the design of a Kalman filter to integrate the data from these two types of sensors in order to achieve the excellent dynamic response of an inertial system without drift, and without the acceleration sensitivity of inclinometers.
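
    In the spirit of the filter described above, the sketch below tracks one tilt axis with a two-state Kalman filter (angle plus gyro bias), propagating with the gyro and correcting with a slow inclinometer reading. This collapses the complementary separate-bias structure into an equivalent plain two-state filter; the noise values and test data are assumptions.

      # Two-state (angle, gyro bias) Kalman filter for one tilt axis.
      import numpy as np

      def track_tilt(gyro, incl, dt=0.01, q_angle=1e-5, q_bias=1e-7, r_incl=1e-2):
          x = np.array([incl[0], 0.0])              # [tilt angle, gyro bias]
          P = np.diag([r_incl, 1e-2])
          F = np.array([[1.0, -dt], [0.0, 1.0]])    # angle += (gyro - bias) * dt
          B = np.array([dt, 0.0])
          H = np.array([[1.0, 0.0]])
          Q = np.diag([q_angle, q_bias])
          for w, z in zip(gyro, incl):
              x = F @ x + B * w                     # propagate with the gyro
              P = F @ P @ F.T + Q
              S = H @ P @ H.T + r_incl              # inclinometer update
              K = (P @ H.T) / S
              x = x + (K * (z - x[0])).ravel()
              P = (np.eye(2) - K @ H) @ P
          return x                                  # [angle estimate, bias estimate]

      n = 2000
      true_angle, bias = 0.3, 0.05                  # rad, rad/s
      gyro = np.full(n, bias)                       # stationary sensor, biased gyro
      incl = true_angle + 0.1 * np.random.default_rng(2).normal(size=n)
      print(np.round(track_tilt(gyro, incl), 3))    # converges near [0.3, 0.05]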

  13. Integrating Fiber Optic Strain Sensors into Metal Using Ultrasonic Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Norfolk, Mark; Wenning, Justin; Sheridan, John; Leser, Paul; Leser, Patrick; Newman, John A.

    2018-03-01

    Ultrasonic additive manufacturing, a rather new three-dimensional (3D) printing technology, uses ultrasonic energy to produce metallurgical bonds between layers of metal foils near room temperature. This low temperature attribute of the process enables integration of temperature sensitive components, such as fiber optic strain sensors, directly into metal structures. This may be an enabling technology for Digital Twin applications, i.e., virtual model interaction and feedback with live load data. This study evaluates the consolidation quality, interface robustness, and load sensing limits of commercially available fiber optic strain sensors embedded into aluminum alloy 6061. Lastly, an outlook on the technology and its applications is described.

  14. Inertial head-tracker sensor fusion by a complementary separate-bias Kalman filter

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric

    1996-01-01

    Current virtual environment and teleoperator applications are hampered by the need for an accurate, quick responding head-tracking system with a large working volume. Gyroscopic orientation sensors can overcome problems with jitter, latency, interference, line-of-sight obscurations, and limited range, but suffer from slow drift. Gravimetric inclinometers can detect attitude without drifting, but are slow and sensitive to transverse accelerations. This paper describes the design of a Kalman filter to integrate the data from these two types of sensors in order to achieve the excellent dynamic response of an inertial system without drift, and without the acceleration sensitivity of inclinometers.

  15. The DOE ARM Aerial Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmid, Beat; Tomlinson, Jason M.; Hubbe, John M.

    2014-05-01

    The Department of Energy Atmospheric Radiation Measurement (ARM) Program is a climate research user facility operating stationary ground sites that provide long-term measurements of climate relevant properties, mobile ground- and ship-based facilities to conduct shorter field campaigns (6-12 months), and the ARM Aerial Facility (AAF). The airborne observations acquired by the AAF enhance the surface-based ARM measurements by providing high-resolution in-situ measurements for process understanding, retrieval-algorithm development, and model evaluation that are not possible using ground- or satellite-based techniques. Several ARM aerial efforts were consolidated into the AAF in 2006. With the exception of a small aircraft used for routine measurements of aerosols and carbon cycle gases, AAF at the time had no dedicated aircraft and only a small number of instruments at its disposal. In this "virtual hangar" mode, AAF successfully carried out several missions contracting with organizations and investigators who provided their research aircraft and instrumentation. In 2009, AAF started managing operations of the Battelle-owned Gulfstream I (G-1) large twin-turboprop research aircraft. Furthermore, the American Recovery and Reinvestment Act of 2009 provided funding for the procurement of over twenty new instruments to be used aboard the G-1 and other AAF virtual-hangar aircraft. AAF now executes missions in the virtual- and real-hangar mode producing freely available datasets for studying aerosol, cloud, and radiative processes in the atmosphere. AAF is also engaged in the maturation and testing of newly developed airborne sensors to help foster the next generation of airborne instruments.

  16. The eyes prefer real images

    NASA Technical Reports Server (NTRS)

    Roscoe, Stanley N.

    1989-01-01

    For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.

  17. Application of Virtual, Augmented, and Mixed Reality to Urology.

    PubMed

    Hamacher, Alaric; Kim, Su Jin; Cho, Sung Tae; Pardeshi, Sunil; Lee, Seung Hyun; Eun, Sung-Jong; Whangbo, Taeg Keun

    2016-09-01

    Recent developments in virtual, augmented, and mixed reality have introduced a considerable number of new devices into the consumer market. This momentum is also affecting the medical and health care sector. Although many of the theoretical and practical foundations of virtual reality (VR) were already researched and experienced in the 1980s, the vastly improved features of displays, sensors, interactivity, and computing power currently available in devices offer a new field of applications to the medical sector and also to urology in particular. The purpose of this review article is to review the extent to which VR technology has already influenced certain aspects of medicine, the applications that are currently in use in urology, and the future development trends that could be expected.

  18. Application of Virtual, Augmented, and Mixed Reality to Urology

    PubMed Central

    2016-01-01

    Recent developments in virtual, augmented, and mixed reality have introduced a considerable number of new devices into the consumer market. This momentum is also affecting the medical and health care sector. Although many of the theoretical and practical foundations of virtual reality (VR) were already researched and experienced in the 1980s, the vastly improved features of displays, sensors, interactivity, and computing power currently available in devices offer a new field of applications to the medical sector and also to urology in particular. The purpose of this review article is to review the extent to which VR technology has already influenced certain aspects of medicine, the applications that are currently in use in urology, and the future development trends that could be expected. PMID:27706017

  19. Emergency Response Virtual Environment for Safe Schools

    NASA Technical Reports Server (NTRS)

    Wasfy, Ayman; Walker, Teresa

    2008-01-01

    An intelligent emergency response virtual environment (ERVE) that provides emergency first responders, response planners, and managers with situational awareness as well as training and support for safe schools is presented. ERVE incorporates an intelligent agent facility for guiding and assisting the user in the context of emergency response operations. Response information folders capture key information about the school. The system enables interactive 3D visualization of schools and academic campuses, including the terrain and the buildings' exteriors and interiors, in an easy to use Web-based interface. ERVE incorporates live camera and sensor feeds and can be integrated with other simulations such as chemical plume simulation. The system is integrated with a Geographical Information System (GIS) to enable situational awareness of emergency events and assessment of their effect on schools in a geographic area. ERVE can also be integrated with emergency text messaging notification systems. Using ERVE, it is now possible to address safe schools' emergency management needs with a scalable, seamlessly integrated and fully interactive intelligent and visually compelling solution.

  20. Scalable and Cost-Effective Assignment of Mobile Crowdsensing Tasks Based on Profiling Trends and Prediction: The ParticipAct Living Lab Experience

    PubMed Central

    Bellavista, Paolo; Corradi, Antonio; Foschini, Luca; Ianniello, Raffaele

    2015-01-01

    Nowadays, sensor-rich smartphones potentially enable the harvesting of huge amounts of valuable sensing data in urban environments, by opportunistically involving citizens to play the role of mobile virtual sensors to cover Smart City areas of interest. This paper proposes an in-depth study of the challenging technical issues related to the efficient assignment of Mobile Crowd Sensing (MCS) data collection tasks to volunteers in a crowdsensing campaign. In particular, the paper originally describes how to increase the effectiveness of the proposed sensing campaigns through the inclusion of several new facilities, including accurate participant selection algorithms able to profile and predict user mobility patterns, gaming techniques, and timely geo-notification. The reported results show the feasibility of exploiting profiling trends/prediction techniques from volunteers’ behavior; moreover, they quantitatively compare different MCS task assignment strategies based on large-scale and real MCS data campaigns run in the ParticipAct living lab, an ongoing MCS real-world experiment that involved more than 170 students of the University of Bologna for more than one year. PMID:26263985
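
    A small greedy sketch of profile-based task assignment follows: volunteers are ranked by their predicted probability of visiting the task area (the output of mobility profiling) and selected until the expected number of reports meets a quota. The probabilities, quota and greedy rule are illustrative assumptions, not the ParticipAct assignment strategies evaluated in the paper.

      # Greedy MCS task assignment from predicted visit probabilities.
      def assign_task(candidates, expected_reports=2.0):
          """candidates: dict of user -> predicted probability of visiting the area."""
          ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
          selected, expected = [], 0.0
          for user, p_visit in ranked:
              if expected >= expected_reports:
                  break
              selected.append(user)
              expected += p_visit                   # expected number of reports
          return selected, expected

      profiles = {"u01": 0.9, "u02": 0.2, "u03": 0.75, "u04": 0.4, "u05": 0.1}
      users, coverage = assign_task(profiles)
      print(users, round(coverage, 2))   # ['u01', 'u03', 'u04'] 2.05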

  1. Measurement Capabilities of the DOE ARM Aerial Facility

    NASA Astrophysics Data System (ADS)

    Schmid, B.; Tomlinson, J. M.; Hubbe, J.; Comstock, J. M.; Kluzek, C. D.; Chand, D.; Pekour, M. S.

    2012-12-01

    The Department of Energy Atmospheric Radiation Measurement (ARM) Program is a climate research user facility operating stationary ground sites in three important climatic regimes that provide long-term measurements of climate relevant properties. ARM also operates mobile ground- and ship-based facilities to conduct shorter field campaigns (6-12 months) to investigate understudied climate regimes around the globe. Finally, airborne observations by ARM's Aerial Facility (AAF) enhance the surface-based ARM measurements by providing high-resolution in situ measurements for process understanding, retrieval algorithm development, and model evaluation that are not possible using ground-based techniques. AAF started out in 2007 as a "virtual hangar" with no dedicated aircraft and only a small number of instruments owned by ARM. In this mode, AAF successfully carried out several missions contracting with organizations and investigators who provided their research aircraft and instrumentation. In 2009, the Battelle-owned G-1 aircraft was included in the ARM facility. The G-1 is a large twin-turboprop aircraft, capable of measurements up to altitudes of 7.5 km and a range of 2,800 kilometers. Furthermore, the American Recovery and Reinvestment Act of 2009 provided funding for the procurement of seventeen new instruments to be used aboard the G-1 and other AAF virtual-hangar aircraft. AAF now executes missions in the virtual- and real-hangar mode producing freely available datasets for studying aerosol, cloud, and radiative processes in the atmosphere. AAF is also heavily engaged in the maturation and testing of newly developed airborne sensors to help foster the next generation of airborne instruments. In the presentation we will showcase science applications based on measurements from recent field campaigns such as CARES, CALWATER and TCAP.

  2. Interreality for the management and training of psychological stress: study protocol for a randomized controlled trial

    PubMed Central

    2013-01-01

    Background Psychological stress occurs when an individual perceives that environmental demands tax or exceed his or her adaptive capacity. Its association with severe health and emotional diseases points to the necessity of finding new, efficient strategies to treat it. Moreover, psychological stress is a very personal problem and requires training focused on the specific needs of individuals. To overcome the above limitations, the INTERSTRESS project suggests the adoption of a new paradigm for e-health - Interreality - that integrates contextualized assessment and treatment within a hybrid environment, bridging the physical and the virtual worlds. According to this premise, the aim of this study is to investigate the advantages of using advanced technologies, in combination with cognitive behavioral therapy (CBT), based on a protocol for reducing psychological stress. Methods/Design The study is designed as a randomized controlled trial. It includes three groups of approximately 50 subjects each who suffer from psychological stress: (1) the experimental group, (2) the control group, (3) the waiting list group. Participants included in the experimental group will receive a treatment based on cognitive behavioral techniques combined with virtual reality, biofeedback and mobile phone, while the control group will receive traditional stress management CBT-based training, without the use of new technologies. The waiting list group will be reassessed and compared with the two other groups five weeks after the initial evaluation. After the reassessment, the waiting list patients will randomly receive one of the two other treatments. Psychometric and physiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as the qualitative dependent variable. Discussion What we would like to show with the present trial is that bridging virtual experiences, used to learn coping skills and emotional regulation, with real experiences using advanced technologies (virtual reality, advanced sensors and smartphones) is a feasible way to address actual limitations of existing protocols for psychological stress. Trial registration http://clinicaltrials.gov/ct2/show/NCT01683617 PMID:23806013

  3. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high-resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers that help to interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller only measures coarse acceleration over a range of +/-3 g with 10% sensitivity, along with orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space with respect to 4 infrared LEDs. Current results show that for the translation it is possible to obtain a mean error of (0.38 cm, 0.41 cm, 4.94 cm) and for the rotation (0.16, 0.28), respectively. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller on the basis of a segmented vessel tree.
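
    The paper's own pose-estimation algorithm is not detailed in the abstract; as a hedged illustration of the same idea, the sketch below recovers a camera pose from four known infrared LED positions with OpenCV's solvePnP. The LED layout, camera intrinsics, and pixel observations are assumed placeholder values, not the authors' calibration.

```python
# Minimal pose-estimation sketch: recover the 3D position/orientation of the
# Wiimote's IR camera from the image coordinates of four infrared LEDs.
# LED layout, camera intrinsics and pixel observations are assumed values.
import numpy as np
import cv2

# Known 3D positions of the 4 IR LEDs in the world frame (metres, assumed).
object_points = np.array([
    [-0.10,  0.05, 0.0],
    [ 0.10,  0.05, 0.0],
    [ 0.10, -0.05, 0.0],
    [-0.10, -0.05, 0.0],
], dtype=np.float64)

# Observed LED centroids reported by the IR camera (pixels, assumed).
image_points = np.array([
    [412.0, 310.0],
    [620.0, 305.0],
    [615.0, 470.0],
    [418.0, 475.0],
], dtype=np.float64)

# Pinhole intrinsics for a 1024x768 IR sensor (assumed focal length).
fx = fy = 1380.0
cx, cy = 512.0, 384.0
camera_matrix = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    rotation_matrix, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation
    print("camera position (m):", (-rotation_matrix.T @ tvec).ravel())
```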

  4. Sensing sheets based on large area electronics for fatigue crack detection

    NASA Astrophysics Data System (ADS)

    Yao, Yao; Glisic, Branko

    2015-03-01

    Reliable early-stage damage detection requires continuous structural health monitoring (SHM) over large areas of a structure, and with high spatial resolution of sensors. This paper presents the development stage of prototype strain sensing sheets based on Large Area Electronics (LAE), in which thin-film strain gauges and control circuits are integrated on flexible electronics and deposited on a polyimide sheet that can cover large areas. These sensing sheets were applied for fatigue crack detection on small-scale steel plates. Two types of sensing-sheet interconnects were designed and manufactured, and dense arrays of strain gauge sensors were assembled onto the interconnects. In total, four (two for each design type) strain sensing sheets were created and tested, which were sensitive to strain at virtually every point over the whole sensing sheet area. The sensing sheets were bonded to small-scale steel plates, which had a notch on the boundary so that fatigue cracks could be generated under cyclic loading. The fatigue tests were carried out at the Carleton Laboratory of Columbia University, and the steel plates were attached through a fixture to the loading machine that applied the cyclic fatigue load. Fatigue cracks then occurred and propagated across the steel plates, leading to the failure of these test samples. The strain sensor that was close to the notch successfully detected the initiation of the fatigue crack and localized the damage on the plate. The strain sensor that was away from the crack successfully detected the propagation of the fatigue crack based on the time history of measured strain. Overall, the results of the fatigue tests validated the general principles of the strain sensing sheets for crack detection.

  5. Performance Analysis of Inter-Domain Handoff Scheme Based on Virtual Layer in PMIPv6 Networks for IP-Based Internet of Things.

    PubMed

    Cho, Chulhee; Choi, Jae-Young; Jeong, Jongpil; Chung, Tai-Myoung

    2017-01-01

    Lately, the Internet of Things (IoT) has been introduced into medical services to provide global connection among patients, sensors, and all nearby things. The principal purpose of this global connection is to provide context awareness, bringing convenience to a patient's life and more effectively implementing clinical processes. In health care, monitoring of a patient's biosignals has to be performed continuously while the patient moves inside and outside the hospital. Also, to monitor the accurate location and biosignals of the patient, appropriate mobility management is necessary to maintain the connection between the patient and the hospital network. In this paper, a binding update scheme on PMIPv6, which reduces signaling traffic during location updates by means of a Virtual LMA (VLMA) on top of the original Local Mobility Anchor (LMA) domain, is proposed to reduce the total cost. If a Mobile Node (MN) moves to a Mobile Access Gateway (MAG) located at the boundary of an adjacent LMA domain, the MN switches to a virtual mode and this movement is treated as part of the VLMA domain. In the proposed scheme, MAGs eliminate global binding updates for MNs between LMA domains and significantly reduce packet loss and latency by eliminating the handoff between LMAs. In conclusion, the performance analysis results show that the proposed scheme improves performance significantly versus PMIPv6 and HMIPv6 in terms of the binding update rate per user and the average handoff latency.

  6. Aircraft panel with sensorless active sound power reduction capabilities through virtual mechanical impedances

    NASA Astrophysics Data System (ADS)

    Boulandet, R.; Michau, M.; Micheau, P.; Berry, A.

    2016-01-01

    This paper deals with an active structural acoustic control approach to reduce the transmission of tonal noise in aircraft cabins. The focus is on the practical implementation of the virtual mechanical impedances method by using sensoriactuators instead of conventional control units composed of separate sensors and actuators. The experimental setup includes two sensoriactuators developed from electrodynamic inertial exciters and distributed over an aircraft trim panel that is subjected to a time-harmonic diffuse sound field. The target mechanical impedances are first defined by solving a linear optimization problem from sound power measurements, before being applied to the test panel using a complex envelope controller. Measured data are compared to results obtained with sensor-actuator pairs consisting of an accelerometer and an inertial exciter, particularly as regards sound power reduction. It is shown that the two types of control unit provide similar performance, and that in this case virtual impedance control stands apart from conventional active damping. In particular, it is clear from this study that extra vibrational energy must be provided by the actuators for optimal sound power reduction, mainly due to the high structural damping in the aircraft trim panel. Concluding remarks on the benefits of using these electrodynamic sensoriactuators to control tonal disturbances are also provided.
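
    The abstract states that the target impedances are obtained from a linear optimization over sound power measurements; a minimal, generic sketch of that kind of step is shown below, assuming a transfer-matrix model of the radiated field. The transfer matrix G, primary field d, and unit velocities v are made-up values, and the exact cost function used in the paper may differ.

```python
# Generic least-squares sketch of defining target "virtual impedances":
# radiated quantities e are modelled as e = d + G u, where u are the complex
# actuator forces. The optimal forces minimise ||e||^2; the target impedance
# at each unit is the ratio of optimal force to measured velocity.
import numpy as np

rng = np.random.default_rng(0)
n_mics, n_units = 8, 2

G = rng.normal(size=(n_mics, n_units)) + 1j * rng.normal(size=(n_mics, n_units))
d = rng.normal(size=n_mics) + 1j * rng.normal(size=n_mics)   # primary field (assumed)
v = rng.normal(size=n_units) + 1j * rng.normal(size=n_units) # unit velocities (assumed)

# Complex least-squares solution of G u = -d (minimises the radiated metric).
u_opt, *_ = np.linalg.lstsq(G, -d, rcond=None)

# Target mechanical impedance to be enforced by each sensoriactuator.
Z_target = u_opt / v
print("optimal forces:", u_opt)
print("target impedances:", Z_target)
```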

  7. Combining millimeter-wave radar and communication paradigms for automotive applications : a signal processing approach.

    DOT National Transportation Integrated Search

    2016-05-01

    As driving becomes more automated, vehicles are being equipped with more sensors generating even higher data rates. Radars (RAdio Detection and Ranging) are used for object detection, visual cameras as virtual mirrors, and LIDARs (LIght Detection and...

  8. Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.

    PubMed

    Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh

    2011-01-01

    We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially, to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects the safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input. © 2011 IEEE. Published by the IEEE Computer Society.

  9. Real-Time Earthquake Intensity Estimation Using Streaming Data Analysis of Social and Physical Sensors

    NASA Astrophysics Data System (ADS)

    Kropivnitskaya, Yelena; Tiampo, Kristy F.; Qin, Jinhui; Bauer, Michael A.

    2017-06-01

    Earthquake intensity is one of the key components of the decision-making process for disaster response and emergency services. Accurate and rapid intensity calculations can help to reduce total loss and the number of casualties after an earthquake. Modern intensity assessment procedures handle a variety of information sources, which can be divided into two main categories. The first type of data is that derived from physical sensors, such as seismographs and accelerometers, while the second type consists of data obtained from social sensors, such as witness observations of the consequences of the earthquake itself. Estimation approaches using additional data sources or that combine sources from both data types tend to increase intensity uncertainty due to human factors and inadequate procedures for temporal and spatial estimation, resulting in precision errors in both time and space. Here we present a processing approach for the real-time analysis of streams of data from both source types. The physical sensor data are acquired from the U.S. Geological Survey (USGS) seismic network in California and the social sensor data are based on Twitter user observations. First, empirical relationships between tweet rate and observed Modified Mercalli Intensity (MMI) are developed using data from the M6.0 South Napa, CA earthquake that occurred on August 24, 2014. Second, the streams of both data types are analyzed together in simulated real time to produce one intensity map. The implementation is based on IBM InfoSphere Streams, a cloud platform for real-time analytics of big data. To handle large processing workloads for data from various sources, it is deployed and run on a cloud-based cluster of virtual machines. We compare the quality and evolution of intensity maps from different data sources over 10-min time intervals immediately following the earthquake. Results from the joint analysis show that it provides more complete coverage, with better accuracy and higher resolution over a larger area than either data source alone.
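
    As a hedged sketch of the first step (an empirical tweet-rate-to-MMI relationship), the snippet below fits a simple log-linear regression and maps a streaming tweet rate to an intensity estimate. The calibration pairs are illustrative placeholders, not the South Napa data.

```python
# Sketch of fitting an empirical relationship between tweet rate and observed
# Modified Mercalli Intensity (MMI), then using it to map a streaming tweet
# rate to an intensity estimate.
import numpy as np

tweet_rate = np.array([2, 5, 12, 30, 80, 200], dtype=float)  # tweets/min (assumed)
observed_mmi = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])      # assumed

# Simple log-linear regression: MMI ~ a * log10(rate) + b
a, b = np.polyfit(np.log10(tweet_rate), observed_mmi, deg=1)

def mmi_from_tweets(rate_per_min: float) -> float:
    """Estimate MMI from an observed tweet rate using the fitted relation."""
    return a * np.log10(max(rate_per_min, 1e-6)) + b

print(f"MMI(50 tweets/min) ≈ {mmi_from_tweets(50.0):.1f}")
```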

  10. Gait rehabilitation with a high tech platform based on virtual reality conveys improvements in walking ability of children suffering from acquired brain injury.

    PubMed

    Biffi, E; Beretta, E; Diella, E; Panzeri, D; Maghini, C; Turconi, A C; Strazzer, S; Reni, G

    2015-01-01

    The Gait Real-time Analysis Interactive Lab (GRAIL) is an instrumented multi-sensor platform based on immersive virtual reality for gait training and rehabilitation. Few studies have included GRAIL to evaluate gait patterns in normal and disabled people and to improve gait in adults, while, to our knowledge, no evidence on its use for the rehabilitation of children is available. In this study, 4 children suffering from acquired brain injury (ABI) underwent a 5-session treatment with GRAIL to improve walking and balance ability in engaging VR environments. The first and the last sessions were partially dedicated to gait evaluation. Results are promising: improvements were recorded at the ankle level, selectively on the affected side, and at the pelvic level, while small changes were measured at the hip and knee joints, which were already comparable to healthy subjects. All these changes also conveyed advances in the symmetry of the walking pattern. In the near future, a longer intervention will be proposed and more children will be enrolled to more robustly prove the effectiveness of GRAIL in the rehabilitation of children with ABI.

  11. Modeling Coniferous Canopy Structure over Extensive Areas for Ray Tracing Simulations: Scaling from the Leaf to the Stand Level

    NASA Astrophysics Data System (ADS)

    van Aardt, J. A.; van Leeuwen, M.; Kelbe, D.; Kampe, T.; Krause, K.

    2015-12-01

    Remote sensing is widely accepted as a useful technology for characterizing the Earth surface in an objective, reproducible, and economically feasible manner. To date, the calibration and validation of remote sensing data sets and biophysical parameter estimates remain challenging due to the requirement to sample large areas for ground-truth data collection, and restrictions to sample these data within narrow temporal windows centered around flight campaigns or satellite overpasses. The computer graphics community has taken significant steps to ameliorate some of these challenges by providing an ability to generate synthetic images based on geometrically and optically realistic representations of complex targets and imaging instruments. These synthetic data can be used for conceptual and diagnostic tests of instrumentation prior to sensor deployment, or to examine linkages between biophysical characteristics of the Earth surface and at-sensor radiance. In the last two decades, the use of image generation techniques for remote sensing of the vegetated environment has evolved from the simulation of simple homogeneous, hypothetical vegetation canopies to advanced scenes and renderings with a high degree of photo-realism. Reported virtual scenes comprise up to 100M surface facets; however, due to the tight coupling between hardware and software development, the full potential of image generation techniques for forestry applications remains to be fully explored. In this presentation, we examine the potential computer graphics techniques have for the analysis of forest structure-function relationships and demonstrate techniques that provide for the modeling of extremely high-faceted virtual forest canopies, comprising billions of scene elements. We demonstrate the use of ray tracing simulations for the analysis of gap size distributions and characterization of foliage clumping within spatial footprints that allow for a tight matching between characteristics derived from these virtual scenes and typical pixel resolutions of remote sensing imagery.

  12. Noncontact Measurement of Humidity and Temperature Using Airborne Ultrasound

    NASA Astrophysics Data System (ADS)

    Kon, Akihiko; Mizutani, Koichi; Wakatsuki, Naoto

    2010-04-01

    We describe a noncontact method for measuring humidity and dry-bulb temperature. Conventional humidity sensors are single-point measurement devices, so that a noncontact method for measuring the relative humidity is required. Ultrasonic temperature sensors are noncontact measurement sensors. Because water vapor in the air increases sound velocity, conventional ultrasonic temperature sensors measure virtual temperature, which is higher than dry-bulb temperature. We performed experiments using an ultrasonic delay line, an atmospheric pressure sensor, and either a thermometer or a relative humidity sensor to confirm the validity of our measurement method at relative humidities of 30, 50, 75, and 100% and at temperatures of 283.15, 293.15, 308.15, and 323.15 K. The results show that the proposed method measures relative humidity with an error rate of less than 16.4% and dry-bulb temperature with an error of less than 0.7 K. Adaptations of the measurement method for use in air-conditioning control systems are discussed.
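
    A minimal sketch of the underlying physics, assuming the textbook relations rather than the paper's calibrated model: the sound speed measured over the delay line gives a virtual (moist-air) temperature, and combining it with an independent dry-bulb temperature and pressure yields an estimate of relative humidity.

```python
# Sketch of the physics linking sound speed, virtual temperature and relative
# humidity. Constants are standard dry-air values; the relations are generic,
# not the paper's calibrated model.
import math

GAMMA = 1.4          # ratio of specific heats for dry air
R_DRY = 287.05       # J/(kg K), specific gas constant of dry air

def virtual_temperature(sound_speed: float) -> float:
    """Virtual temperature (K) from measured sound speed (m/s): c^2 = gamma*R*Tv."""
    return sound_speed ** 2 / (GAMMA * R_DRY)

def relative_humidity(sound_speed: float, t_dry_k: float, p_hpa: float) -> float:
    """Estimate RH (%) from sound speed, dry-bulb temperature and pressure."""
    t_v = virtual_temperature(sound_speed)
    q = (t_v / t_dry_k - 1.0) / 0.608            # specific humidity (kg/kg)
    e = q * p_hpa / (0.622 + 0.378 * q)          # vapour pressure (hPa)
    t_c = t_dry_k - 273.15
    e_sat = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))  # Magnus formula
    return 100.0 * e / e_sat

# Example: 347.2 m/s measured over the delay line, 25 °C dry bulb, 1013 hPa.
print(f"RH ≈ {relative_humidity(347.2, 298.15, 1013.0):.0f} %")
```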

  13. Analysis of a ferrofluid core differential transformer tilt measurement sensor

    NASA Astrophysics Data System (ADS)

    Medvegy, T.; Molnár, Á.; Molnár, G.; Gugolya, Z.

    2017-04-01

    In our work, we developed a ferrofluid core differential transformer sensor, which can be used to measure tilt and acceleration. The proposed sensor consisted of three coils, of which the primary was excited with an alternating current. In the space surrounded by the coils was a cell half-filled with ferrofluid; therefore, in the horizontal state of the sensor the fluid is distributed equally among the three sections of the cell surrounded by the three coils. However, when the cell is tilted or accelerated (in the direction of the axis of the coils), there is a different amount of ferrofluid in the three sections. The voltage induced in the secondary coils strongly depends on the amount of ferrofluid in the core they surround, so the tilt or the acceleration of the cell becomes measurable. We constructed the sensor in several layouts. The linearly coiled sensor had excellent resolution. Another version with a toroidal cell had almost perfect linearity and a virtually infinite measuring range.

  14. SOMM: A New Service Oriented Middleware for Generic Wireless Multimedia Sensor Networks Based on Code Mobility

    PubMed Central

    Faghih, Mohammad Mehdi; Moghaddam, Mohsen Ebrahimi

    2011-01-01

    Although much research in the area of Wireless Multimedia Sensor Networks (WMSNs) has been done in recent years, the programming of sensor nodes is still time-consuming and tedious. It requires expertise in low-level programming, mainly because of the use of resource-constrained hardware and the low-level APIs provided by current operating systems. The code of the resulting systems typically has no clear separation between application and system logic. This minimizes the possibility of reusing code and often leads to the necessity of major changes when the underlying platform is changed. In this paper, we present a service oriented middleware named SOMM to support application development for WMSNs. The main goal of SOMM is to enable the development of modifiable and scalable WMSN applications. A network which uses SOMM is capable of providing multiple services to multiple clients at the same time with the specified Quality of Service (QoS). SOMM uses a virtual machine with the ability to support mobile agents. Services in SOMM are provided by mobile agents, and SOMM also provides a tuple space on each node which agents can use to communicate with each other. PMID:22346646

  15. SOMM: A new service oriented middleware for generic wireless multimedia sensor networks based on code mobility.

    PubMed

    Faghih, Mohammad Mehdi; Moghaddam, Mohsen Ebrahimi

    2011-01-01

    Although much research in the area of Wireless Multimedia Sensor Networks (WMSNs) has been done in recent years, the programming of sensor nodes is still time-consuming and tedious. It requires expertise in low-level programming, mainly because of the use of resource-constrained hardware and the low-level APIs provided by current operating systems. The code of the resulting systems typically has no clear separation between application and system logic. This minimizes the possibility of reusing code and often leads to the necessity of major changes when the underlying platform is changed. In this paper, we present a service oriented middleware named SOMM to support application development for WMSNs. The main goal of SOMM is to enable the development of modifiable and scalable WMSN applications. A network which uses SOMM is capable of providing multiple services to multiple clients at the same time with the specified Quality of Service (QoS). SOMM uses a virtual machine with the ability to support mobile agents. Services in SOMM are provided by mobile agents, and SOMM also provides a tuple space on each node which agents can use to communicate with each other.

  16. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836
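
    The full MCS formulation is not reproduced here; the sketch below only illustrates the "marching" idea with a small linear Kalman filter whose state is a utility's lateral offset and heading, advanced from one scan cross-section to the next and updated with a hypothesized detection. Spacing, noise levels, and detections are assumed values, not the paper's EKF.

```python
# Minimal sketch of "marching" a buried-utility track from one scan
# cross-section (scs) to the next with a Kalman filter. A small-angle linear
# model is used; the paper's extended formulation and association rules are
# not reproduced.
import numpy as np

dx = 0.5                                   # spacing between scan cross-sections (m)
F = np.array([[1.0, dx], [0.0, 1.0]])      # [offset, heading] transition
H = np.array([[1.0, 0.0]])                 # only the lateral offset is observed
Q = np.diag([0.01, 0.005])                 # process noise (assumed)
R = np.array([[0.05]])                     # detection noise (assumed)

x = np.array([0.0, 0.1])                   # initial offset (m), heading (rad)
P = np.eye(2) * 0.1

detections = [0.06, 0.11, 0.17, 0.21]      # hypothesized offsets on later scss
for z in detections:
    # Predict the track forward to the next cross-section.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the hypothesized detection associated to this track.
    y_res = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y_res).ravel()
    P = (np.eye(2) - K @ H) @ P
    print(f"offset={x[0]:+.3f} m  heading={x[1]:+.3f} rad")
```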

  17. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the used database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of the database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.

  18. Autonomic Intelligent Cyber Sensor to Support Industrial Control Network Awareness

    DOE PAGES

    Vollmer, Todd; Manic, Milos; Linda, Ondrej

    2013-06-01

    The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of Autonomic computing and a SOAP-based IF-MAP external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, self-managed framework. The contribution of this paper is two-fold: 1) a flexible two-level communication layer based on Autonomic computing and Service Oriented Architecture is detailed, and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof-of-concept prototype was deployed on a mixed-use test network showing possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. Additionally, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
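
    As a hedged sketch of the traffic-monitoring module's clustering step (the fuzzy-logic part is omitted), the snippet below learns "normal" clusters with k-means and flags flows that are far from every centroid. The per-flow features, threshold, and data are illustrative assumptions, not the paper's implementation.

```python
# Clustering-based anomaly detection sketch: flows are summarised as feature
# vectors, k-means learns "normal" clusters, and a flow whose distance to the
# nearest centroid exceeds a threshold is flagged as abnormal.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Assumed per-flow features: [packets/s, mean packet size, distinct ports].
normal_traffic = rng.normal(loc=[50, 500, 3], scale=[10, 80, 1], size=(500, 3))

model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(normal_traffic)
# Threshold = 99th percentile of training distances to the nearest centroid.
train_dist = np.min(model.transform(normal_traffic), axis=1)
threshold = np.percentile(train_dist, 99)

def is_abnormal(flow_features: np.ndarray) -> bool:
    """Flag a flow as abnormal if it is far from every learned cluster."""
    dist = np.min(model.transform(flow_features.reshape(1, -1)), axis=1)[0]
    return dist > threshold

print(is_abnormal(np.array([55.0, 510.0, 3.0])))    # typical flow -> False
print(is_abnormal(np.array([900.0, 60.0, 120.0])))  # port-scan-like -> True
```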

  19. Sensitivity-based virtual fields for the non-linear virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  20. Study of cross-shaped ultrasonic array sensor applied to partial discharge location in transformer oil.

    PubMed

    Li, Jisheng; Xin, Xiaohu; Luo, Yongfen; Ji, Haiying; Li, Yanming; Deng, Junbo

    2013-11-01

    A conformal combined sensor is designed and used in partial discharge (PD) location experiments in transformer oil. The sensor includes a cross-shaped ultrasonic phased array of 13 elements and an ultra-high-frequency (UHF) electromagnetic rectangular array of 2 × 2 elements. Through virtual expansion with high-order cumulants, the ultrasonic array can achieve the effect of an array with 61 elements. This greatly improves the aperture and direction sharpness of the original array and reduces the cost of the follow-up hardware. With the cross-shaped ultrasonic array, the results of the PD location experiments are precise and the maximum error of the direction of arrival (DOA) is less than 5°.

  1. USER-CUSTOMIZED ENVIRONMENTAL MAPPING AND DECISION SUPPORT USING NASA WORLD WIND AND DOE GENIE PRO SOFTWARE

    EPA Science Inventory

    Effective environmental stewardship requires timely geospatial information about ecology and environment for informed environmental decision support. Unprecedented public access to high resolution imagery from earth-looking sensors via online virtual earth browsers ...

  2. Determination of network origin-destination matrices using partial link traffic counts and virtual sensor information in an integrated corridor management framework.

    DOT National Transportation Integrated Search

    2014-04-01

    Trip origin-destination (O-D) demand matrices are critical components in transportation network : modeling, and provide essential information on trip distributions and corresponding spatiotemporal : traffic patterns in traffic zones in vehicular netw...

  3. A Novel Topology Link-Controlling Approach for Active Defense of a Node in a Network.

    PubMed

    Li, Jun; Hu, HanPing; Ke, Qiao; Xiong, Naixue

    2017-03-09

    With the rapid development of virtual machine technology and cloud computing, distributed denial of service (DDoS) attacks, or some peak traffic, pose a great threat to the security of the network. In this paper, a novel topology link-controlling technique for mitigating attacks in real-time environments is proposed. Firstly, a non-invasive method of deploying virtual sensors in the nodes is built, which uses the resource manager of each monitored node as a sensor. Secondly, a general topology-controlling approach for resisting the tolerant invasion is proposed. In the proposed approach, a prediction model is constructed by using copula functions for predicting the peak of a resource through another resource. The result of the prediction determines whether or not to initiate the active defense. Finally, a minority game with an incomplete strategy is employed to suppress attack flows and improve the permeability of the normal flows. The simulation results show that the proposed approach is very effective in protecting nodes.

  4. A Novel Topology Link-Controlling Approach for Active Defense of Nodes in Networks

    PubMed Central

    Li, Jun; Hu, HanPing; Ke, Qiao; Xiong, Naixue

    2017-01-01

    With the rapid development of virtual machine technology and cloud computing, distributed denial of service (DDoS) attacks, or some peak traffic, pose a great threat to the security of the network. In this paper, a novel topology link-controlling technique for mitigating attacks in real-time environments is proposed. Firstly, a non-invasive method of deploying virtual sensors in the nodes is built, which uses the resource manager of each monitored node as a sensor. Secondly, a general topology-controlling approach for resisting the tolerant invasion is proposed. In the proposed approach, a prediction model is constructed by using copula functions for predicting the peak of a resource through another resource. The result of the prediction determines whether or not to initiate the active defense. Finally, a minority game with an incomplete strategy is employed to suppress attack flows and improve the permeability of the normal flows. The simulation results show that the proposed approach is very effective in protecting nodes. PMID:28282962

  5. Fixed Base Modal Survey of the MPCV Orion European Service Module Structural Test Article

    NASA Technical Reports Server (NTRS)

    Winkel, James P.; Akers, J. C.; Suarez, Vicente J.; Staab, Lucas D.; Napolitano, Kevin L.

    2017-01-01

    Recently, the MPCV Orion European Service Module Structural Test Article (E-STA) underwent sine vibration testing using the multi-axis shaker system at the NASA GRC Plum Brook Station Mechanical Vibration Facility (MVF). An innovative approach using measured constraint shapes at the interface of the E-STA to the MVF allowed high-quality fixed-base modal parameters of the E-STA to be extracted, which have been used to update the E-STA finite element model (FEM), without the need for a traditional fixed-base modal survey. This innovative approach provided considerable program cost and test schedule savings. This paper documents this modal survey, which includes the modal pretest analysis sensor selection, the fixed-base methodology using measured constraint shapes as virtual references and measured frequency response functions, and the post-survey comparison between measured and analytical fixed-base modal parameters.

  6. Water-Based Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Oglesby, Donald M.; Ingram, JoAnne L.; Jordan, Jeffrey D.; Watkins, A. Neal; Leighty, Bradley D.

    2004-01-01

    Preparation and performance of a water-based pressure sensitive paint (PSP) is described. A water emulsion of an oxygen permeable polymer and a platinum porphyrin type luminescent compound were dispersed in a water matrix to produce a PSP that performs well without the use of volatile, toxic solvents. The primary advantages of this PSP are reduced contamination of wind tunnels in which it is used, lower health risk to its users, and easier cleanup and disposal. This also represents a cost reduction by eliminating the need for elaborate ventilation and user protection during application. The water-based PSP described has all the characteristics associated with water-based paints (low toxicity, very low volatile organic chemicals, and easy water cleanup) but also has high performance as a global pressure sensor for PSP measurements in wind tunnels. The use of a water-based PSP virtually eliminates the toxic fumes associated with the application of PSPs to a model in wind tunnels.

  7. Data Convergence - An Australian Perspective

    NASA Astrophysics Data System (ADS)

    Allen, S. S.; Howell, B.

    2012-12-01

    Coupled numerical physical, biogeochemical and sediment models are increasingly being used as integrators to help understand the cumulative or far-field effects of change in the coastal environment. This reliance on modeling has forced observations to be delivered as data streams ingestible by modeling frameworks. This has made it easier to create near-real-time or forecasting models than to try to recreate the past, and has led in turn to the conversion of historical data into data streams to allow them to be ingested by the same frameworks. The model and observation frameworks under development within Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) are now feeding into the Australian Ocean Data Network's (AODN's) MARine Virtual Laboratory (MARVL). The sensor, or data stream, brokering solution is centred around the "message", and all data flowing through the gateway is wrapped as a message. Messages consist of a topic and a data object, and their routing through the gateway to pre-processors and listeners is determined by the topic. The Sensor Message Gateway (SMG) method allows data from different sensors measuring the same thing but with different temporal resolutions, units or spatial coverage to be ingested or visualized seamlessly. At the same time, the use of model output as a virtual sensor is being explored, this again being enabled by the SMG. It is only for two-way communications with sensors that rigorous adherence to standards is needed; by accepting existing data in less than ideal formats, but exposing them through the SMG, we can move a step closer to the Internet of Things by creating an Internet of Industries where each vested interest can continue with business as usual, contribute to data convergence and adopt more open standards when investment seems appropriate to that sector or business.
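
    A toy sketch of the message idea described above, assuming nothing about CSIRO's actual SMG API: each observation is wrapped as a message with a topic and a data object, and glob-style topic patterns route it to listeners (for example a model ingest process or a virtual-sensor consumer). Class and topic names are illustrative assumptions.

```python
# Topic-routed "message" gateway sketch: every observation flowing through the
# gateway is wrapped as a message, and topic patterns route it to listeners.
from dataclasses import dataclass
from fnmatch import fnmatch
from typing import Any, Callable, List, Tuple

@dataclass
class Message:
    topic: str      # e.g. "sensor/mooring-12/temperature"
    data: Any       # payload: raw reading, units, timestamp, ...

class SensorMessageGateway:
    def __init__(self) -> None:
        self._listeners: List[Tuple[str, Callable[[Message], None]]] = []

    def subscribe(self, pattern: str, listener: Callable[[Message], None]) -> None:
        """Register a listener for all topics matching a glob pattern."""
        self._listeners.append((pattern, listener))

    def publish(self, message: Message) -> None:
        """Route a message to every listener whose pattern matches its topic."""
        for pattern, listener in self._listeners:
            if fnmatch(message.topic, pattern):
                listener(message)

gateway = SensorMessageGateway()
gateway.subscribe("sensor/*/temperature", lambda m: print("model ingest:", m.data))
gateway.subscribe("model/*", lambda m: print("virtual sensor output:", m.data))
gateway.publish(Message("sensor/mooring-12/temperature", {"value": 17.3, "unit": "degC"}))
gateway.publish(Message("model/shelf/temperature", {"value": 17.1, "unit": "degC"}))
```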

  8. OpenLMD, multimodal monitoring and control of LMD processing

    NASA Astrophysics Data System (ADS)

    Rodríguez-Araújo, Jorge; García-Díaz, Antón

    2017-02-01

    This paper presents OpenLMD, a novel open-source solution for on-line multimodal monitoring of Laser Metal Deposition (LMD). The solution is also applicable to a wider range of laser-based applications that require on-line control (e.g. laser welding). OpenLMD is a middleware that enables the orchestration and virtualization of a LMD robot cell, using several open-source frameworks (e.g. ROS, OpenCV, PCL). The solution also allows reconfiguration by easy integration of multiple sensors and processing equipment. As a result, OpenLMD delivers significant advantages over existing monitoring and control approaches, such as improved scalability, and multimodal monitoring and data sharing capabilities.

  9. Automation and Robotics for Space-Based Systems, 1991

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II (Editor)

    1992-01-01

    The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.

  10. Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions

    NASA Astrophysics Data System (ADS)

    Rasshofer, R. H.; Gresser, K.

    2005-05-01

    Automotive radar and lidar sensors represent key components for next-generation driver assistance functions (Jones, 2001). Today, their use is limited to comfort applications in premium-segment vehicles, although an evolution towards more safety-oriented functions is taking place. Radar sensors available on the market today suffer from low angular resolution and poor target detection at medium ranges (30 to 60 m) over azimuth angles larger than ±30°. In contrast, lidar sensors show large sensitivity towards environmental influences (e.g. snow, fog, dirt). Both sensor technologies today have a rather high cost level, forbidding their widespread usage in mass markets. A common approach to overcome individual sensor drawbacks is the employment of data fusion techniques (Bar-Shalom, 2001). Raw data fusion requires a common, standardized data interface to easily integrate a variety of asynchronous sensor data into a fusion network. Moreover, next-generation sensors should be able to dynamically adapt to new situations and should have the ability to work in cooperative sensor environments. As vehicular function development today is being shifted more and more towards virtual prototyping, mathematical sensor models should be available. These models should take into account the sensor's functional principle as well as all typical measurement errors generated by the sensor.

  11. Sensor-Based Electromagnetic Navigation (Mediguide®): How Accurate Is It? A Phantom Model Study.

    PubMed

    Bourier, Felix; Reents, Tilko; Ammar-Busch, Sonia; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Brkic, Amir; Semmler, Verena; Lennerz, Carsten; Kaess, Bernhard; Kottmaier, Marc; Kolb, Christof; Deisenhofer, Isabel; Hessling, Gabriele

    2015-10-01

    Data about localization reproducibility as well as the spatial and visual accuracy of the new MediGuide® sensor-based electroanatomic navigation technology are scarce. We therefore sought to quantify these parameters based on phantom experiments. A realistic heart phantom was generated with a 3D printer. A CT scan was performed on the phantom. The phantom itself served as the ground-truth reference to ensure exact and reproducible catheter placement. A MediGuide® catheter was repeatedly tagged at selected positions to assess the accuracy of point localization. The catheter was also used to acquire a MediGuide®-scaled geometry in the EnSite Velocity® electroanatomic mapping system. The acquired geometries (MediGuide®-scaled and EnSite Velocity®-scaled) were compared to a CT segmentation of the phantom to quantify concordance. Distances between landmarks were measured in the EnSite Velocity®- and MediGuide®-scaled geometries and the CT dataset for Bland-Altman comparison. The visualization of virtual MediGuide® catheter tips was compared to their corresponding representation on fluoroscopic cine-loops. Point localization accuracy was 0.5 ± 0.3 mm for MediGuide® and 1.4 ± 0.7 mm for EnSite Velocity®. The 3D accuracy of the geometries was 1.1 ± 1.4 mm (MediGuide®-scaled) and 3.2 ± 1.6 mm (not MediGuide®-scaled). The offset between the virtual MediGuide® catheter visualization and the catheter representation on corresponding fluoroscopic cine-loops was 0.4 ± 0.1 mm. The MediGuide® system shows a very high level of accuracy regarding localization reproducibility as well as spatial and visual accuracy, which can be ascribed to the magnetic field localization technology. The observed offsets between the geometry visualization and the real phantom are below a clinically relevant threshold. © 2015 Wiley Periodicals, Inc.
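
    For readers unfamiliar with the Bland-Altman comparison mentioned above, the short sketch below computes the bias and 95% limits of agreement for paired landmark distances; the numbers are made-up placeholders, not the study data.

```python
# Minimal Bland-Altman computation: mean difference (bias) and 95% limits of
# agreement between two measurement methods on paired distances.
import numpy as np

ct_mm    = np.array([12.1, 25.4, 31.0, 44.8, 50.2, 63.5])  # assumed reference
guide_mm = np.array([12.6, 25.0, 31.9, 45.5, 49.6, 64.7])  # assumed test method

diff = guide_mm - ct_mm
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

print(f"bias = {bias:+.2f} mm, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
```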

  12. Open core control software for surgical robots.

    PubMed

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Nowadays, patients and doctors in the operating room are surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as a tumor removal procedure while checking the tumor location in neurosurgery). On the other hand, several surgical robots have been commercialized and are becoming common, but these surgical robots are not yet open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" will only be possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. Moreover, most academic software development for surgical robots is "home-made" within individual research institutions and not open to the public. Therefore, open-source control software for surgical robots can be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from actuators, sensors and various kinds of internal devices, and therefore cannot be used on different types of robots without modification. However, the structure of the Open Core Control software can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation during asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software. The virtual fixture is a well-known technique that acts as a "force guide", supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The extension of the virtual fixture is not itself part of the Open Core Control system; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information is transferred to the robot. In this manner, the surgical console generates a reflection force when the operator tries to move out of the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot with a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance. In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture are described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration tasks. The Open Core Control software is developed to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices. It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC62304, and for compliance it is important to develop a self-test environment. Therefore, a test environment is now under development to test various sources of interference in the operating room, such as the noise of an electric knife, while taking into account safety and test-environment regulations such as ISO13849 and IEC60508. The Open Core Control software is currently being developed in an open-source manner and is available on the Internet. Commonization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make contributions to this field.
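
    As a conceptual sketch of the virtual fixture described above (not the authors' implementation), the accessible area can be modelled as a simple sphere: when the tool tip leaves it, a restoring force proportional to the penetration is fed back to the surgical console. Geometry, stiffness, and positions are assumed illustration values.

```python
# Virtual fixture as a "force guide": zero force inside the pre-defined
# accessible region, and a restoring force toward the region when outside.
import numpy as np

FIXTURE_CENTER = np.array([0.0, 0.0, 0.05])   # accessible region centre (m), assumed
FIXTURE_RADIUS = 0.02                         # accessible region radius (m), assumed
STIFFNESS = 400.0                             # N/m, assumed virtual wall stiffness

def fixture_force(tip_position: np.ndarray) -> np.ndarray:
    """Reflection force pushing the tool tip back inside the accessible area."""
    offset = tip_position - FIXTURE_CENTER
    distance = np.linalg.norm(offset)
    penetration = distance - FIXTURE_RADIUS
    if penetration <= 0.0:
        return np.zeros(3)                    # inside the fixture: no guidance force
    return -STIFFNESS * penetration * offset / distance

print(fixture_force(np.array([0.0, 0.0, 0.055])))   # inside  -> [0, 0, 0]
print(fixture_force(np.array([0.0, 0.0, 0.080])))   # outside -> force toward centre
```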

  13. A Type of Low-Latency Data Gathering Method with Multi-Sink for Sensor Networks

    PubMed Central

    Sha, Chao; Qiu, Jian-mei; Li, Shu-yan; Qiang, Meng-ye; Wang, Ru-chuan

    2016-01-01

    To balance energy consumption and reduce latency in data transmission in Wireless Sensor Networks (WSNs), a type of low-latency data gathering method with multi-Sink (LDGM for short) is proposed in this paper. The network is divided into several virtual regions consisting of three or fewer data gathering units, and the leader of each region is selected according to its residual energy as well as its distance to all of the other nodes. Only the leaders in each region need to communicate with the mobile Sinks, which effectively reduces energy consumption and end-to-end delay. Moreover, with the help of the sleep scheduling and sensing radius adjustment strategies, redundancy in network coverage can also be effectively reduced. Simulation results show that LDGM is energy efficient in comparison with MST as well as MWST, and its time efficiency in data collection is higher than that of single-Sink-based data gathering methods. PMID:27338401
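
    A minimal sketch of the leader-selection rule, assuming a simple weighted score of residual energy and total distance to the other nodes in the virtual region; the weights and node data are illustrative, and the paper's exact criterion may differ.

```python
# Region-leader selection sketch: prefer the node with high residual energy
# and small total distance to the other nodes in its virtual region.
import math

nodes = [  # (node id, residual energy in J, (x, y) position in m) -- assumed
    ("n1", 4.2, (0.0, 0.0)),
    ("n2", 3.1, (5.0, 1.0)),
    ("n3", 4.0, (2.0, 2.0)),
]

def total_distance(idx: int) -> float:
    """Sum of distances from node idx to every other node in the region."""
    _, _, (x, y) = nodes[idx]
    return sum(math.hypot(x - ox, y - oy)
               for j, (_, _, (ox, oy)) in enumerate(nodes) if j != idx)

def leader_score(idx: int, w_energy: float = 1.0, w_dist: float = 0.2) -> float:
    """Higher residual energy and lower total distance give a higher score."""
    _, energy, _ = nodes[idx]
    return w_energy * energy - w_dist * total_distance(idx)

leader = max(range(len(nodes)), key=leader_score)
print("region leader:", nodes[leader][0])
```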

  14. Portable Multispectral Colorimeter for Metallic Ion Detection and Classification

    PubMed Central

    Jaimes, Ruth F. V. V.; Borysow, Walter; Gomes, Osmar F.; Salcedo, Walter J.

    2017-01-01

    This work proposes and develops a portable device system to detect and classify different metallic ions, aiming at its application in hydrological monitoring of systems such as rivers, lakes and groundwater. Considering the system features, a portable colorimetric system was developed using a multispectral optoelectronic sensor. All the technology for quantification and classification of metallic ions using optoelectronic multispectral sensors was fully integrated in embedded hardware using FPGA (Field Programmable Gate Array) technology and software based on virtual instrumentation (NI LabView®). The system draws on an indicative colorimeter using the chromogenic reagent 1-(2-pyridylazo)-2-naphthol (PAN). The results obtained with signal processing and pattern analysis using linear discriminant analysis allow excellent detection and classification of Pb(II), Cd(II), Zn(II), Cu(II), Fe(III) and Ni(II) ions, with almost the same level of performance as that obtained from ultraviolet and visible (UV-VIS) spectrophotometers of high spectral resolution. PMID:28788082

  15. Portable Multispectral Colorimeter for Metallic Ion Detection and Classification.

    PubMed

    Braga, Mauro S; Jaimes, Ruth F V V; Borysow, Walter; Gomes, Osmar F; Salcedo, Walter J

    2017-07-28

    This work proposes and develops a portable device system to detect and classify different metallic ions, aiming at its application in hydrological monitoring of systems such as rivers, lakes and groundwater. Considering the system features, a portable colorimetric system was developed using a multispectral optoelectronic sensor. All the technology for quantification and classification of metallic ions using optoelectronic multispectral sensors was fully integrated in embedded hardware using FPGA (Field Programmable Gate Array) technology and software based on virtual instrumentation (NI LabView®). The system draws on an indicative colorimeter using the chromogenic reagent 1-(2-pyridylazo)-2-naphthol (PAN). The results obtained with signal processing and pattern analysis using linear discriminant analysis allow excellent detection and classification of Pb(II), Cd(II), Zn(II), Cu(II), Fe(III) and Ni(II) ions, with almost the same level of performance as that obtained from ultraviolet and visible (UV-VIS) spectrophotometers of high spectral resolution.
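
    As a hedged sketch of the pattern-analysis step, the snippet below trains a linear discriminant analysis (LDA) classifier on multispectral channel responses and predicts which ion produced an unknown reading. The six-channel spectra are synthetic placeholders, not measured PAN-complex data.

```python
# LDA classification sketch for the six target ions from multispectral
# channel responses (synthetic training data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
ions = ["Pb(II)", "Cd(II)", "Zn(II)", "Cu(II)", "Fe(III)", "Ni(II)"]

# 30 synthetic training samples per ion: 6 spectral channels each.
X, y = [], []
for ion in ions:
    center = rng.uniform(0.2, 0.9, size=6)           # assumed mean response
    X.append(center + rng.normal(scale=0.03, size=(30, 6)))
    y += [ion] * 30
X = np.vstack(X)

clf = LinearDiscriminantAnalysis().fit(X, y)
unknown = X[5] + rng.normal(scale=0.03, size=6)       # a noisy Pb(II)-like sample
print("predicted ion:", clf.predict(unknown.reshape(1, -1))[0])
```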

  16. Multi-pose system for geometric measurement of large-scale assembled rotational parts

    NASA Astrophysics Data System (ADS)

    Deng, Bowen; Wang, Zhaoba; Jin, Yong; Chen, Youxing

    2017-05-01

    To achieve virtual assembly of large-scale assembled rotational parts based on in-field geometric data, we develop a multi-pose rotative arm measurement system with a gantry and a 2D laser sensor (RAMSGL) to measure and provide the geometry of these parts. We mount a 2D laser sensor onto the end of a six-jointed rotative arm to guarantee accuracy and efficiency, and combine the rotative arm with a gantry to measure pairs of assembled rotational parts. By establishing and using the D-H model of the system, the 2D laser data are converted into point clouds, from which the geometry is finally calculated. In addition, we design three experiments to evaluate the performance of the system. Experimental results show that the system's maximum length measuring deviation using gauge blocks is 35 µm, its maximum length measuring deviation using ball plates is 50 µm, its maximum single-point repeatability error is 25 µm, and its measurement scope spans radii from 0 mm to 500 mm.
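
    The sketch below illustrates the D-H step in generic form: a homogeneous transform is built per joint from Denavit-Hartenberg parameters, the transforms are chained to the sensor frame, and a 2D laser profile is mapped into the base frame to obtain point-cloud coordinates. The D-H table and the sample profile are placeholder values, not the calibrated RAMSGL model.

```python
# Denavit-Hartenberg chain sketch: map 2D laser points through the arm's
# joint transforms into the base frame to build a point cloud.
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Assumed D-H table (theta, d, a, alpha) for a few joints up to the sensor mount.
dh_table = [
    (np.deg2rad(30.0),  0.40, 0.05, np.deg2rad(-90.0)),
    (np.deg2rad(-45.0), 0.00, 0.35, 0.0),
    (np.deg2rad(10.0),  0.00, 0.30, np.deg2rad(90.0)),
]
T = np.eye(4)
for params in dh_table:
    T = T @ dh_transform(*params)

# A 2D laser profile lies in the sensor's x-z plane (y = 0), in metres (assumed).
profile = np.array([[x, 0.0, 0.02 * np.sin(10 * x), 1.0]
                    for x in np.linspace(-0.1, 0.1, 5)])
cloud = (T @ profile.T).T[:, :3]   # homogeneous transform into the base frame
print(cloud)
```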

  17. The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Zhang, Xin; Zhang, Tianhong

    2017-11-01

    A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using Kalman filter banks and reconstruct the signal using a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment was carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform was designed and verified on the semi-physical simulation test platform. The result shows that the controller can realize aero engine control safely and reliably, with little influence on controller performance in the event of a sensor fault.
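
    A toy, hedged illustration of the diagnosis-and-reconstruction idea (not the paper's engine model or filter bank design): one scalar Kalman filter per sensor tracks the expected output, a sensor is declared faulty when its normalized innovation exceeds a threshold, and its reading is then replaced by the model estimate. Dynamics, noise levels, and the injected bias fault are assumed.

```python
# Scalar Kalman-filter residual check: flag a sensor fault when the squared
# normalized innovation exceeds a threshold, and fall back to the model estimate.
import numpy as np
from typing import Optional

rng = np.random.default_rng(3)
n_steps, true_value = 50, 100.0
THRESHOLD = 9.0     # squared normalized innovation (~3-sigma), assumed

def run_sensor(fault_at: Optional[int]) -> None:
    x, p = true_value, 1.0          # filter state and variance
    q, r = 0.01, 0.25               # process / measurement noise (assumed)
    for k in range(n_steps):
        z = true_value + rng.normal(scale=0.5)
        if fault_at is not None and k >= fault_at:
            z += 8.0                # injected bias fault
        p += q                      # predict (constant model)
        s = p + r
        nis = (z - x) ** 2 / s      # squared normalized innovation
        if nis > THRESHOLD:
            estimate = x            # fault: reconstruct from the model estimate
            if k == fault_at:
                print(f"fault detected at step {k}, using reconstructed value {estimate:.1f}")
        else:
            k_gain = p / s
            x += k_gain * (z - x)
            p *= (1.0 - k_gain)

run_sensor(fault_at=None)   # healthy sensor: no fault message
run_sensor(fault_at=25)     # biased sensor: fault flagged at step 25
```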

  18. QuakeSim: a Web Service Environment for Productive Investigations with Earth Surface Sensor Data

    NASA Astrophysics Data System (ADS)

    Parker, J. W.; Donnellan, A.; Granat, R. A.; Lyzenga, G. A.; Glasscoe, M. T.; McLeod, D.; Al-Ghanmi, R.; Pierce, M.; Fox, G.; Grant Ludwig, L.; Rundle, J. B.

    2011-12-01

    The QuakeSim science gateway environment includes a visually rich portal interface, web service access to data and data processing operations, and the QuakeTables ontology-based database of fault models and sensor data. The integrated tools and services are designed to assist investigators by covering the entire earthquake cycle of strain accumulation and release. The Web interface now includes Drupal-based access to diverse and changing content, with a new ability to access data and data processing directly from the public page, as well as the traditional project management areas that require password access. The system is designed to make initial browsing of fault models and deformation data particularly engaging for new users. Popular data and data processing include GPS time series with data mining techniques to find anomalies in time and space, experimental forecasting methods based on catalogue seismicity, faulted deformation models (both half-space and finite element), and model-based inversion of sensor data. The fault models include the CGS and UCERF 2.0 faults of California and are easily augmented with self-consistent fault models from other regions. The QuakeTables deformation data include the comprehensive set of UAVSAR interferograms as well as a growing collection of satellite InSAR data. Fault interaction simulations are also being incorporated in the web environment based on Virtual California. A sample usage scenario is presented which follows an investigation of UAVSAR data from viewing as an overlay in Google Maps, to selection of an area of interest via a polygon tool, to fast extraction of the relevant correlation and phase information from large data files, to a model inversion of fault slip followed by calculation and display of a synthetic model interferogram.

  19. Developing Flexible Networked Lighting Control Systems

    Science.gov Websites

    Wireless protocols such as Bluetooth, ZigBee and others are increasingly used for building control purposes. Low-cost computation: bundling digital intelligence at the sensors and lights adds virtually no incremental cost. Research goals and objectives: this project, "Developing Flexible, Networked Lighting Control ...

  20. Distributed Pervasive Worlds: The Case of Exergames

    ERIC Educational Resources Information Center

    Laine, Teemu H.; Sedano, Carolina Islas

    2015-01-01

    Pervasive worlds are computing environments where a virtual world converges with the physical world through context-aware technologies such as sensors. In pervasive worlds, technology is distributed among entities that may be distributed geographically. We explore the concept, possibilities, and challenges of distributed pervasive worlds in a case…

  1. A new chapter in environmental sensing: The Open-Source Published Environmental Sensing (OPENS) laboratory

    NASA Astrophysics Data System (ADS)

    Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.

    2015-12-01

    The confluence of 3-D printing, low-cost solid-state sensors, low-cost low-power digital controllers (e.g., Arduinos), and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a maker lab space (Corvallis, Oregon, USA; reservations for space and equipment use are free on request) where people may collaborate in person or virtually via an on-line forum for the publication and discussion of environmental sensing technology. The physical lab houses a test-bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein students and scientists worldwide can publish peer-reviewed (with DOI) novel and evolutionary advancements in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems. The physical lab will include all of the machines required to produce these sensing systems. These tools can be accessed in person or virtually, creating a truly global venue for advancement in monitoring Earth's environment and agricultural systems. In this talk we present the design and publication process using the design and data of the OPENS-Permeameter as an example. The publication includes 3-D printing code, Arduino (or other control/logging platform) operational code, sample data sets, and a full discussion of the design set in the scientific context of previous related devices. Editors for the peer-review process are currently being sought; contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.

  2. Natural locomotion based on a reduced set of inertial sensors: Decoupling body and head directions indoors

    PubMed Central

    Diaz-Estrella, Antonio; Reyes-Lecuona, Arcadio; Langley, Alyson; Brown, Michael; Sharples, Sarah

    2018-01-01

    Inertial sensors offer the potential for integration into wireless virtual reality systems that allow the users to walk freely through virtual environments. However, owing to drift errors, inertial sensors cannot accurately estimate head and body orientations in the long run, and when walking indoors, this error cannot be corrected by magnetometers, due to the magnetic field distortion created by ferromagnetic materials present in buildings. This paper proposes a technique, called EHBD (Equalization of Head and Body Directions), to address this problem using two head- and shoulder-located magnetometers. Due to their proximity, their distortions are assumed to be similar and the magnetometer measurements are used to detect when the user is looking straight forward. Then, the system corrects the discrepancies between the estimated directions of the head and the shoulder, which are provided by gyroscopes and consequently are affected by drift errors. An experiment is conducted to evaluate the performance of this technique in two tasks (navigation and navigation plus exploration) and using two different locomotion techniques: (1) gaze-directed mode (GD) in which the walking direction is forced to be the same as the head direction, and (2) decoupled direction mode (DD) in which the walking direction can be different from the viewing direction. The obtained results show that both locomotion modes show similar matching of the target path during the navigation task, while DD’s path matches the target path more closely than GD in the navigation plus exploration task. These results validate the EHBD technique especially when allowing different walking and viewing directions in the navigation plus exploration tasks, as expected. While the proposed method does not reach the accuracy of optical tracking (ideal case), it is an acceptable and satisfactory solution for users and is much more compact, portable and economical. PMID:29621298
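
    A minimal sketch of the equalization idea, in Python: when the head- and shoulder-mounted magnetometers report nearly equal headings, the user is assumed to be looking straight ahead, and the accumulated gyro-drift difference between head and body yaw is zeroed. The threshold, function name, and data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ehbd_correct(head_yaw_gyro, body_yaw_gyro, head_mag, shoulder_mag,
                 look_ahead_thresh_deg=5.0):
    """Sketch of Equalization of Head and Body Directions (EHBD).

    head_yaw_gyro, body_yaw_gyro : yaw angles (deg) integrated from gyros,
                                   affected by independent drift.
    head_mag, shoulder_mag       : headings (deg) from the two magnetometers,
                                   similarly distorted indoors because the
                                   sensors are close to each other.
    Returns drift-corrected head and body yaw.
    """
    # Both magnetometers see roughly the same local distortion, so their
    # *difference* is still a usable estimate of head-vs-body relative yaw.
    relative_mag = (head_mag - shoulder_mag + 180.0) % 360.0 - 180.0

    if abs(relative_mag) < look_ahead_thresh_deg:
        # User is looking straight ahead: head and body directions should
        # coincide, so split the accumulated drift difference between them.
        mean_yaw = 0.5 * (head_yaw_gyro + body_yaw_gyro)
        return mean_yaw, mean_yaw
    return head_yaw_gyro, body_yaw_gyro

# Example: 12 degrees of accumulated drift difference is removed the moment
# the magnetometers agree that the user faces forward.
print(ehbd_correct(95.0, 83.0, 41.0, 40.0))   # -> (89.0, 89.0)
```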

  3. Sensor-Based Interactive Balance Training with Visual Joint Movement Feedback for Improving Postural Stability in Diabetics with Peripheral Neuropathy: A Randomized Controlled Trial.

    PubMed

    Grewal, Gurtej Singh; Schwenk, Michael; Lee-Eng, Jacqueline; Parvaneh, Saman; Bharara, Manish; Menzies, Robert A; Talal, Talal K; Armstrong, David G; Najafi, Bijan

    2015-01-01

    Individuals with diabetic peripheral neuropathy (DPN) have deficits in sensory and motor skills leading to inadequate proprioceptive feedback, impaired postural balance and higher fall risk. This study investigated the effect of sensor-based interactive balance training on postural stability and daily physical activity in older adults with diabetes. Thirty-nine older adults with DPN were enrolled (age 63.7 ± 8.2 years, BMI 30.6 ± 6, 54% females) and randomized to either an intervention (IG) or a control (CG) group. The IG received sensor-based interactive exercise training tailored for people with diabetes (twice a week for 4 weeks). The exercises focused on shifting weight and crossing virtual obstacles. Body-worn sensors were implemented to acquire kinematic data and provide real-time joint visual feedback during the training. Outcome measurements included changes in center of mass (CoM) sway, ankle and hip joint sway measured during a balance test while the eyes were open and closed at baseline and after the intervention. Daily physical activities were also measured during a 48-hour period at baseline and at follow-up. Analysis of covariance was performed for the post-training outcome comparison. Compared with the CG, the patients in the IG showed a significantly reduced CoM sway (58.31%; p = 0.009), ankle sway (62.7%; p = 0.008) and hip joint sway (72.4%; p = 0.017) during the balance test with open eyes. The ankle sway was also significantly reduced in the IG group (58.8%; p = 0.037) during measurements while the eyes were closed. The number of steps walked showed a substantial but nonsignificant increase (+27.68%; p = 0.064) in the IG following training. The results of this randomized controlled trial demonstrate that people with DPN can significantly improve their postural balance with diabetes-specific, tailored, sensor-based exercise training. The results promote the use of wearable technology in exercise training; however, future studies comparing this technology with commercially available systems are required to evaluate the benefit of interactive visual joint movement feedback. © 2015 S. Karger AG, Basel.

  4. Opportunities and challenges in industrial plantation mapping in big data era

    NASA Astrophysics Data System (ADS)

    Dong, J.; Xiao, X.; Qin, Y.; Chen, B.; Wang, J.; Kou, W.; Zhai, D.

    2017-12-01

    With the increasing demand for timber, rubber, and palm oil on the world market, industrial plantations have expanded dramatically, especially in Southeast Asia, which has affected ecosystem services and human wellbeing. However, existing efforts on plantation mapping are still limited, which has hindered our understanding of the magnitude of plantation expansion and its potential environmental effects. Here we present a literature review of existing efforts on plantation mapping based on one or multiple remote sensing sources, covering rubber, oil palm, and eucalyptus plantations. The biophysical features and spectral characteristics of plantations are introduced first, followed by a comparison of existing algorithms for different plantation types. Based on that, we propose potential improvements in large-scale plantation mapping based on the virtual constellation of multiple sensors, citizen science tools, and cloud computing technology. Based on the literature review, we discuss a series of issues for future large-scale operational mapping.

  5. Feasibility of a Customized, In-Home, Game-Based Stroke Exercise Program Using the Microsoft Kinect® Sensor.

    PubMed

    Proffitt, Rachel; Lange, Belinda

    2015-01-01

    The objective of this study was to determine the feasibility of a 6-week, game-based, in-home telerehabilitation exercise program using the Microsoft Kinect® for individuals with chronic stroke. Four participants with chronic stroke completed the intervention based on games designed with the customized Mystic Isle software. The games were tailored to each participant's specific rehabilitation needs to facilitate the attainment of individualized goals determined through the Canadian Occupational Performance Measure. Likert scale questionnaires assessed the feasibility and utility of the game-based intervention. Supplementary clinical outcome data were collected. All participants played the games with moderately high enjoyment. Participant feedback helped identify barriers to use (especially, limited free time) and possible improvements. An in-home, customized, virtual reality game intervention to provide rehabilitative exercises for persons with chronic stroke is practicable. However, future studies are necessary to determine the intervention's impact on participant function, activity, and involvement.

  6. Interreality in practice: bridging virtual and real worlds in the treatment of posttraumatic stress disorders.

    PubMed

    Riva, Giuseppe; Raspelli, Simona; Algeri, Davide; Pallavicini, Federica; Gorini, Alessandra; Wiederhold, Brenda K; Gaggioli, Andrea

    2010-02-01

    The use of new technologies, particularly virtual reality, is not new in the treatment of posttraumatic stress disorders (PTSD): VR is used to facilitate the activation of the traumatic event during exposure therapy. However, during the therapy, VR is a new and distinct realm, separate from the emotions and behaviors experienced by the patient in the real world: the behavior of the patient in VR has no direct effects on the real-life experience; the emotions and problems experienced by the patient in the real world are not directly addressed in the VR exposure. In this article, we suggest that the use of a new technological paradigm, Interreality, may improve the clinical outcome of PTSD. The main feature of Interreality is a twofold link between the virtual and real worlds: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through 3D shared virtual worlds; biosensors and activity sensors (from the real to the virtual world); and personal digital assistants and/or mobile phones (from the virtual world to the real one). We describe different technologies that are involved in the Interreality vision and its clinical rationale. To illustrate the concept of Interreality in practice, a clinical scenario is also presented and discussed: Rosa, a 55-year-old nurse, involved in a major car accident.

  7. Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems

    NASA Astrophysics Data System (ADS)

    Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.

    2014-12-01

    The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system exhibits the following characteristics: it is flexible, efficient, and robust. The system was developed and tested in three science applications: 1) a fresh water limnology deployment in Wisconsin, 2) a near coastal marine science deployment at the UCSD Scripps Pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services are the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.

  8. Virtual Place Value

    ERIC Educational Resources Information Center

    Burris, Justin T.

    2013-01-01

    Technology permeates every aspect of daily life, from the sensors that control the traffic signals to the cameras that allow real-time video chats with family around the world. At times, technology may make life easier, faster, and more productive. However, does technology do the same in schools and classrooms? Will the benefits of technology…

  9. MATREX: A Unifying Modeling and Simulation Architecture for Live-Virtual-Constructive Applications

    DTIC Science & Technology

    2007-05-23

    [Briefing-slide excerpt: defense acquisition life-cycle milestones (Pre-Systems Acquisition, Critical Design Review, LRIP/IOT&E, FRP Decision Review, FOC, Operations & Support, Sustainment) and an acronym glossary: CMS2 – Comprehensive Munitions & Sensor Server; CSAT – C4ISR Static Analysis Tool; C4ISR – Command & Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance.]

  10. Performance Analysis of Inter-Domain Handoff Scheme Based on Virtual Layer in PMIPv6 Networks for IP-Based Internet of Things

    PubMed Central

    Choi, Jae-Young; Jeong, Jongpil; Chung, Tai-Myoung

    2017-01-01

    Recently, the Internet of Things (IoT) has been introduced into medical services to provide global connection among patients, sensors, and all nearby things. The principal purpose of this global connection is to provide context awareness, bringing convenience to a patient's life and implementing clinical processes more effectively. In health care, a patient's biosignals have to be monitored continuously while the patient moves inside and outside the hospital. Also, to monitor the accurate location and biosignals of the patient, appropriate mobility management is necessary to maintain the connection between the patient and the hospital network. In this paper, a binding update scheme for PMIPv6 is proposed to reduce the total cost; it reduces signaling traffic during location updates by introducing a Virtual LMA (VLMA) on top of the original Local Mobility Anchor (LMA) domain. If a Mobile Node (MN) moves to a Mobile Access Gateway (MAG) located at the boundary of an adjacent LMA domain, the MN switches into a virtual mode, and its movement is treated as part of the VLMA domain. In the proposed scheme, MAGs eliminate global binding updates for MNs between LMA domains and significantly reduce packet loss and latency by eliminating the handoff between LMAs. In conclusion, the performance analysis results show that the proposed scheme improves performance significantly versus PMIPv6 and HMIPv6 in terms of the binding update rate per user and average handoff latency. PMID:28129355
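
    The sketch below illustrates, with purely illustrative unit costs in Python, the kind of signaling-cost comparison such a performance analysis performs: in the baseline, every inter-domain move triggers a global binding update, while under the VLMA scheme the boundary MAGs handle the move locally. The cost values and the simple counting model are assumptions for illustration, not the paper's analytical model.

```python
# Illustrative signaling-cost comparison (assumed unit costs, not the
# paper's analytical model): a mobile node performs a mix of intra- and
# inter-LMA-domain handoffs.
C_LOCAL_BU  = 1.0   # assumed cost of a local (MAG <-> LMA) binding update
C_GLOBAL_BU = 4.0   # assumed cost of a global inter-LMA binding update

def total_cost(intra_moves, inter_moves, use_vlma):
    cost = intra_moves * C_LOCAL_BU
    if use_vlma:
        # Boundary MAGs join the virtual LMA domain, so inter-domain moves
        # are handled like local ones and global updates are eliminated.
        cost += inter_moves * C_LOCAL_BU
    else:
        cost += inter_moves * C_GLOBAL_BU
    return cost

print("PMIPv6 baseline:", total_cost(20, 5, use_vlma=False))  # 40.0
print("VLMA scheme:    ", total_cost(20, 5, use_vlma=True))   # 25.0
```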

  11. Applying Web-Based Tools for Research, Engineering, and Operations

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2011-01-01

    Personnel in the NASA Glenn Research Center Network and Architectures branch have performed a variety of research related to space-based sensor webs, network-centric operations, security, and delay-tolerant networking (DTN). Quality documentation and communications, real-time monitoring, and information dissemination are critical in order to perform quality research while maintaining low cost and utilizing multiple remote systems. This has been accomplished using a variety of Internet technologies, often operating simultaneously. This paper describes important features of various technologies and provides a number of real-world examples of how combining Internet technologies can enable a virtual team to act efficiently as one unit to perform advanced research in operational systems. Finally, real and potential abuses of power and manipulation of information and information access are addressed.

  12. An Interactive Logistics Centre Information Integration System Using Virtual Reality

    NASA Astrophysics Data System (ADS)

    Hong, S.; Mao, B.

    2018-04-01

    The logistics industry plays a very important role in the operation of modern cities. Meanwhile, the development of the logistics industry has given rise to various problems that urgently need to be solved, such as the safety of logistics products. This paper combines the study of logistics industry traceability and logistics centre environmental safety supervision with virtual reality technology, and creates an interactive logistics centre information integration system. The proposed system utilizes the immersive character of virtual reality to simulate the real logistics centre scene in detail, which allows operations staff to conduct safety supervision training at any time without regional restrictions. On the one hand, large volumes of sensor data can be used to simulate a variety of disaster and emergency situations. On the other hand, personnel operation data can be collected and analysed to identify improper operations, which greatly improves training efficiency.

  13. A virtual pointer to support the adoption of professional vision in laparoscopic training.

    PubMed

    Feng, Yuanyuan; McGowan, Hannah; Semsar, Azin; Zahiri, Hamid R; George, Ivan M; Turner, Timothy; Park, Adrian; Kleinsmith, Andrea; Mentis, Helena M

    2018-05-23

    To assess a virtual pointer in supporting surgical trainees' development of professional vision in laparoscopic surgery. We developed a virtual pointing and telestration system utilizing the Microsoft Kinect movement sensor as an overlay for any imaging system. Training with the application was compared to a standard condition, i.e., verbal instruction with unmediated gestures, in a laparoscopic training environment. Seven trainees performed four simulated laparoscopic tasks guided by an experienced surgeon as the trainer. Trainee performance was subjectively assessed by the trainee and trainer, and objectively measured by number of errors, time to task completion, and economy of movement. No significant differences in errors and time to task completion were obtained between the virtual pointer and standard conditions. Economy of movement in the non-dominant hand was significantly improved when using the virtual pointer ([Formula: see text]). The trainers perceived a significant improvement in trainee performance in the virtual pointer condition ([Formula: see text]), while the trainees perceived no difference. The trainers' perception of economy of movement was similar between the two conditions in the initial three runs and became significantly improved in the virtual pointer condition in the fourth run ([Formula: see text]). Results show that the virtual pointer system improves the trainer's perception of the trainee's performance and this is reflected in the objective performance measures in the third and fourth training runs. The benefit of a virtual pointing and telestration system may be perceived by the trainers early on in training, but this is not evident in objective trainee performance until further mastery has been attained. In addition, the performance improvement of economy of motion specifically shows that the virtual pointer improves the adoption of professional vision: an improved ability to see and use laparoscopic video results in more direct instrument movement.

  14. Evaluation of decadal predictions using a satellite simulator for the Special Sensor Microwave Imager (SSM/I)

    NASA Astrophysics Data System (ADS)

    Spangehl, Thomas; Schröder, Marc; Bodas-Salcedo, Alejandro; Glowienka-Hense, Rita; Hense, Andreas; Hollmann, Rainer; Dietzsch, Felix

    2017-04-01

    Decadal climate predictions are commonly evaluated focusing on geophysical parameters such as temperature, precipitation, or wind speed using observational datasets and reanalyses. Alternatively, satellite-based radiance measurements can be used, combined with satellite simulator techniques that deduce virtual satellite observations from the numerical model simulations. The latter approach enables an evaluation in the instrument's parameter space and has the potential to reduce uncertainties on the reference side. Here we present evaluation methods focusing on forward operator techniques for the Special Sensor Microwave Imager (SSM/I). The simulator is developed as an integrated part of the CFMIP Observation Simulator Package (COSP). On the observational side, the SSM/I and SSMIS Fundamental Climate Data Record (FCDR) released by CM SAF (http://dx.doi.org/10.5676/EUM_SAF_CM/FCDR_MWI/V002) is used, which provides brightness temperatures for different channels and covers the period from 1987 to 2013. The simulator is applied to hindcast simulations performed within the MiKlip project (http://fona-miklip.de), which is funded by the BMBF (Federal Ministry of Education and Research in Germany). Probabilistic evaluation results are shown based on a subset of the hindcast simulations covering the observational period.

  15. Use of Occupancy Sensors in LED Parking Lot and Garage Applications: Early Experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kinzey, Bruce R.; Myer, Michael; Royer, Michael P.

    2012-11-07

    Occupancy sensor systems are gaining traction as an effective technological approach to reducing energy use in exterior commercial lighting applications. Done correctly, occupancy sensors can substantially enhance the savings from an already efficient lighting system. However, this technology is confronted by several potential challenges and pitfalls that can leave a significant amount of the prospective savings on the table. This report describes anecdotal experiences from field installations of occupancy sensor controlled light-emitting diode (LED) lighting at two parking structures and two parking lots. The relative levels of success at these installations reflect a marked range of potential outcomes: from an additional 76% in energy savings to virtually no additional savings. Several issues that influenced savings were encountered in these early-stage installations and are detailed in the report. Ultimately, care must be taken in the design, selection, and commissioning of a sensor-controlled lighting installation, or else the only guaranteed result may be its cost.

  16. Occupant detection using support vector machines with a polynomial kernel function

    NASA Astrophysics Data System (ADS)

    Destefanis, Eduardo A.; Kienzle, Eberhard; Canali, Luis R.

    2000-10-01

    Air bags deployed in the presence of bad passenger and baby-seat positions can injure or kill the occupants when the device inflates in an accident. A proposed solution is the use of range sensors to detect risky passenger and baby-seat positions. Such sensors allow the airbag inflation to be controlled. This work is concerned with the application of different classification schemes to a real-world problem and the optimization of a sensor as a function of the classification performance. The sensor is constructed using a new technology called the Photo-Mixer-Device (PMD). A systematic analysis of the occupant detection problem was made using real and virtual environments. The challenge is to find the best sensor geometry and to adapt a classification scheme under the current technological constraints. Detection of the passenger's head position is also desirable, and a couple of classifiers have been combined in a simple configuration to reach this goal. Experiences and results are described.
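
    As a concrete illustration of the classification scheme named in the title, the sketch below trains a support vector machine with a polynomial kernel on synthetic range-sensor feature vectors (a coarse depth map flattened into a vector) labeled as adult versus risky baby-seat position. The feature layout, class structure, and data are invented for illustration; the study itself used real and virtual PMD range images.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for coarse PMD range images (8x8 depth values per seat
# occupancy snapshot); class 0 = properly seated adult, class 1 = risky
# baby-seat position. Real data would come from the range sensor.
rng = np.random.default_rng(1)
n = 400
adult = 0.6 + 0.05 * rng.standard_normal((n // 2, 64))     # closer, taller profile
babyseat = 0.9 + 0.05 * rng.standard_normal((n // 2, 64))  # farther, flatter profile
X = np.vstack([adult, babyseat])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Polynomial-kernel SVM, as named in the title of the paper.
clf = SVC(kernel="poly", degree=3, C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```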

  17. Detection and identification of human targets in radar data

    NASA Astrophysics Data System (ADS)

    Gürbüz, Sevgi Z.; Melvin, William L.; Williams, Douglas B.

    2007-04-01

    Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection. Many situations, especially military applications, prevent the placement of video cameras or the implantation of seismic sensors in the area being observed, because of security or other threats. However, radar can operate far away from potential targets, and functions during daytime as well as nighttime, in virtually all weather conditions. In this paper, we examine the problem of human target detection and identification using single-channel, airborne, synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by analyzing the spectrogram of each potential target. Human spectrograms are unique, and can be used not just to identify targets as human, but also to determine features about the human target being observed, such as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation environment including ground clutter and human and non-human targets is developed for the testing of spectrogram-based detection and identification algorithms. Simulations show that spectrograms have some ability to detect and identify human targets in low noise. An example gender discrimination system correctly detected 83.97% of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high-clutter environments are discussed. The SNR loss inherent to spectrogram-based methods is quantified. An alternate detection and identification method that will be used as a basis for future work is proposed.
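
    A minimal sketch of the spectrogram analysis step, in Python with SciPy: a toy micro-Doppler return (a torso Doppler line plus a periodically modulated limb-like component) is converted to a spectrogram, and a simple statistic of the Doppler spread over time could then feed a human/non-human decision. The signal model, constants, and feature are illustrative assumptions, not the paper's 12-point kinematic model or its detector.

```python
import numpy as np
from scipy.signal import spectrogram

# Toy slow-time radar return: a body Doppler line plus a periodically
# frequency-modulated component imitating limb micro-Doppler (illustrative,
# not the paper's 12-point kinematic human model).
fs = 1000.0                      # slow-time sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
body = np.exp(1j * 2 * np.pi * 60.0 * t)                       # torso Doppler
limbs = 0.5 * np.exp(1j * 2 * np.pi * (60.0 * t + 15.0 * np.sin(2 * np.pi * 2.0 * t)))
x = body + limbs + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96,
                         return_onesided=False)

# A crude feature: how much the mean absolute Doppler varies over time.
power = np.abs(Sxx)
spread = (power * np.abs(f)[:, None]).sum(axis=0) / power.sum(axis=0)
print("Doppler-spread variation:", spread.std())
# A human-like target (periodic limb motion) gives a larger variation than a
# rigid slow-moving object; thresholding this statistic is one simple detector.
```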

  18. Integration of stereotactic ultrasonic data into an interactive image-guided neurosurgical system

    NASA Astrophysics Data System (ADS)

    Shima, Daniel W.; Galloway, Robert L., Jr.

    1998-06-01

    Stereotactic ultrasound can be incorporated into an interactive, image-guided neurosurgical system by using an optical position sensor to define the location of an intraoperative scanner in physical space. A C program has been developed that communicates with the Optotrak™ system developed by Northern Digital Inc. to optically track the three-dimensional position and orientation of a fan-shaped area (i.e., a virtual B-mode ultrasound fan beam) defined with respect to a hand-held probe. Volumes of CT and MR head scans from the same patient are registered to a location in physical space using a point-based technique. The coordinates of the virtual fan beam in physical space are continuously calculated and updated on the fly. During each program loop, the CT and MR data volumes are reformatted along the same plane and displayed as two fan-shaped images that correspond to the current physical-space location of the virtual fan beam. When the reformatted preoperative tomographic images are eventually paired with a real-time intraoperative ultrasound image, a neurosurgeon will be able to use the unique information of each imaging modality (e.g., the high resolution and tissue contrast of CT and MR and the real-time functionality of ultrasound) in a complementary manner to identify structures in the brain more easily and to guide surgical procedures more effectively.
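
    The core geometric step, reformatting the CT/MR volume along the tracked fan plane, amounts to mapping points defined in the probe's coordinate frame into physical (and then image) space with a rigid transform. The short Python sketch below shows that mapping for a fan-shaped grid of sample points; the probe pose, fan geometry, and function names are illustrative assumptions rather than the system's actual calibration.

```python
import numpy as np

def fan_points_probe_frame(radius=0.08, angle_deg=60.0, n_r=50, n_a=64):
    """Sample points of a virtual B-mode fan in the probe coordinate frame.
    The fan apex is at the probe origin and opens along the +z axis."""
    r = np.linspace(0.0, radius, n_r)
    a = np.deg2rad(np.linspace(-angle_deg / 2, angle_deg / 2, n_a))
    rr, aa = np.meshgrid(r, a, indexing="ij")
    pts = np.stack([rr * np.sin(aa),              # x (lateral)
                    np.zeros_like(rr),            # y (elevation, thin fan)
                    rr * np.cos(aa)], axis=-1)    # z (depth)
    return pts.reshape(-1, 3)

def probe_to_physical(points_probe, R_probe, t_probe):
    """Rigid transform of fan points into physical (tracker) space.
    R_probe (3x3) and t_probe (3,) would come from the optical tracker."""
    return points_probe @ R_probe.T + t_probe

# Illustrative pose: probe rotated 30 degrees about x, apex at (0.1, 0.2, 0.0) m.
c, s = np.cos(np.deg2rad(30)), np.sin(np.deg2rad(30))
R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
t = np.array([0.10, 0.20, 0.00])

fan_phys = probe_to_physical(fan_points_probe_frame(), R, t)
print(fan_phys.shape)   # (3200, 3): sample locations at which the CT/MR volumes
                        # registered to physical space would be interpolated
```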

  19. Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.

    PubMed

    Stępień, Grzegorz

    2018-03-17

    The following article presents a new isometric transformation algorithm based on the transformation in a newly normed Hilbert-type space. The presented method is based on so-called virtual translations, already known in advance, of two relative oblique orthogonal coordinate systems (the interior and exterior orientation of the sensors) to a common point known in both systems. Each of the systems is translated along its axis (the systems have common origins), and at the same time the angular relative orientation of both coordinate systems is constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert-type space. As such, the displacement of the two relative oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for the test data set and measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10⁻⁶ of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert-type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and enables, by reducing to zero the translation of two relative oblique orthogonal coordinate systems, direct calculation of the exterior orientation of the sensors.
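
    For readers who want a concrete point of reference, the sketch below implements the classical centroid-plus-SVD (Kabsch) estimation of an isometric transformation between two point sets in Python. It is not Stępień's MCIT algorithm, which instead uses virtual translations in a Hilbert-type normed space to obtain the rotation directly, but it illustrates the role a centroid plays in decoupling translation from rotation.

```python
import numpy as np

def rigid_transform_kabsch(P, Q):
    """Classical centroid + SVD (Kabsch) fit of Q ~ R @ P + t.
    Shown only as a familiar baseline; it is NOT the MCIT method of the paper.
    P, Q: (N, 3) corresponding points in the two coordinate systems."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cP).T @ (Q - cQ)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation
    t = cQ - R @ cP
    return R, t

# Synthetic check with a large rotation (the regime MCIT also targets).
rng = np.random.default_rng(3)
P = rng.random((20, 3))
ang = np.deg2rad(140.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -2.0, 1.0])
Q = P @ R_true.T + t_true

R_est, t_est = rigid_transform_kabsch(P, Q)
print("rotation error:", np.abs(R_est - R_true).max())
print("translation error:", np.abs(t_est - t_true).max())
```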

  20. Providing haptic feedback in robot-assisted minimally invasive surgery: a direct optical force-sensing solution for haptic rendering of deformable bodies.

    PubMed

    Ehrampoosh, Shervin; Dave, Mohit; Kia, Michael A; Rablau, Corneliu; Zadeh, Mehrdad H

    2013-01-01

    This paper presents an enhanced haptic-enabled master-slave teleoperation system which can be used to provide force feedback to surgeons in minimally invasive surgery (MIS). One of the research goals was to develop a combined-control architecture framework that included both direct force reflection (DFR) and position-error-based (PEB) control strategies. To achieve this goal, it was essential to measure accurately the direct contact forces between deformable bodies and a robotic tool tip. To measure the forces at a surgical tool tip and enhance the performance of the teleoperation system, an optical force sensor was designed, prototyped, and added to a robot manipulator. The enhanced teleoperation architecture was formulated by developing mathematical models for the optical force sensor, the extended slave robot manipulator, and the combined-control strategy. Human factor studies were also conducted to (a) examine experimentally the performance of the enhanced teleoperation system with the optical force sensor, and (b) study human haptic perception during the identification of remote object deformability. The first experiment was carried out to discriminate deformability of objects when human subjects were in direct contact with deformable objects by means of a laparoscopic tool. The control parameters were then tuned based on the results of this experiment using a gain-scheduling method. The second experiment was conducted to study the effectiveness of the force feedback provided through the enhanced teleoperation system. The results show that the force feedback increased the ability of subjects to correctly identify materials of different deformable types. In addition, the virtual force feedback provided by the teleoperation system comes close to the real force feedback experienced in direct MIS. The experimental results provide design guidelines for choosing and validating the control architecture and the optical force sensor.
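
    The combined-control idea described above can be summarized as blending two force cues on the master side: the directly reflected slave-side contact force and a virtual force proportional to the master-slave position error. The Python sketch below is a one-dimensional illustration of that blend; the gains and the example numbers are assumed values, not the tuned parameters from the human-factor study.

```python
# One-dimensional sketch of a combined DFR + PEB teleoperation force command.
# Gains and example values are illustrative assumptions.
K_F = 0.8       # direct force reflection gain (scales measured contact force)
K_P = 200.0     # position-error-based stiffness (N/m)
B_P = 5.0       # position-error-based damping (N·s/m)

def master_force(f_slave_measured, x_master, x_slave, v_master, v_slave):
    """Force fed back to the operator's haptic device."""
    dfr = K_F * f_slave_measured                       # direct force reflection
    peb = K_P * (x_master - x_slave) + B_P * (v_master - v_slave)
    return dfr + peb

# Example: the slave tool lags 2 mm behind the master and the optical force
# sensor reports 1.5 N of tissue contact force.
print(master_force(f_slave_measured=1.5,
                   x_master=0.032, x_slave=0.030,
                   v_master=0.0, v_slave=0.0))          # -> 1.6 N
```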

  1. NASA Tech Briefs, January 2003

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Topics covered include: Optoelectronic Tool Adds Scale Marks to Photographic Images; Compact Interconnection Networks Based on Quantum Dots; Laterally Coupled Quantum-Dot Distributed-Feedback Lasers; Bit-Serial Adder Based on Quantum Dots; Stabilized Fiber-Optic Distribution of Reference Frequency; Delay/Doppler-Mapping GPS-Reflection Remote-Sensing System; Ladar System Identifies Obstacles Partly Hidden by Grass; Survivable Failure Data Recorders for Spacecraft; Fiber-Optic Ammonia Sensors; Silicon Membrane Mirrors with Electrostatic Shape Actuators; Nanoscale Hot-Wire Probes for Boundary-Layer Flows; Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing; Efficient Coupling of Lasers to Telescopes with Obscuration; Aligning Three Off-Axis Mirrors with Help of a DOE; Calibrating Laser Gas Measurements by Use of Natural CO2; Laser Ranging Simulation Program; Micro-Ball-Lens Optical Switch Driven by SMA Actuator; Evaluation of Charge Storage and Decay in Spacecraft Insulators; Alkaline Capacitors Based on Nitride Nanoparticles; Low-EC-Content Electrolytes for Low-Temperature Li-Ion Cells; Software for a GPS-Reflection Remote-Sensing System; Software for Building Models of 3D Objects via the Internet; "Virtual Cockpit Window" for a Windowless Aerospacecraft; CLARAty Functional-Layer Software; Java Library for Input and Output of Image Data and Metadata; Software for Estimating Costs of Testing Rocket Engines; Energy-Absorbing, Lightweight Wheels; Viscoelastic Vibration Dampers for Turbomachine Blades; Soft Landing of Spacecraft on Energy-Absorbing Self-Deployable Cushions; Pneumatically Actuated Miniature Peristaltic Vacuum Pumps; Miniature Gas-Turbine Power Generator; Pressure-Sensor Assembly Technique; Wafer-Level Membrane-Transfer Process for Fabricating MEMS; A Reactive-Ion Etch for Patterning Piezoelectric Thin Film; Wavelet-Based Real-Time Diagnosis of Complex Systems; Quantum Search in Hilbert Space; Analytic Method for Computing Instrument Pointing Jitter; and Semiselective Optoelectronic Sensors for Monitoring Microbes.

  2. Real-time Data Access to First Responders: A VORB application

    NASA Astrophysics Data System (ADS)

    Lu, S.; Kim, J. B.; Bryant, P.; Foley, S.; Vernon, F.; Rajasekar, A.; Meier, S.

    2006-12-01

    Getting information to first responders is not an easy task. The sensors that provide the information are diverse in format and come from many disciplines. They are also distributed by location, transmit data at different frequencies, and are managed and owned by autonomous administrative entities. Pulling together such data in real time requires a very robust sensor network with reliable data transport and buffering capabilities. Moreover, the system should be extensible and scalable in numbers and sensor types. ROADNet is a real-time sensor network project at UCSD gathering diverse environmental data in real time or near-real time. VORB (Virtual Object Ring Buffer) is the middleware used in ROADNet, offering simple, uniform, and scalable real-time data management for discovering (through metadata), accessing, and archiving real-time data and data streams. A recent development in VORB, a web API, offers quick and simple real-time data integration with web applications. In this poster, we discuss one application developed as part of ROADNet. SMER (Santa Margarita Ecological Reserve) is located in interior Southern California, a region prone to catastrophic wildfires each summer and fall. To provide data during emergencies, we have applied the VORB framework to develop a web-based application for providing access to diverse sensor data including weather data, heat sensor information, and images from cameras. Wildfire fighters have access to real-time data about weather and heat conditions in the area and can view pictures taken from cameras at multiple points in the Reserve to pinpoint problem areas. Moreover, they can browse archived images and sensor data from earlier times to provide a comparison framework. To show the scalability of the system, we have expanded the sensor network under consideration to other areas in Southern California, including sensors accessible by the Los Angeles County Fire Department (LACOFD) and those available through the High Performance Wireless Research and Education Network (HPWREN). The poster will discuss the system architecture and components, the types of sensors being used, and usage scenarios. The system is currently operational through the SMER website.

  3. Toward autonomous avian-inspired grasping for micro aerial vehicles.

    PubMed

    Thomas, Justin; Loianno, Giuseppe; Polin, Joseph; Sreenath, Koushil; Kumar, Vijay

    2014-06-01

    Micro aerial vehicles, particularly quadrotors, have been used in a wide range of applications. However, the literature on aerial manipulation and grasping is limited and the work is based on quasi-static models. In this paper, we draw inspiration from agile, fast-moving birds such as raptors, that are able to capture moving prey on the ground or in water, and develop similar capabilities for quadrotors. We address dynamic grasping, an approach to prehensile grasping in which the dynamics of the robot and its gripper are significant and must be explicitly modeled and controlled for successful execution. Dynamic grasping is relevant for fast pick-and-place operations, transportation and delivery of objects, and placing or retrieving sensors. We show how this capability can be realized (a) using a motion capture system and (b) without external sensors relying only on onboard sensors. In both cases we describe the dynamic model, and trajectory planning and control algorithms. In particular, we present a methodology for flying and grasping a cylindrical object using feedback from a monocular camera and an inertial measurement unit onboard the aerial robot. This is accomplished by mapping the dynamics of the quadrotor to a level virtual image plane, which in turn enables dynamically-feasible trajectory planning for image features in the image space, and a vision-based controller with guaranteed convergence properties. We also present experimental results obtained with a quadrotor equipped with an articulated gripper to illustrate both approaches.

  4. New biometric modalities using internal physical characteristics

    NASA Astrophysics Data System (ADS)

    Mortenson, Juliana (Brooks)

    2010-04-01

    Biometrics is described as the science of identifying people based on physical characteristics such as their fingerprints, facial features, hand geometry, iris patterns, palm prints, or speech. Notably, all of these physical characteristics are visible or detectable from the exterior of the body. These external characteristics can be lifted, photographed, copied, or recorded for unauthorized access to a biometric system. Individual humans are unique internally, however, just as they are unique externally. New biometric modalities have been developed which identify people based on their unique internal characteristics. For example, "Boneprints™" use acoustic fields to scan the unique bone density pattern of a thumb pressed on a small acoustic sensor. Thanks to advances in piezoelectric materials, the acoustic sensor can be placed in virtually any device such as a steering wheel, door handle, or keyboard. Similarly, "Imp-Prints™" measure the electrical impedance patterns of a hand to identify or verify a person's identity. Small impedance sensors can be easily embedded in devices such as smart cards, handles, or wall mounts. These internal biometric modalities rely on physical characteristics which are not visible or photographable, providing an added level of security. In addition, both the acoustic and impedance methods can be combined with physiologic measurements such as acoustic Doppler or impedance plethysmography, respectively. Added verification that the biometric pattern came from a living person can thus be obtained. These new biometric modalities have the potential to allay user concerns over protection of privacy, while providing a higher level of security.

  5. Intraoral fiber optic-based diagnostic for periodontal disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, P W; Gutierrez, D M; Everett, M J

    2000-01-21

    The purpose of this initial study was to begin development of a new, objective diagnostic instrument that will allow simultaneous quantitation of multiple proteases within a single periodontal pocket using a chemical fiber optic sensor. This approach could potentially be adapted to use specific antibodies and chemiluminescence to detect and quantitate virtually any compound and compare concentrations of different compounds within the same periodontal pocket. The device could also be used to assay secretions in salivary ducts or from a variety of wounds. The applicability is, therefore, not solely limited to dentistry and the device would be important both for clinical diagnostics and as a research tool.

  6. Integrated development of light armored vehicles based on wargaming simulators

    NASA Astrophysics Data System (ADS)

    Palmarini, Marc; Rapanotti, John

    2004-08-01

    Vehicles are evolving into vehicle networks through improved sensors, computers and communications. Unless carefully planned, these complex systems can result in excessive crew workload and difficulty in optimizing the use of the vehicle. To overcome these problems, a war-gaming simulator is being developed as a common platform to integrate contributions from three different groups. The simulator, OneSAF, is used to integrate simplified models of technology and natural phenomena from scientists and engineers with tactics and doctrine from the military and analyzed in detail by operations analysts. This approach ensures the modelling of processes known to be important regardless of the level of information available about the system. Vehicle survivability can be improved as well with better sensors, computers and countermeasures to detect and avoid or destroy threats. To improve threat detection and reliability, Defensive Aids Suite (DAS) designs are based on three complementary sensor technologies including: acoustics, visible and infrared optics and radar. Both active armour and softkill countermeasures are considered. In a typical scenario, a search radar, providing continuous hemispherical coverage, detects and classifies the threat and cues a tracking radar. Data from the tracking radar is processed and an explosive grenade is launched to destroy or deflect the threat. The angle of attack and velocity from the search radar can be used by the soft-kill system to carry out an infrared search and track or an illuminated range-gated scan for the threat platform. Upon detection, obscuration, countermanoeuvres and counterfire can be used against the threat. The sensor suite is completed by acoustic detection of muzzle blast and shock waves. Automation and networking at the platoon level contribute to improved vehicle survivability. Sensor data fusion is essential in avoiding catastrophic failure of the DAS. The modular DAS components can be used with Light Armoured Vehicle (LAV) variants including: armoured personnel carriers and direct-fire support vehicles. OneSAF will be used to assess the performance of these DAS-equipped vehicles on a virtual battlefield.

  7. Soft Pushing Operation with Dual Compliance Controllers Based on Estimated Torque and Visual Force

    NASA Astrophysics Data System (ADS)

    Muis, Abdul; Ohnishi, Kouhei

    Sensor fusion extends a robot's ability to perform more complex tasks. An interesting application of this is the pushing operation, in which the robot moves an object by pushing it, guided by multiple sensors. Generally, a pushing operation consists of "approaching, touching, and pushing"(1). However, most research in this field deals with how the pushed object follows a predefined trajectory, and the impact as the robot body or the tool-tip hits the object is neglected. On collision, the robot's momentum may damage the sensor, the robot's surface, or even the object. For that reason, this paper proposes a soft pushing operation with dual compliance controllers. A compliance controller is a control system with trajectory compensation so that external forces can be accommodated. In this paper, the first compliance controller is driven by the external force estimated by a reaction torque observer(2), which provides contact sensation. The other compensates for non-contact sensation. A contact sensation, acquired from either a force sensor or a reaction torque observer, is measurable only once the robot has touched the object. Therefore, a non-contact sensation is introduced before touching the object, realized in this paper with a visual sensor. Here, instead of using the visual information as a command reference, visual information such as depth is treated as a virtual force for the second compliance controller. Thus, having both contact and non-contact sensation, the robot is compliant over a wider range of sensation. This paper considers a heavy mobile manipulator and a heavy object, which have significant momentum at the touching stage. A chopstick is attached to the object side to show the effectiveness of the proposed method. Here, both compliance controllers adjust the mobile manipulator's command reference to provide a soft pushing operation. Finally, the experimental result shows the validity of the proposed method.
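
    A minimal sketch of the dual-compliance idea in Python: the position command sent to the manipulator is compensated by two admittance terms, one driven by the contact force estimated from the reaction torque observer and one driven by a virtual force derived from visual depth before contact. The gains, the depth-to-force mapping, and the single-axis simplification are illustrative assumptions, not the authors' controller.

```python
# Single-axis sketch of dual compliance control for soft pushing.
# Gains and the depth-to-virtual-force mapping are illustrative assumptions.
K_CONTACT = 0.002    # compliance to estimated contact force (m/N)
K_VIRTUAL = 0.004    # compliance to vision-derived virtual force (m/N)
D_SAFE = 0.05        # depth (m) below which the visual virtual force acts

def virtual_force_from_depth(depth_to_object):
    """Non-contact 'force' that grows as the tool tip approaches the object."""
    if depth_to_object >= D_SAFE:
        return 0.0
    return 50.0 * (D_SAFE - depth_to_object)     # N, assumed linear ramp

def compliant_command(x_ref, f_contact_est, depth_to_object):
    """Trajectory compensation combining contact and non-contact sensation."""
    f_virtual = virtual_force_from_depth(depth_to_object)
    return x_ref - K_CONTACT * f_contact_est - K_VIRTUAL * f_virtual

# Approaching (no contact yet, 2 cm away): the command backs off slightly so
# the eventual touch is soft; after touching, the estimated torque takes over.
print(compliant_command(x_ref=0.30, f_contact_est=0.0, depth_to_object=0.02))
print(compliant_command(x_ref=0.30, f_contact_est=8.0, depth_to_object=0.0))
```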

  8. Scaling up close-range surveys, a challenge for the generalization of as-built data in industrial applications

    NASA Astrophysics Data System (ADS)

    Hullo, J.-F.; Thibault, G.

    2014-06-01

    As-built CAD data reconstructed from Terrestrial Laser Scanner (TLS) data have been used for more than two decades by Electricité de France (EDF) to prepare maintenance operations in its facilities. But today the big picture is renewed: "as-built virtual reality" must address a huge scale-up to provide data to an increasing number of applications. In this paper, we first present a wide multi-sensor, multi-purpose scanning campaign performed in a 10-floor building of a power plant in 2013: 1083 TLS stations (about 40 × 10⁹ 3D points referenced under a 2 cm tolerance) and 1025 RGB panoramic images (340 × 10⁶ pixels per point of view). As expected, this very large survey of high-precision measurements in a complex environment stressed sensors and tools that were developed for more favourable conditions and smaller data sets. The whole survey process (tools and methods used from acquisition and processing to CAD reconstruction) underwent a detailed follow-up in order to identify the obstacles to a possible generalization to other buildings. Based on this recent feedback, we highlight some of the current bottlenecks in this paper: sensor denoising, process automation, improvements to data validation tools, and standardization of formats and (meta)data structures.

  9. Knock probability estimation through an in-cylinder temperature model with exogenous noise

    NASA Astrophysics Data System (ADS)

    Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.

    2018-01-01

    This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature and an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties at the in-cylinder temperature estimation. The model only has one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared in steady conditions by using a fuel balance and a lambda sensor and differences below 1 % were found.
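
    A compact Monte Carlo illustration of the modelling idea in Python: an Arrhenius-like ignition-delay integral is evaluated over an assumed end-gas pressure and temperature trace, the temperature is perturbed by exogenous Gaussian noise, and the knock probability is the fraction of perturbed traces whose integral reaches unity before the end of combustion. All numerical constants and the traces are placeholders, not the calibrated engine model from the paper.

```python
import numpy as np

# Monte Carlo sketch of knock probability from an Arrhenius-like autoignition
# integral with exogenous temperature noise. Constants are placeholders.
A, B, N_EXP = 18.0, 6800.0, 1.7          # assumed Arrhenius-type parameters
SIGMA_T = 12.0                           # assumed exogenous temperature noise (K)

def knock_probability(theta, p_cyl, T_nominal, n_samples=5000, seed=0):
    """theta: crank angle (deg), p_cyl: pressure (bar), T_nominal: end-gas T (K)."""
    rng = np.random.default_rng(seed)
    dtheta = np.diff(theta, prepend=theta[0])
    knocks = 0
    for _ in range(n_samples):
        T = T_nominal + rng.normal(0.0, SIGMA_T)          # exogenous noise
        tau = A * p_cyl ** (-N_EXP) * np.exp(B / T)       # ignition delay
        ki = np.cumsum(dtheta / tau)                      # Livengood-Wu type integral
        knocks += ki[-1] >= 1.0                           # knock if integral reaches 1
    return knocks / n_samples

theta = np.linspace(-20.0, 60.0, 200)                         # deg after spark
p_cyl = 30.0 + 25.0 * np.exp(-((theta - 10.0) / 20.0) ** 2)   # assumed pressure trace
T_nom = 750.0 + 220.0 * np.exp(-((theta - 12.0) / 25.0) ** 2) # assumed temperature trace
print("estimated knock probability:", knock_probability(theta, p_cyl, T_nom))
```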

  10. Generalized compliant motion primitive

    NASA Technical Reports Server (NTRS)

    Backes, Paul G. (Inventor)

    1994-01-01

    This invention relates to a general primitive for controlling a telerobot with a set of input parameters. The primitive includes a trajectory generator, a teleoperation sensor, a joint limit generator, a force setpoint generator, and a dither function generator, each of which produces telerobot motion inputs in a common coordinate frame for simultaneous combination in sensor summers. A virtual return spring motion input is provided by a restoration spring subsystem. The novel features of this invention include the use of a single general motion primitive at a remote site to permit shared and supervisory control of the robot manipulator to perform tasks via a remotely transferred input parameter set.
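
    A schematic Python sketch of the summation structure described in the abstract: several generators each contribute a small motion increment in a common coordinate frame, and the increments are summed (the "sensor summers") to produce the commanded motion, with a virtual return spring pulling the tool back toward its nominal position. Names, gains, and the single-axis simplification are illustrative; the patent defines the primitive through its input parameter set rather than through this code.

```python
# Schematic, single-axis sketch of a generalized compliant motion primitive:
# each source produces a small motion increment in a common frame and the
# increments are summed. Gains and sources are illustrative assumptions.
import math

def trajectory_increment(t, dt, speed=0.01):
    return speed * dt                                  # nominal trajectory motion

def teleoperation_increment(hand_controller_delta, gain=1.0):
    return gain * hand_controller_delta                # operator input

def force_setpoint_increment(f_measured, f_setpoint, compliance=0.001):
    return compliance * (f_setpoint - f_measured)      # drive force to setpoint

def dither_increment(t, amplitude=0.0005, freq_hz=2.0):
    return amplitude * math.sin(2.0 * math.pi * freq_hz * t)

def return_spring_increment(x, x_nominal, stiffness=0.05):
    return stiffness * (x_nominal - x)                 # virtual restoration spring

def combined_motion(t, dt, x, x_nominal, hand_delta, f_meas, f_set):
    return (trajectory_increment(t, dt)
            + teleoperation_increment(hand_delta)
            + force_setpoint_increment(f_meas, f_set)
            + dither_increment(t)
            + return_spring_increment(x, x_nominal))

print(combined_motion(t=1.0, dt=0.01, x=0.102, x_nominal=0.100,
                      hand_delta=0.0004, f_meas=2.0, f_set=1.0))
```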

  11. Wide-angle vision for road views

    NASA Astrophysics Data System (ADS)

    Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.

    2013-03-01

    The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.

  12. Digital Photography and Its Impact on Instruction.

    ERIC Educational Resources Information Center

    Lantz, Chris

    Today the chemical processing of film is being replaced by a virtual digital darkroom. Digital image storage makes new levels of consistency possible because its nature is less volatile and more mutable than traditional photography. The potential of digital imaging is great, but issues of disk storage, computer speed, camera sensor resolution,…

  13. Final Report-Rail Sensor Testbed Program: Active Agents in Containers for Transport Chain Security

    DTIC Science & Technology

    2011-03-21

    These trust approaches have been applied to a variety of regimes, including virtual communities [14], email [15], and ecommerce [16]. Among the cited works: http://www.arxiv.org/abs/cond-mat/0402143; and 16. Melnik, M., Alm, J., "Does a seller's eCommerce reputation matter? Evidence from eBay auctions."

  14. Landsat's role in ecological applications of remote sensing.

    Treesearch

    Warren B. Cohen; Samuel N. Goward

    2004-01-01

    Remote sensing, geographic information systems, and modeling have combined to produce a virtual explosion of growth in ecological investigations and applications that are explicitly spatial and temporal. Of all remotely sensed data, those acquired by Landsat sensors have played the most pivotal role in spatial and temporal scaling. Modern terrestrial ecology relies on...

  15. Cyber entertainment system using an immersive networked virtual environment

    NASA Astrophysics Data System (ADS)

    Ihara, Masayuki; Honda, Shinkuro; Kobayashi, Minoru; Ishibashi, Satoshi

    2002-05-01

    The authors are examining a cyber entertainment system that applies IPT (Immersive Projection Technology) displays to the entertainment field. This system enables users who are in remote locations to communicate with each other so that they feel as if they are together. Moreover, the system enables those users to experience a high degree of presence, owing to the provision of stereoscopic vision as well as a haptic interface and stereo sound. This paper introduces this system from the viewpoint of space sharing across the network and elucidates its operation using the theme of golf. The system is developed by integrating avatar control, an I/O device, communication links, virtual interaction, mixed reality, and physical simulations. Pairs of these environments are connected across the network, which allows the two players to experience competition. An avatar of each player is displayed by the other player's IPT display in the remote location and is driven by only two magnetic sensors. That is, in the proposed system, users do not need to wear a data suit with many sensors and are able to play golf without any encumbrance.

  16. Virtual reality system for treatment of the fear of public speaking using image-based rendering and moving pictures.

    PubMed

    Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I

    2002-06-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled us to use virtual reality (VR) for the treatment of the fear of public speaking. There have been two techniques used to construct a virtual environment for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and virtual environments made with the model-based technique are unrealistic and unnatural. The movie-based technique has the disadvantage that each virtual audience member cannot be controlled individually, because all virtual audience members are included in one moving picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR enables us to make the virtual environment realistic because the images are stitched panoramically from the photos taken with a digital camera. The use of chroma keying allows each virtual audience member to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give the subjects more interaction, in that they can talk with a therapist or another subject.
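
    The combination of image-based rendering and chroma keying boils down to compositing each separately filmed audience member, shot against a uniform green background, onto the panoramic backdrop. The NumPy sketch below shows that per-member compositing step; the green-dominance test, array shapes, and example data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def chroma_key_composite(background, audience_clip, threshold=40):
    """Composite one audience member's frame over the panoramic background.

    background    : (H, W, 3) uint8 view rendered from the IBR panorama.
    audience_clip : (H, W, 3) uint8 frame of the audience member filmed in
                    front of a green screen (illustrative assumption).
    A pixel is treated as background ('green') when its green channel
    dominates red and blue by more than `threshold`.
    """
    clip = audience_clip.astype(np.int16)
    green_mask = (clip[..., 1] - clip[..., 0] > threshold) & \
                 (clip[..., 1] - clip[..., 2] > threshold)
    out = audience_clip.copy()
    out[green_mask] = background[green_mask]
    return out

# Because each audience member is a separate clip, each can be switched
# between attitudes independently before being composited, which is the
# individual control the single-movie approach lacked.
H, W = 120, 160
panorama_view = np.full((H, W, 3), 200, dtype=np.uint8)
member = np.zeros((H, W, 3), dtype=np.uint8)
member[..., 1] = 255                       # pure green screen everywhere...
member[40:80, 60:100] = (180, 140, 120)    # ...except a skin-toned block
print(chroma_key_composite(panorama_view, member).mean())
```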

  17. Wearable Virtual White Cane Network for navigating people with visual impairment.

    PubMed

    Gao, Yabiao; Chandrawanshi, Rahul; Nau, Amy C; Tse, Zion Tsz Ho

    2015-09-01

    Navigating the world with visual impairments presents inconveniences and safety concerns. Although a traditional white cane is the most commonly used mobility aid due to its low cost and acceptable functionality, electronic traveling aids can provide more functionality as well as additional benefits. The Wearable Virtual Cane Network is an electronic traveling aid that utilizes ultrasound sonar technology to scan the surrounding environment for spatial information. The Wearable Virtual Cane Network is composed of four sensing nodes: one on each of the user's wrists, one on the waist, and one on the ankle. The Wearable Virtual Cane Network employs vibration and sound to communicate object proximity to the user. While conventional navigation devices are typically hand-held and bulky, the hands-free design of our prototype allows the user to perform other tasks while using the Wearable Virtual Cane Network. When the Wearable Virtual Cane Network prototype was tested for distance resolution and range detection limits at various displacements and compared with a traditional white cane, all participants performed significantly above the control bar (p < 4.3 × 10⁻⁵, standard t-test) in distance estimation. Each sensor unit can detect an object with a surface area as small as 1 cm² (1 cm × 1 cm) located 70 cm away. Our results showed that the walking speed for an obstacle course was increased by 23% on average when subjects used the Wearable Virtual Cane Network rather than the white cane. The obstacle course experiment also shows that the use of the white cane in combination with the Wearable Virtual Cane Network can significantly improve navigation over using either the white cane or the Wearable Virtual Cane Network alone (p < 0.05, paired t-test). © IMechE 2015.
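
    The core feedback loop of each sensing node, mapping an ultrasonic echo distance to a vibration intensity, can be sketched in a few lines of Python. The inverse-distance mapping, the minimum range, and the node names are illustrative assumptions, not the published device parameters (apart from the roughly 70 cm detection range reported above).

```python
# Sketch of one Wearable Virtual Cane Network node: ultrasonic distance in,
# vibration motor duty cycle out. Mapping and limits are illustrative assumptions.
MAX_RANGE_CM = 70.0      # approximate detection range reported for the device
MIN_RANGE_CM = 5.0       # assumed closest distance of interest

def vibration_duty_cycle(distance_cm):
    """Return a vibration duty cycle in [0, 1]; closer objects vibrate harder."""
    if distance_cm >= MAX_RANGE_CM:
        return 0.0                                   # nothing in range: stay silent
    d = max(distance_cm, MIN_RANGE_CM)
    return (MAX_RANGE_CM - d) / (MAX_RANGE_CM - MIN_RANGE_CM)

# Four nodes (wrists, waist, ankle) would each run this mapping on their own
# sonar reading, so the user feels which body region is approaching an obstacle.
for node, dist in {"left wrist": 80.0, "right wrist": 45.0,
                   "waist": 20.0, "ankle": 7.0}.items():
    print(f"{node:11s} -> duty cycle {vibration_duty_cycle(dist):.2f}")
```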

  18. Embodied social interaction constitutes social cognition in pairs of humans: a minimalist virtual reality experiment.

    PubMed

    Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi

    2014-01-14

    Scientists have traditionally limited the mechanisms of social cognition to one brain, but recent approaches claim that interaction also realizes cognitive work. Experiments under constrained virtual settings revealed that interaction dynamics implicitly guide social cognition. Here we show that embodied social interaction can be constitutive of agency detection and of experiencing another's presence. Pairs of participants moved their "avatars" along an invisible virtual line and could make haptic contact with three identical objects, two of which embodied the other's motions, but only one, the other's avatar, also embodied the other's contact sensor and thereby enabled responsive interaction. Co-regulated interactions were significantly correlated with identifications of the other's avatar and reports of the clearest awareness of the other's presence. These results challenge folk psychological notions about the boundaries of mind, but make sense from evolutionary and developmental perspectives: an extendible mind can offload cognitive work into its environment.
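
    To make the experimental setup concrete, the following is a minimal simulation sketch of the one-dimensional "perceptual crossing" paradigm described above: two agents move on a wrap-around line and can touch a static object, the other's shadow (a copy of the other's motion at a fixed offset), or the other's avatar, and only avatar-avatar contact is felt by both sides at once. All sizes, speeds, and offsets are illustrative assumptions, not the study's parameters.

```python
# Minimal perceptual-crossing simulation: scripted agents on a circular line,
# counting mutual (avatar-avatar) versus one-sided contacts. Parameters are
# illustrative assumptions.
import math

LINE_LENGTH = 600.0     # length of the circular virtual line
CONTACT_RADIUS = 4.0    # proximity that counts as haptic contact
SHADOW_OFFSET = 150.0   # displacement of the shadow copy from its owner

def wrap(x):
    return x % LINE_LENGTH

def in_contact(a, b):
    d = abs(a - b)
    return min(d, LINE_LENGTH - d) < CONTACT_RADIUS

def simulate(steps=2000):
    pos = {"A": 0.0, "B": 300.0}
    static = {"A": 100.0, "B": 450.0}     # fixed lure object for each agent
    mutual = one_sided = 0
    for t in range(steps):
        # Scripted oscillatory search movements (stand-ins for real participants).
        pos["A"] = wrap(pos["A"] + 3.0 * math.sin(0.05 * t))
        pos["B"] = wrap(pos["B"] + 3.0 * math.cos(0.04 * t))
        shadow_of_b = wrap(pos["B"] + SHADOW_OFFSET)   # copies B's motion, no sensor
        if in_contact(pos["A"], pos["B"]):             # only this contact is mutual
            mutual += 1
        if in_contact(pos["A"], shadow_of_b) or in_contact(pos["A"], static["A"]):
            one_sided += 1
    print(f"mutual avatar contacts: {mutual}, one-sided contacts: {one_sided}")

if __name__ == "__main__":
    simulate()
```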

  19. Embodied social interaction constitutes social cognition in pairs of humans: A minimalist virtual reality experiment

    PubMed Central

    Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi

    2014-01-01

    Scientists have traditionally limited the mechanisms of social cognition to one brain, but recent approaches claim that interaction also realizes cognitive work. Experiments under constrained virtual settings revealed that interaction dynamics implicitly guide social cognition. Here we show that embodied social interaction can be constitutive of agency detection and of experiencing another's presence. Pairs of participants moved their “avatars” along an invisible virtual line and could make haptic contact with three identical objects, two of which embodied the other's motions, but only one, the other's avatar, also embodied the other's contact sensor and thereby enabled responsive interaction. Co-regulated interactions were significantly correlated with identifications of the other's avatar and reports of the clearest awareness of the other's presence. These results challenge folk psychological notions about the boundaries of mind, but make sense from evolutionary and developmental perspectives: an extendible mind can offload cognitive work into its environment. PMID:24419102

  20. VEVI: A Virtual Reality Tool For Robotic Planetary Explorations

    NASA Technical Reports Server (NTRS)

    Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik

    1994-01-01

    The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly evolved human sensory system. VEVI uses real-time, interactive 3D graphics and position/orientation sensors to produce a range of interface modalities, from flat-panel (windowed or stereoscopic) screen displays to head-mounted, head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air-bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed, and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.
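
    The abstract does not describe SYNERGY's message format, so the following is only a generic sketch of the kind of distributed pose streaming such an operator interface depends on: the vehicle side packs a position/orientation sample and sends it over a link, and the interface side unpacks it to update its 3D view. The packet layout, port, and field names are assumptions.

```python
# Generic pose-telemetry sketch for a distributed teleoperation interface.
# The wire format, endpoint, and field names are illustrative assumptions.
import socket
import struct
import time

POSE_FORMAT = "!d3f4f"           # timestamp, xyz position, wxyz quaternion
POSE_ADDR = ("127.0.0.1", 9750)  # illustrative endpoint

def send_pose(sock, timestamp, position, quaternion):
    sock.sendto(struct.pack(POSE_FORMAT, timestamp, *position, *quaternion), POSE_ADDR)

def receive_pose(sock):
    packet, _ = sock.recvfrom(struct.calcsize(POSE_FORMAT))
    fields = struct.unpack(POSE_FORMAT, packet)
    return {"t": fields[0], "position": fields[1:4], "quaternion": fields[4:8]}

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(POSE_ADDR)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(tx, time.time(), (1.0, 2.0, 0.5), (1.0, 0.0, 0.0, 0.0))
    print(receive_pose(rx))      # the interface would feed this into its 3D scene
```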
