Sample records for virtual sensor system

  1. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: part II--Optimization of structural sensor placement.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-04-01

    The work proposed an optimization approach for structural sensor placement to improve the performance of a vibro-acoustic virtual sensor for active noise control applications. The vibro-acoustic virtual sensor was designed to estimate the interior sound pressure of an acoustic-structural coupled enclosure using structural sensors. A spectral-spatial performance metric was proposed, which was used to quantify the averaged structural sensor output energy of a vibro-acoustic system excited by a spatially varying point source. It was shown that (i) the overall virtual sensing error energy was contributed additively by the modal virtual sensing error and the measurement noise energy; (ii) each modal virtual sensing error was determined by the modal observability levels of both the structural sensing and the target acoustic virtual sensing; and further (iii) the strength of each modal observability level was influenced by the modal coupling and resonance frequencies of the associated uncoupled structural/cavity modes. An optimal design of structural sensor placement was proposed to achieve sufficiently high modal observability levels for certain important panel- and cavity-controlled modes. Numerical analysis on a panel-cavity system demonstrated the importance of structural sensor placement on virtual sensing and active noise control performance, particularly for cavity-controlled modes.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tricaud, Christophe; Ernst, Timothy C.; Zigan, James A.

    The disclosure provides a waste heat recovery (WHR) system together with a system and method for calculating its net output torque. The calculation uses inputs from existing pressure and speed sensors to create a virtual pump torque sensor and a virtual expander torque sensor, and uses these virtual sensors to provide an accurate net torque output from the WHR system.

  3. Head-mounted active noise control system with virtual sensing technique

    NASA Astrophysics Data System (ADS)

    Miyazaki, Nobuhiro; Kajikawa, Yoshinobu

    2015-03-01

    In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.

  4. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: robust virtual sensor design.

    PubMed

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-03-01

    The work aimed to develop a robust virtual sensing design methodology for sensing and active control applications of vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic models of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly the one dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
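
    As a rough illustration of the robust design idea in the record above, the Python sketch below blends a bank of steady-state Kalman sub-filters (one per perturbed model) with convex weights picked for the best worst-case estimation error. The two-state models, noise levels, output matrices, and the coarse weight grid are made-up placeholders, not the paper's vibro-acoustic models or its minimax solver.

        import numpy as np

        rng = np.random.default_rng(0)

        def steady_kalman_gain(A, C, Q, R, iters=300):
            # Iterate the filter Riccati recursion until the gain settles.
            P = np.eye(A.shape[0])
            for _ in range(iters):
                P = A @ P @ A.T + Q
                K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
                P = (np.eye(A.shape[0]) - K @ C) @ P
            return K

        def rms_error(w, subfilters, A, C, H, steps=400):
            # RMS error of the convex blend of sub-filter estimates on one perturbed plant.
            n = A.shape[0]
            x = np.zeros(n)
            xf = [np.zeros(n) for _ in subfilters]
            err = 0.0
            for _ in range(steps):
                x = A @ x + 0.1 * rng.standard_normal(n)
                y = C @ x + 0.01 * rng.standard_normal(1)      # structural sensor reading
                est = 0.0
                for i, (Af, Cf, K) in enumerate(subfilters):
                    pred = Af @ xf[i]
                    xf[i] = pred + K @ (y - Cf @ pred)
                    est += w[i] * float(H @ xf[i])             # blended virtual (acoustic) estimate
                err += (float(H @ x) - est) ** 2
            return np.sqrt(err / steps)

        # Two illustrative perturbed models (placeholders, not taken from the paper).
        A1 = np.array([[0.9, 0.1], [-0.1, 0.9]])
        A2 = np.array([[0.85, 0.15], [-0.15, 0.85]])
        C = np.array([[1.0, 0.0]])                  # structural sensor matrix
        H = np.array([0.0, 1.0])                    # maps state to interior sound pressure
        Q, R = 0.01 * np.eye(2), np.array([[1e-4]])

        subfilters = [(A, C, steady_kalman_gain(A, C, Q, R)) for A in (A1, A2)]
        plants = [(A1, C, H), (A2, C, H)]

        # Minimax over convex weights on a coarse simplex grid.
        best = min(((w, 1 - w) for w in np.linspace(0, 1, 21)),
                   key=lambda wt: max(rms_error(wt, subfilters, *p) for p in plants))
        print("minimax convex weights:", best)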

  5. Low-complexity piecewise-affine virtual sensors: theory and design

    NASA Astrophysics Data System (ADS)

    Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco

    2014-03-01

    This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
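
    The following sketch illustrates, under strong simplifications, what a "direct" piecewise-affine virtual sensor looks like: a continuous piecewise-linear (simplicial) map is fitted by least squares straight from measured input/output data and then evaluated cheaply at run time. The one-dimensional regressor, knot grid, and toy data are assumptions for illustration; the paper's multi-dimensional simplicial partition and FPGA toolchain are not reproduced.

        import numpy as np

        def hat_basis(x, knots):
            # Evaluate the simplicial ("hat") basis functions at x; interior rows sum to 1.
            B = np.zeros((len(x), len(knots)))
            for i, k in enumerate(knots):
                left = knots[i - 1] if i > 0 else k
                right = knots[i + 1] if i < len(knots) - 1 else k
                B[:, i] = np.where((x >= left) & (x <= k), (x - left) / max(k - left, 1e-12), 0)
                B[:, i] += np.where((x > k) & (x <= right), (right - x) / max(right - k, 1e-12), 0)
            return B

        rng = np.random.default_rng(1)
        u = rng.uniform(-1, 1, 500)                            # measured regressor
        z = np.tanh(3 * u) + 0.05 * rng.standard_normal(500)   # unmeasured variable (toy)

        knots = np.linspace(-1, 1, 9)                          # coarse partition keeps complexity low
        theta, *_ = np.linalg.lstsq(hat_basis(u, knots), z, rcond=None)   # vertex values

        # The fitted piecewise-affine map acts as the virtual sensor: cheap to evaluate,
        # and easy to realize in hardware as a small lookup-and-interpolate circuit.
        u_new = np.array([-0.7, 0.0, 0.4])
        z_hat = hat_basis(u_new, knots) @ theta
        print(z_hat)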

  6. An Integrated FDD System for HVAC&R Based on Virtual Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun

    According to the U.S. Department of Energy, space heating, ventilation, and air conditioning systems account for 40% of residential primary energy use and for 30% of primary energy use in commercial buildings. A study released by the Energy Information Administration indicated that packaged air conditioners are used in 46% of all commercial buildings in the U.S. This study indicates that the annual cooling energy consumption related to packaged air conditioners is about 160 trillion Btu. Therefore, an automated FDD system that can automatically detect and diagnose faults and evaluate fault impacts has the potential for improving energy efficiency along with reducing service costs and comfort complaints. The primary bottlenecks to diagnostic implementation in the field are the high initial costs of additional sensors. To overcome these limitations, virtual sensors based on low-cost measurements and simple models are developed to estimate quantities that would be expensive and/or difficult to measure directly. The use of virtual sensors can reduce costs compared to the use of real sensors and provide additional information for economic assessment. The virtual sensor can be embedded in a permanently installed control or monitoring system, and continuous monitoring potentially leads to early detection of faults. The virtual sensors of individual equipment components can be integrated to estimate overall diagnostic information using the output of each virtual sensor.
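
    As a toy example of the low-cost virtual-sensor idea described above, the sketch below fits a simple polynomial map from two inexpensive refrigerant saturation-temperature measurements to compressor power, so that power can later be estimated without a wattmeter. The 10-term map form, data, and coefficients are illustrative assumptions, not the report's validated virtual sensors.

        import numpy as np

        rng = np.random.default_rng(6)
        Te = rng.uniform(0, 15, 300)       # evaporating (suction) saturation temperature, deg C
        Tc = rng.uniform(35, 55, 300)      # condensing (discharge) saturation temperature, deg C
        power = (1.2 + 0.03 * Tc - 0.02 * Te + 0.001 * Tc**2
                 + 0.02 * rng.standard_normal(300))            # compressor power, kW (toy data)

        def features(Te, Tc):
            # 10-coefficient bi-cubic style map often used for compressors (form assumed here).
            return np.column_stack([np.ones_like(Te), Te, Tc, Te**2, Te * Tc, Tc**2,
                                    Te**3, Te**2 * Tc, Te * Tc**2, Tc**3])

        coef, *_ = np.linalg.lstsq(features(Te, Tc), power, rcond=None)

        # During monitoring, the virtual power sensor only needs two cheap temperature readings.
        Te_now, Tc_now = np.array([7.0]), np.array([45.0])
        est = float((features(Te_now, Tc_now) @ coef)[0])
        print("estimated compressor power [kW]:", round(est, 3))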

  7. New virtual sonar and wireless sensor system concepts

    NASA Astrophysics Data System (ADS)

    Houston, B. H.; Bucaro, J. A.; Romano, A. J.

    2004-05-01

    Recently, exciting new sensor array concepts have been proposed which, if realized, could revolutionize how we approach surface mounted acoustic sensor systems for underwater vehicles. Two such schemes are so-called "virtual sonar," which is formulated around Helmholtz integral processing, and "wireless" systems, which transfer sensor information through radiated RF signals. The "virtual sonar" concept provides an interesting framework through which to combat the dilatory effects of the structure on surface mounted sensor systems, including structure-borne vibration and variations in structure-backing impedance. The "wireless" concept would eliminate the necessity of a complex wiring or fiber-optic external network while minimizing vehicle penetrations. Such systems, however, would require a number of advances in sensor and RF waveguide technologies. In this presentation, we will discuss those sensor and sensor-related developments which are desired or required in order to make practical such new sensor system concepts, and we will present several underwater applications from the perspective of exploiting these new sonar concepts. [Work supported by ONR.]

  8. Adaptive Fault Detection on Liquid Propulsion Systems with Virtual Sensors: Algorithms and Architectures

    NASA Technical Reports Server (NTRS)

    Matthews, Bryan L.; Srivastava, Ashok N.

    2010-01-01

    Prior to the launch of STS-119 NASA had completed a study of an issue in the flow control valve (FCV) in the Main Propulsion System of the Space Shuttle using an adaptive learning method known as Virtual Sensors. Virtual Sensors are a class of algorithms that estimate the value of a time series given other potentially nonlinearly correlated sensor readings. In the case presented here, the Virtual Sensors algorithm is based on an ensemble learning approach and takes sensor readings and control signals as input to estimate the pressure in a subsystem of the Main Propulsion System. Our results indicate that this method can detect faults in the FCV at the time when they occur. We use the standard deviation of the predictions of the ensemble as a measure of uncertainty in the estimate. This uncertainty estimate was crucial to understanding the nature and magnitude of transient characteristics during startup of the engine. This paper overviews the Virtual Sensors algorithm and discusses results on a comprehensive set of Shuttle missions and also discusses the architecture necessary for deploying such algorithms in a real-time, closed-loop system or a human-in-the-loop monitoring system. These results were presented at a Flight Readiness Review of the Space Shuttle in early 2009.
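
    A hedged sketch of the ensemble-based virtual sensor concept from the record above: bootstrap-trained regressors estimate a pressure-like quantity from other correlated readings, the ensemble mean is the estimate, the ensemble spread is the uncertainty, and a residual test flags a fault. All data, the linear model form, and the threshold are invented for illustration; this is not NASA's algorithm or Shuttle data.

        import numpy as np

        rng = np.random.default_rng(2)

        def fit_member(X, y):
            # One ensemble member: ordinary least squares on a bootstrap resample.
            idx = rng.integers(0, len(y), len(y))
            Xb = np.column_stack([np.ones(len(idx)), X[idx]])
            coef, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
            return coef

        def predict(coefs, X):
            Xb = np.column_stack([np.ones(len(X)), X])
            preds = np.stack([Xb @ c for c in coefs])        # members x samples
            return preds.mean(axis=0), preds.std(axis=0)     # estimate and uncertainty

        # Nominal training data: "pressure" correlated with two sensors and a control signal.
        X_train = rng.normal(size=(2000, 3))
        y_train = (2.0 * X_train[:, 0] - 1.5 * X_train[:, 1] + 0.5 * X_train[:, 2]
                   + 0.05 * rng.standard_normal(2000))
        ensemble = [fit_member(X_train, y_train) for _ in range(30)]

        # New data with an injected fault: the measured pressure drifts away from the estimate.
        X_new = rng.normal(size=(200, 3))
        y_true = 2.0 * X_new[:, 0] - 1.5 * X_new[:, 1] + 0.5 * X_new[:, 2]
        y_meas = y_true.copy()
        y_meas[120:] += 1.0                                   # simulated valve fault

        est, sigma = predict(ensemble, X_new)
        fault = np.abs(y_meas - est) > 5 * sigma + 0.2        # crude residual test for the sketch
        print("first flagged sample:", int(np.argmax(fault)))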

  9. An Intelligent Active Video Surveillance System Based on the Integration of Virtual Neural Sensors and BDI Agents

    NASA Astrophysics Data System (ADS)

    Gregorio, Massimo De

    In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantage of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs makes this approach both very light from a computational and hardware point of view and rather robust in performance. The system works in different scenarios and in difficult lighting conditions.

  10. Experimental Characterization of Microfabricated Virtual Impactor Efficiency

    EPA Science Inventory

    The Air-Microfluidics Group is developing a microelectromechanical systems-based direct reading particulate matter (PM) mass sensor. The sensor consists of two main components: a microfabricated virtual impactor (VI) and a PM mass sensor. The VI leverages particle inertia to sepa...

  11. Enhancing Autonomy of Aerial Systems Via Integration of Visual Sensors into Their Avionics Suite

    DTIC Science & Technology

    2016-09-01

    Only fragments of this report's front matter were extracted: the abstract ends with "...aerial platform for subsequent visual sensor integration"; listed subject terms include autonomous system, quadrotors, direct method, and inverse dynamics in the virtual domain (IDVD); other abbreviations defined in the report include GPS (Global Positioning System), ILP (integer linear program), and INS (inertial navigation system).

  12. Virtual Sensors for Designing Irrigation Controllers in Greenhouses

    PubMed Central

    Sánchez, Jorge Antonio; Rodríguez, Francisco; Guzmán, José Luis; Arahal, Manuel R

    2012-01-01

    Monitoring the greenhouse transpiration for control purposes is currently a difficult task. The absence of affordable sensors that provide continuous transpiration measurements motivates the use of estimators. In the case of tomato crops, the availability of estimators allows the design of automatic fertirrigation (irrigation + fertilization) schemes in greenhouses, minimizing the dispensed water while fulfilling crop needs. This paper shows how system identification techniques can be applied to obtain nonlinear virtual sensors for estimating transpiration. The greenhouse used for this study is equipped with a microlysimeter, which allows one to continuously sample the transpiration values. While the microlysimeter is an advantageous piece of equipment for research, it is also expensive and requires maintenance. This paper presents the design and development of a virtual sensor to model the crop transpiration, hence avoiding the use of this kind of expensive sensor. The resulting virtual sensor is obtained by dynamical system identification techniques based on regressors taken from variables typically found in a greenhouse, such as global radiation and vapor pressure deficit (VPD). The virtual sensor is thus based on empirical data. In this paper, effort has been made to eliminate problems associated with grey-box models: the advance phenomenon and overestimation. The results are tested with real data and compared with other approaches. The best results are obtained with nonlinear black-box virtual sensors based on global radiation and VPD measurements. Predictive results for the three models are developed for comparative purposes. PMID:23202208
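
    A minimal sketch of a black-box virtual transpiration sensor in the spirit of the record above: a regression model is identified from (synthetic) global radiation and vapor pressure deficit signals and then used to estimate transpiration without a microlysimeter. The lag structure, product regressor, and data are assumptions; the paper's identified models are not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)
        N = 1000
        rad = np.clip(np.sin(np.linspace(0, 20 * np.pi, N)), 0, None) + 0.05 * rng.random(N)  # scaled global radiation
        vpd = 0.8 + 0.3 * np.sin(np.linspace(0, 10 * np.pi, N)) + 0.05 * rng.random(N)        # vapor pressure deficit, kPa
        transp = 0.6 * rad + 0.3 * rad * vpd + 0.02 * rng.standard_normal(N)                  # toy "true" transpiration

        def regressors(rad, vpd, lags=2):
            # Stack current and lagged inputs plus a simple nonlinear (product) term.
            rows = []
            for k in range(lags, len(rad)):
                rows.append([rad[k - j] for j in range(lags + 1)]
                            + [vpd[k - j] for j in range(lags + 1)]
                            + [rad[k] * vpd[k], 1.0])
            return np.array(rows)

        lags = 2
        Phi = regressors(rad, vpd, lags)
        theta, *_ = np.linalg.lstsq(Phi, transp[lags:], rcond=None)

        # The fitted map is the virtual sensor: it estimates transpiration from cheap climate
        # measurements, standing in for the expensive microlysimeter during normal operation.
        transp_hat = Phi @ theta
        rmse = np.sqrt(np.mean((transp_hat - transp[lags:]) ** 2))
        print("fit RMSE:", round(float(rmse), 4))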

  13. Sensor Webs in Digital Earth

    NASA Astrophysics Data System (ADS)

    Heavner, M. J.; Fatland, D. R.; Moeller, H.; Hood, E.; Schultz, M.

    2007-12-01

    The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). From power systems and instrumentation through data management, visualization, education, and public outreach, SEAMONSTER is designed with modularity in mind. We are utilizing virtual earth infrastructures to enhance both sensor web management and data access. We will describe how the design philosophy of using open, modular components contributes to the exploration of different virtual earth environments. We will also describe the sensor web physical implementation and how the many components have corresponding virtual earth representations. This presentation will provide an example of the integration of sensor webs into a virtual earth. We suggest that IPY sensor networks and sensor webs may integrate into virtual earth systems and provide an IPY legacy easily accessible to both scientists and the public. SEAMONSTER utilizes geobrowsers for education and public outreach, sensor web management, data dissemination, and enabling collaboration. We generate near-real-time auto-updating geobrowser files of the data. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers have made this project possible.

  14. Virtual odors to transmit emotions in virtual agents

    NASA Astrophysics Data System (ADS)

    Delgado-Mata, Carlos; Aylett, Ruth

    2003-04-01

    In this paper we describe an emotional-behavioral architecture. The emotional engine sits at a higher layer than the behavior system and can alter behavior patterns; the engine is designed to simulate Emotionally-Intelligent Agents in a Virtual Environment, where each agent senses its own emotions and other creatures' emotions through a virtual smell sensor, senses obstacles and other moving creatures in the environment, and reacts to them. The architecture consists of an emotion engine, a behavior synthesis system, a motor layer, and a library of sensors.

  15. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measurements based on other available ones, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, which are typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification accuracy of the clean (uncontaminated) signal. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination from degrading events over time. The virtual sensor model and the algorithm optimization still need further development to broaden the clinical application of myoelectric prostheses, but the approach already presents results robust enough to enable research with virtual sensors on biological signals with stochastic behavior.
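
    The sketch below illustrates only the channel-replacement idea: when one sEMG channel degrades, a recursively updated linear model (plain recursive least squares here, not the TVARMA or TVK models of the paper) reconstructs it from the remaining channels so the classifier can keep operating. Signals, channel count, and fault timing are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        T, n_ch = 2000, 4
        latent = np.sin(np.linspace(0, 60, T))[:, None] * rng.uniform(0.5, 1.5, n_ch)
        emg = latent + 0.1 * rng.standard_normal((T, n_ch))     # correlated surrogate channels

        faulty = 2                                               # channel assumed to degrade
        others = [c for c in range(n_ch) if c != faulty]

        theta = np.zeros(len(others) + 1)                        # weights + bias
        P = 1e3 * np.eye(len(theta))                             # RLS covariance
        lam = 0.995                                              # forgetting factor (time-varying fit)
        recon = np.zeros(T)

        for t in range(T):
            phi = np.append(emg[t, others], 1.0)
            recon[t] = float(phi @ theta)                        # virtual-sensor output for the faulty channel
            if t < 1000:                                         # before the fault: adapt on real data
                e = emg[t, faulty] - recon[t]
                K = P @ phi / (lam + phi @ P @ phi)
                theta = theta + K * e
                P = (P - np.outer(K, phi @ P)) / lam
            # after t = 1000 the real channel is treated as contaminated and `recon` replaces it

        rmse = np.sqrt(np.mean((recon[1000:] - latent[1000:, faulty]) ** 2))
        print("reconstruction RMSE after the fault:", round(float(rmse), 3))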

  16. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    Balbinot, Alexandre

    2018-01-01

    A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measurements based on other available ones, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, which are typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification accuracy of the clean (uncontaminated) signal. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination from degrading events over time. The virtual sensor model and the algorithm optimization still need further development to broaden the clinical application of myoelectric prostheses, but the approach already presents results robust enough to enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994

  17. Virtual Sensor Test Instrumentation

    NASA Technical Reports Server (NTRS)

    Wang, Roy

    2011-01-01

    Virtual Sensor Test Instrumentation is based on the concept of smart sensor technology for testing with intelligence needed to perform self-diagnosis of health, and to participate in a hierarchy of health determination at sensor, process, and system levels. A virtual sensor test instrumentation consists of five elements: (1) a common sensor interface, (2) microprocessor, (3) wireless interface, (4) signal conditioning and ADC/DAC (analog-to-digital/digital-to-analog conversion), and (5) onboard EEPROM (electrically erasable programmable read-only memory) for metadata storage and executable software to create powerful, scalable, reconfigurable, and reliable embedded and distributed test instruments. In order to maximize efficient data conversion through the smart sensor node, plug-and-play functionality is required to interface with traditional sensors to enhance their identity and capabilities for data processing and communications. Virtual sensor test instrumentation can be accessed wirelessly via a Network Capable Application Processor (NCAP) or a Smart Transducer Interface Module (STIM) that may be managed under real-time rule engines for mission-critical applications. The transducer senses the physical quantity being measured and converts it into an electrical signal. The signal is fed to an A/D converter, and is ready for use by the processor to execute functional transformation based on the sensor characteristics stored in a Transducer Electronic Data Sheet (TEDS). Virtual sensor test instrumentation is built upon an open-system architecture with standardized protocol modules/stacks to interface with industry standards and commonly used software. One major benefit of deploying the virtual sensor test instrumentation is the ability, through a plug-and-play common interface, to convert raw sensor data in either analog or digital form to an IEEE 1451 standard-based smart sensor, which has instructions to program sensors for a wide variety of functions. The sensor data is processed in a distributed fashion across the network, providing a large pool of resources in real time to meet stringent latency requirements.

  18. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    PubMed Central

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents an active-mode detector for the case in which the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow estimation of its output. A methodology for testing the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results. PMID:23447007
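
    A small illustration of the two ingredients described above, using assumed toy matrices: a per-mode observability test, and a bank of simple observers whose accumulated output residuals indicate which linear mode (commutation path) is active. The models and hand-picked observer gains are placeholders, not the article's estimator design.

        import numpy as np

        def observability_matrix(A, C):
            n = A.shape[0]
            return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

        modes = {
            0: (np.array([[0.9, 0.2], [0.0, 0.8]]), np.array([[1.0, 0.0]])),
            1: (np.array([[0.7, -0.3], [0.1, 0.95]]), np.array([[1.0, 0.0]])),
        }
        for m, (A, C) in modes.items():
            rank = np.linalg.matrix_rank(observability_matrix(A, C))
            print(f"mode {m} observable: {rank == A.shape[0]}")

        # Bank of simple Luenberger observers; gains chosen by hand so each (A - L C) is stable.
        L_gain = {0: np.array([[0.9], [0.0]]), 1: np.array([[0.7], [0.1]])}
        rng = np.random.default_rng(5)
        x = np.array([1.0, -0.5])
        xh = {m: np.zeros(2) for m in modes}
        res = {m: 0.0 for m in modes}
        true_mode = 1
        for t in range(300):
            A, C = modes[true_mode]
            x = A @ x + 0.05 * rng.standard_normal(2)
            y = C @ x
            for m, (Am, Cm) in modes.items():
                e = y - Cm @ xh[m]
                xh[m] = Am @ xh[m] + L_gain[m] @ e
                res[m] += float(e @ e)      # accumulated squared output residual per candidate mode
        print("detected active mode:", min(res, key=res.get), "| true mode:", true_mode)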

  19. Scientific Workflows and the Sensor Web for Virtual Environmental Observatories

    NASA Astrophysics Data System (ADS)

    Simonis, I.; Vahed, A.

    2008-12-01

    Virtual observatories are maturing beyond their original domain and becoming common practice for earth observation research and policy building. The term Virtual Observatory originally came from the astronomical research community. Here, virtual observatories provide universal access to the available astronomical data archives of space and ground-based observatories. Furthermore, as those virtual observatories aim at integrating heterogeneous resources provided by a number of participating organizations, the virtual observatory acts as a coordinating entity that strives for common data analysis techniques and tools based on common standards. The Sensor Web is on its way to becoming one of the major virtual observatories outside of the astronomical research community. Like the original observatory, which consists of a number of telescopes, each observing a specific part of the wave spectrum, together with a collection of astronomical instruments, the Sensor Web provides a multi-eyed perspective on the past, current, and future situation of our planet and its surrounding spheres. The current view of the Sensor Web is that of a single worldwide collaborative, coherent, consistent and consolidated sensor data collection, fusion and distribution system. The Sensor Web can perform as an extensive monitoring and sensing system that provides timely, comprehensive, continuous and multi-mode observations. This technology is key to monitoring and understanding our natural environment, including key areas such as climate change, biodiversity, or natural disasters on local, regional, and global scales. The Sensor Web concept has been well established with ongoing global research and deployment of Sensor Web middleware and standards and represents the foundation layer of systems like the Global Earth Observation System of Systems (GEOSS). The Sensor Web consists of a huge variety of physical and virtual sensors as well as observational data, made available on the Internet at standardized interfaces. All data sets and sensor communication follow well-defined abstract models and corresponding encodings, mostly developed by the OGC Sensor Web Enablement initiative. Scientific progress is currently accelerated by an emerging new concept called scientific workflows, which organize and manage complex distributed computations. A scientific workflow represents and records the highly complex processes that a domain scientist typically would follow in exploration, discovery and, ultimately, transformation of raw data to publishable results. The challenge is now to integrate the benefits of scientific workflows with those provided by the Sensor Web in order to leverage all resources for scientific exploration, problem solving, and knowledge generation. Scientific workflows for the Sensor Web represent the next evolutionary step towards efficient, powerful, and flexible earth observation frameworks and platforms. Those platforms support the entire process from capturing data, sharing and integrating, to requesting additional observations. Multiple sites and organizations will participate on single platforms, and scientists from different countries and organizations will interact and contribute to large-scale research projects. Simultaneously, the data and information overload becomes manageable, as multiple layers of abstraction will free scientists from dealing with underlying data, processing, or storage peculiarities. The vision is of automated investigation and discovery mechanisms that allow scientists to pose queries to the system, which in turn identifies potentially related resources, schedules processing tasks, and assembles all parts into workflows that may satisfy the query.

  20. Minimizing Input-to-Output Latency in Virtual Environment

    NASA Technical Reports Server (NTRS)

    Adelstein, Bernard D.; Ellis, Stephen R.; Hill, Michael I.

    2009-01-01

    A method and apparatus were developed to minimize latency (time delay) in virtual environment (VE) and other discrete-time computer-based systems that require real-time display in response to sensor inputs. Latency in such systems is due to the sum of the finite time required for information processing and communication within and between sensors, software, and displays.

  1. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.

  2. Performance analysis of cooperative virtual MIMO systems for wireless sensor networks.

    PubMed

    Rafique, Zimran; Seet, Boon-Chong; Al-Anbuky, Adnan

    2013-05-28

    Multi-Input Multi-Output (MIMO) techniques can be used to increase the data rate for a given bit error rate (BER) and transmission power. Due to the small form factor, energy and processing constraints of wireless sensor nodes, a cooperative Virtual MIMO as opposed to True MIMO system architecture is considered more feasible for wireless sensor network (WSN) applications. Virtual MIMO with Vertical-Bell Labs Layered Space-Time (V-BLAST) multiplexing architecture has been recently established to enhance WSN performance. In this paper, we further investigate the impact of different modulation techniques, and analyze for the first time, the performance of a cooperative Virtual MIMO system based on V-BLAST architecture with multi-carrier modulation techniques. Through analytical models and simulations using real hardware and environment settings, both communication and processing energy consumptions, BER, spectral efficiency, and total time delay of multiple cooperative nodes each with a single antenna are evaluated. The results show that cooperative Virtual-MIMO with Binary Phase Shift Keying-Wavelet based Orthogonal Frequency Division Multiplexing (BPSK-WOFDM) modulation is a promising solution for future high data-rate and energy-efficient WSNs.

  3. Performance Analysis of Cooperative Virtual MIMO Systems for Wireless Sensor Networks

    PubMed Central

    Rafique, Zimran; Seet, Boon-Chong; Al-Anbuky, Adnan

    2013-01-01

    Multi-Input Multi-Output (MIMO) techniques can be used to increase the data rate for a given bit error rate (BER) and transmission power. Due to the small form factor, energy and processing constraints of wireless sensor nodes, a cooperative Virtual MIMO as opposed to True MIMO system architecture is considered more feasible for wireless sensor network (WSN) applications. Virtual MIMO with Vertical-Bell Labs Layered Space-Time (V-BLAST) multiplexing architecture has been recently established to enhance WSN performance. In this paper, we further investigate the impact of different modulation techniques, and analyze for the first time, the performance of a cooperative Virtual MIMO system based on V-BLAST architecture with multi-carrier modulation techniques. Through analytical models and simulations using real hardware and environment settings, both communication and processing energy consumptions, BER, spectral efficiency, and total time delay of multiple cooperative nodes each with a single antenna are evaluated. The results show that cooperative Virtual-MIMO with Binary Phase Shift Keying-Wavelet based Orthogonal Frequency Division Multiplexing (BPSK-WOFDM) modulation is a promising solution for future high data-rate and energy-efficient WSNs. PMID:23760087

  4. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network.

    PubMed

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-12-12

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment compared with a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
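
    A sketch of the stated four-layer architecture (two convolutional layers, one fully connected layer, and an output layer) as a response virtual sensor, written with PyTorch. Channel counts, kernel sizes, window length, and the synthetic "transmissibility-like" data are assumptions for illustration and do not reproduce the paper's network or training setup.

        import torch
        import torch.nn as nn

        class ResponseVirtualSensor(nn.Module):
            def __init__(self, n_meas=4, window=64):
                super().__init__()
                self.conv1 = nn.Conv1d(n_meas, 16, kernel_size=5, padding=2)
                self.conv2 = nn.Conv1d(16, 8, kernel_size=5, padding=2)
                self.fc = nn.Linear(8 * window, 32)
                self.out = nn.Linear(32, window)        # reconstructed response window

            def forward(self, x):                        # x: (batch, n_meas, window)
                h = torch.relu(self.conv1(x))
                h = torch.relu(self.conv2(h))
                h = torch.relu(self.fc(h.flatten(1)))
                return self.out(h)

        # Synthetic surrogate: the "unmeasured" channel is a fixed linear mix of measured ones,
        # loosely standing in for a transmissibility-like relation between responses.
        torch.manual_seed(0)
        X = torch.randn(256, 4, 64)
        w = torch.tensor([0.4, -0.2, 0.3, 0.1]).view(1, 4, 1)
        y = (X * w).sum(dim=1) + 0.01 * torch.randn(256, 64)

        model = ResponseVirtualSensor()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(200):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(X), y)
            loss.backward()
            opt.step()
        print("final training MSE:", float(loss))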

  5. A Data-Driven Response Virtual Sensor Technique with Partial Vibration Measurements Using Convolutional Neural Network

    PubMed Central

    Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang

    2017-01-01

    Measurement of dynamic responses plays an important role in structural health monitoring, damage detection and other fields of research. However, in aerospace engineering, the physical sensors are limited in the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The excellence of the novel technique is further indicated using a simply supported beam experiment compared with a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy. PMID:29231868

  6. Sensor Webs as Virtual Data Systems for Earth Science

    NASA Astrophysics Data System (ADS)

    Moe, K. L.; Sherwood, R.

    2008-05-01

    The NASA Earth Science Technology Office established a 3-year Advanced Information Systems Technology (AIST) development program in late 2006 to explore the technical challenges associated with integrating sensors, sensor networks, data assimilation and modeling components into virtual data systems called "sensor webs". The AIST sensor web program was initiated in response to a renewed emphasis on the sensor web concepts. In 2004, NASA proposed an Earth science vision for a more robust Earth observing system, coupled with remote sensing data analysis tools and advances in Earth system models. The AIST program is conducting the research and developing components to explore the technology infrastructure that will enable the visionary goals. A working statement for a NASA Earth science sensor web vision is the following: On-demand sensing of a broad array of environmental and ecological phenomena across a wide range of spatial and temporal scales, from a heterogeneous suite of sensors both in-situ and in orbit. Sensor webs will be dynamically organized to collect data, extract information from it, accept input from other sensor / forecast / tasking systems, interact with the environment based on what they detect or are tasked to perform, and communicate observations and results in real time. The focus on sensor webs is to develop the technology and prototypes to demonstrate the evolving sensor web capabilities. There are 35 AIST projects ranging from 1 to 3 years in duration addressing various aspects of sensor webs involving space sensors such as Earth Observing-1, in situ sensor networks such as the southern California earthquake network, and various modeling and forecasting systems. Some of these projects build on proof-of-concept demonstrations of sensor web capabilities like the EO-1 rapid fire response initially implemented in 2003. Other projects simulate future sensor web configurations to evaluate the effectiveness of sensor-model interactions for producing improved science predictions. Still other projects are maturing technology to support autonomous operations, communications and system interoperability. This paper will highlight lessons learned by various projects during the first half of the AIST program. Several sensor web demonstrations have been implemented and resulting experience with evolving standards, such as the Open Geospatial Consortium (OGC) Sensor Web Enablement (SWE) among others, will be featured. The role of sensor webs in support of the intergovernmental Group on Earth Observations' Global Earth Observation System of Systems (GEOSS) will also be discussed. The GEOSS vision is a distributed system of systems that builds on international components to supply observing and processing systems that are, in the whole, comprehensive, coordinated and sustained. Sensor web prototypes are under development to demonstrate how remote sensing satellite data, in situ sensor networks and decision support systems collaborate in applications of interest to GEO, such as flood monitoring. Furthermore, the international Committee on Earth Observation Satellites (CEOS) has stepped up to the challenge to provide the space-based systems component for GEOSS. CEOS has proposed "virtual constellations" to address emerging data gaps in environmental monitoring, avoid overlap among observing systems, and make maximum use of existing space and ground assets. 
Exploratory applications that support the objectives of virtual constellations will also be discussed as a future role for sensor webs.

  7. Virtual Instrument for Emissions Measurement of Internal Combustion Engines

    PubMed Central

    Pérez, Armando; Montero, Gisela; Coronado, Marcos; García, Conrado; Pérez, Rubén

    2016-01-01

    Systems for measuring gas emissions from internal combustion engines are currently demanding and expensive. For this reason, a virtual instrument was developed to measure the combustion emissions from an internal combustion diesel engine running on diesel-biodiesel mixtures. This software, called the virtual instrument for emissions measurement (VIEM), was developed on the LabVIEW 2010® virtual programming platform. VIEM works with sensors connected to a signal conditioning system, and a data acquisition system is used as the interface to a computer in order to measure and monitor in real time the emissions of O2, NO, CO, SO2, and CO2 gases. This paper shows the results of the VIEM programming, the integrated circuit diagrams used for signal conditioning of the sensors, and the characterization of the O2, NO, CO, SO2, and CO2 sensors. VIEM is a low-cost instrument that is simple and easy to use. It is also scalable, making it flexible and user-configurable. PMID:27034893

  8. Digital Signal Processing by Virtual Instrumentation of a MEMS Magnetic Field Sensor for Biomedical Applications

    PubMed Central

    Juárez-Aguirre, Raúl; Domínguez-Nicolás, Saúl M.; Manjarrez, Elías; Tapia, Jesús A.; Figueras, Eduard; Vázquez-Leal, Héctor; Aguilera-Cortés, Luz A.; Herrera-May, Agustín L.

    2013-01-01

    We present a signal processing system with virtual instrumentation of a MEMS sensor to detect magnetic flux density for biomedical applications. This system consists of a magnetic field sensor, electronic components implemented on a printed circuit board (PCB), a data acquisition (DAQ) card, and a virtual instrument. It allows the development of a semi-portable prototype with the capacity to filter small electromagnetic interference signals through digital signal processing. The virtual instrument includes an algorithm to implement different configurations of infinite impulse response (IIR) filters. The PCB contains a precision instrumentation amplifier, a demodulator, a low-pass filter (LPF) and a buffer with operational amplifier. The proposed prototype is used for real-time non-invasive monitoring of magnetic flux density in the thoracic cage of rats. The response of the rat respiratory magnetogram displays a similar behavior as the rat electromyogram (EMG). PMID:24196434

  9. Digital signal processing by virtual instrumentation of a MEMS magnetic field sensor for biomedical applications.

    PubMed

    Juárez-Aguirre, Raúl; Domínguez-Nicolás, Saúl M; Manjarrez, Elías; Tapia, Jesús A; Figueras, Eduard; Vázquez-Leal, Héctor; Aguilera-Cortés, Luz A; Herrera-May, Agustín L

    2013-11-05

    We present a signal processing system with virtual instrumentation of a MEMS sensor to detect magnetic flux density for biomedical applications. This system consists of a magnetic field sensor, electronic components implemented on a printed circuit board (PCB), a data acquisition (DAQ) card, and a virtual instrument. It allows the development of a semi-portable prototype with the capacity to filter small electromagnetic interference signals through digital signal processing. The virtual instrument includes an algorithm to implement different configurations of infinite impulse response (IIR) filters. The PCB contains a precision instrumentation amplifier, a demodulator, a low-pass filter (LPF) and a buffer with operational amplifier. The proposed prototype is used for real-time non-invasive monitoring of magnetic flux density in the thoracic cage of rats. The response of the rat respiratory magnetogram displays a similar behavior as the rat electromyogram (EMG).

  10. Modular mechatronic system for stationary bicycles interfaced with virtual environment for rehabilitation.

    PubMed

    Ranky, Richard G; Sivak, Mark L; Lewis, Jeffrey A; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos

    2014-06-05

    Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges with implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirement for efficacious cycling; specifically recruitment of both extremities and exercising at a high intensity. In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially-available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition processes. The complete bicycle system includes: a) handle bars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate and d) customized software for rehabilitation. Bench testing for the handle and pedal systems is presented for calibration of the sensors detecting force and angle. The modular mechatronic kit for exercise bicycles was tested in bench testing and human tests. Bench tests performed on the sensorized handle bars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the rider's lower extremities. The VRACK system, a modular virtual reality mechatronic bicycle rehabilitation system, was designed to convert most bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system was successful in demonstrating that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders.

  11. Ubiquitous virtual private network: a solution for WSN seamless integration.

    PubMed

    Villa, David; Moya, Francisco; Villanueva, Félix Jesús; Aceña, Óscar; López, Juan Carlos

    2014-01-06

    Sensor networks are becoming an essential part of ubiquitous systems and applications. However, there are no well-defined protocols or mechanisms to access the sensor network from the enterprise information system. We consider this issue as a heterogeneous network interconnection problem, and as a result, the same concepts may be applied. Specifically, we propose the use of object-oriented middlewares to provide a virtual private network in which all involved elements (sensor nodes or computer applications) will be able to communicate as if all of them were in a single and uniform network.

  12. Two-Time Scale Virtual Sensor Design for Vibration Observation of a Translational Flexible-Link Manipulator Based on Singular Perturbation and Differential Games

    PubMed Central

    Ju, Jinyong; Li, Wei; Wang, Yuqiao; Fan, Mengbao; Yang, Xuefeng

    2016-01-01

    Effective feedback control requires full state-variable information of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM with an unlimited number of sensors. Considering the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible-link manipulator, a two-time-scale virtual sensor, comprising a speed observer and a vibration observer, is designed to estimate the vibration signals of the TFM and their time derivatives; the speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time-scale virtual sensor are optimized to minimize the estimation error while keeping the observer stable. Finally, numerical calculation and experiment verify the efficiency of the designed two-time-scale virtual sensor. PMID:27801840

  13. Open Source Virtual Worlds and Low Cost Sensors for Physical Rehab of Patients with Chronic Diseases

    NASA Astrophysics Data System (ADS)

    Romero, Salvador J.; Fernandez-Luque, Luis; Sevillano, José L.; Vognild, Lars

    For patients with chronic diseases, exercise is a key part of rehabilitation that helps them deal better with their illness. Some of them do rehabilitation at home with telemedicine systems. However, keeping to their exercise program is challenging, and many abandon the rehabilitation. We postulate that information technologies for socializing and serious games can encourage patients to keep doing physical exercise and rehabilitation. In this paper we present Virtual Valley, a low-cost telemedicine system for home exercising, based on open source virtual worlds and utilizing popular low-cost motion controllers (e.g. Wii Remote) and medical sensors. Virtual Valley allows patients to socialize, learn, and play group-based serious games while exercising.

  14. Virtual Sensor for Failure Detection, Identification and Recovery in the Transition Phase of a Morphing Aircraft

    PubMed Central

    Heredia, Guillermo; Ollero, Aníbal

    2010-01-01

    The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take-off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage, and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual sensor based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wings aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise, sampling characteristics, and turbulence and wind perturbations. PMID:22294922

  15. Virtual sensor for failure detection, identification and recovery in the transition phase of a morphing aircraft.

    PubMed

    Heredia, Guillermo; Ollero, Aníbal

    2010-01-01

    The Helicopter Adaptive Aircraft (HADA) is a morphing aircraft which is able to take-off as a helicopter and, when in forward flight, unfold the wings that are hidden under the fuselage, and transfer the power from the main rotor to a propeller, thus morphing from a helicopter to an airplane. In this process, the reliable folding and unfolding of the wings is critical, since a failure may determine the ability to perform a mission, and may even be catastrophic. This paper proposes a virtual sensor based Fault Detection, Identification and Recovery (FDIR) system to increase the reliability of the HADA aircraft. The virtual sensor is able to capture the nonlinear interaction between the folding/unfolding wings aerodynamics and the HADA airframe using the navigation sensor measurements. The proposed FDIR system has been validated using a simulation model of the HADA aircraft, which includes real phenomena such as sensor noise, sampling characteristics, and turbulence and wind perturbations.

  16. Compact and high resolution virtual mouse using lens array and light sensor

    NASA Astrophysics Data System (ADS)

    Qin, Zong; Chang, Yu-Cheng; Su, Yu-Jie; Huang, Yi-Pai; Shieh, Han-Ping David

    2016-06-01

    A virtual mouse based on an IR source, a lens array, and a light sensor was designed and implemented. The optical architecture, including lens count, lens pitch, baseline length, sensor length, lens-sensor gap, and focal length, was carefully designed to achieve low detection error and high resolution while keeping the system volume compact. The system volume is 3.1mm (thickness) × 4.5mm (length) × 2, which is much smaller than that of a camera-based device. A relative detection error of 0.41mm and a minimum resolution of 26ppi were verified in experiments, so the device can replace a conventional touchpad/touchscreen. If the system thickness is relaxed to 20mm, a resolution higher than 200ppi can be achieved, allowing it to replace a real mouse.

  17. VERDEX: A virtual environment demonstrator for remote driving applications

    NASA Technical Reports Server (NTRS)

    Stone, Robert J.

    1991-01-01

    One of the key areas of the National Advanced Robotics Centre's enabling technologies research program is that of the human system interface, phase 1 of which started in July 1989 and is currently addressing the potential of virtual environments to permit intuitive and natural interactions between a human operator and a remote robotic vehicle. The aim of the first 12 months of this program (to September, 1990) is to develop a virtual human-interface demonstrator for use later as a test bed for human factors experimentation. This presentation will describe the current state of development of the test bed, and will outline some human factors issues and problems for more general discussion. In brief, the virtual telepresence system for remote driving has been designed to take the following form. The human operator will be provided with a helmet-mounted stereo display assembly, facilities for speech recognition and synthesis (using the Marconi Macrospeak system), and a VPL DataGlove Model 2 unit. The vehicle to be used for the purposes of remote driving is a Cybermotion Navmaster K2A system, which will be equipped with a stereo camera and microphone pair, mounted on a motorized high-speed pan-and-tilt head incorporating a closed-loop laser ranging sensor for camera convergence control (currently under contractual development). It will be possible to relay information to and from the vehicle and sensory system via an umbilical or RF link. The aim is to develop an interactive audio-visual display system capable of presenting combined stereo TV pictures and virtual graphics windows, the latter featuring control representations appropriate for vehicle driving and interaction using a graphical 'hand,' slaved to the flex and tracking sensors of the DataGlove and an additional helmet-mounted Polhemus IsoTrack sensor. Developments planned for the virtual environment test bed include transfer of operator control between remote driving and remote manipulation, dexterous end effector integration, virtual force and tactile sensing (also the focus of a current ARRL contract, initially employing a 14-pneumatic bladder glove attachment), and sensor-driven world modeling for total virtual environment generation and operator-assistance in remote scene interrogation.

  18. a New ER Fluid Based Haptic Actuator System for Virtual Reality

    NASA Astrophysics Data System (ADS)

    Böse, H.; Baumann, M.; Monkman, G. J.; Egersdörfer, S.; Tunayar, A.; Freimuth, H.; Ermert, H.; Khaled, W.

    The concept and some steps in the development of a new actuator system which enables the haptic perception of mechanically inhomogeneous virtual objects are introduced. The system consists of a two-dimensional planar array of actuator elements containing an electrorheological (ER) fluid. When a user presses his fingers onto the surface of the actuator array, he perceives locally variable resistance forces generated by vertical pistons which slide in the ER fluid through the gaps between electrode pairs. The voltage in each actuator element can be individually controlled by a novel sophisticated switching technology based on optoelectric gallium arsenide elements. The haptic information which is represented at the actuator array can be transferred from a corresponding sensor system based on ultrasonic elastography. The combined sensor-actuator system may serve as a technology platform for various applications in virtual reality, like telemedicine where the information on the consistency of tissue of a real patient is detected by the sensor part and recorded by the actuator part at a remote location.

  19. Ubiquitous Virtual Private Network: A Solution for WSN Seamless Integration

    PubMed Central

    Villa, David; Moya, Francisco; Villanueva, Félix Jesús; Aceña, Óscar; López, Juan Carlos

    2014-01-01

    Sensor networks are becoming an essential part of ubiquitous systems and applications. However, there are no well-defined protocols or mechanisms to access the sensor network from the enterprise information system. We consider this issue as a heterogeneous network interconnection problem, and as a result, the same concepts may be applied. Specifically, we propose the use of object-oriented middlewares to provide a virtual private network in which all involved elements (sensor nodes or computer applications) will be able to communicate as if all of them were in a single and uniform network. PMID:24399154

  20. Virtual microphone sensing through vibro-acoustic modelling and Kalman filtering

    NASA Astrophysics Data System (ADS)

    van de Walle, A.; Naets, F.; Desmet, W.

    2018-05-01

    This work proposes a virtual microphone methodology which enables full-field acoustic measurements for vibro-acoustic systems. The methodology employs a Kalman filtering framework to combine a reduced high-fidelity vibro-acoustic model with a structural excitation measurement and a small set of real microphone measurements on the system under investigation. By employing model order reduction techniques, a high-order finite element model can be converted into a much smaller model which preserves the desired accuracy and maintains the main physical properties of the original model. Due to its low order, the reduced-order model can be used effectively in a Kalman filter. The proposed methodology is validated experimentally on a strongly coupled vibro-acoustic system. The virtual sensor vastly improves accuracy with respect to a regular forward simulation, and it also allows the full sound field of the system to be recreated, which is very difficult or impossible to achieve through classical measurements.
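
    The Kalman-filter construction described above can be illustrated with a short sketch. The state-space matrices A, B, C_meas and C_virt below stand in for a reduced-order vibro-acoustic model and its output maps to the real and virtual microphone positions; they are placeholders rather than quantities taken from the paper, so this is only a minimal illustration of the estimation step.

      # Hedged sketch: estimate "virtual microphone" signals by fusing a reduced-order
      # model with a few real microphones in a Kalman filter. A, B, C_meas, C_virt,
      # Q and R are placeholders for model and noise matrices, not values from the paper.
      import numpy as np

      def kalman_virtual_mics(A, B, C_meas, C_virt, u, y_meas, Q, R):
          """u: (T, m) measured excitation; y_meas: (T, p) real microphone signals."""
          n = A.shape[0]
          x = np.zeros(n)                      # reduced-order state estimate
          P = np.eye(n)                        # state covariance
          y_virt = []
          for k in range(len(u)):
              # prediction with the reduced vibro-acoustic model
              x = A @ x + B @ u[k]
              P = A @ P @ A.T + Q
              # correction with the small set of real microphones
              S = C_meas @ P @ C_meas.T + R
              K = P @ C_meas.T @ np.linalg.inv(S)
              x = x + K @ (y_meas[k] - C_meas @ x)
              P = (np.eye(n) - K @ C_meas) @ P
              # map the estimated state to the unmeasured (virtual) microphone positions
              y_virt.append(C_virt @ x)
          return np.array(y_virt)

    Because the state vector is low-order, such a filter can run at the measurement rate, which is what makes full-field reconstruction practical in this setting.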

  1. Avatar - a multi-sensory system for real time body position monitoring.

    PubMed

    Jovanov, E; Hanish, N; Courson, V; Stidham, J; Stinson, H; Webb, C; Denny, K

    2009-01-01

    Virtual reality and computer-assisted physical rehabilitation applications require unobtrusive and inexpensive real-time monitoring systems. Existing systems are usually complex and expensive and are based on infrared monitoring. In this paper we propose Avatar, a hybrid system consisting of off-the-shelf components and sensors. Absolute positioning of a few reference points is determined using infrared diodes on the subject's body and a set of Wii Remotes as optical sensors. Individual body segments are monitored by intelligent inertial sensor nodes (iSense). A network of inertial nodes is controlled by a master node that serves as a gateway for communication with a capture device. Each sensor node features a 3D accelerometer and a 2-axis gyroscope. The Avatar system is used to control avatars in Virtual Reality applications, but could also be used in a variety of augmented reality, gaming, and computer-assisted physical rehabilitation applications.

  2. Modular mechatronic system for stationary bicycles interfaced with virtual environment for rehabilitation

    PubMed Central

    2014-01-01

    Background Cycling has been used in the rehabilitation of individuals with both chronic and post-surgical conditions. Among the challenges of implementing bicycling for rehabilitation is the recruitment of both extremities, in particular when one is weaker or less coordinated. Feedback embedded in virtual reality (VR) augmented cycling may serve to address the requirements for efficacious cycling, specifically recruitment of both extremities and exercising at a high intensity. Methods In this paper a mechatronic rehabilitation bicycling system with an interactive virtual environment, called the Virtual Reality Augmented Cycling Kit (VRACK), is presented. Novel hardware components embedded with sensors were implemented on a stationary exercise bicycle to monitor physiological and biomechanical parameters of participants while immersing them in an augmented reality simulation providing the user with visual, auditory and haptic feedback. This modular and adaptable system attaches to commercially available stationary bicycle systems and interfaces with a personal computer for simulation and data acquisition. The complete bicycle system includes: a) handlebars based on hydraulic pressure sensors; b) pedals that monitor pedal kinematics with an inertial measurement unit (IMU) and forces on the pedals while providing vibratory feedback; c) off-the-shelf electronics to monitor heart rate; and d) customized software for rehabilitation. Bench testing of the handle and pedal systems is presented for calibration of the sensors detecting force and angle. Results The modular mechatronic kit for exercise bicycles was evaluated in bench and human tests. Bench tests performed on the sensorized handlebars and the instrumented pedals validated the measurement accuracy of these components. Rider tests with the VRACK system focused on the pedal system and successfully monitored kinetic and kinematic parameters of the riders' lower extremities. Conclusions The VRACK system, a modular virtual reality mechatronic bicycle rehabilitation system, was designed to convert most bicycles into virtual reality (VR) cycles. Preliminary testing of the augmented reality bicycle system demonstrated that a modular mechatronic kit can monitor and record kinetic and kinematic parameters of several riders. PMID:24902780

  3. ROI-Orientated Sensor Correction Based on Virtual Steady Reimaging Model for Wide Swath High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.

    2017-09-01

    To meet the requirements of high-accuracy and high-speed processing of wide swath high resolution optical satellite imagery under emergency situations, in both ground and on-board processing systems, this paper proposes an ROI-orientated sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated from the coordinate mapping relationship established by the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experiments show that image registration between the panchromatic and multispectral images is well achieved and that image distortion caused by satellite jitter is also corrected efficiently.

  4. Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images

    PubMed Central

    Izquierdo, Alberto; Suárez, Luis; Suárez, David

    2017-01-01

    Using arrays of digital MEMS (Micro-Electro-Mechanical System) microphones with FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows systems with hundreds of sensors to be built at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. The virtual array is implemented by moving a physical array of 64 (8 × 8) microphones over a grid of 10 × 10 positions using a 2D positioning system, giving the virtual array a spatial aperture of 1 × 1 m2. Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array were analyzed; because the array is large in comparison with the distance between the target (a mannequin) and the array, the beamforming algorithms must assume spherical rather than planar wavefronts. Finally, acoustic images of the mannequin were obtained for different frequency and range values, showing high angular resolution and the possibility of identifying different parts of the mannequin's body. PMID:29295485
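
    As a rough illustration of the spherical-wave (near-field) focusing referred to above, the sketch below implements a plain delay-and-sum beamformer that focuses a planar array on a chosen point. The speed of sound, sampling rate and geometry are illustrative assumptions, and the actual SODAR processing chain of the paper is not reproduced here.

      # Hedged sketch of near-field delay-and-sum focusing for a planar microphone array.
      # Geometry, sampling rate and signals are illustrative, not the parameters of the
      # 80 x 80 virtual array described in the abstract.
      import numpy as np

      C_SOUND = 343.0  # speed of sound in air, m/s

      def focus_delay_and_sum(signals, mic_xyz, focus_xyz, fs):
          """signals: (n_mics, n_samples); mic_xyz: (n_mics, 3); focus_xyz: (3,)."""
          dists = np.linalg.norm(mic_xyz - np.asarray(focus_xyz), axis=1)  # mic-to-focus ranges
          delays = (dists - dists.min()) / C_SOUND          # relative spherical-wave delays, s
          shifts = np.round(delays * fs).astype(int)        # integer-sample advances
          n = signals.shape[1] - shifts.max()
          # advance each channel so a wavefront radiated from the focus adds coherently
          aligned = np.stack([signals[m, shifts[m]:shifts[m] + n]
                              for m in range(signals.shape[0])])
          return aligned.mean(axis=0)

    Scanning the focus point over a grid of candidate positions and mapping the output power at each point is one simple way to form an acoustic image of the kind described above.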

  5. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.

    PubMed

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2011-06-01

    This paper presents the design of a tele-robotic microsurgical platform for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.

  6. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery

    PubMed Central

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2013-01-01

    This paper presents the design of a tele-robotic microsurgical platform for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information. PMID:24398557

  7. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large scale experiments, this behaviour slows research discovery, discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  8. A Miniature System for Separating Aerosol Particles and Measuring Mass Concentrations

    PubMed Central

    Liang, Dao; Shih, Wen-Pin; Chen, Chuin-Shan; Dai, Chi-An

    2010-01-01

    We designed and fabricated a new sensing system consisting of two virtual impactors and two quartz-crystal microbalance (QCM) sensors for measuring particle mass concentration and size distribution. The virtual impactors utilize the different inertial forces of particles in an air flow to classify particle sizes. They were designed to classify the particle diameter, d, into three ranges: d < 2.28 μm, 2.28 μm ≤ d ≤ 3.20 μm, and d > 3.20 μm. The QCM sensors were coated with a hydrogel, which was found to be a reliable adhesive for capturing aerosol particles. The hydrogel-coated QCM sensor measures the mass loading of captured particles through the resulting shift in its resonant frequency. An integrated system has been demonstrated. PMID:22319317
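
    The QCM measurement principle mentioned above, mass loading inferred from a resonant-frequency shift, is commonly expressed with the Sauerbrey relation. The abstract does not give crystal parameters, so the sketch below uses standard AT-cut quartz constants and an illustrative example crystal as assumptions.

      # Hedged sketch: convert a QCM frequency shift into deposited mass with the
      # Sauerbrey relation. Crystal constants and the example numbers are standard
      # illustrative values, not parameters reported in the abstract.
      import math

      RHO_Q = 2.648e3   # density of quartz, kg/m^3
      MU_Q  = 2.947e10  # shear modulus of AT-cut quartz, kg/(m*s^2)

      def sauerbrey_mass(delta_f_hz, f0_hz, area_m2):
          """Return the mass loading (kg) implied by a frequency shift (negative for loading)."""
          sensitivity = 2.0 * f0_hz ** 2 / (area_m2 * math.sqrt(RHO_Q * MU_Q))  # Hz per kg
          return -delta_f_hz / sensitivity

      # e.g. a 10 MHz crystal with 0.2 cm^2 active area and a -5 Hz shift
      print(sauerbrey_mass(-5.0, 10e6, 0.2e-4))   # about 4.4e-12 kg (~4 ng)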

  9. Virtual sensors for robust on-line monitoring (OLM) and Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep

    Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration, thus allowing recalibration to be safely deferred to a later time. The virtual sensor uses a Gaussian process model to process input data from redundant and other nearby sensors. Predicted data include uncertainty bounds covering spatial association uncertainty as well as measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model predicted the correct sensor measurements, and the associated error, corresponding to a faulty sensor.
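
    A minimal sketch of the Gaussian-process idea described above is given below: a GP regressor trained on redundant and nearby sensors predicts the reading of a (possibly faulty) sensor together with an uncertainty band. The RBF kernel, its hyperparameters and the synthetic flow-loop data are illustrative assumptions, not the model reported by the authors.

      # Hedged sketch of a Gaussian-process virtual sensor: predict one sensor's
      # reading, with uncertainty, from correlated healthy sensors. Kernel choice,
      # hyperparameters and the synthetic data are illustrative assumptions.
      import numpy as np

      def rbf_kernel(X1, X2, length=1.0, var=1.0):
          d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
          return var * np.exp(-0.5 * d2 / length ** 2)

      def gp_predict(X_train, y_train, X_new, noise=1e-2):
          """Return the GP posterior mean and standard deviation at X_new."""
          K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
          Ks = rbf_kernel(X_new, X_train)
          Kss = rbf_kernel(X_new, X_new)
          mean = Ks @ np.linalg.solve(K, y_train)
          cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
          return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

      # inputs: readings of redundant/nearby sensors; target: the sensor to be replaced
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))                                    # three healthy sensors
      y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.normal(size=200)  # signal of the faulty sensor
      mean, std = gp_predict(X[:150], y[:150], X[150:])                # virtual readings + error bars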

  10. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    PubMed Central

    Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles

    2017-01-01

    The control of flexible-link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformation of the limbs make estimating the Tool Centre Point (TCP) position challenging. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or high-computational-cost integration methods. This work presents a novel approach based on a virtual sensor which, according to simulation results, can precisely estimate not only the deformation of the flexible links in control applications (less than 2% error) but also its derivatives (less than 6% error in velocity and 13% error in acceleration). The validity of the proposed Virtual Sensor is tested on a Delta Robot, where the position of the TCP is estimated from the Virtual Sensor measurements with less than 0.03% error in comparison with the flexible approach developed in the ADAMS multibody software. PMID:28832510

  11. Robust controller designs for second-order dynamic system: A virtual passive approach

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Phan, Minh

    1990-01-01

    A robust controller design is presented for second-order dynamic systems. The controller is model-independent and itself is a virtual second-order dynamic system. Conditions on actuator and sensor placements are identified for controller designs that guarantee overall closed-loop stability. The dynamic controller can be viewed as a virtual passive damping system that serves to stabilize the actual dynamic system. The control gains are interpreted as virtual mass, spring, and dashpot elements that play the same roles as actual physical elements in stability analysis. Position, velocity, and acceleration feedback are considered. Simple examples are provided to illustrate the physical meaning of this controller design.
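
    A toy simulation can make the "virtual passive" interpretation above concrete: position and velocity feedback gains act like a spring and dashpot attached to the plant, so the closed loop behaves like an augmented passive mechanical system. The plant values and gains below are illustrative placeholders, not values from the paper.

      # Hedged sketch: position/velocity feedback interpreted as a virtual spring (kv)
      # and dashpot (cv) added to a second-order plant. Plant parameters and gains
      # are illustrative placeholders.
      import numpy as np

      m, c, k = 1.0, 0.02, 4.0    # lightly damped plant: m*x'' + c*x' + k*x = u
      kv, cv = 2.0, 1.5           # virtual spring and dashpot gains

      def simulate(use_controller=True, T=20.0, dt=1e-3, x0=1.0, v0=0.0):
          x, v, traj = x0, v0, []
          for _ in range(int(T / dt)):
              u = -(kv * x + cv * v) if use_controller else 0.0   # virtual passive feedback
              a = (u - c * v - k * x) / m
              v += a * dt                                          # semi-implicit Euler step
              x += v * dt
              traj.append(x)
          return np.array(traj)

      open_loop = simulate(use_controller=False)    # slow decay of the initial disturbance
      closed_loop = simulate(use_controller=True)   # added virtual damping settles it quickly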

  12. Software as a service approach to sensor simulation software deployment

    NASA Astrophysics Data System (ADS)

    Webster, Steven; Miller, Gordon; Mayott, Gregory

    2012-05-01

    Traditionally, military simulation has been problem-domain specific. Executing an exercise currently requires multiple simulation software providers to specialize, deploy, and configure their respective implementations, integrate the collection of software to achieve a specific system behavior, and then execute it for the purpose at hand. This approach leads to rigid system integrations which require simulation expertise for each deployment due to changes in location, hardware, and software. Our alternative is Software as a Service (SaaS), predicated on the virtualization of Night Vision Electronic Sensors (NVESD) sensor simulations as an exemplary case. Management middleware elements layer self-provisioning, configuration, and integration services onto the virtualized sensors to present a system of services at run time. Given an Infrastructure as a Service (IaaS) environment, such an enabled and managed system of simulations yields durable SaaS delivery without requiring user simulation expertise. Persistent SaaS simulations would provide on-demand availability to connected users, decrease integration costs and timelines, and allow the domain community to benefit from immediate deployment of lessons learned.

  13. Air-condition Control System of Weaving Workshop Based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Song, Jian

    A LabVIEW-based air-conditioning measurement and control system is put forward to effectively control the environmental targets in a weaving workshop. The system is built on virtual instrument technology using the LabVIEW development platform from NI, and is composed of an upper PC, central control nodes based on the CC2530, sensor nodes, sensor modules and executive (actuator) devices. A fuzzy control algorithm is employed to achieve accurate control of temperature and humidity. A user-friendly man-machine interface is designed with virtual instrument technology at the core of the software. Experiments show that the measurement and control system runs stably and reliably and meets the functional requirements for controlling the weaving workshop.
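
    As a hedged illustration of the fuzzy control step mentioned above, the sketch below implements a tiny Sugeno-style rule base for the temperature loop with weighted-average defuzzification. The membership functions, rules and output levels are assumptions made for illustration, not those of the LabVIEW system.

      # Minimal fuzzy-control sketch (Sugeno-style, weighted-average defuzzification).
      # Membership functions, rules and valve openings are illustrative assumptions.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def cooling_command(temp_error_c):
          """temp_error_c = measured temperature minus setpoint, in deg C."""
          mu = {
              "cold": tri(temp_error_c, -6.0, -3.0, 0.0),
              "ok":   tri(temp_error_c, -2.0,  0.0, 2.0),
              "hot":  tri(temp_error_c,  0.0,  3.0, 6.0),
          }
          out = {"cold": 0.0, "ok": 30.0, "hot": 90.0}   # rule outputs: cooling valve opening, %
          den = sum(mu.values())
          return sum(mu[k] * out[k] for k in mu) / den if den > 0 else out["ok"]

      print(cooling_command(1.5))   # partly "ok", partly "hot" -> about 70% opening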

  14. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    NASA Technical Reports Server (NTRS)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and has the sense of being transported into it. The NASA/MSFC Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak Polhemus sensor, two Fastrak Polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is its reliance on RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C language interface. When using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.

  15. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving the detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data are transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it is natural to display this information in 3D. The 3D graphical representation also depicts essential details within and around the protected volume in a way that is natural for human perception. Sensor information can be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful human ability to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper details both the VVMD and VR technologies and discusses a prototype system based upon their integration.

  16. Virtual Collaboration: Advantages and Disadvantages in the Planning and Execution of Operations in the Information Age

    DTIC Science & Technology

    2004-02-09

    Naval War College, Newport, R.I. Abstract fragment (recovered from the report documentation page): "…warfare is not one system; it is a system of systems from sensors to information flow. In analyzing the specific advantages and disadvantages of one of…"

  17. Intelligent approach to prognostic enhancements of diagnostic systems

    NASA Astrophysics Data System (ADS)

    Vachtsevanos, George; Wang, Peng; Khiripet, Noppadon; Thakker, Ash; Galie, Thomas R.

    2001-07-01

    This paper introduces a novel methodology for prognostics based on a dynamic wavelet neural network construct and notions from the virtual sensor area. This research has been motivated and supported by the U.S. Navy's active interest in integrating advanced diagnostic and prognostic algorithms into existing Naval digital control and monitoring systems. A rudimentary diagnostic platform is assumed to be available, providing timely information about incipient or impending failure conditions. We focus on the development of a prognostic algorithm capable of predicting accurately and reliably the remaining useful lifetime of a failing machine or component. The prognostic module consists of a virtual sensor and a dynamic wavelet neural network as the predictor. The virtual sensor employs process data to map real measurements into difficult-to-monitor fault quantities. The prognosticator uses the dynamic wavelet neural network as a nonlinear predictor. Means to manage uncertainty and performance metrics are suggested for comparison purposes. An interface to an available shipboard Integrated Condition Assessment System is described and applications to shipboard equipment are discussed. Typical results from pump failures are presented to illustrate the effectiveness of the methodology.

  18. Augmented reality visualization of deformable tubular structures for surgical simulation.

    PubMed

    Ferrari, Vincenzo; Viglialoro, Rosanna Maria; Nicoli, Paola; Cutolo, Fabrizio; Condino, Sara; Carbone, Marina; Siesto, Mentore; Ferrari, Mauro

    2016-06-01

    Surgical simulation based on augmented reality (AR), mixing the benefits of physical and virtual simulation, represents a step forward in surgical training. However, available systems are unable to update the virtual anatomy following deformations impressed on actual anatomy. A proof-of-concept solution is described providing AR visualization of hidden deformable tubular structures using nitinol tubes sensorized with electromagnetic sensors. This system was tested in vitro on a setup comprised of sensorized cystic, left and right hepatic, and proper hepatic arteries. In the trial session, the surgeon deformed the tubular structures with surgical forceps in 10 positions. The mean, standard deviation, and maximum misalignment between virtual and real arteries were 0.35, 0.22, and 0.99 mm, respectively. The alignment accuracy obtained demonstrates the feasibility of the approach, which can be adopted in advanced AR simulations, in particular as an aid to the identification and isolation of tubular structures. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Virtual Passive Controller for Robot Systems Using Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    This paper presents a control method based on virtual passive dynamic control that will stabilize a robot manipulator using joint torque sensors and a simple joint model. The method does not require joint position or velocity feedback for stabilization. The proposed control method is stable in the sense of Lyapunov. The control method was implemented on several joints of a laboratory robot. The controller showed good stability robustness to system parameter error and to the exclusion of nonlinear dynamic effects on the joints. The controller enhanced position tracking performance and, in the absence of position control, dissipated joint energy.

  20. Design and Development of Card-Sized Virtual Keyboard Using Permanent Magnets and Hall Sensors

    NASA Astrophysics Data System (ADS)

    Demachi, Kazuyuki; Ohyama, Makoto; Kanemoto, Yoshiki; Masaie, Issei

    This paper proposes a method to identify which keys are typed by human fingers fitted with small permanent magnets. Hall sensors arrayed within a credit-card-sized area sense the distribution of the magnetic field produced by the key-typing movements of the fingers, as if a keyboard were present, and the signals are analyzed using a genetic algorithm or a neural network algorithm to determine the typed keys. With this method, the keyboard can be miniaturized to credit-card size (54 mm × 85 mm). We call this system 'the virtual keyboard system'.
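
    The classification stage could be prototyped as below with a simple nearest-centroid rule over Hall-sensor snapshots; this is only a stand-in for the genetic-algorithm and neural-network classifiers the abstract mentions, and the data layout is an assumption.

      # Hedged sketch of key classification from a snapshot of Hall-sensor readings
      # using a nearest-centroid rule; a stand-in for the classifiers mentioned in
      # the abstract. The data layout is assumed.
      import numpy as np

      def train_centroids(snapshots, labels):
          """snapshots: (N, n_sensors) field readings; labels: key name per snapshot."""
          snapshots, labels = np.asarray(snapshots), np.asarray(labels)
          return {key: snapshots[labels == key].mean(axis=0) for key in np.unique(labels)}

      def classify(snapshot, centroids):
          """Return the key whose training centroid is closest to this snapshot."""
          return min(centroids, key=lambda key: np.linalg.norm(snapshot - centroids[key]))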

  1. Autonomous Satellite Operations Via Secure Virtual Mission Operations Center

    NASA Technical Reports Server (NTRS)

    Miller, Eric; Paulsen, Phillip E.; Pasciuto, Michael

    2011-01-01

    The science community is interested in improving their ability to respond to rapidly evolving, transient phenomena via autonomous rapid reconfiguration, which derives from the ability to assemble separate but collaborating sensors and data forecasting systems to meet a broad range of research and application needs. Current satellite systems typically require human intervention to respond to triggers from dissimilar sensor systems. Additionally, satellite ground services often need to be coordinated days or weeks in advance. Finally, the boundaries between the various sensor systems that make up such a Sensor Web are defined by such things as link delay and connectivity, data and error rate asymmetry, data reliability, quality of service provisions, and trust, complicating autonomous operations. Over the past ten years, researchers from the NASA Glenn Research Center (GRC), General Dynamics, Surrey Satellite Technology Limited (SSTL), Cisco, Universal Space Networks (USN), the U.S. Geological Survey (USGS), the Naval Research Laboratory, the DoD Operationally Responsive Space (ORS) Office, and others have worked collaboratively to develop a virtual mission operations capability. Called VMOC (Virtual Mission Operations Center), this new capability allows cross-system queuing of dissimilar mission unique systems through the use of a common security scheme and published application programming interfaces (APIs). Collaborative VMOC demonstrations over the last several years have supported the standardization of spacecraft to ground interfaces needed to reduce costs, maximize space effects to the user, and allow the generation of new tactics, techniques and procedures that lead to responsive space employment.

  2. Sensor supervision and multiagent commanding by means of projective virtual reality

    NASA Astrophysics Data System (ADS)

    Rossmann, Juergen

    1998-10-01

    When autonomous systems with multiple agents are considered, conventional control and supervision technologies are often inadequate because the available information is presented in a way that effectively overwhelms the user with displayed data. New virtual reality (VR) techniques can help to cope with this problem, because VR offers the chance to convey information in an intuitive manner and can combine supervision capabilities with new, intuitive approaches to the control of autonomous systems. In the approach taken, control and supervision issues were equally stressed, which finally led to the new ideas and the general framework of Projective Virtual Reality. The key idea of this new approach to an intuitively operable man-machine interface for decentrally controlled multi-agent systems is to let the user act in the virtual world, detect the changes, and have an action-planning component automatically generate task descriptions for the agents involved, so that actions carried out by the user in the virtual world are projected into the physical world, e.g. with the help of robots. Thus the Projective Virtual Reality approach splits the job between task deduction in the VR and task 'projection' onto the physical automation components by the automatic action-planning component. Besides describing the realized projective virtual reality system, the paper also describes in detail the metaphors and visualization aids used to present different types of (e.g. sensor) information in an intuitively comprehensible manner.

  3. A miniature disposable radio (MiDR) for unattended ground sensor systems (UGSS) and munitions

    NASA Astrophysics Data System (ADS)

    Wells, Jeffrey S.; Wurth, Timothy J.

    2004-09-01

    Unattended and tactical sensors are used by the U.S. Army's Future Combat Systems (FCS) and Objective Force Warrior (OFW) to detect and identify enemy targets on the battlefield. The radios being developed as part of the Networked Sensors for the Objective Force (NSOF) are too costly and too large to deploy in missions requiring throw-away hardware. A low-cost miniature radio is required to satisfy the communication needs of unmanned sensor and munitions systems that are deployed in a disposable manner. A low-cost miniature disposable communications suite is leveraged from the commercial off-the-shelf market and employs a miniature universal frequency conversion architecture. Employing universal frequency architecture technology in a commercially available communication unit delivers a robust disposable transceiver that can operate at virtually any frequency. A low-cost RF communication radio has applicability in the commercial, homeland defense, military, and other government markets. Specific uses include perimeter monitoring, infrastructure defense, unattended ground sensors, tactical sensors, and border patrol. This paper describes a low-cost radio architecture to meet the requirements of throw-away radios that can be easily modified or tuned to virtually any operating frequency required for the specific mission.

  4. A Survey on Virtualization of Wireless Sensor Networks

    PubMed Central

    Islam, Md. Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam

    2012-01-01

    Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications, such as smart home automation, health care and industrial automation. In these applications multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over specific WSN domains, communication barriers, conflicting goals and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large-scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security and increase manageability. This paper surveys the novel approach of using large-scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges and opportunities of research in the field of sensor network virtualization, and to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization. PMID:22438759

  5. A survey on virtualization of Wireless Sensor Networks.

    PubMed

    Islam, Md Motaharul; Hassan, Mohammad Mehedi; Lee, Ga-Won; Huh, Eui-Nam

    2012-01-01

    Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications, such as smart home automation, health care and industrial automation. In these applications multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over specific WSN domains, communication barriers, conflicting goals and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large-scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security and increase manageability. This paper surveys the novel approach of using large-scale federated WSN resources in a sensor virtualization environment. Our focus in this paper is to introduce a few design goals, the challenges and opportunities of research in the field of sensor network virtualization, and to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization.

  6. Assessing Arthroscopic Skills Using Wireless Elbow-Worn Motion Sensors.

    PubMed

    Kirby, Georgina S J; Guyver, Paul; Strickland, Louise; Alvand, Abtin; Yang, Guang-Zhong; Hargrove, Caroline; Lo, Benny P L; Rees, Jonathan L

    2015-07-01

    Assessment of surgical skill is a critical component of surgical training. Approaches to assessment remain predominantly subjective, although more objective measures such as Global Rating Scales are in use. This study aimed to validate the use of elbow-worn, wireless, miniaturized motion sensors to assess the technical skill of trainees performing arthroscopic procedures in a simulated environment. Thirty participants were divided into three groups on the basis of their surgical experience: novices (n = 15), intermediates (n = 10), and experts (n = 5). All participants performed three standardized tasks on an arthroscopic virtual reality simulator while wearing wireless wrist and elbow motion sensors. Video output was recorded and a validated Global Rating Scale was used to assess performance; dexterity metrics were recorded from the simulator. Finally, live motion data were recorded via Bluetooth from the wireless wrist and elbow motion sensors and custom algorithms produced an arthroscopic performance score. Construct validity was demonstrated for all tasks, with Global Rating Scale scores and virtual reality output metrics showing significant differences between novices, intermediates, and experts (p < 0.001). The correlation of the virtual reality path length to the number of hand movements calculated from the wireless sensors was very high (p < 0.001). A comparison of the arthroscopic performance score levels with virtual reality output metrics also showed highly significant differences (p < 0.01). Comparisons of the arthroscopic performance score levels with the Global Rating Scale scores showed strong and highly significant correlations (p < 0.001) for both sensor locations, but those of the elbow-worn sensors were stronger and more significant (p < 0.001) than those of the wrist-worn sensors. A new wireless assessment of surgical performance system for objective assessment of surgical skills has proven valid for assessing arthroscopic skills. The elbow-worn sensors were shown to achieve an accurate assessment of surgical dexterity and performance. The validation of an entirely objective assessment of arthroscopic skill with wireless elbow-worn motion sensors introduces, for the first time, a feasible assessment system for the live operating theater with the added potential to be applied to other surgical and interventional specialties. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  7. Virtual Sensors for Advanced Controllers in Rehabilitation Robotics.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Portillo, Eva; Jung, Je Hyung

    2018-03-05

    In order to properly control rehabilitation robotic devices, measurement of the interaction force and motion between patient and robot is essential. Usually, however, this is a complex task that requires the use of accurate sensors, which increase the cost and complexity of the robotic device. In this work, we address the development of virtual sensors that can be used as an alternative to actual force and motion sensors for the Universal Haptic Pantograph (UHP) rehabilitation robot for upper-limb training. These virtual sensors estimate the force and motion at the contact point where the patient interacts with the robot, using the mathematical model of the robotic device and measurements from low-cost position sensors. To demonstrate the performance of the proposed virtual sensors, they have been implemented in an advanced position/force controller of the UHP rehabilitation robot and experimentally evaluated. The experimental results reveal that the controller based on the virtual sensors achieves performance similar to the one using direct measurement (less than 0.005 m and 1.5 N difference in mean error). Hence, the developed virtual sensors for estimating interaction force and motion can replace the precise but normally high-priced sensors that are fundamental components for advanced control of rehabilitation robotic devices.
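
    A very reduced sketch of the virtual-sensor idea is shown below: with a simplified device model and only sampled positions from low-cost sensors, the interaction force is estimated as the residual of the model dynamics. The single-axis mass-spring-damper model and its parameters are placeholders and do not represent the actual UHP dynamics.

      # Hedged sketch of a virtual force sensor: estimate the patient-robot interaction
      # force from position samples and a (deliberately simplified) device model.
      # The single-axis mass-spring-damper model and its parameters are placeholders.
      import numpy as np

      def estimate_interaction_force(x, u, dt, m=2.0, c=5.0, k=100.0):
          """x: sampled contact-point positions (m); u: commanded actuator force (N)."""
          v = np.gradient(x, dt)     # finite-difference velocity
          a = np.gradient(v, dt)     # finite-difference acceleration
          # model: m*a + c*v + k*x = u + f_interaction  ->  solve for f_interaction
          return m * a + c * v + k * x - u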

  8. Virtual Mission Operations of Remote Sensors With Rapid Access To and From Space

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Stewart, Dave; Walke, Jon; Dikeman, Larry; Sage, Steven; Miller, Eric; Northam, James; Jackson, Chris; Taylor, John; Lynch, Scott

    2010-01-01

    This paper describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the United Kingdom Disaster Monitoring Constellation (UK-DMC), is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to and from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  9. Virtual Simulation Capability for Deployable Force Protection Analysis (VSCDFP) FY 15 Plan

    DTIC Science & Technology

    2014-07-30

    Abstract fragments only: "…Unmanned Aircraft Systems (SUAS) outfitted with a baseline two-axis steerable 'Infini-spin' electro-optic/infrared (EO/IR) sensor payload. The current… Payload (EPRP) enhanced sensor system to the Puma SUAS will be beneficial for Soldiers executing RCP mission sets. • Develop the RCP EPRP Concept of…"

  10. ROBUST ONLINE MONITORING FOR CALIBRATION ASSESSMENT OF TRANSMITTERS AND INSTRUMENTATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Tipireddy, Ramakrishna; Lerchen, Megan E.

    Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. Specifically, the next generation of OLM technology is expected to include newly developed advanced algorithms that improve monitoring of sensor/system performance and enable the use of plant data to derive information that currently cannot be measured. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of the sensors used to monitor key parameters. In this paper, we give an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program on the development of OLM algorithms that use sensor outputs, in combination with other available information, to 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions: • Signal validation – fault detection and selection of acceptance criteria • Virtual sensing – signal value prediction and acceptance criteria • Response-time assessment – fault detection and acceptance criteria selection. A GP-based uncertainty quantification (UQ) method previously developed for UQ in OLM was adapted for use in sensor-fault detection and virtual sensing. For signal validation, the various components of the OLM residual (which is computed using an AAKR model) were explicitly defined and modeled using a GP. Evaluation was conducted using flow loop data from multiple sources. Results using experimental data from laboratory-scale flow loops indicate that the approach, while capable of detecting sensor drift, may be incapable of discriminating between sensor drift and model inadequacy. This may be due to a simplification applied in the initial modeling, where the sensor degradation is assumed to be stationary. In the case of virtual sensors, the GP model was used in a predictive mode to estimate the correct sensor reading for sensors that may have failed. Results have indicated the viability of using this approach for virtual sensing. However, the GP model has proven to be computationally expensive, and so alternative algorithms for virtual sensing are being evaluated. Finally, automated approaches to performing noise analysis for extracting sensor response time were developed. Evaluation of this technique using laboratory-scale data indicates that it compares well with the manual techniques previously used for noise analysis. Moreover, the automated and manual approaches for noise analysis also compare well with the current "gold standard", hydraulic ramp testing, for response time monitoring. Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM applicable to monitoring sensor calibration state in nuclear power plants.
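
    The AAKR model mentioned above, which is used to form the OLM residuals, can be sketched in a few lines: each new observation is reconstructed as a kernel-weighted average of stored fault-free "memory" vectors, and the residual is the difference between observation and reconstruction. The Gaussian kernel and bandwidth below are illustrative choices, not the settings used in the project.

      # Hedged sketch of auto-associative kernel regression (AAKR) for OLM residuals.
      # The Gaussian kernel and its bandwidth are illustrative choices.
      import numpy as np

      def aakr_reconstruct(memory, query, bandwidth=1.0):
          """memory: (N, n_sensors) fault-free history; query: (n_sensors,) new sample."""
          d2 = ((memory - query) ** 2).sum(axis=1)
          w = np.exp(-d2 / (2.0 * bandwidth ** 2))
          w /= w.sum()
          return w @ memory              # expected fault-free readings for all sensors

      def olm_residual(memory, query, bandwidth=1.0):
          """Residual used for drift/fault detection: observation minus reconstruction."""
          return query - aakr_reconstruct(memory, query, bandwidth)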

  11. Development of a Locomotion Interface for Portable Virtual Environment Systems Using an Inertial/Magnetic Sensor-Based System and a Ranging Measurement System

    DTIC Science & Technology

    2014-03-01

    Table-of-contents fragments only: Motivation; Environment Requirements; … Environment Systems; Background and Motivation.

  12. Open multi-agent control architecture to support virtual-reality-based man-machine interfaces

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel

    2001-10-01

    Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for commanding and supervising complex automation systems. The user-interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems that are controlled by Virtual Reality based man-machine interfaces. The architecture does not just provide a well-suited framework for the real-time control of a multi-robot system, but also supports Virtual Reality metaphors and augmentations which facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture which make it generally applicable to the distributed control of automation agents in real-world applications are explained. Furthermore, its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.

  13. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: each partner's individual task, and communication with each other. Both are very important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies in both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is the viewpoint from behind the user's own avatar, for smooth communication. The other is that of the avatar's eyes, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users restrict nonverbal communication. We have therefore tried to compensate for the loss of the partner's avatar nodding by introducing the speech-driven embodied interactive actor InterActor. Sensory evaluation by paired comparison of 3D shapes in the collaborative situation, in virtual space and in real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  14. Inertial Motion-Tracking Technology for Virtual 3-D

    NASA Technical Reports Server (NTRS)

    2005-01-01

    In the 1990s, NASA pioneered virtual reality research. The concept was present long before, but prior to this the technology did not exist to make a viable virtual reality system. Scientists had theories and ideas, and they knew that the concept had potential, but the computers of the 1970s and 1980s were not fast enough, sensors were heavy and cumbersome, and people had difficulty blending fluidly with the machines. Scientists at Ames Research Center built upon the research of previous decades and put the necessary technology behind it, making the theories of virtual reality a reality. Virtual reality systems depend on complex motion-tracking sensors to convey information between the user and the computer and give the user the feeling of operating in the real world. These motion-tracking sensors measure and report an object's position and orientation as it changes. A simple example of motion tracking is the cursor on a computer screen moving in correspondence with the shifting of the mouse. Tracking in 3-D, necessary to create virtual reality, is much more complex. To be successful, the perspective of the virtual image seen on the computer must be an accurate representation of what is seen in the real world. As the user's head or camera moves, turns, or tilts, the computer-generated environment must change accordingly with no noticeable lag, jitter, or distortion. Historically, the lack of smooth and rapid tracking of the user's motion has thwarted the widespread use of immersive 3-D computer graphics. NASA uses virtual reality technology for a variety of purposes, mostly the training of astronauts. The actual missions are costly and dangerous, so any opportunity the crews have to practice their maneuvers in accurate simulations before the mission is valuable and instructive. For that purpose, NASA has funded a great deal of virtual reality research and benefited from the results.
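
    One common, lightweight way to obtain the low-latency, low-drift orientation tracking discussed above is a complementary filter that blends fast gyroscope integration with slow accelerometer tilt correction. The sketch below is a generic illustration of that idea, not the specific sensor fusion used in any NASA system; the blend factor is an assumption.

      # Hedged sketch of a complementary filter for one orientation angle: blend the
      # integrated gyro rate (fast but drifting) with the accelerometer-derived tilt
      # (slow but drift-free). Generic illustration only; the blend factor is assumed.
      def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
          """angle/accel_angle in degrees, gyro_rate in deg/s, dt in seconds."""
          return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

      # usage, once per sensor sample:
      # pitch = complementary_filter(pitch, gyro_y_dps, accel_pitch_deg, dt=0.01)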

  15. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality

    PubMed Central

    Kim, Mingyu; Jeon, Changyu; Kim, Jinmo

    2017-01-01

    This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile senses at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then sends tactile senses (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed with the purpose of testing the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison to the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in the virtual reality. PMID:28513545

  16. A Study on Immersion and Presence of a Portable Hand Haptic System for Immersive Virtual Reality.

    PubMed

    Kim, Mingyu; Jeon, Changyu; Kim, Jinmo

    2017-05-17

    This paper proposes a portable hand haptic system using Leap Motion as a haptic interface that can be used in various virtual reality (VR) applications. The proposed hand haptic system was designed as an Arduino-based sensor architecture to enable a variety of tactile senses at low cost, and is also equipped with a portable wristband. As a haptic system designed for tactile feedback, the proposed system first identifies the left and right hands and then sends tactile senses (vibration and heat) to each fingertip (thumb and index finger). It is incorporated into a wearable band-type system, making its use easy and convenient. Next, hand motion is accurately captured using the sensor of the hand tracking system and is used for virtual object control, thus achieving interaction that enhances immersion. A VR application was designed with the purpose of testing the immersion and presence aspects of the proposed system. Lastly, technical and statistical tests were carried out to assess whether the proposed haptic system can provide a new immersive presence to users. According to the results of the presence questionnaire and the simulator sickness questionnaire, we confirmed that the proposed hand haptic system, in comparison to the existing interaction that uses only the hand tracking system, provided greater presence and a more immersive environment in the virtual reality.

  17. An Architecture for Real-Time Interpretation and Visualization of Structural Sensor Data in a Laboratory Environment

    NASA Technical Reports Server (NTRS)

    Doggett, William; Vazquez, Sixto

    2000-01-01

    A visualization system is being developed out of the need to monitor, interpret, and make decisions based on the information from several thousand sensors during experimental testing, to facilitate the development and validation of structural health monitoring algorithms. As an added benefit, the system will enable complete real-time sensor assessment of complex test specimens. Complex structural specimens with hundreds or thousands of sensors are routinely tested. During a test, it is impossible for a single researcher to effectively monitor all the sensors, and consequently interesting phenomena occur that are not recognized until post-test analysis. The ability to detect these unexpected phenomena and alert the researcher as the test progresses will significantly enhance the understanding and utilization of complex test articles. Utilization is increased by the ability to halt a test when the health monitoring algorithm response is not satisfactory or when an unexpected phenomenon occurs, enabling focused investigation, potentially through the installation of additional sensors. Often, if the test continues, structural changes make it impossible to reproduce the conditions that exhibited the phenomena. The prohibitive time and costs associated with fabrication, sensoring, and subsequent testing of additional test articles generally make it impossible to investigate the phenomena further. A scalable architecture is described to address the complex computational demands of structural health monitoring algorithm development and laboratory experimental test monitoring. The researcher monitors the test using a photographic-quality 3D graphical model with actual sensor locations identified. In addition, researchers can quickly activate plots displaying time or load versus selected sensor response, along with the expected values and predefined limits. The architecture has several key features. First, distributed dissimilar computers may be seamlessly integrated into the information flow. Second, virtual sensors may be defined that are complex functions of existing sensors or other virtual sensors. Virtual sensors represent a calculated value not directly measured by a particular physical instrument; they can be used, for example, to represent the maximum difference across a range of sensors or the calculated buckling load based on the current strains. Third, the architecture enables autonomous response to preconceived events, whereby the system can be configured to suspend or abort a test if a failure is detected in the load introduction system. Fourth, the architecture is designed to allow cooperative monitoring and control of the test progression from multiple stations, both remote and local to the test system. To illustrate the architecture, a preliminary implementation is described monitoring the Stitched Composite Wing recently tested at LaRC.
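
    The virtual-sensor feature described above, a derived quantity computed from physical or other virtual sensors with predefined limits that can trigger an autonomous response, might be expressed roughly as in the sketch below; the class, sensor names and limit value are illustrative assumptions, not part of the architecture's actual interface.

      # Hedged sketch of a "virtual sensor" as a named function of other sensors with
      # a limit check that can trigger an autonomous response. Names, the example
      # function and the limit value are illustrative assumptions.
      class VirtualSensor:
          def __init__(self, name, inputs, func, limit=None):
              self.name, self.inputs, self.func, self.limit = name, inputs, func, limit

          def value(self, readings):
              """readings: dict mapping sensor name -> latest value."""
              return self.func([readings[s] for s in self.inputs])

          def out_of_limit(self, readings):
              return self.limit is not None and abs(self.value(readings)) > self.limit

      # e.g. maximum spread across a group of strain gauges, flagged above 500 microstrain
      gauge_spread = VirtualSensor("gauge_spread", ["sg1", "sg2", "sg3"],
                                   lambda v: max(v) - min(v), limit=500.0)
      if gauge_spread.out_of_limit({"sg1": 120.0, "sg2": 180.0, "sg3": 700.0}):
          print("suspend test: gauge spread limit exceeded")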

  18. Insects modify their behaviour depending on the feedback sensor used when walking on a trackball in virtual reality.

    PubMed

    Taylor, Gavin J; Paulk, Angelique C; Pearson, Thomas W J; Moore, Richard J D; Stacey, Jacqui A; Ball, David; van Swinderen, Bruno; Srinivasan, Mandyam V

    2015-10-01

    When using virtual-reality paradigms to study animal behaviour, careful attention must be paid to how the animal's actions are detected. This is particularly relevant in closed-loop experiments where the animal interacts with a stimulus. Many different sensor types have been used to measure aspects of behaviour, and although some sensors may be more accurate than others, few studies have examined whether, and how, such differences affect an animal's behaviour in a closed-loop experiment. To investigate this issue, we conducted experiments with tethered honeybees walking on an air-supported trackball and fixating a visual object in closed-loop. Bees walked faster and along straighter paths when the motion of the trackball was measured in the classical fashion - using optical motion sensors repurposed from computer mice - than when measured more accurately using a computer vision algorithm called 'FicTrac'. When computer mouse sensors were used to measure bees' behaviour, the bees modified their behaviour and achieved improved control of the stimulus. This behavioural change appears to be a response to a systematic error in the computer mouse sensor that reduces the sensitivity of this sensor system under certain conditions. Although the large perceived inertia and mass of the trackball relative to the honeybee is a limitation of tethered walking paradigms, observing differences depending on the sensor system used to measure bee behaviour was not expected. This study suggests that bees are capable of fine-tuning their motor control to improve the outcome of the task they are performing. Further, our findings show that caution is required when designing virtual-reality experiments, as animals can potentially respond to the artificial scenario in unexpected and unintended ways. © 2015. Published by The Company of Biologists Ltd.

  19. The Virtual Tablet: Virtual Reality as a Control System

    NASA Technical Reports Server (NTRS)

    Chronister, Andrew

    2016-01-01

    In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.

  20. Intelligent Sensors: Strategies for an Integrated Systems Approach

    NASA Technical Reports Server (NTRS)

    Chitikeshi, Sanjeevi; Mahajan, Ajay; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando

    2005-01-01

    This paper proposes the development of intelligent sensors as an integrated systems approach, i.e., one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).

  1. Learning a detection map for a network of unattended ground sensors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Hung D.; Koch, Mark William

    2010-03-01

    We have developed algorithms to automatically learn a detection map of a deployed sensor field for a virtual presence and extended defense (VPED) system without a priori knowledge of the local terrain. The VPED system is an unattended network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has the ability to detect and classify moving targets at a limited range. By using a network of pods we can form a virtual perimeter with each pod responsible for a certain section of the perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus, a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being installed at a site by a mobile deployment unit (MDU). The MDU wears a GPS unit, so the system not only knows when it can detect the MDU, but also the MDU's location. In this paper, we demonstrate how to handle anisotropic sensor configurations, geography, and soil conditions.
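
    A minimal Python sketch of the general idea, assuming a simple grid-based estimate: detections and misses logged at the GPS-reported MDU positions are binned into cells to form an empirical detection map. This is an illustrative stand-in, not the algorithm in the report.

        # Illustrative sketch (an assumption-laden stand-in, not the VPED algorithm):
        # estimate a pod's detection map on a coarse grid from an MDU walk.
        # Each sample is (x, y, detected); cells accumulate detection frequencies.
        import numpy as np

        def learn_detection_map(samples, x_edges, y_edges):
            """Return per-cell detection probability (NaN where the MDU never walked)."""
            hits = np.zeros((len(x_edges) - 1, len(y_edges) - 1))
            visits = np.zeros_like(hits)
            for x, y, detected in samples:
                i = np.searchsorted(x_edges, x, side="right") - 1
                j = np.searchsorted(y_edges, y, side="right") - 1
                if 0 <= i < hits.shape[0] and 0 <= j < hits.shape[1]:
                    visits[i, j] += 1
                    hits[i, j] += bool(detected)
            with np.errstate(invalid="ignore"):
                return np.where(visits > 0, hits / visits, np.nan)

        if __name__ == "__main__":
            track = [(5.0, 12.0, True), (5.5, 12.4, True), (42.0, 80.0, False)]
            pmap = learn_detection_map(track, np.arange(0, 101, 10), np.arange(0, 101, 10))
            print(np.nanmax(pmap))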

  2. Hybrid architecture for building secure sensor networks

    NASA Astrophysics Data System (ADS)

    Owens, Ken R., Jr.; Watkins, Steve E.

    2012-04-01

    Sensor networks have various communication and security architectural concerns. Three approaches are defined to address these concerns for sensor networks. The first area is the utilization of new computing architectures that leverage embedded virtualization software on the sensor. Deploying a small, embedded virtualization operating system on the sensor nodes that is designed to communicate to low-cost cloud computing infrastructure in the network is the foundation to delivering low-cost, secure sensor networks. The second area focuses on securing the sensor. Sensor security components include developing an identification scheme, and leveraging authentication algorithms and protocols that address security assurance within the physical, communication network, and application layers. This function will primarily be accomplished through encrypting the communication channel and integrating sensor network firewall and intrusion detection/prevention components to the sensor network architecture. Hence, sensor networks will be able to maintain high levels of security. The third area addresses the real-time and high priority nature of the data that sensor networks collect. This function requires that a quality-of-service (QoS) definition and algorithm be developed for delivering the right data at the right time. A hybrid architecture is proposed that combines software and hardware features to handle network traffic with diverse QoS requirements.

  3. Virtual sensor models for real-time applications

    NASA Astrophysics Data System (ADS)

    Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin

    2016-09-01

    Increased complexity and severity of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions, the perception of the sensors is crucial; therefore, the sensors also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space based method is capable of modeling various types of behavior. Specifically, the modeling of the position estimation of an automotive radar system, including its autocorrelations, is presented. To achieve real-time capability, an efficient implementation is also presented.
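
    As a hedged illustration of such a data-driven error model, the sketch below adds temporally autocorrelated noise to ground-truth positions using a simple AR(1) process; it is a generic stand-in for the paper's state-space formulation, not a reimplementation of it.

        # Sketch of the general idea (a generic stand-in, not the authors' model):
        # add autocorrelated measurement error to ground-truth object positions,
        # here with a simple AR(1) process in place of the state-space formulation.
        import numpy as np

        def simulate_radar_positions(true_xy, rho=0.9, sigma=0.3, rng=None):
            """true_xy: (N, 2) ground-truth positions; returns (N, 2) 'sensed' positions."""
            rng = np.random.default_rng() if rng is None else rng
            true_xy = np.asarray(true_xy, dtype=float)
            error = np.zeros_like(true_xy)
            # AR(1): e[k] = rho * e[k-1] + w[k], giving temporally correlated errors.
            for k in range(1, len(true_xy)):
                error[k] = rho * error[k - 1] + rng.normal(0.0, sigma, size=2)
            return true_xy + error

        if __name__ == "__main__":
            truth = np.column_stack([np.linspace(0, 50, 200), np.full(200, 3.5)])
            sensed = simulate_radar_positions(truth, rho=0.95, sigma=0.2)
            print(sensed[:3])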

  4. Virtual pyramid wavefront sensor for phase unwrapping.

    PubMed

    Akondi, Vyas; Vohnsen, Brian; Marcos, Susana

    2016-10-10

    Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.

  5. The Evolution of Sonic Ecosystems

    NASA Astrophysics Data System (ADS)

    McCormack, Jon

    This chapter describes a novel type of artistic artificial life software environment. Agents that have the ability to make and listen to sound populate a synthetic world. An evolvable, rule-based classifier system drives agent behavior. Agents compete for limited resources in a virtual environment that is influenced by the presence and movement of people observing the system. Electronic sensors create a link between the real and virtual spaces, virtual agents evolve implicitly to try to maintain the interest of the human audience, whose presence provides them with life-sustaining food.

  6. Core body temperature control by total liquid ventilation using a virtual lung temperature sensor.

    PubMed

    Nadeau, Mathieu; Micheau, Philippe; Robert, Raymond; Avoine, Olivier; Tissier, Renaud; Germim, Pamela Samanta; Vandamme, Jonathan; Praud, Jean-Paul; Walti, Herve

    2014-12-01

    In total liquid ventilation (TLV), the lungs are filled with a breathable liquid perfluorocarbon (PFC) while a liquid ventilator ensures proper gas exchange by renewal of a tidal volume of oxygenated and temperature-controlled PFC. Given the rapid changes in core body temperature generated by TLV using the lung as a heat exchanger, it is crucial to have accurate and reliable core body temperature monitoring and control. This study presents the design of a virtual lung temperature sensor to control core temperature. In the first step, the virtual sensor, which uses expired PFC to estimate lung temperature noninvasively, was validated both in vitro and in vivo. The virtual lung temperature was then used to rapidly and automatically control core temperature. Experiments were performed using the Inolivent-5.0 liquid ventilator with a feedback controller that modulates inspired PFC temperature, thereby controlling lung temperature. The in vivo experimental protocol was conducted on seven newborn lambs instrumented with temperature sensors at the femoral artery, pulmonary artery, oesophagus, right eardrum, and rectum. After stabilization in conventional mechanical ventilation, TLV was initiated with fast hypothermia induction, followed by slow posthypothermic rewarming for 1 h, then by fast rewarming to normothermia and finally a second fast hypothermia induction phase. Results showed that the virtual lung temperature was able to provide an accurate estimation of systemic arterial temperature. Results also demonstrate that TLV can precisely control core body temperature and compares favorably to extracorporeal circulation in terms of speed.
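
    The sketch below illustrates the control idea in its simplest form: a proportional-integral loop that sets the inspired PFC temperature from the error between the target core temperature and the virtual lung temperature estimate. The gains and limits are placeholders, not the Inolivent-5.0 controller parameters.

        # Generic proportional-integral sketch of the control idea: drive the
        # inspired PFC temperature from the error between the target core
        # temperature and the virtual lung temperature estimate.
        # Gains and limits are placeholders, not the Inolivent-5.0 parameters.
        def make_pfc_temperature_controller(kp=2.0, ki=0.05, t_min=15.0, t_max=39.0):
            state = {"integral": 0.0}

            def update(target_core_c, virtual_lung_c, dt_s):
                error = target_core_c - virtual_lung_c
                state["integral"] += error * dt_s
                command = target_core_c + kp * error + ki * state["integral"]
                return min(max(command, t_min), t_max)  # clamp inspired PFC temperature

            return update

        if __name__ == "__main__":
            controller = make_pfc_temperature_controller()
            # Fast hypothermia induction: target 33.5 degC while the lung estimate reads 38.2 degC.
            print(controller(33.5, 38.2, dt_s=1.0))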

  7. Virtual reality: a reality for future military pilotage?

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.

    2009-05-01

    Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or be an indirect view of the real world using sensors and displays. With the potential opportunities of a VR system, the question arises about what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? To deliver this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays to this resolution (and formidable network architectures required to relay this information), or massive computer clusters are necessary to create an entirely computer-generated virtual reality with this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
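
    A quick back-of-the-envelope check of the pixel-count claim, assuming 20/20 acuity corresponds to roughly one pixel per arcminute; the fields of view below are assumptions chosen only to show how figures on the order of 43 MP and 150 MP arise.

        # Back-of-the-envelope check of the pixel-count argument. 20/20 acuity
        # resolves about 1 arcminute, so matching it needs roughly one pixel per
        # arcminute of field of view. The fields of view are assumptions chosen
        # only to illustrate the scale of the numbers quoted in the abstract.
        def megapixels(fov_h_deg, fov_v_deg, arcmin_per_pixel=1.0):
            px_h = fov_h_deg * 60.0 / arcmin_per_pixel
            px_v = fov_v_deg * 60.0 / arcmin_per_pixel
            return px_h * px_v / 1e6

        if __name__ == "__main__":
            print(round(megapixels(120, 100), 1))   # assumed wide-field HMD: ~43 MP
            print(round(megapixels(360, 115), 1))   # assumed surrounding CAVE: ~150 MP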

  8. Virtual Vision

    NASA Astrophysics Data System (ADS)

    Terzopoulos, Demetri; Qureshi, Faisal Z.

    Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.

  9. A Virtual Sensor for Online Fault Detection of Multitooth-Tools

    PubMed Central

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross validation, averaged a true positive rate of 0.957 and a true negative rate of 0.986. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases. PMID:22163766
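
    For readers unfamiliar with the approach, the sketch below shows a generic Gaussian naive Bayes classifier over the two variables named in the abstract (feed-drive power consumption and machining time); the data are synthetic and the code is an illustrative stand-in, not the authors' trained model.

        # Illustrative Gaussian naive Bayes over the two variables named above
        # (feed-drive power and machining time). Numbers are synthetic; this is a
        # stand-in shown only to make the virtual-sensor idea concrete.
        import numpy as np

        class GaussianNB2:
            def fit(self, X, y):
                self.classes_ = np.unique(y)
                self.mean_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
                self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
                self.prior_ = np.array([np.mean(y == c) for c in self.classes_])
                return self

            def predict(self, X):
                X = np.atleast_2d(X)
                log_lik = -0.5 * (np.log(2 * np.pi * self.var_)[None, :, :]
                                  + (X[:, None, :] - self.mean_) ** 2 / self.var_).sum(axis=2)
                return self.classes_[np.argmax(log_lik + np.log(self.prior_), axis=1)]

        if __name__ == "__main__":
            # Features: [feed-drive power (kW), machining time (s)]; label 1 = worn insert.
            X = np.array([[5.1, 61], [5.0, 60], [5.2, 62], [6.4, 70], [6.6, 72], [6.5, 69]], float)
            y = np.array([0, 0, 0, 1, 1, 1])
            print(GaussianNB2().fit(X, y).predict([[6.3, 68], [5.05, 61]]))  # -> [1 0]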

  10. A virtual sensor for online fault detection of multitooth-tools.

    PubMed

    Bustillo, Andres; Correa, Maritza; Reñones, Anibal

    2011-01-01

    The installation of suitable sensors close to the tool tip on milling centres is not possible in industrial environments. It is therefore necessary to design virtual sensors for these machines to perform online fault detection in many industrial tasks. This paper presents a virtual sensor for online fault detection of multitooth tools based on a Bayesian classifier. The device that performs this task applies mathematical models that function in conjunction with physical sensors. Only two experimental variables are collected from the milling centre that performs the machining operations: the electrical power consumption of the feed drive and the time required for machining each workpiece. The task of achieving reliable signals from a milling process is especially complex when multitooth tools are used, because each kind of cutting insert in the milling centre only works on each workpiece during a certain time window. Great effort has gone into designing a robust virtual sensor that can avoid re-calibration due to, e.g., maintenance operations. The virtual sensor developed as a result of this research is successfully validated under real conditions on a milling centre used for the mass production of automobile engine crankshafts. Recognition accuracy, calculated with k-fold cross validation, averaged a true positive rate of 0.957 and a true negative rate of 0.986. Moreover, measured accuracy was 98%, which suggests that the virtual sensor correctly identifies new cases.

  11. TinyONet: A Cache-Based Sensor Network Bridge Enabling Sensing Data Reusability and Customized Wireless Sensor Network Services

    PubMed Central

    Jung, Eui-Hyun; Park, Yong-Jin

    2008-01-01

    In recent years, a few protocol bridge research projects have been announced to enable a seamless integration of Wireless Sensor Networks (WSNs) with the TCP/IP network. These studies have ensured transparent end-to-end communication between the two network sides in a node-centric manner. Researchers expect this integration will trigger the development of various application domains. However, prior research projects have not fully explored some essential features for WSNs, especially the reusability of sensing data and data-centric communication. To resolve these issues, we suggest a new protocol bridge system named TinyONet. In TinyONet, virtual sensors serve as counterparts of physical sensors and are dynamically grouped into a functional entity called a Slice. Instead of interacting directly with individual physical sensors, each sensor application uses its own WSN service provided by Slices. If a new kind of service is required in TinyONet, the corresponding function can be dynamically added at runtime. Besides data-centric communication, it also supports node-centric communication and synchronous access. In order to show the effectiveness of the system, we implemented TinyONet on an embedded Linux machine and evaluated it with several experimental scenarios. PMID:27873968
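
    A small illustrative sketch of the Slice concept as described above: virtual sensors acting as counterparts of physical nodes, grouped dynamically and queried in a data-centric way. Class and method names are hypothetical, not TinyONet's API.

        # Sketch of the Slice concept: a dynamic group of virtual sensors
        # (counterparts of physical nodes) exposed to applications as one
        # data-centric service. Names are illustrative, not TinyONet's API.
        class VirtualSensor:
            def __init__(self, node_id, kind):
                self.node_id, self.kind = node_id, kind
                self.last_reading = None  # refreshed from the physical node's reports

        class Slice:
            """A functional grouping of virtual sensors serving one application query."""
            def __init__(self, name, members):
                self.name, self.members = name, list(members)

            def query(self, kind):
                # Data-centric access: the application asks for 'temperature', not node 12.
                return [vs.last_reading for vs in self.members
                        if vs.kind == kind and vs.last_reading is not None]

        if __name__ == "__main__":
            nodes = [VirtualSensor(i, "temperature") for i in range(3)]
            nodes[0].last_reading, nodes[2].last_reading = 21.5, 22.1
            print(Slice("greenhouse-temps", nodes).query("temperature"))  # -> [21.5, 22.1]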

  12. Novel graphical environment for virtual and real-world operations of tracked mobile manipulators

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.; Azam, Mir; Lassiter, Nils T.

    1993-08-01

    A simulation, animation, visualization and interactive control (SAVIC) environment has been developed for the design and operation of an integrated mobile manipulator system. This unique system possesses the abilities for (1) multi-sensor simulation, (2) kinematics and locomotion animation, (3) dynamic motion and manipulation animation, (4) transformation between real and virtual modes within the same graphics system, (5) ease in exchanging software modules and hardware devices between real and virtual world operations, and (6) interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  13. Structural health management of aerospace hotspots under fatigue loading

    NASA Astrophysics Data System (ADS)

    Soni, Sunilkumar

    Sustainability and life-cycle assessments of aerospace systems, such as aircraft structures and propulsion systems, represent growing challenges in engineering. Hence, there has been an increasing demand in using structural health monitoring (SHM) techniques for continuous monitoring of these systems in an effort to improve safety and reduce maintenance costs. The current research is part of an ongoing multidisciplinary effort to develop a robust SHM framework resulting in improved models for damage-state awareness and life prediction, and enhancing capability of future aircraft systems. Lug joints, a typical structural hotspot, were chosen as the test article for the current study. The thesis focuses on integrated SHM techniques for damage detection and characterization in lug joints. Piezoelectric wafer sensors (PZTs) are used to generate guided Lamb waves as they can be easily used for onboard applications. Sensor placement in certain regions of a structural component is not feasible due to the inaccessibility of the area to be monitored. Therefore, a virtual sensing concept is introduced to acquire sensor data from finite element (FE) models. A full three dimensional FE analysis of lug joints with piezoelectric transducers, accounting for piezoelectrical-mechanical coupling, was performed in Abaqus and the sensor signals were simulated. These modeled sensors are called virtual sensors. A combination of real data from PZTs and virtual sensing data from FE analysis is used to monitor and detect fatigue damage in aluminum lug joints. Experiments were conducted on lug joints under fatigue loads and sensor signals collected were used to validate the simulated sensor response. An optimal sensor placement methodology for lug joints is developed based on a detection theory framework to maximize the detection rate and minimize the false alarm rate. The placement technique is such that the sensor features can be directly correlated to damage. The technique accounts for a number of factors, such as actuation frequency and strength, minimum damage size, damage detection scheme, material damping, signal to noise ratio and sensing radius. Advanced information processing methodologies are discussed for damage diagnosis. A new, instantaneous approach for damage detection, localization and quantification is proposed for applications to practical problems associated with changes in reference states under different environmental and operational conditions. Such an approach improves feature extraction for state awareness, resulting in robust life prediction capabilities.

  14. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    PubMed Central

    Pouke, Matti; Häkkilä, Jonna

    2013-01-01

    Homecare systems for elderly people are becoming increasingly important due to both economic reasons as well as patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential especially due to the privacy preserving and simplified information presentation style, and secondly that simple representations and glancability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand. PMID:24351747

  15. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments

    PubMed Central

    Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L.; de Carvalho, Carlos Giovanni N.; Mendes, Douglas Lopes de S.; Costa, Valney da Gama

    2018-01-01

    Virtual sensors provisioning is a central issue for sensor cloud middleware since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle user’s queries or applications. Recent works perform provisioning by clustering sensor nodes based on measurement correlations and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors). Therefore, those works are not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the measurement correlations rather than the physical distance between nodes, as in most works in the literature. Then, the second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to user’s queries while satisfying all query parameters and conserving overall energy. Results from initial experiments show that the approach significantly reduces the sensor cloud energy consumption compared to traditional approaches, providing a solution to be considered in sensor cloud scenarios. PMID:29495406

  16. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments.

    PubMed

    Lemos, Marcus Vinícius de S; Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L; de Carvalho, Carlos Giovanni N; Mendes, Douglas Lopes de S; Costa, Valney da Gama

    2018-02-26

    Virtual sensors provisioning is a central issue for sensor cloud middleware since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSN) of different owners, to handle user's queries or applications. Recent works perform provisioning by clustering sensor nodes based on measurement correlations and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors). Therefore, those works are not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the measurement correlations rather than the physical distance between nodes, as in most works in the literature. Then, the second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to user's queries while satisfying all query parameters and conserving overall energy. Results from initial experiments show that the approach significantly reduces the sensor cloud energy consumption compared to traditional approaches, providing a solution to be considered in sensor cloud scenarios.
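
    The following sketch illustrates the provisioning idea shared by both versions of this paper in simplified form: cluster nodes whose measurement histories are strongly correlated and keep one representative per cluster so redundant nodes can sleep. It is a greedy stand-in, not ACASIMv1 or ACOSIMv2.

        # Simplified stand-in for the idea above (not ACASIMv1/ACOSIMv2 themselves):
        # group nodes whose measurement histories are strongly correlated, then keep
        # one representative per group so redundant nodes can sleep and save energy.
        import numpy as np

        def correlation_clusters(readings, threshold=0.95):
            """readings: (n_nodes, n_samples) array; returns a list of node-index clusters."""
            corr = np.corrcoef(readings)
            unassigned = list(range(readings.shape[0]))
            clusters = []
            while unassigned:
                seed = unassigned.pop(0)
                group = [seed] + [j for j in unassigned if corr[seed, j] >= threshold]
                unassigned = [j for j in unassigned if j not in group]
                clusters.append(group)
            return clusters

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            base = np.cumsum(rng.normal(size=100))
            data = np.vstack([base + rng.normal(0, 0.1, 100),   # node 0
                              base + rng.normal(0, 0.1, 100),   # node 1 (redundant with 0)
                              rng.normal(size=100)])            # node 2 (independent)
            groups = correlation_clusters(data, threshold=0.9)
            representatives = [g[0] for g in groups]  # one active node per cluster
            print(groups, representatives)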

  17. Integrated Sensor Architecture (ISA) for Live Virtual Constructive (LVC) Environments

    DTIC Science & Technology

    2014-03-01

    connect, publish their needs and capabilities, and interact with other systems even on disadvantaged networks. Within the ISA project, three levels of...constructive, disadvantaged network, sensor 1. INTRODUCTION In 2003 the Networked Sensors for the Future Force (NSFF) Advanced Technology Demonstration...While this combination is less optimal over disadvantaged networks, and we do not recommend it there, TCP and TLS perform adequately over networks with

  18. Inertial Sensor-Based Touch and Shake Metaphor for Expressive Control of 3D Virtual Avatars

    PubMed Central

    Patil, Shashidhar; Chintalapalli, Harinadha Reddy; Kim, Dubeom; Chai, Youngho

    2015-01-01

    In this paper, we present an inertial sensor-based touch and shake metaphor for expressive control of a 3D virtual avatar in a virtual environment. An intuitive six degrees-of-freedom wireless inertial motion sensor is used as a gesture and motion control input device with a sensor fusion algorithm. The algorithm enables user hand motions to be tracked in 3D space via magnetic, angular rate, and gravity sensors. A quaternion-based complementary filter is implemented to reduce noise and drift. An algorithm based on dynamic time-warping is developed for efficient recognition of dynamic hand gestures with real-time automatic hand gesture segmentation. Our approach enables the recognition of gestures and estimates gesture variations for continuous interaction. We demonstrate the gesture expressivity using an interactive flexible gesture mapping interface for authoring and controlling a 3D virtual avatar and its motion by tracking user dynamic hand gestures. This synthesizes stylistic variations in a 3D virtual avatar, producing motions that are not present in the motion database using hand gesture sequences from a single inertial motion sensor. PMID:26094629
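
    Since the recogniser above is built on dynamic time warping, a textbook DTW implementation is sketched below, with toy one-dimensional "gestures" standing in for real inertial-sensor sequences; it is generic code, not the authors'.

        # Textbook dynamic time warping (DTW), shown because the gesture recogniser
        # described above is built on it. Generic implementation with toy 1-D
        # "gestures" standing in for real inertial-sensor sequences.
        import numpy as np

        def dtw_distance(a, b):
            a = np.asarray(a, dtype=float)
            b = np.asarray(b, dtype=float)
            if a.ndim == 1:
                a = a[:, None]
            if b.ndim == 1:
                b = b[:, None]
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return cost[n, m]

        def classify(sequence, templates):
            """templates: dict gesture_name -> reference sequence; returns best match."""
            return min(templates, key=lambda name: dtw_distance(sequence, templates[name]))

        if __name__ == "__main__":
            templates = {"shake": [0, 1, -1, 1, -1, 0], "touch": [0, 0.5, 1, 1, 0.5, 0]}
            print(classify([0, 0.9, -0.8, 1.1, -0.9, 0.1], templates))  # -> 'shake'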

  19. A Survey of Middleware for Sensor and Network Virtualization

    PubMed Central

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd.

    2014-01-01

    Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources, serving diverse applications. WSN virtualization by middleware is an emerging concept that enables the aggregation of multiple independent, heterogeneous devices, networks, radios and software platforms, and enhances application development. Middleware for WSN virtualization can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as the efficient decoupling of networks, devices and software. This paper provides an overview of previous and current middleware designs for WSN virtualization, covering their design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper also presents the proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization. PMID:25615737

  20. A survey of middleware for sensor and network virtualization.

    PubMed

    Khalid, Zubair; Fisal, Norsheila; Rozaini, Mohd

    2014-12-12

    Wireless Sensor Networks (WSNs) are leading to a new paradigm, the Internet of Everything (IoE). WSNs have a wide range of applications but are usually deployed for a particular application. However, the future of WSNs lies in the aggregation and allocation of resources, serving diverse applications. WSN virtualization by middleware is an emerging concept that enables the aggregation of multiple independent, heterogeneous devices, networks, radios and software platforms, and enhances application development. Middleware for WSN virtualization can further be categorized into sensor virtualization and network virtualization. Middleware for WSN virtualization poses several challenges, such as the efficient decoupling of networks, devices and software. This paper provides an overview of previous and current middleware designs for WSN virtualization, covering their design goals, software architectures, abstracted services, testbeds and programming techniques. Furthermore, the paper also presents the proposed model, challenges and future opportunities for further research in middleware designs for WSN virtualization.

  1. Migrating EO/IR sensors to cloud-based infrastructure as service architectures

    NASA Astrophysics Data System (ADS)

    Berglie, Stephen T.; Webster, Steven; May, Christopher M.

    2014-06-01

    The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, Cloud-Based IAS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications for higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.

  2. Practical design and evaluation methods of omnidirectional vision sensors

    NASA Astrophysics Data System (ADS)

    Ohte, Akira; Tsuzuki, Osamu

    2012-01-01

    A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.

  3. Development of real-time motion capture system for 3D on-line games linked with virtual character

    NASA Astrophysics Data System (ADS)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    With the development of 3-D virtual reality, motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications. Virtual human characters in digital animation and game applications have been controlled by interface devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors and link its data to a 3-D game character in real time. The prototype setup is successfully applied to a boxing game, which requires very fast movement of the human character.

  4. Robot Position Sensor Fault Tolerance

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. A new method is proposed that utilizes analytical redundancy to allow for continued operation during joint position sensor failure. Joint torque sensors are used with a virtual passive torque controller to make the robot joint stable without position feedback and improve position tracking performance in the presence of unknown link dynamics and end-effector loading. Two Cartesian accelerometer based methods are proposed to determine the position of the joint. The joint specific position determination method utilizes two triaxial accelerometers attached to the link driven by the joint with the failed position sensor. The joint specific method is not computationally complex and the position error is bounded. The system wide position determination method utilizes accelerometers distributed on different robot links and the end-effector to determine the position of sets of multiple joints. The system wide method requires fewer accelerometers than the joint specific method to make all joint position sensors fault tolerant but is more computationally complex and has lower convergence properties. Experiments were conducted on a laboratory manipulator. Both position determination methods were shown to track the actual position satisfactorily. A controller using the position determination methods and the virtual passive torque controller was able to servo the joints to a desired position during position sensor failure.

  5. A New Continent of Ideas

    NASA Technical Reports Server (NTRS)

    1990-01-01

    While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand will move exactly as the operator is moving his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped, full-body garment that greatly increases the sphere of performance for virtual reality simulations.

  6. Virtual IED sensor at an rf-biased electrode in low-pressure plasma

    NASA Astrophysics Data System (ADS)

    Bogdanova, Maria; Lopaev, Dmitry; Zyryanov, Sergey; Rakhimov, Alexander

    2016-09-01

    The majority of present-day technologies resort to ion-assisted processes in rf low-pressure plasma. In order to control the process precisely, the energy distribution of ions (IED) bombarding the sample placed on the rf-biased electrode should be tracked. In this work the "Virtual IED sensor" concept is considered. The idea is to obtain the IED "virtually" from the plasma sheath model including a set of externally measurable discharge parameters. The applicability of the "Virtual IED sensor" concept was studied for dual-frequency asymmetric ICP and CCP discharges. The IED measurements were carried out in Ar and H2 plasmas in a wide range of conditions. The calculated IEDs were compared to those measured by the Retarded Field Energy Analyzer. To calibrate the "Virtual IED sensor", the ion flux was measured by the pulsed self-bias method and then compared to plasma density measurements by Langmuir and hairpin probes. It is shown that if there is a reliable calibration procedure, the "Virtual IED sensor" can be successfully realized on the basis of analytical and semianalytical plasma sheath models including measurable discharge parameters. This research is supported by Russian Science Foundation (RSF) Grant 14-12-01012.

  7. Application of intelligent sensors in the integrated systems health monitoring of a rocket test stand

    NASA Astrophysics Data System (ADS)

    Mahajan, Ajay; Chitikeshi, Sanjeevi; Utterbach, Lucas; Bandhil, Pavan; Figueroa, Fernando

    2006-05-01

    This paper describes the application of intelligent sensors in Integrated Systems Health Monitoring (ISHM) as applied to a rocket test stand. The development of intelligent sensors is attempted as an integrated systems approach, i.e., one treats the sensors as a complete system with its own physical transducer, A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements associated with the rocket test stands. These smart elements can be sensors, actuators or other devices. Though the immediate application is the monitoring of the rocket test stands, the technology should be generally applicable to the ISHM vision. This paper outlines progress made in the development of intelligent sensors by describing the work done to date on Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).

  8. A lightweight sensor network management system design

    USGS Publications Warehouse

    Yuan, F.; Song, W.-Z.; Peterson, N.; Peng, Y.; Wang, L.; Shirazi, B.; LaHusen, R.

    2008-01-01

    In this paper, we propose a lightweight and transparent management framework for TinyOS sensor networks, called L-SNMS, which minimizes the overhead of management functions, including memory usage overhead, network traffic overhead, and integration overhead. We accomplish this by making L-SNMS virtually transparent to other applications, hence requiring minimal integration. The proposed L-SNMS framework has been successfully tested on various sensor node platforms, including TelosB, MICAz and IMote2. © 2008 IEEE.

  9. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); then the user establishes the exact position at a specific landmark, like a door. This location initialises indoor navigation, based on an inertial sensor, a step recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on a touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
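
    A generic pedestrian dead-reckoning sketch of the indoor step described above (position fixed at a landmark, then advanced one detected step at a time along the inertial heading); the fixed step length and function names are assumptions, not taken from the paper.

        # Generic pedestrian dead-reckoning sketch: starting from a known landmark
        # (e.g. a door), each detected step advances the position along the
        # inertial heading. Step length and names are assumptions.
        import math

        def dead_reckon(start_xy, headings, step_length_m=0.7):
            """headings: iterable of headings in radians, one per detected step."""
            x, y = start_xy
            track = [(x, y)]
            for heading in headings:
                x += step_length_m * math.cos(heading)
                y += step_length_m * math.sin(heading)
                track.append((x, y))
            return track

        if __name__ == "__main__":
            door = (0.0, 0.0)                             # position fixed at a known landmark
            steps = [0.0, 0.0, math.pi / 2, math.pi / 2]  # two steps east, two steps north
            print(dead_reckon(door, steps)[-1])           # -> roughly (1.4, 1.4)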

  10. Force Sensitive Handles and Capacitive Touch Sensor for Driving a Flexible Haptic-Based Immersive System

    PubMed Central

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-01-01

    In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape. PMID:24113680

  11. Force sensitive handles and capacitive touch sensor for driving a flexible haptic-based immersive system.

    PubMed

    Covarrubias, Mario; Bordegoni, Monica; Cugini, Umberto

    2013-10-09

    In this article, we present an approach that uses both two force sensitive handles (FSH) and a flexible capacitive touch sensor (FCTS) to drive a haptic-based immersive system. The immersive system has been developed as part of a multimodal interface for product design. The haptic interface consists of a strip that can be used by product designers to evaluate the quality of a 3D virtual shape by using touch, vision and hearing and, also, to interactively change the shape of the virtual object. Specifically, the user interacts with the FSH to move the virtual object and to appropriately position the haptic interface for retrieving the six degrees of freedom required for both manipulation and modification modalities. The FCTS allows the system to track the movement and position of the user's fingers on the strip, which is used for rendering visual and sound feedback. Two evaluation experiments are described, which involve both the evaluation and the modification of a 3D shape. Results show that the use of the haptic strip for the evaluation of aesthetic shapes is effective and supports product designers in the appreciation of the aesthetic qualities of the shape.

  12. Enhanced Deployment Strategy for Role-Based Hierarchical Application Agents in Wireless Sensor Networks with Established Clusterheads

    ERIC Educational Resources Information Center

    Gendreau, Audrey

    2014-01-01

    Efficient self-organizing virtual clusterheads that supervise data collection based on their wireless connectivity, risk, and overhead costs, are an important element of Wireless Sensor Networks (WSNs). This function is especially critical during deployment when system resources are allocated to a subsequent application. In the presented research,…

  13. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    PubMed

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  14. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata

    PubMed Central

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-01

    Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included. PMID:29337877

  15. A sensor network based virtual beam-like structure method for fault diagnosis and monitoring of complex structures with Improved Bacterial Optimization

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-02-01

    This paper proposes a novel method for the fault diagnosis of complex structures based on an optimized virtual beam-like structure approach. A complex structure can be regarded as a combination of numerous virtual beam-like structures, considering the vibration transmission path from the vibration sources to each sensor. The structural 'virtual beam' consists of a sensor chain automatically obtained by an Improved Bacterial Optimization Algorithm (IBOA). This biologically inspired optimization method is proposed for solving the discrete optimization problem associated with the selection of the optimal virtual beam for fault diagnosis. The virtual beam-like-structure approach requires little prior knowledge, does not need stationary response data, and is not confined to a specific structural design. It is easy to implement within a sensor network attached to the monitored structure. The proposed fault diagnosis method has been tested on the detection of loosening screws located at varying positions in a real satellite-like model. Compared with empirical methods, the proposed virtual beam-like structure method has proved to be very effective and more reliable for fault localization.

  16. Three-Dimensional Sensor Common Operating Picture (3-D Sensor COP)

    DTIC Science & Technology

    2017-01-01

    created. Additionally, a 3-D model of the sensor itself can be created. Using these 3-D models, along with emerging virtual and augmented reality tools... (Report contents: Introduction; The 3-D Sensor COP; Virtual Sensor Placement; Conclusions; References.)

  17. A novel vibration structure for dynamic balancing measurement

    NASA Astrophysics Data System (ADS)

    Qin, Peng; Cai, Ping; Hu, Qinghan; Li, Yingxia

    2006-11-01

    Based on the concept of the instantaneous center of motion in theoretical mechanics, the paper presents a novel virtual vibration structure for high-precision dynamic balancing measurement. The structural features and the unbalance response characteristics of this vibration structure are analyzed in depth, and the relation between the real measuring system and the virtual one is explained. Theoretical analysis indicates that the flexibly hinged, integrated plate-spring sets hold a fixed vibration center, so that this vibration system achieves excellent plane separation. In addition, the sensors are mounted on the same longitudinal section, which eliminates the influence of phase error on the primary unbalance reduction ratio. Furthermore, performance changes in the sensors caused by environmental factors have less influence on the accuracy of the measurement. The result is more accurate measurement with a reduced need for a second correction run.

  18. Design of an Intelligent Front-End Signal Conditioning Circuit for IR Sensors

    NASA Astrophysics Data System (ADS)

    de Arcas, G.; Ruiz, M.; Lopez, J. M.; Gutierrez, R.; Villamayor, V.; Gomez, L.; Montojo, Mª. T.

    2008-02-01

    This paper presents the design of an intelligent front-end signal conditioning system for IR sensors. The system has been developed as an interface between a PbSe IR sensor matrix and a TMS320C67x digital signal processor. The system architecture ensures scalability, so it can be used for sensors with different matrix sizes. It includes an integrator-based signal conditioning circuit, a data acquisition converter block, and an FPGA-based advanced control block that permits the inclusion of high-level image preprocessing routines such as faulty pixel detection and sensor calibration in the signal conditioning front-end. During the design phase, virtual instrumentation technologies proved to be a very valuable tool for prototyping when choosing the best A/D converter type for the application. Development time was significantly reduced due to the use of this technology.

  19. Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.

    PubMed

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-02-21

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory in the inter-University Master of Systems Engineering and Automatic Control are presented.

  20. Virtual and Remote Robotic Laboratory Using EJS, MATLAB and LabVIEW

    PubMed Central

    Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián

    2013-01-01

    This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in Mobile Robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from their homes, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory in the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578

  1. Oversampling in virtual visual sensors as a means to recover higher modes of vibration

    NASA Astrophysics Data System (ADS)

    Shariati, Ali; Schumacher, Thomas

    2015-03-01

    Vibration-based structural health monitoring (SHM) techniques require modal information from the monitored structure in order to estimate the location and severity of damage. Natural frequencies also provide useful information to calibrate finite element models. There are several types of physical sensors that can measure the response over a range of frequencies. For most of those sensors, however, accessibility, limitation of measurement points, wiring, and high system cost represent major challenges. Recent optical sensing approaches offer advantages such as easy access to visible areas, distributed sensing capabilities, and comparatively inexpensive data recording while having no wiring issues. In this research we propose a novel methodology to measure natural frequencies of structures using digital video cameras based on virtual visual sensors (VVS). In our initial study, where we worked with commercially available, inexpensive digital video cameras, we found that for multiple-degree-of-freedom systems it is difficult to detect all of the natural frequencies simultaneously due to low quantization resolution. In this study we show how oversampling, enabled by the use of high-end, high-frame-rate video cameras, allows all three natural frequencies of a three-story lab-scale structure to be recovered.
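
    To make the virtual visual sensor (VVS) idea concrete, the sketch below treats the intensity history of a single pixel as a vibration signal and picks the dominant spectral peaks as natural frequency estimates. It is only an illustrative sketch: the frame rate, pixel location, synthetic three-mode signal, and crude peak picking are assumptions for demonstration, not details of the study.

        import numpy as np

        def vvs_natural_frequencies(frames, row, col, fps, n_peaks=3):
            """Estimate dominant vibration frequencies from a pixel intensity history.

            frames : array (n_frames, height, width), grayscale video
            row, col : pixel acting as the virtual visual sensor
            fps : frame rate in Hz (a high frame rate, i.e. oversampling, helps weak modes)
            """
            signal = frames[:, row, col].astype(float)
            signal -= signal.mean()                       # remove DC offset
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
            order = np.argsort(spectrum)[::-1]            # crude peak picking
            return sorted(freqs[order[:n_peaks]])

        # Synthetic 3-mode response sampled at 240 frames per second
        t = np.arange(0, 10, 1 / 240.0)
        sig = (np.sin(2 * np.pi * 2.1 * t) + 0.3 * np.sin(2 * np.pi * 6.8 * t)
               + 0.1 * np.sin(2 * np.pi * 11.5 * t))
        frames = sig[:, None, None] * np.ones((1, 4, 4)) + 128
        print(vvs_natural_frequencies(frames, 2, 2, fps=240))   # ~[2.1, 6.8, 11.5]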

  2. VLSI Design of Trusted Virtual Sensors.

    PubMed

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  3. VLSI Design of Trusted Virtual Sensors

    PubMed Central

    2018-01-01

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time). PMID:29370141
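
    As an illustration of the model structure named above, the sketch below evaluates a toy PieceWise-Affine hyper-Rectangular (PWAR) virtual sensor in software: the input space is divided into hyper-rectangular cells and each cell applies its own affine map. The grid, coefficients, and two-input example are invented for illustration; the hardware architecture, the AEGIS encryption, and the SRAM PUF of the actual design are not modelled here.

        import numpy as np

        class PWARVirtualSensor:
            """Piecewise-affine hyper-rectangular model: y = w[cell] . x + b[cell]."""

            def __init__(self, edges, weights, biases):
                self.edges = edges        # one array of cell boundaries per input
                self.weights = weights    # array, shape (*n_cells, n_inputs)
                self.biases = biases      # array, shape (*n_cells,)

            def _cell(self, x):
                # locate the hyper-rectangle containing x along every input axis
                return tuple(
                    int(np.clip(np.searchsorted(e, xi, side="right") - 1, 0, len(e) - 2))
                    for e, xi in zip(self.edges, x)
                )

            def estimate(self, x):
                idx = self._cell(x)
                return float(self.weights[idx] @ np.asarray(x) + self.biases[idx])

        # Toy 2-input model with a 2 x 2 grid of cells (purely illustrative coefficients)
        edges = [np.array([0.0, 0.5, 1.0]), np.array([0.0, 0.5, 1.0])]
        weights = np.ones((2, 2, 2))            # each cell: y = x0 + x1 + bias
        biases = np.array([[0.0, 0.1], [0.2, 0.3]])
        sensor = PWARVirtualSensor(edges, weights, biases)
        print(sensor.estimate([0.7, 0.2]))      # cell (1, 0) -> 0.7 + 0.2 + 0.2 = 1.1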

  4. Virtual reality and telepresence for military medicine.

    PubMed

    Satava, R M

    1995-03-01

    The profound changes brought about by technology in the past few decades are leading to a total revolution in medicine. The advanced technologies of telepresence and virtual reality are but two of the manifestations emerging from our new information age; now all of medicine can be empowered because of this digital technology. The leading edge is on the digital battlefield, where an entire new concept in military medicine is evolving. Using remote sensors, intelligent systems, telepresence surgery and virtual reality surgical simulations, combat casualty care is prepared for the 21st century.

  5. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem. In these solutions, the fill factor is assumed to be known. However, it is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of the sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly with significant stability and accuracy from one single arbitrary image, according to the low standard deviation of the estimated fill factors from each of the images and for each camera. PMID:28335459

  6. The application of smart sensor techniques to a solid-state array multispectral sensor

    NASA Technical Reports Server (NTRS)

    Mcfadin, L. W.

    1978-01-01

    The solid-state array spectroradiometer (SAS) developed at JSC for remote sensing applications is a multispectral sensor which has no moving parts, is virtually maintenance-free, and has the ability to provide data which requires a minimum of processing. The instrument is based on the 42 x 342 element charge injection device (CID) detector. This system allows the combination of spectral scanning and across-track spatial scanning along with its associated digitization electronics into a single detector.

  7. Cyber entertainment system using an immersive networked virtual environment

    NASA Astrophysics Data System (ADS)

    Ihara, Masayuki; Honda, Shinkuro; Kobayashi, Minoru; Ishibashi, Satoshi

    2002-05-01

    The authors are examining a cyber entertainment system that applies IPT (Immersive Projection Technology) displays to the entertainment field. This system enables users who are in remote locations to communicate with each other so that they feel as if they are together. Moreover, the system enables those users to experience a high degree of presence due to the provision of stereoscopic vision as well as a haptic interface and stereo sound. This paper introduces this system from the viewpoint of space sharing across the network and elucidates its operation using the theme of golf. The system is developed by integrating avatar control, an I/O device, communication links, virtual interaction, mixed reality, and physical simulations. Pairs of these environments are connected across the network. This allows the two players to experience competition. An avatar of each player is displayed by the other player's IPT display in the remote location and is driven by only two magnetic sensors. That is, in the proposed system, users do not need to wear a data suit with many sensors and are able to play golf without any encumbrance.

  8. Evaluation of Sensor Configurations for Robotic Surgical Instruments

    PubMed Central

    Gómez-de-Gabriel, Jesús M.; Harwin, William

    2015-01-01

    Designing surgical instruments for robotic-assisted minimally-invasive surgery (RAMIS) is challenging due to constraints on the number and type of sensors imposed by considerations such as space or the need for sterilization. A new method for evaluating the usability of virtual teleoperated surgical instruments based on virtual sensors is presented. This method uses virtual prototyping of the surgical instrument with a dual physical interaction, which allows testing of different sensor configurations in a real environment. Moreover, the proposed approach has been applied to the evaluation of prototypes of a two-finger grasper for lump detection by remote pinching. In this example, the usability of a set of five different sensor configurations, with a different number of force sensors, is evaluated in terms of quantitative and qualitative measures in clinical experiments with 23 volunteers. As a result, the smallest number of force sensors needed in the surgical instrument that ensures the usability of the device can be determined. The details of the experimental setup are also included. PMID:26516863

  9. Evaluation of Sensor Configurations for Robotic Surgical Instruments.

    PubMed

    Gómez-de-Gabriel, Jesús M; Harwin, William

    2015-10-27

    Designing surgical instruments for robotic-assisted minimally-invasive surgery (RAMIS) is challenging due to constraints on the number and type of sensors imposed by considerations such as space or the need for sterilization. A new method for evaluating the usability of virtual teleoperated surgical instruments based on virtual sensors is presented. This method uses virtual prototyping of the surgical instrument with a dual physical interaction, which allows testing of different sensor configurations in a real environment. Moreover, the proposed approach has been applied to the evaluation of prototypes of a two-finger grasper for lump detection by remote pinching. In this example, the usability of a set of five different sensor configurations, with a different number of force sensors, is evaluated in terms of quantitative and qualitative measures in clinical experiments with 23 volunteers. As a result, the smallest number of force sensors needed in the surgical instrument that ensures the usability of the device can be determined. The details of the experimental setup are also included.

  10. Integrating Flexible Sensor and Virtual Self-Organizing DC Grid Model With Cloud Computing for Blood Leakage Detection During Hemodialysis.

    PubMed

    Huang, Ping-Tzan; Jong, Tai-Lang; Li, Chien-Ming; Chen, Wei-Ling; Lin, Chia-Hung

    2017-08-01

    Blood leakage and blood loss are serious complications during hemodialysis. Hemodialysis survey reports show that these life-threatening events continue to occur, drawing the attention of nephrology nurses and of patients themselves. When the venous needle and blood line are disconnected, it takes only a few minutes for an adult patient to lose over 40% of his/her blood, which is a sufficient amount of blood loss to cause the patient to die. Therefore, we propose integrating a flexible sensor and self-organizing algorithm to design a cloud computing-based warning device for blood leakage detection. The flexible sensor is fabricated via a screen-printing technique using metallic materials on a soft substrate in an array configuration. The self-organizing algorithm constructs a virtual direct current grid-based alarm unit in an embedded system. This warning device is employed to identify blood leakage levels via a wireless network and cloud computing. It has been validated experimentally, and the experimental results suggest specifications for its commercial designs. The proposed model can also be implemented in an embedded system.

  11. A method to align the coordinate system of accelerometers to the axes of a human body: The depitch algorithm.

    PubMed

    Gietzelt, Matthias; Schnabel, Stephan; Wolf, Klaus-Hendrik; Büsching, Felix; Song, Bianying; Rust, Stefan; Marschollek, Michael

    2012-05-01

    One of the key problems in accelerometry based gait analyses is that it may not be possible to attach an accelerometer to the lower trunk so that its axes are perfectly aligned to the axes of the subject. In this paper we will present an algorithm that was designed to virtually align the axes of the accelerometer to the axes of the subject during walking sections. This algorithm is based on a physically reasonable approach and built for measurements in unsupervised settings, where the test persons are applying the sensors by themselves. For evaluation purposes we conducted a study with 6 healthy subjects and measured their gait with a manually aligned and a skewed accelerometer attached to the subject's lower trunk. After applying the algorithm the intra-axis correlation of both sensors was on average 0.89±0.1 with a mean absolute error of 0.05g. We concluded that the algorithm was able to adjust the skewed sensor node virtually to the coordinate system of the subject. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
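
    A generic way to picture such a virtual re-alignment is to estimate the gravity direction from the mean accelerometer output and rotate the sensor frame so that this direction coincides with the body's vertical axis. The sketch below implements that gravity-based rotation only; it is not the published depitch algorithm, and the sample data and axis convention are assumptions.

        import numpy as np

        def gravity_alignment_rotation(acc):
            """Rotation matrix mapping the mean measured gravity direction onto (0, 0, 1).

            acc : array (n_samples, 3), raw accelerometer output in g.
            """
            g_sensor = acc.mean(axis=0)
            g_sensor /= np.linalg.norm(g_sensor)
            target = np.array([0.0, 0.0, 1.0])
            v = np.cross(g_sensor, target)          # rotation axis (unnormalized)
            c = float(np.dot(g_sensor, target))     # cosine of the rotation angle
            if np.allclose(v, 0.0):                 # aligned, or flipped 180 degrees
                return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
            vx = np.array([[0, -v[2], v[1]],
                           [v[2], 0, -v[0]],
                           [-v[1], v[0], 0]])
            # Rodrigues-style formula for the rotation taking g_sensor to target
            return np.eye(3) + vx + vx @ vx * (1.0 / (1.0 + c))

        # Simulated skewed sensor: gravity shows up partly on the x axis
        acc = np.random.normal([0.3, 0.0, 0.95], 0.02, size=(500, 3))
        R = gravity_alignment_rotation(acc)
        aligned = acc @ R.T                         # virtually re-aligned samples
        print(aligned.mean(axis=0))                 # close to [0, 0, 1]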

  12. Inertial Head-Tracker Sensor Fusion by a Complementary Separate-Bias Kalman Filter

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric

    1996-01-01

    Current virtual environment and teleoperator applications are hampered by the need for an accurate, quick-responding head-tracking system with a large working volume. Gyroscopic orientation sensors can overcome problems with jitter, latency, interference, line-of-sight obscurations, and limited range, but suffer from slow drift. Gravimetric inclinometers can detect attitude without drifting, but are slow and sensitive to transverse accelerations. This paper describes the design of a Kalman filter to integrate the data from these two types of sensors in order to achieve the excellent dynamic response of an inertial system without drift, and without the acceleration sensitivity of inclinometers.

  13. Inertial head-tracker sensor fusion by a complementary separate-bias Kalman filter

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric

    1996-01-01

    Current virtual environment and teleoperator applications are hampered by the need for an accurate, quick responding head-tracking system with a large working volume. Gyroscopic orientation sensors can overcome problems with jitter, latency, interference, line-of-sight obscurations, and limited range, but suffer from slow drift. Gravimetric inclinometers can detect attitude without drifting, but are slow and sensitive to transverse accelerations. This paper describes the design of a Kalman filter to integrate the data from these two types of sensors in order to achieve the excellent dynamic response of an inertial system without drift, and without the acceleration sensitivity of inclinometers.
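
    A much simpler relative of the filter described above is a first-order complementary filter, which trusts the integrated gyroscope rate at high frequency and the inclinometer angle at low frequency. The sketch below illustrates that basic fusion idea for a single tilt axis; the sample rate, gain, and synthetic measurements are assumptions, and the separate-bias Kalman formulation of the paper is not reproduced.

        import random

        def complementary_tilt_filter(gyro_rates, incl_angles, dt, alpha=0.98):
            """Fuse gyro rate (deg/s) with inclinometer angle (deg) for one tilt axis.

            High-pass the integrated gyro (good dynamics, slow drift) and low-pass the
            inclinometer (drift-free, but slow and acceleration-sensitive).
            """
            angle = incl_angles[0]                  # initialize from the inclinometer
            estimates = []
            for rate, incl in zip(gyro_rates, incl_angles):
                angle = alpha * (angle + rate * dt) + (1.0 - alpha) * incl
                estimates.append(angle)
            return estimates

        # Example: constant 10 deg/s rotation with noisy gyro and inclinometer readings
        dt = 0.01
        true = [10.0 * dt * k for k in range(500)]
        gyro = [10.0 + random.gauss(0, 0.5) for _ in range(500)]   # rate measurement + noise
        incl = [a + random.gauss(0, 2.0) for a in true]            # noisy absolute angle
        est = complementary_tilt_filter(gyro, incl, dt)
        print(round(est[-1], 1), round(true[-1], 1))               # estimate tracks the truth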

  14. Virtual Deformation Control of the X-56A Model with Simulated Fiber Optic Sensors

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.

    2014-01-01

    A robust control law design methodology is presented to stabilize the X-56A model and command its wing shape. The X-56A was purposely designed to experience flutter modes in its flight envelope. The methodology introduces three phases: the controller design phase, the modal filter design phase, and the reference signal design phase. A mu-optimal controller is designed and made robust to speed and parameter variations. A conversion technique is presented for generating sensor strain modes from sensor deformation mode shapes. The sensor modes are utilized for modal filtering and simulating fiber optic sensors for feedback to the controller. To generate appropriate virtual deformation reference signals, rigid-body corrections are introduced to the deformation mode shapes. After successful completion of the phases, virtual deformation control is demonstrated. The wing is deformed and it is shown that angle-of-attack changes occur which could potentially be used to an advantage. The X-56A program must demonstrate active flutter suppression. It is shown that the virtual deformation controller can achieve active flutter suppression on the X-56A simulation model.

  15. Virtual Deformation Control of the X-56A Model with Simulated Fiber Optic Sensors

    NASA Technical Reports Server (NTRS)

    Suh, Peter M.; Chin, Alexander Wong

    2013-01-01

    A robust control law design methodology is presented to stabilize the X-56A model and command its wing shape. The X-56A was purposely designed to experience flutter modes in its flight envelope. The methodology introduces three phases: the controller design phase, the modal filter design phase, and the reference signal design phase. A mu-optimal controller is designed and made robust to speed and parameter variations. A conversion technique is presented for generating sensor strain modes from sensor deformation mode shapes. The sensor modes are utilized for modal filtering and simulating fiber optic sensors for feedback to the controller. To generate appropriate virtual deformation reference signals, rigid-body corrections are introduced to the deformation mode shapes. After successful completion of the phases, virtual deformation control is demonstrated. The wing is deformed and it is shown that angle-of-attack changes occur which could potentially be used to an advantage. The X-56A program must demonstrate active flutter suppression. It is shown that the virtual deformation controller can achieve active flutter suppression on the X-56A simulation model.
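
    The modal-filtering step can be illustrated generically: with a matrix of sensor mode shapes, the modal coordinates are recovered from distributed strain (or deformation) measurements by a least-squares fit. The sketch below uses invented mode shapes and noise levels; it does not represent the X-56A model, its fiber optic sensor layout, or the mu-optimal controller.

        import numpy as np

        def modal_filter(sensor_signals, mode_shapes):
            """Estimate modal coordinates q from distributed sensor signals y.

            y ~= Phi q, so q = pinv(Phi) y (least-squares modal filtering).

            sensor_signals : array (n_sensors,) or (n_sensors, n_time)
            mode_shapes    : array (n_sensors, n_modes), each column a sensor mode shape
            """
            return np.linalg.pinv(mode_shapes) @ sensor_signals

        # Toy example: 6 strain stations, 2 retained modes
        phi = np.array([[0.1, 0.9],
                        [0.3, 0.6],
                        [0.5, 0.2],
                        [0.7, -0.2],
                        [0.9, -0.6],
                        [1.0, -0.9]])
        q_true = np.array([2.0, -0.5])
        y = phi @ q_true + np.random.normal(0, 0.01, size=6)   # noisy measurements
        print(modal_filter(y, phi))                            # approximately [2.0, -0.5]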

  16. An Improved Co-evolutionary Particle Swarm Optimization for Wireless Sensor Networks with Dynamic Deployment

    PubMed Central

    Wang, Xue; Wang, Sheng; Ma, Jun-Jie

    2007-01-01

    The effectiveness of wireless sensor networks (WSNs) depends on the coverage and target detection probability provided by dynamic deployment, which is usually supported by the virtual force (VF) algorithm. However, in the VF algorithm, the virtual force exerted by stationary sensor nodes will hinder the movement of mobile sensor nodes. Particle swarm optimization (PSO) is introduced as another dynamic deployment algorithm, but in this case the computation time required is the big bottleneck. This paper proposes a dynamic deployment algorithm which is named “virtual force directed co-evolutionary particle swarm optimization” (VFCPSO), since this algorithm combines the co-evolutionary particle swarm optimization (CPSO) with the VF algorithm, whereby the CPSO uses multiple swarms to optimize different components of the solution vectors for dynamic deployment cooperatively and the velocity of each particle is updated according to not only the historical local and global optimal solutions, but also the virtual forces of sensor nodes. Simulation results demonstrate that the proposed VFCPSO is competent for dynamic deployment in WSNs and has better performance with respect to computation time and effectiveness than the VF, PSO and VFPSO algorithms.
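
    The flavour of such a velocity update can be sketched in a few lines: the usual PSO inertia, cognitive, and social terms are augmented with a virtual-force term acting on the encoded sensor-node positions. The coefficients, the single-particle form, and the example force vector below are invented for illustration; the cooperative sub-swarm structure of the full VFCPSO is omitted.

        import numpy as np

        def vf_pso_velocity_update(v, x, p_best, g_best, virtual_force,
                                   w=0.7, c1=1.5, c2=1.5, c3=0.5, rng=np.random):
            """One velocity update for a particle encoding sensor-node positions.

            v, x            : current velocity and position vectors
            p_best, g_best  : personal and global best positions
            virtual_force   : net attractive/repulsive force on the encoded nodes
            """
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            return (w * v
                    + c1 * r1 * (p_best - x)      # cognitive term
                    + c2 * r2 * (g_best - x)      # social term
                    + c3 * virtual_force)         # virtual-force steering term

        # Example: a particle encoding two mobile nodes in 2-D (flattened to length 4)
        x = np.array([1.0, 1.0, 4.0, 4.0])
        v = np.zeros(4)
        force = np.array([0.2, -0.1, -0.3, 0.0])  # e.g. repulsion from stationary nodes
        v = vf_pso_velocity_update(v, x, p_best=x + 0.5, g_best=x + 1.0, virtual_force=force)
        print(x + v)                              # candidate new deployment positions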

  17. Sensor Webs and Virtual Globes: Enabling Understanding of Changes in a partially Glaciated Watershed

    NASA Astrophysics Data System (ADS)

    Heavner, M.; Fatland, D. R.; Habermann, M.; Berner, L.; Hood, E.; Connor, C.; Galbraith, J.; Knuth, E.; O'Brien, W.

    2008-12-01

    The University of Alaska Southeast is currently implementing a sensor web identified as the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research (SEAMONSTER). SEAMONSTER is operating in the partially glaciated Mendenhall and Lemon Creek Watersheds, in the Juneau area, on the margins of the Juneau Icefield. These watersheds are studied for both (1) long-term monitoring of changes and (2) detection and analysis of transient events (such as glacier lake outburst floods). The heterogeneous sensors (meteorologic, dual frequency GPS, water quality, lake level, etc.), power and bandwidth constraints, and competing time scales of interest require autonomous reactivity of the sensor web. They also present challenges for operational management of the sensor web. The harsh conditions on the glaciers provide additional operating constraints. The tight integration of the sensor web and virtual globe enabling technology enhances the project in multiple ways. We are utilizing virtual globe infrastructures to enhance both sensor web management and data access. SEAMONSTER utilizes virtual globes for education and public outreach, sensor web management, data dissemination, and enabling collaboration. Using a PostgreSQL database with GIS extensions coupled to the Open Geospatial Consortium (OGC) GeoServer, we generate near-real-time auto-updating geobrowser files of the data in multiple OGC standard formats (e.g. KML, WCS). Additionally, embedding wiki pages in this database allows the development of a geospatially aware wiki describing the projects for better public outreach and education. In this presentation we will describe how we have implemented these technologies to date, the lessons learned, and our efforts towards greater OGC standard implementation. A major focus will be on demonstrating how geobrowsers and virtual globes have made this project possible.

  18. A Prototype Land Information Sensor Web: Design, Implementation and Implication for the SMAP Mission

    NASA Astrophysics Data System (ADS)

    Su, H.; Houser, P.; Tian, Y.; Geiger, J. K.; Kumar, S. V.; Gates, L.

    2009-12-01

    Land Surface Model (LSM) predictions are regular in time and space, but these predictions are influenced by errors in model structure, input variables, parameters and inadequate treatment of sub-grid scale spatial variability. Consequently, LSM predictions are significantly improved through observation constraints made in a data assimilation framework. Several multi-sensor satellites are currently operating which provide multiple global observations of the land surface, and its related near-atmospheric properties. However, these observations are not optimal for addressing current and future land surface environmental problems. To meet future earth system science challenges, NASA will develop constellations of smart satellites in sensor web configurations which provide timely on-demand data and analysis to users, and can be reconfigured based on the changing needs of science and available technology. A sensor web is more than a collection of satellite sensors. Rather, it is a system composed of multiple platforms interconnected by a communication network for the purpose of performing specific observations and processing the data required to support specific science goals. Sensor webs can eclipse the value of disparate sensor components by reducing response time and increasing scientific value, especially when the two-way interaction between the model and the sensor web is enabled. The prototype Land Information Sensor Web (LISW) study is sponsored by NASA and aims to integrate the Land Information System (LIS) into a sensor web framework which allows for optimal 2-way information flow that enhances land surface modeling using sensor web observations, and in turn allows sensor web reconfiguration to minimize overall system uncertainty. This prototype is based on a simulated interactive sensor web, which is then used to exercise and optimize the sensor web modeling interfaces. The Land Information Sensor Web Service-Oriented Architecture (LISW-SOA) has been developed and it is the first sensor web framework developed especially for land surface studies. Synthetic experiments based on the LISW-SOA and the virtual sensor web provide a controlled environment in which to examine the end-to-end performance of the prototype, the impact of various sensor web design trade-offs and the eventual value of sensor webs for a particular prediction or decision support. In this paper, the design and implementation of the LISW-SOA and the implications for the Soil Moisture Active Passive (SMAP) mission are presented. Particular attention is focused on examining the relationship between the economic investment in a sensor web (space- and airborne, ground-based) and the accuracy of the model-predicted soil moisture that can be achieved by using such sensor observations. The study of the virtual Land Information Sensor Web (LISW) is expected to provide necessary a priori knowledge for designing and deploying the next-generation Global Earth Observation System of Systems (GEOSS).

  19. Self-localization of wireless sensor networks using self-organizing maps

    NASA Astrophysics Data System (ADS)

    Ertin, Emre; Priddy, Kevin L.

    2005-03-01

    Recently there has been a renewed interest in the notion of deploying large numbers of networked sensors for applications ranging from environmental monitoring to surveillance. In a typical scenario a number of sensors are distributed in a region of interest. Each sensor is equipped with sensing, processing and communication capabilities. The information gathered from the sensors can be used to detect, track and classify objects of interest. In many applications, each sensor's location is crucial for interpreting the data collected from it. Scalability requirements dictate sensor nodes that are inexpensive devices without dedicated localization hardware such as GPS. Therefore the network has to rely on information collected within the network to self-localize. In the literature a number of algorithms have been proposed for network localization which use measurements informative of range, angle, or proximity between nodes. Recent work by Patwari and Hero relies on sensor data without explicit range estimates. The assumption is that the correlation structure in the data is a monotone function of the intersensor distances. In this paper we propose a new method based on unsupervised learning techniques to extract location information from the sensor data itself. We consider a grid consisting of virtual nodes and try to fit the grid to the actual sensor network data using the method of self-organizing maps. Then the known sensor network geometry can be used to rotate and scale the grid to a global coordinate system. Finally, we illustrate how the virtual nodes' location information can be used to track a target.
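
    A stripped-down version of the grid-fitting idea is a standard self-organizing map update: virtual grid nodes are pulled toward data samples and drag their grid neighbours with them, after which the trained grid can be rotated and scaled into global coordinates using known geometry. The sketch below fits a small SOM to synthetic 2-D samples; the grid size, learning schedule, and data are assumptions, not the setup evaluated in the paper.

        import numpy as np

        def train_som(data, grid_shape=(5, 5), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
            """Fit a grid of virtual nodes to data samples with a self-organizing map."""
            rng = np.random.default_rng(seed)
            rows, cols = grid_shape
            # grid coordinates of each virtual node, used by the neighbourhood function
            grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
            weights = rng.uniform(data.min(0), data.max(0), size=(rows * cols, data.shape[1]))
            for epoch in range(epochs):
                lr = lr0 * (1.0 - epoch / epochs)
                sigma = sigma0 * (1.0 - epoch / epochs) + 0.5
                for x in rng.permutation(data):
                    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                    dist2 = np.sum((grid - grid[winner]) ** 2, axis=1)
                    h = np.exp(-dist2 / (2.0 * sigma ** 2))       # neighbourhood kernel
                    weights += lr * h[:, None] * (x - weights)    # pull nodes toward sample
            return weights

        # Synthetic "sensor field" samples in a unit square
        rng = np.random.default_rng(1)
        samples = rng.uniform(0, 1, size=(300, 2))
        virtual_nodes = train_som(samples)
        print(virtual_nodes[:3])   # learned virtual-node positions (local coordinates)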

  20. MATREX: A Unifying Modeling and Simulation Architecture for Live-Virtual-Constructive Applications

    DTIC Science & Technology

    2007-05-23

    [Excerpt from report front matter and acronym list:] CMS2 – Comprehensive Munitions & Sensor Server; CSAT – C4ISR Static Analysis Tool; C4ISR – Command & Control, Communications, Computers ...

  1. Sea-Based Automated Launch and Recovery System Virtual Testbed

    DTIC Science & Technology

    2013-12-02

    ... integrated with an Extended Kalman Filter to study sensor fusion in a fixed-wing aircraft shipboard recovery scenario. ... The sensors and filter performance are graded both on pure estimation error and by examining the touchdown performance of the aircraft on the ship. ... v, and w body-axis velocity components of the aircraft, while the velocities applied to the extremities are used to calculate estimated rotational ...

  2. A Personal Inertial Navigation System Based on Multiple Distributed, Nine-Degrees-Of-Freedom, Inertial Measurement Units

    DTIC Science & Technology

    2016-12-01

    ... algorithm and quaternion-based complementary filter developed at the Naval Postgraduate School, is developed. The performance of a consumer-grade nine-degrees-of-freedom IMU ... [Subject terms: ... measurement unit, complementary filter, gait phase detection, zero velocity update, MEMS, IMU, AHRS, GPS denied, distributed sensor, virtual sensor]

  3. The eyes prefer real images

    NASA Technical Reports Server (NTRS)

    Roscoe, Stanley N.

    1989-01-01

    For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.

  4. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, Donald C.

    1996-01-01

    A reactor protection system having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminates the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic.

  5. Reactor protection system with automatic self-testing and diagnostic

    DOEpatents

    Gaubatz, D.C.

    1996-12-17

    A reactor protection system is disclosed having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically "identical" values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminates the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic. 16 figs.
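
    The two-level voting scheme can be sketched as two small functions: a 2-out-of-3-with-spare vote inside each division, followed by a 2-out-of-4 vote across the divisional trip signals. The snippet below is an illustrative simplification; how the spare reading is substituted, how failed sensors are masked, and the example setpoint are assumptions rather than details of the patent.

        def divisional_vote(readings, setpoint, failed=()):
            """2/3-with-spare vote inside one division.

            readings : four sensor values for one scram parameter
            failed   : indices of sensors flagged as failed by self-diagnostics
            Uses the first three healthy readings; trips if at least 2 exceed the setpoint.
            """
            healthy = [r for i, r in enumerate(readings) if i not in failed][:3]
            trips = sum(r > setpoint for r in healthy)
            return trips >= 2

        def plant_scram(division_trips):
            """2/4 vote across the four divisional trip signals."""
            return sum(division_trips) >= 2

        # Example: one sensor has drifted low and is masked as failed;
        # in this toy example every division sees the same four readings.
        readings = [541.0, 538.0, 12.0, 539.5]           # e.g. coolant temperature, arbitrary units
        div = [divisional_vote(readings, setpoint=535.0, failed=(2,)) for _ in range(4)]
        print(plant_scram(div))                          # True: at least 2 divisions voted to scram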

  6. Embry-Riddle Aeronautical University multispectral sensor and data fusion laboratory: a model for distributed research and education

    NASA Astrophysics Data System (ADS)

    McMullen, Sonya A. H.; Henderson, Troy; Ison, David

    2017-05-01

    The miniaturization of unmanned systems and spacecraft, as well as computing and sensor technologies, has opened new opportunities in the areas of remote sensing and multi-sensor data fusion for a variety of applications. Remote sensing and data fusion historically have been the purview of large government organizations, such as the Department of Defense (DoD), National Aeronautics and Space Administration (NASA), and National Geospatial-Intelligence Agency (NGA) due to the high cost and complexity of developing, fielding, and operating such systems. However, miniaturized computers with high capacity processing capabilities, small and affordable sensors, and emerging, commercially available platforms such as UAS and CubeSats to carry such sensors, have allowed for a vast range of novel applications. In order to leverage these developments, Embry-Riddle Aeronautical University (ERAU) has developed an advanced sensor and data fusion laboratory to research component capabilities and their employment on a wide range of autonomous, robotic, and transportation systems. This lab is unique in several ways. For example, it provides a traditional campus laboratory for students and faculty to model and test sensors in a range of scenarios, process multi-sensor data sets (both simulated and experimental), and analyze results. Moreover, it allows for "virtual" modeling, testing, and teaching capability reaching beyond the physical confines of the facility for use among ERAU Worldwide students and faculty located around the globe. Although other institutions such as Georgia Institute of Technology, Lockheed Martin, University of Dayton, and University of Central Florida have optical sensor laboratories, the ERAU virtual concept is the first such lab to expand to multispectral sensors and data fusion, while focusing on the data collection and data products and not on the manufacturing aspect. Further, the initiative is a unique effort among Embry-Riddle faculty to develop multi-disciplinary, cross-campus research to facilitate faculty- and student-driven research. Specifically, the ERAU Worldwide Campus, with locations across the globe and delivering curricula online, will be leveraged to provide novel approaches to remote sensor experimentation and simulation. The purpose of this paper and presentation is to present this new laboratory, research, education, and collaboration process.

  7. Robust Online Monitoring for Calibration Assessment of Transmitters and Instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Coble, Jamie B.; Shumaker, Brent

    Robust online monitoring (OLM) technologies are expected to enable the extension or elimination of periodic sensor calibration intervals in operating and new reactors. These advances in OLM technologies will improve the safety and reliability of current and planned nuclear power systems through improved accuracy and increased reliability of sensors used to monitor key parameters. In this article, we discuss an overview of research being performed within the Nuclear Energy Enabling Technologies (NEET)/Advanced Sensors and Instrumentation (ASI) program, for the development of OLM algorithms to use sensor outputs and, in combination with other available information, 1) determine whether one or more sensors are out of calibration or failing and 2) replace a failing sensor with reliable, accurate sensor outputs. Algorithm development is focused on the following OLM functions: • Signal validation • Virtual sensing • Sensor response-time assessment These algorithms incorporate, at their base, a Gaussian Process-based uncertainty quantification (UQ) method. Various plant models (using kernel regression, GP, or hierarchical models) may be used to predict sensor responses under various plant conditions. These predicted responses can then be applied in fault detection (sensor output and response time) and in computing the correct value (virtual sensing) of a failing physical sensor. The methods being evaluated in this work can compute confidence levels along with the predicted sensor responses, and as a result, may have the potential for compensating for sensor drift in real-time (online recalibration). Evaluation was conducted using data from multiple sources (laboratory flow loops and plant data). Ongoing research in this project is focused on further evaluation of the algorithms, optimization for accuracy and computational efficiency, and integration into a suite of tools for robust OLM that are applicable to monitoring sensor calibration state in nuclear power plants.
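
    The virtual-sensing function can be illustrated with a bare-bones Gaussian Process regression: a healthy, correlated signal is used to predict the value of a failing sensor together with a confidence band. The sketch below uses an assumed RBF kernel, invented hyperparameters, and synthetic flow/pressure data; it is a generic illustration, not the qualified OLM algorithms developed in the program.

        import numpy as np

        def gp_virtual_sensor(x_train, y_train, x_query, length=1.0, signal=1.0, noise=0.05):
            """GP regression: predict a failing sensor's output from a correlated input."""
            def rbf(a, b):
                d2 = (a[:, None] - b[None, :]) ** 2
                return signal ** 2 * np.exp(-0.5 * d2 / length ** 2)

            K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
            K_s = rbf(x_query, x_train)
            K_ss = rbf(x_query, x_query)
            alpha = np.linalg.solve(K, y_train)
            mean = K_s @ alpha
            cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
            std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
            return mean, std                       # predicted value and 1-sigma confidence

        # Invented example: reconstruct a pressure reading from a correlated flow signal
        rng = np.random.default_rng(0)
        flow = np.linspace(0, 5, 40)
        pressure = 2.0 * np.sin(flow) + rng.normal(0, 0.05, size=40)
        mean, std = gp_virtual_sensor(flow, pressure, np.array([2.5]))
        print(mean, std)                           # virtual sensor estimate with uncertainty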

  8. Virtual groups for patient WBAN monitoring in medical environments.

    PubMed

    Ivanov, Stepan; Foley, Christopher; Balasubramaniam, Sasitharan; Botvich, Dmitri

    2012-11-01

    Wireless body area networks (WBAN) provide a tremendous opportunity for remote health monitoring. However, engineering WBAN health monitoring systems encounters a number of challenges, including efficient WBAN monitoring information extraction, dynamically fine-tuning the monitoring process to suit the quality of data, and translating the high-level requirements of medical officers into low-level sensor reconfiguration. This paper addresses these challenges by proposing an architecture that allows virtual groups to be formed between devices of patients, nurses, and doctors in order to enable remote analysis of WBAN data. Group formation and modification is performed with respect to patients' conditions and medical officers' requirements, which can be easily adjusted through high-level policies. We also propose a new metric called the Quality of Health Monitoring, which allows medical officers to provide feedback on the quality of WBAN data received. The WBAN data gathered are transmitted to the virtual group members through an underlying environmental sensor network. The proposed approach is evaluated through a series of simulations.

  9. Providing a virtual tour of a glacial watershed

    NASA Astrophysics Data System (ADS)

    Berner, L.; Habermann, M.; Hood, E.; Fatland, R.; Heavner, M.; Knuth, E.

    2007-12-01

    SEAMONSTER, a NASA-funded sensor web project, is the SouthEast Alaska MOnitoring Network for Science, Telecommunications, Education, and Research. SEAMONSTER is leveraging existing open-source software and is an implementation of existing sensor web technologies intended to act as a sensor web testbed, an educational tool, a scientific resource, and a public resource. The primary focus area of initial SEAMONSTER deployment is the Lemon Creek watershed, which includes the Lemon Creek Glacier studied as part of the 1957-58 IPY. This presentation describes our year-one efforts to maximize education and public outreach activities of SEAMONSTER. During the first summer, 37 sensors were deployed throughout two partially glaciated watersheds and facilitated data acquisition in temperate rain forest, alpine, lacustrine, and glacial environments. Understanding these environments is important for public understanding of climate change. These environments are geographically isolated, limiting public access to, and understanding of, such locales. In an effort to inform the general public and primary educators about the basic processes occurring in these unique natural systems, we are developing an interactive website. This web portal will supplement and enhance environmental science primary education by providing educators and students with interactive access to basic information from the glaciological, hydrological, and meteorological systems we are studying. In addition, we are developing an interactive virtual tour of the Lemon Glacier and its watershed. This effort will include Google Earth as a means of real-time data visualization and will take advantage of time-lapse movies, photographs, maps, and satellite imagery to promote an understanding of these unique natural systems and the role of sensor webs in education.

  10. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments.

    PubMed

    Rincon, J A; Poza-Lujan, Jose-Luis; Julian, V; Posadas-Yagüe, Juan-Luis; Carrascosa, C

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system.

  11. Extending MAM5 Meta-Model and JaCalIVE Framework to Integrate Smart Devices from Real Environments

    PubMed Central

    2016-01-01

    This paper presents the extension of a meta-model (MAM5) and a framework based on the model (JaCalIVE) for developing intelligent virtual environments. The goal of this extension is to develop augmented mirror worlds that represent a real and virtual world coupled, so that the virtual world not only reflects the real one, but also complements it. A new component called a smart resource artifact, which enables modelling and developing devices to access the real physical world, and a human-in-the-loop agent to place a human in the system have been included in the meta-model and framework. The proposed extension of MAM5 has been tested by simulating a light control system where agents can access both virtual and real sensors/actuators through the smart resources developed. The results show that the use of real environment interactive elements (smart resource artifacts) in agent-based simulations makes it possible to minimize the error between the simulated and the real system. PMID:26926691

  12. An Interactive Logistics Centre Information Integration System Using Virtual Reality

    NASA Astrophysics Data System (ADS)

    Hong, S.; Mao, B.

    2018-04-01

    The logistics industry plays a very important role in the operation of modern cities. Meanwhile, the development of the logistics industry has given rise to various problems that urgently need to be solved, such as the safety of logistics products. This paper combines the study of logistics industry traceability and logistics centre environmental safety supervision with virtual reality technology to create an interactive logistics centre information integration system. The proposed system utilizes the immersive characteristics of virtual reality to simulate the real logistics centre scene distinctly, which allows operations staff to conduct safety supervision training at any time without regional restrictions. On the one hand, a large amount of sensor data can be used to simulate a variety of disaster emergency situations. On the other hand, collecting personnel operation data and analysing improper operations can greatly improve training efficiency.

  13. Design of a lightweight, cost effective thimble-like sensor for haptic applications based on contact force sensors.

    PubMed

    Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael

    2011-01-01

    This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation.

  14. Design of a Lightweight, Cost Effective Thimble-Like Sensor for Haptic Applications Based on Contact Force Sensors

    PubMed Central

    Ferre, Manuel; Galiana, Ignacio; Aracil, Rafael

    2011-01-01

    This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be properly measured. Tangential forces are estimated by sensors strategically placed in the thimble sides. Two applications are provided in order to facilitate an evaluation of sensorized thimble performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and these features are then reproduced for training by means of virtual object manipulation. PMID:22247677

  15. Intelligent Sensors: An Integrated Systems Approach

    NASA Technical Reports Server (NTRS)

    Mahajan, Ajay; Chitikeshi, Sanjeevi; Bandhil, Pavan; Utterbach, Lucas; Figueroa, Fernando

    2005-01-01

    The need for intelligent sensors as a critical component for Integrated System Health Management (ISHM) is fairly well recognized by now. Even the definition of what constitutes an intelligent sensor (or smart sensor) is well documented and stems from an intuitive desire to get the best quality measurement data that forms the basis of any complex health monitoring and/or management system. If the sensors, i.e. the elements closest to the measurand, are unreliable then the whole system works with a tremendous handicap. Hence, there has always been a desire to distribute intelligence down to the sensor level, and give it the ability to assess its own health thereby improving the confidence in the quality of the data at all times. This paper proposes the development of intelligent sensors as an integrated systems approach, i.e. one treats the sensors as a complete system with its own sensing hardware (the traditional sensor), A/D converters, processing and storage capabilities, software drivers, self-assessment algorithms, communication protocols and evolutionary methodologies that allow them to get better with time. Under a project being undertaken at the NASA Stennis Space Center, an integrated framework is being developed for the intelligent monitoring of smart elements. These smart elements can be sensors, actuators or other devices. The immediate application is the monitoring of the rocket test stands, but the technology should be generally applicable to the Intelligent Systems Health Monitoring (ISHM) vision. This paper outlines some fundamental issues in the development of intelligent sensors under the following two categories: Physical Intelligent Sensors (PIS) and Virtual Intelligent Sensors (VIS).

  16. Decentralized real-time simulation of forest machines

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Adam, Frank; Hoffmann, Katharina; Rossmann, Juergen; Kraemer, Michael; Schluse, Michael

    2000-10-01

    Developing realistic forest machine simulators is a demanding task. A useful simulator has to provide a close-to-reality simulation of the forest environment as well as the simulation of the physics of the vehicle. Customers demand a highly realistic three-dimensional forestry landscape and the realistic simulation of the complex motion of the vehicle even in rough terrain in order to be able to use the simulator for operator training under close-to-reality conditions. The realistic simulation of the vehicle, especially with the driver's seat mounted on a motion platform, greatly improves the effect of immersion into the virtual reality of a simulated forest and the achievable level of education of the driver. Thus, the connection of the real control devices of forest machines to the simulation system has to be supported, i.e. the real control devices like the joysticks or the board computer system to control the crane, the aggregate etc. Beyond that, the fusion of the board computer system and the simulation system is realized by means of sensors, i.e. digital and analog signals. The decentralized system structure allows several virtual reality systems to evaluate and visualize the information of the control devices and the sensors. So, while the driver is practicing, the instructor can immerse himself in the same virtual forest to monitor the session from his own viewpoint. In this paper, we describe the realized structure as well as the necessary software and hardware components and application experiences.

  17. Virtual Induction Loops Based on Cooperative Vehicular Communications

    PubMed Central

    Gramaglia, Marco; Bernardos, Carlos J.; Calderon, Maria

    2013-01-01

    Induction loop detectors have become the most utilized sensors in traffic management systems. The gathered traffic data is used to improve traffic efficiency (i.e., warning users about congested areas or planning new infrastructures). Despite their usefulness, their deployment and maintenance costs are expensive. Vehicular networks are an emerging technology that can support novel strategies for ubiquitous and more cost-effective traffic data gathering. In this article, we propose and evaluate VIL (Virtual Induction Loop), a simple and lightweight traffic monitoring system based on cooperative vehicular communications. The proposed solution has been experimentally evaluated through simulation using real vehicular traces. PMID:23348033
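
    A virtual induction loop can be pictured as a segment-crossing test on consecutive positions reported in cooperative-awareness beacons: when a vehicle's track crosses the virtual line, a counter is incremented. The sketch below shows only that geometric core with invented coordinates; message formats, map matching, and the aggregation used in the real VIL system are omitted.

        def _ccw(a, b, c):
            """True if points a, b, c are in counter-clockwise order."""
            return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

        def segments_intersect(p1, p2, q1, q2):
            """Proper intersection test between segments p1-p2 and q1-q2."""
            return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2)
                    and _ccw(p1, p2, q1) != _ccw(p1, p2, q2))

        def count_loop_crossings(tracks, loop_a, loop_b):
            """Count vehicles whose consecutive reported positions cross the virtual loop."""
            count = 0
            for track in tracks:                       # one list of (x, y) beacons per vehicle
                for prev, cur in zip(track, track[1:]):
                    if segments_intersect(prev, cur, loop_a, loop_b):
                        count += 1
                        break                          # count each vehicle once
            return count

        # Virtual loop across a lane between (0, -2) and (0, 2); two vehicle traces
        tracks = [[(-3, 0.5), (-1, 0.4), (1, 0.3), (3, 0.2)],     # crosses the loop
                  [(-3, 5.0), (-1, 5.1), (1, 5.2)]]               # passes beside it
        print(count_loop_crossings(tracks, (0, -2), (0, 2)))      # -> 1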

  18. An interactive VR system based on full-body tracking and gesture recognition

    NASA Astrophysics Data System (ADS)

    Zeng, Xia; Sang, Xinzhu; Chen, Duo; Wang, Peng; Guo, Nan; Yan, Binbin; Wang, Kuiru

    2016-10-01

    Most current virtual reality (VR) interactions are realized with hand-held input devices, which leads to a low degree of presence. There are other solutions that use sensors like Leap Motion to recognize user gestures and enable more natural interaction, but navigation in these systems is still a problem because they fail to map actual walking to virtual walking, with only a partial body of the user represented in the synthetic environment. Therefore, we propose a system in which users can walk around in the virtual environment as a humanoid model, selecting menu items and manipulating virtual objects using natural hand gestures. With a Kinect depth camera, the system tracks the joints of the user, mapping them to a full virtual body which follows the movement of the tracked user. The movements of the feet can be detected to determine whether the user is in a walking state, so that the walking of the model in the virtual world can be activated and stopped by means of animation control in the Unity engine. This method frees the hands of users compared to the traditional navigation approach using a hand-held device. We use the point cloud data obtained from the Kinect depth camera to recognize user gestures, such as swiping, pressing and manipulating virtual objects. Combining full-body tracking and gesture recognition using Kinect, we achieve our interactive VR system in the Unity engine with a high degree of presence.

  19. Affordable and personalized lighting using inverse modeling and virtual sensors

    NASA Astrophysics Data System (ADS)

    Basu, Chandrayee; Chen, Benjamin; Richards, Jacob; Dhinakaran, Aparna; Agogino, Alice; Martin, Rodney

    2014-03-01

    Wireless sensor networks (WSN) have great potential to enable personalized intelligent lighting systems while reducing building energy use by 50%-70%. As a result, WSN systems are increasingly being integrated into state-of-the-art intelligent lighting systems. In the future, these systems will enable lighting loads to participate in ancillary services. However, such systems can be expensive to install and lack the plug-and-play quality necessary for user-friendly commissioning. In this paper we present an integrated system of wireless sensor platforms and modeling software to enable affordable and user-friendly intelligent lighting. It requires approximately 60% fewer sensor deployments compared to current commercial systems. This reduction in sensor deployments has been achieved by optimally replacing the actual photo-sensors with real-time discrete predictive inverse models. Spatially sparse and clustered sub-hourly photo-sensor data captured by the WSN platforms are used to develop and validate a piecewise linear regression of indoor light distribution. This deterministic data-driven model accounts for sky conditions and solar position. The optimal placement of photo-sensors is performed iteratively to achieve the best predictability of the light field desired for indoor lighting control. Using two weeks of daylight and artificial light training data acquired at the Sustainability Base at NASA Ames, the model was able to predict the light level at seven monitored workstations with 80%-95% accuracy. We estimate that 10% adoption of this intelligent wireless sensor system in commercial buildings could save 0.2-0.25 quads (quadrillion BTU) of energy nationwide.
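
    The inverse-model idea above can be pictured with a small sketch: fit one linear model per discrete daylight regime (for example, binned solar position) from a few reference photo-sensors, then use it as a virtual photosensor at an unmonitored location. This is a generic piecewise-linear illustration under assumed bins and features, not the paper's model.

        # Piecewise-linear "virtual photosensor" sketch: one least-squares model per
        # regime bin; bins and features here are synthetic and illustrative.
        import numpy as np

        def fit_piecewise(features, targets, bins):
            """features: (N, d) reference-sensor readings; bins: (N,) regime index."""
            models = {}
            for b in np.unique(bins):
                idx = bins == b
                X = np.hstack([features[idx], np.ones((idx.sum(), 1))])  # add intercept
                coef, *_ = np.linalg.lstsq(X, targets[idx], rcond=None)
                models[b] = coef
            return models

        def predict(models, feature_row, b):
            x = np.append(feature_row, 1.0)
            return float(x @ models[b])

        # Synthetic example: 3 reference sensors, 2 daylight regimes.
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 500, size=(200, 3))
        bins = (rng.uniform(size=200) > 0.5).astype(int)
        y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 20 * bins + rng.normal(0, 5, 200)
        models = fit_piecewise(X, y, bins)
        print(predict(models, X[0], bins[0]))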

  20. The Design of a Chemical Virtual Instrument Based on LabVIEW for Determining Temperatures and Pressures.

    PubMed

    Wang, Wen-Bin; Li, Jang-Yuan; Wu, Qi-Jun

    2007-01-01

    A LabVIEW-based self-constructed chemical virtual instrument (VI) has been developed for determining temperatures and pressures. It can be put together easily and quickly by selecting hardware modules, such as the PCI-DAQ card or serial port method, different kinds of sensors, signal-conditioning circuits or finished chemical instruments, and software modules for data acquisition, saving, and processing. The VI system provides individual and extremely flexible solutions for automatic measurements in physical chemistry research.

  1. The Design of a Chemical Virtual Instrument Based on LabVIEW for Determining Temperatures and Pressures

    PubMed Central

    Wang, Wen-Bin; Li, Jang-Yuan; Wu, Qi-Jun

    2007-01-01

    A LabVIEW-based self-constructed chemical virtual instrument (VI) has been developed for determining temperatures and pressures. It can be put together easily and quickly by selecting hardware modules, such as the PCI-DAQ card or serial port method, different kinds of sensors, signal-conditioning circuits or finished chemical instruments, and software modules for data acquisition, saving, and processing. The VI system provides individual and extremely flexible solutions for automatic measurements in physical chemistry research. PMID:17671611

  2. Kinect-based virtual rehabilitation and evaluation system for upper limb disorders: A case study.

    PubMed

    Ding, W L; Zheng, Y Z; Su, Y P; Li, X L

    2018-04-19

    To help patients with disabilities of the arm and shoulder recover the accuracy and stability of movements, a novel and simple virtual rehabilitation and evaluation system called the Kine-VRES system was developed using Microsoft Kinect. First, several movements and virtual tasks were designed to increase the coordination, control and speed of the arm movements. The movements of the patients were then captured using the Kinect sensor, and kinematics-based interaction and real-time feedback were integrated into the system to enhance the motivation and self-confidence of the patient. Finally, a quantitative evaluation method of upper limb movements was provided using the recorded kinematics during hand-to-hand movement. A preliminary study of this rehabilitation system indicates that the shoulder movements of two participants with ataxia became smoother after three weeks of training (one hour per day). This case study demonstrated the effectiveness of the designed system, which could be promising for the rehabilitation of patients with upper limb disorders.

  3. A Concept for Optimizing Behavioural Effectiveness & Efficiency

    NASA Astrophysics Data System (ADS)

    Barca, Jan Carlo; Rumantir, Grace; Li, Raymond

    Both humans and machines exhibit strengths and weaknesses that can be enhanced by merging the two entities. This research aims to provide a broader understanding of how closer interactions between these two entities can facilitate more optimal goal-directed performance through the use of artificial extensions of the human body. Such extensions may assist us in adapting to and manipulating our environments in a more effective way than any system known today. To demonstrate this concept, we have developed a simulation in which a semi-interactive virtual spider can be navigated through an environment consisting of several obstacles and a virtual predator capable of killing the spider. The virtual spider can be navigated through the use of three different control systems that can be used to assist in optimising overall goal-directed performance. The first two control systems use an onscreen button interface and a touch sensor, respectively, to facilitate human navigation of the spider. The third control system is an autonomous navigation system based on machine intelligence embedded in the spider. This system enables the spider to navigate and react to changes in its local environment. The results of this study indicate that machines should be allowed to override human control in order to maximise the benefits of collaboration between man and machine. This research further indicates that the development of strong machine intelligence, sensor systems that engage all human senses, extra sensory input systems, physical remote manipulators, multiple intelligent extensions of the human body, as well as a tighter symbiosis between man and machine, can support an upgrade of the human form.

  4. Bluetooth-based distributed measurement system

    NASA Astrophysics Data System (ADS)

    Tang, Baoping; Chen, Zhuo; Wei, Yuguo; Qin, Xiaofeng

    2007-07-01

    A novel distributed wireless measurement system, consisting of a base station, wireless intelligent sensors, relay nodes, etc., is established by combining Bluetooth-based wireless transmission, virtual instruments, intelligent sensors, and networking. The intelligent sensors mounted on the equipment to be measured acquire various parameters, and the Bluetooth relay nodes modulate and send the acquired data to the base station, where data analysis and processing are performed so that the operational condition of the equipment can be evaluated. The establishment of the distributed measurement system is discussed together with a measurement flow chart for the Bluetooth-based system, and its advantages and disadvantages are analyzed at the end of the paper. The measurement system has been used successfully in the Daqing oilfield, China, to measure parameters such as temperature, flow rate, and oil pressure at an electromotor-pump unit.

  5. A virtual pointer to support the adoption of professional vision in laparoscopic training.

    PubMed

    Feng, Yuanyuan; McGowan, Hannah; Semsar, Azin; Zahiri, Hamid R; George, Ivan M; Turner, Timothy; Park, Adrian; Kleinsmith, Andrea; Mentis, Helena M

    2018-05-23

    To assess a virtual pointer in supporting surgical trainees' development of professional vision in laparoscopic surgery. We developed a virtual pointing and telestration system utilizing the Microsoft Kinect movement sensor as an overlay for any imaging system. Training with the application was compared to a standard condition, i.e., verbal instruction with unmediated gestures, in a laparoscopic training environment. Seven trainees performed four simulated laparoscopic tasks guided by an experienced surgeon as the trainer. Trainee performance was subjectively assessed by the trainee and trainer, and objectively measured by number of errors, time to task completion, and economy of movement. No significant differences in errors and time to task completion were obtained between the virtual pointer and standard conditions. Economy of movement in the non-dominant hand was significantly improved when using the virtual pointer ([Formula: see text]). The trainers perceived a significant improvement in trainee performance in the virtual pointer condition ([Formula: see text]), while the trainees perceived no difference. The trainers' perception of economy of movement was similar between the two conditions in the initial three runs and became significantly improved in the virtual pointer condition in the fourth run ([Formula: see text]). Results show that the virtual pointer system improves the trainer's perception of the trainee's performance, and this is reflected in the objective performance measures in the third and fourth training runs. The benefit of a virtual pointing and telestration system may be perceived by the trainers early on in training, but this is not evident in objective trainee performance until further mastery has been attained. In addition, the performance improvement in economy of motion specifically shows that the virtual pointer improves the adoption of professional vision: an improved ability to see and use the laparoscopic video results in more direct instrument movement.

  6. A Plug-and-Play Human-Centered Virtual TEDS Architecture for the Web of Things.

    PubMed

    Hernández-Rojas, Dixys L; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Escudero, Carlos J

    2018-06-27

    This article presents a Virtual Transducer Electronic Data Sheet (VTEDS)-based framework for the development of intelligent sensor nodes with plug-and-play capabilities in order to contribute to the evolution of the Internet of Things (IoT) toward the Web of Things (WoT). It makes use of new lightweight protocols that allow sensors to self-describe, auto-calibrate, and auto-register. Such protocols enable the development of novel IoT solutions while guaranteeing low latency, low power consumption, and the required Quality of Service (QoS). Thanks to the developed human-centered tools, it is possible to dynamically configure and modify IoT device firmware, managing the active transducers and their communication protocols in an easy and intuitive way, without requiring any prior programming knowledge. In order to evaluate the performance of the system, it was tested with Bluetooth Low Energy (BLE) and Ethernet-based smart sensors in different scenarios. Specifically, user experience was quantified empirically (i.e., how fast the system displays collected data to a user was measured). The obtained results show that the proposed VTED architecture is very fast, with some smart sensors (located in Europe) able to self-register and self-configure in a remote cloud (in South America) in less than 3 s and to display data to remote users in less than 2 s.

  7. The Language of Glove: Wireless gesture decoder with low-power and stretchable hybrid electronics.

    PubMed

    O'Connor, Timothy F; Fach, Matthew E; Miller, Rachel; Root, Samuel E; Mercier, Patrick P; Lipomi, Darren J

    2017-01-01

    This communication describes a glove capable of wirelessly translating the American Sign Language (ASL) alphabet into text displayable on a computer or smartphone. The key components of the device are strain sensors comprising a piezoresistive composite of carbon particles embedded in a fluoroelastomer. These sensors are integrated with a wearable electronic module consisting of digitizers, a microcontroller, and a Bluetooth radio. Finite-element analysis predicts a peak strain on the sensors of 5% when the knuckles are fully bent. Fatigue studies suggest that the sensors successfully detect the articulation of the knuckles even when bent to their maximal degree 1,000 times. In concert with an accelerometer and pressure sensors, the glove is able to translate all 26 letters of the ASL alphabet. Lastly, data taken from the glove are used to control a virtual hand; this application suggests new ways in which stretchable and wearable electronics can enable humans to interface with virtual environments. Critically, this system was constructed of components costing less than $100 and did not require chemical synthesis or access to a cleanroom. It can thus be used as a test bed for materials scientists to evaluate the performance of new materials and flexible and stretchable hybrid electronics.

  8. The Language of Glove: Wireless gesture decoder with low-power and stretchable hybrid electronics

    PubMed Central

    O’Connor, Timothy F.; Fach, Matthew E.; Miller, Rachel; Root, Samuel E.; Mercier, Patrick P.

    2017-01-01

    This communication describes a glove capable of wirelessly translating the American Sign Language (ASL) alphabet into text displayable on a computer or smartphone. The key components of the device are strain sensors comprising a piezoresistive composite of carbon particles embedded in a fluoroelastomer. These sensors are integrated with a wearable electronic module consisting of digitizers, a microcontroller, and a Bluetooth radio. Finite-element analysis predicts a peak strain on the sensors of 5% when the knuckles are fully bent. Fatigue studies suggest that the sensors successfully detect the articulation of the knuckles even when bent to their maximal degree 1,000 times. In concert with an accelerometer and pressure sensors, the glove is able to translate all 26 letters of the ASL alphabet. Lastly, data taken from the glove are used to control a virtual hand; this application suggests new ways in which stretchable and wearable electronics can enable humans to interface with virtual environments. Critically, this system was constructed of components costing less than $100 and did not require chemical synthesis or access to a cleanroom. It can thus be used as a test bed for materials scientists to evaluate the performance of new materials and flexible and stretchable hybrid electronics. PMID:28700603
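
    The knuckle-code idea in the two entries above can be illustrated with a short sketch: threshold each strain sensor into bent/straight and look the resulting pattern up in a letter table. The threshold and the (tiny) table below are hypothetical placeholders; the actual glove also fuses accelerometer and pressure data to disambiguate letters.

        # Toy decoder sketch: binary bend code from normalized strain readings.
        BENT_THRESHOLD = 0.5  # illustrative normalized value, not a measured one

        # Hypothetical excerpt of a code-to-letter table (one bit per knuckle sensor).
        CODE_TABLE = {
            (1, 1, 1, 1, 1, 1, 1, 1, 1): "A",
            (0, 0, 0, 0, 0, 0, 0, 0, 1): "B",
        }

        def decode(readings):
            """readings: iterable of normalized strain values, one per knuckle."""
            code = tuple(int(r > BENT_THRESHOLD) for r in readings)
            return CODE_TABLE.get(code, "?")  # "?" when the pattern is unknown

        print(decode([0.9] * 9))  # -> "A" under this toy table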

  9. Optimal Deployment of Sensor Nodes Based on Performance Surface of Underwater Acoustic Communication

    PubMed Central

    Choi, Jee Woong

    2017-01-01

    The underwater acoustic sensor network (UWASN) is a system that exchanges data between numerous sensor nodes deployed in the sea. The UWASN uses an underwater acoustic communication technique to exchange data. Therefore, it is important to design a robust system that will function even in severely fluctuating underwater communication conditions, along with variations in the ocean environment. In this paper, a new algorithm to find the optimal deployment positions of underwater sensor nodes is proposed. The algorithm uses the communication performance surface, which is a map showing the underwater acoustic communication performance of a targeted area. A virtual force-particle swarm optimization algorithm is then used as an optimization technique to find the optimal deployment positions of the sensor nodes, using the performance surface information to estimate the communication radii of the sensor nodes in each generation. The algorithm is evaluated by comparing simulation results between two different seasons (summer and winter) for an area located off the eastern coast of Korea as the selected targeted area. PMID:29053569
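
    For context, a plain particle swarm optimization loop over a gridded performance surface is sketched below; it omits the paper's virtual-force term and communication-radius estimation, and the surface and parameters are synthetic.

        # Generic PSO sketch for placing sensor nodes on a performance-surface grid
        # (higher cell value = better expected communication performance).
        import numpy as np

        rng = np.random.default_rng(1)
        surface = rng.random((50, 50))          # stand-in performance surface
        n_nodes, n_particles, iters = 4, 20, 100
        dim = 2 * n_nodes                        # (x, y) per node, flattened

        def fitness(flat):
            pts = np.clip(flat.reshape(-1, 2).astype(int), 0, 49)
            return surface[pts[:, 0], pts[:, 1]].sum()  # total performance at node cells

        pos = rng.uniform(0, 49, size=(n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmax()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, 49)
            vals = np.array([fitness(p) for p in pos])
            improved = vals > pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmax()].copy()

        print("best node positions:", gbest.reshape(-1, 2))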

  10. Sensor data fusion for textured reconstruction and virtual representation of alpine scenes

    NASA Astrophysics Data System (ADS)

    Häufel, Gisela; Bulatov, Dimitri; Solbrig, Peter

    2017-10-01

    The concept of remote sensing is to provide information about a wide-range area without making physical contact with this area. If, in addition to satellite imagery, images and videos taken by drones provide more up-to-date data at a higher resolution, or accurate vector data is downloadable from the Internet, one speaks of sensor data fusion. The concept of sensor data fusion is relevant for many applications, such as virtual tourism, automatic navigation, hazard assessment, etc. In this work, we describe sensor data fusion aiming to create a semantic 3D model of an extremely interesting yet challenging dataset: an alpine region in Southern Germany. A particular challenge of this work is that rock faces including overhangs are present in the input airborne laser point cloud. The proposed procedure for identification and reconstruction of overhangs from point clouds comprises four steps: point cloud preparation, filtering out vegetation, mesh generation, and texturing. Further object types are extracted in several interesting subsections of the dataset: building models with textures from UAV (Unmanned Aerial Vehicle) videos, hills reconstructed as generic surfaces and textured by the orthophoto, individual trees detected by the watershed algorithm, as well as the vector data for roads retrieved from openly available shapefiles and GPS-device tracks. We pursue geo-specific reconstruction by assigning texture and width to roads of several pre-determined types and modeling isolated trees and rocks using commercial software. For visualization and simulation of the area, we have chosen the simulation system Virtual Battlespace 3 (VBS3). It becomes clear that the proposed concept of sensor data fusion allows a coarse reconstruction of a large scene and, at the same time, an accurate and up-to-date representation of its relevant subsections, in which simulation can take place.

  11. New optical sensor systems for high-resolution satellite, airborne and terrestrial imaging systems

    NASA Astrophysics Data System (ADS)

    Eckardt, Andreas; Börner, Anko; Lehmann, Frank

    2007-10-01

    The department of Optical Information Systems (OS) at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) has more than 25 years of experience with high-resolution imaging technology. Technology changes in the development of detectors, as well as the significant change in manufacturing accuracy, in combination with engineering research, define the next generation of spaceborne sensor systems focusing on Earth observation and remote sensing. The combination of large TDI lines, intelligent synchronization control, fast-readable sensors, and new focal-plane concepts opens the door to new remote-sensing instruments. This class of instruments is feasible for high-resolution sensor systems regarding geometry and radiometry and their data products, such as 3D virtual reality. Systemic approaches are essential for such designs of complex sensor systems for dedicated tasks. The system theory of the instrument inside a simulated environment is the beginning of the optimization process for the optical, mechanical, and electrical designs. Single modules and the entire system have to be calibrated and verified. Suitable procedures must be defined at the component, module, and system levels for the assembly, test, and verification process. This kind of development strategy allows hardware-in-the-loop design. The paper gives an overview of the current activities at DLR in the field of innovative sensor systems for photogrammetric and remote sensing purposes.

  12. Using Amazon Web Services (AWS) to enable real-time, remote sensing of biophysical and anthropogenic conditions in green infrastructure systems in Philadelphia, an ultra-urban application of the Internet of Things (IoT)

    NASA Astrophysics Data System (ADS)

    Montalto, F. A.; Yu, Z.; Soldner, K.; Israel, A.; Fritch, M.; Kim, Y.; White, S.

    2017-12-01

    Urban stormwater utilities are increasingly using decentralized "green" infrastructure (GI) systems to capture stormwater and achieve compliance with regulations. Because environmental conditions and design vary by GI facility, monitoring of GI systems under a range of conditions is essential. Conventional monitoring efforts can be costly because in-field data logging requires high data transmission rates. The Internet of Things (IoT) can be used to more cost-effectively collect, store, and publish GI monitoring data. Using 3G mobile networks, a cloud-based database was built on an Amazon Web Services (AWS) EC2 virtual machine to store and publish data collected with environmental sensors deployed in the field. This database can store multi-dimensional time series data, as well as photos and other observations logged by citizen scientists through a public engagement mobile app via a new Application Programming Interface (API). Also on the AWS EC2 virtual machine, a real-time QAQC flagging algorithm was developed to validate the sensor data streams.
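
    The kind of real-time QAQC flagging mentioned above can be as simple as a range check plus a rate-of-change (spike) check applied to each incoming sample; the thresholds and flag names in this sketch are assumptions, not the project's actual rules.

        # Minimal streaming QA/QC sketch for incoming sensor samples.
        def qaqc_flag(value, prev_value, valid_range=(0.0, 500.0), max_step=50.0):
            low, high = valid_range
            if value is None:
                return "MISSING"
            if not (low <= value <= high):
                return "OUT_OF_RANGE"
            if prev_value is not None and abs(value - prev_value) > max_step:
                return "SPIKE"
            return "OK"

        # Example: flag a stream of water-depth readings (mm) as they arrive.
        stream = [12.0, 14.5, 300.0, 14.8, None, 15.1]
        prev = None
        for v in stream:
            print(v, qaqc_flag(v, prev))
            prev = v if v is not None else prev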

  13. A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment

    NASA Astrophysics Data System (ADS)

    Farkas, Attila J.; Hajnal, Alen; Shiratuddin, Mohd F.; Szatmary, Gabriella

    In this paper, we propose a novel approach of using interactive virtual environment technology in Vision Restoration Therapy for visual field loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of already existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvements can be seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye, and body movements and coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.

  14. Virtual environment application with partial gravity simulation

    NASA Technical Reports Server (NTRS)

    Ray, David M.; Vanchau, Michael N.

    1994-01-01

    To support manned missions to the surface of Mars and missions requiring manipulation of payloads and locomotion in space, a training facility is required to simulate the conditions of both partial and microgravity. A partial gravity simulator (Pogo), which uses pneumatic suspension, is being studied for use in virtual reality training. Pogo maintains a constant partial gravity simulation with a variation of simulated body force between 2.2 and 10 percent, depending on the type of locomotion inputs. This paper is based on the concept and application of a virtual environment system with Pogo, including a head-mounted display and glove. The reality engine consists of a high-end SGI workstation and PCs which drive Pogo's sensors and the data acquisition hardware used for tracking and control. The tracking system is a hybrid of magnetic and optical trackers integrated for this application.

  15. Virtual environment assessment for laser-based vision surface profiling

    NASA Astrophysics Data System (ADS)

    ElSoussi, Adnane; Al Alami, Abed ElRahman; Abu-Nabah, Bassam A.

    2015-03-01

    Oil and gas businesses have been raising the demand on original equipment manufacturers (OEMs) to implement a reliable metrology method for assessing surface profiles of welds before and after grinding. This mandates a deviation from the commonly used surface measurement gauges, which are not only operator dependent but also limited to discrete measurements along the weld. Due to their potential accuracy and speed, laser-based vision surface profiling systems have been progressively adopted as part of manufacturing quality control. This effort presents a virtual environment that lends itself to developing and evaluating existing laser vision sensor (LVS) calibration and measurement techniques. A combination of two known calibration techniques is implemented to deliver a calibrated LVS system. System calibration is implemented virtually and experimentally to scan simulated and 3D-printed features of known profiles, respectively. Scanned data are inverted and compared with the input profiles to validate the virtual environment's capability for LVS surface profiling and to preliminarily assess the measurement technique for weld profiling applications. Moreover, this effort brings 3D scanning capability a step closer toward robust quality control applications in a manufacturing environment.

  16. Developing Flexible Networked Lighting Control Systems

    Science.gov Websites

    Wireless protocols such as Bluetooth, ZigBee, and others are increasingly used for building control purposes. Low-cost computation: bundling digital intelligence at the sensors and lights adds virtually no incremental cost. Research goals and objectives: this project, "Developing Flexible, Networked Lighting Control Systems," ...

  17. Experimental Robot Position Sensor Fault Tolerance Using Accelerometers and Joint Torque Sensors

    NASA Technical Reports Server (NTRS)

    Aldridge, Hal A.; Juang, Jer-Nan

    1997-01-01

    Robot systems in critical applications, such as those in space and nuclear environments, must be able to operate during component failure to complete important tasks. One failure mode that has received little attention is the failure of joint position sensors. Current fault tolerant designs require the addition of directly redundant position sensors which can affect joint design. The proposed method uses joint torque sensors found in most existing advanced robot designs along with easily locatable, lightweight accelerometers to provide a joint position sensor fault recovery mode. This mode uses the torque sensors along with a virtual passive control law for stability and accelerometers for joint position information. Two methods for conversion from Cartesian acceleration to joint position based on robot kinematics, not integration, are presented. The fault tolerant control method was tested on several joints of a laboratory robot. The controllers performed well with noisy, biased data and a model with uncertain parameters.
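
    A much simpler, static illustration of recovering a joint position from accelerometer data without integration is the single-joint tilt estimate below; it is not the paper's Cartesian-acceleration-to-joint-space methods, only a reminder that gravity components measured on a link already encode an angle.

        # Static, single-revolute-joint illustration: at rest, the gravity vector
        # measured in the link frame gives the joint angle directly (no integration).
        import math

        def joint_angle_from_gravity(ax, ay):
            """ax, ay: accelerometer components (m/s^2) in the link's x-y plane at rest."""
            return math.atan2(ax, ay)  # radians, relative to the gravity-aligned reference

        print(math.degrees(joint_angle_from_gravity(4.905, 8.496)))  # ~30 degrees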

  18. Network-Capable Application Process and Wireless Intelligent Sensors for ISHM

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Wang, Ray

    2011-01-01

    Intelligent sensor technology and systems are increasingly becoming attractive means to serve as frameworks for intelligent rocket test facilities with embedded intelligent sensor elements, distributed data acquisition elements, and onboard data acquisition elements. Networked intelligent processors enable users and systems integrators to automatically configure their measurement automation systems for analog sensors. NASA and leading sensor vendors are working together to apply the IEEE 1451 standard for adding plug-and-play capabilities for wireless analog transducers through the use of a Transducer Electronic Data Sheet (TEDS) in order to simplify sensor setup, use, and maintenance, to automatically obtain calibration data, and to eliminate manual data entry and error. A TEDS contains the critical information needed by an instrument or measurement system to identify, characterize, interface, and properly use the signal from an analog sensor. A TEDS is deployed for a sensor in one of two ways. First, the TEDS can reside in embedded, nonvolatile memory (typically flash memory) within the intelligent processor. Second, a virtual TEDS can exist as a separate file, downloadable from the Internet. This concept of virtual TEDS extends the benefits of the standardized TEDS to legacy sensors and applications where the embedded memory is not available. An HTML-based user interface provides a visual tool to interface with those distributed sensors that a TEDS is associated with, to automate the sensor management process. Implementing and deploying the IEEE 1451.1-based Network-Capable Application Process (NCAP) can achieve support for intelligent process in Integrated Systems Health Management (ISHM) for the purpose of monitoring, detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, mitigation to maintain operability, and integrated awareness of system health by the operator. It can also support local data collection and storage. This invention enables wide-area sensing and employs numerous globally distributed sensing devices that observe the physical world through the existing sensor network. This innovation enables distributed storage, distributed processing, distributed intelligence, and the availability of DiaK (Data, Information, and Knowledge) to any element as needed. It also enables the simultaneous execution of multiple processes, and represents models that contribute to the determination of the condition and health of each element in the system. The NCAP (intelligent process) can configure data-collection and filtering processes in reaction to sensed data, allowing it to decide when and how to adapt collection and processing with regard to sophisticated analysis of data derived from multiple sensors. The user will be able to view the sensing device network as a single unit that supports a high-level query language. Each query would be able to operate over data collected from across the global sensor network just as a search query encompasses millions of Web pages. The sensor web can preserve ubiquitous information access between the querier and the queried data. Pervasive monitoring of the physical world raises significant data and privacy concerns. This innovation enables different authorities to control portions of the sensing infrastructure, and sensor service authors may wish to compose services across authority boundaries.
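
    The "virtual TEDS as a separate, downloadable file" idea can be pictured with a small sketch: sensor metadata is fetched as a document instead of being read from embedded memory, then used to scale raw readings. The field names below are illustrative and do not follow the binary IEEE 1451 TEDS encoding.

        # Sketch: apply calibration metadata from a virtual TEDS document to a raw reading.
        import json

        virtual_teds_doc = json.dumps({
            "manufacturer": "ExampleCorp",      # hypothetical values throughout
            "model": "PT-100-X",
            "serial": "000123",
            "units": "kPa",
            "calibration": {"offset": -2.5, "scale": 0.125},
        })

        def apply_teds(raw_counts, teds_json):
            teds = json.loads(teds_json)
            cal = teds["calibration"]
            return raw_counts * cal["scale"] + cal["offset"], teds["units"]

        print(apply_teds(2048, virtual_teds_doc))  # -> (253.5, 'kPa')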

  19. Sensor-Augmented Virtual Labs: Using Physical Interactions with Science Simulations to Promote Understanding of Gas Behavior

    NASA Astrophysics Data System (ADS)

    Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.

    2016-02-01

    Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected physical and virtual experiences has the potential to promote connections among ideas. This paper explores the effect of augmenting a virtual lab with physical controls on high school chemistry students' understanding of gas laws. We compared students using the augmented virtual lab to students using a similar sensor-based physical lab with teacher-led discussions. Results demonstrate that students in the augmented virtual lab condition made significant gains from pretest and posttest and outperformed traditional students on some but not all concepts. Results provide insight into incorporating mixed-reality technologies into authentic classroom settings.

  20. CSI, optimal control, and accelerometers: Trials and tribulations

    NASA Technical Reports Server (NTRS)

    Benjamin, Brian J.; Sesak, John R.

    1994-01-01

    New results concerning optimal design with accelerometers are presented. These results show that the designer must be concerned with the stability properties of two Linear Quadratic Gaussian (LQG) compensators, one of which does not explicitly appear in the closed-loop system dynamics. The new concepts of virtual and implemented compensators are introduced to cope with these subtleties. The virtual compensator appears in the closed-loop system dynamics and the implemented compensator appears in control electronics. The stability of one compensator does not guarantee the stability of the other. For strongly stable (robust) systems, both compensators should be stable. The presence of controlled and uncontrolled modes in the system results in two additional forms of the compensator with corresponding terms that are of like form, but opposite sign, making simultaneous stabilization of both the virtual and implemented compensator difficult. A new design algorithm termed sensor augmentation is developed that aids stabilization of these compensator forms by incorporating a static augmentation term associated with the uncontrolled modes in the design process.
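
    For reference, the standard LQG compensator structure that the virtual/implemented distinction builds on is sketched below; the paper's augmented forms for controlled and uncontrolled modes are not reproduced here.

        % Standard LQG compensator: a state estimator driven by the measurement y,
        % combined with estimated-state feedback.
        \dot{\hat{x}} = A\hat{x} + Bu + L\,(y - C\hat{x}), \qquad u = -K\hat{x}
        % Eliminating u gives the compensator dynamics that appear in the loop:
        \dot{\hat{x}} = (A - BK - LC)\,\hat{x} + L\,y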

  1. Localization of Ferromagnetic Target with Three Magnetic Sensors in the Movement Considering Angular Rotation

    PubMed Central

    Gao, Xiang; Yan, Shenggang; Li, Bin

    2017-01-01

    Magnetic detection techniques have been widely used in many fields, such as virtual reality, surgical robotics systems, and so on. A large number of methods have been developed to obtain the position of a ferromagnetic target. However, the angular rotation of the target relative to the sensor is rarely studied. In this paper, a new method for the localization of a moving object, determining both its position and rotation angle with three magnetic sensors, is proposed. Trajectory localization estimates with three magnetic sensors, arranged both collinearly and noncollinearly, were obtained in simulations, and experimental results demonstrated that the position and rotation angle of a ferromagnetic target exhibiting roll, pitch, or yaw in its movement could be calculated accurately and effectively with three noncollinear vector sensors. PMID:28892006
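
    The localization problem above is commonly posed as fitting a magnetic dipole's position and moment (whose direction reflects the target's orientation) to vector magnetometer readings. The sketch below does this with nonlinear least squares on synthetic, noise-free data; the sensor positions and values are illustrative, and it is not the paper's algorithm.

        # Dipole-fit sketch: estimate source position and moment from three
        # vector magnetometers via nonlinear least squares.
        import numpy as np
        from scipy.optimize import least_squares

        MU0_4PI = 1e-7  # mu_0 / (4*pi)

        def dipole_field(sensor_pos, src_pos, moment):
            r = sensor_pos - src_pos
            d = np.linalg.norm(r)
            return MU0_4PI * (3.0 * r * np.dot(moment, r) / d**5 - moment / d**3)

        sensors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        true_pos, true_m = np.array([0.4, 0.3, 0.5]), np.array([2.0, 1.0, 0.5])
        measured = np.array([dipole_field(s, true_pos, true_m) for s in sensors])

        def residuals(params):
            pos, m = params[:3], params[3:]
            pred = np.array([dipole_field(s, pos, m) for s in sensors])
            return (pred - measured).ravel()

        fit = least_squares(residuals, x0=np.array([0.2, 0.2, 0.2, 1.0, 1.0, 1.0]))
        print("estimated position:", fit.x[:3])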

  2. A Non-Invasive Multichannel Hybrid Fiber-Optic Sensor System for Vital Sign Monitoring

    PubMed Central

    Fajkus, Marcel; Nedoma, Jan; Martinek, Radek; Vasinek, Vladimir; Nazeran, Homer; Siska, Petr

    2017-01-01

    In this article, we briefly describe the design, construction, and functional verification of a hybrid multichannel fiber-optic sensor system for basic vital sign monitoring. This sensor uses a novel non-invasive measurement probe based on the fiber Bragg grating (FBG). The probe is composed of two FBGs encapsulated inside a polydimethylsiloxane polymer (PDMS). The PDMS is non-reactive to human skin and resistant to electromagnetic waves, UV absorption, and radiation. We emphasize the construction of the probe to be specifically used for basic vital sign monitoring such as body temperature, respiratory rate and heart rate. The proposed sensor system can continuously process incoming signals from up to 128 individuals. We first present the overall design of this novel multichannel sensor and then elaborate on how it has the potential to simplify vital sign monitoring and consequently improve the comfort level of patients in long-term health care facilities, hospitals and clinics. The reference ECG signal was acquired with the use of standard gel electrodes fixed to the monitored person’s chest using a real-time monitoring system for ECG signals with virtual instrumentation. The outcomes of these experiments have unambiguously proved the functionality of the sensor system and will be used to inform our future research in this fast developing and emerging field. PMID:28075341

  3. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks.

    PubMed

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-09-25

    Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability, which is an effective topology control approach. The existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to extend the lifetime of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head functions and the optimization of the cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude.

  4. A Cluster-Based Dual-Adaptive Topology Control Approach in Wireless Sensor Networks

    PubMed Central

    Gui, Jinsong; Zhou, Kai; Xiong, Naixue

    2016-01-01

    Multi-Input Multi-Output (MIMO) can improve wireless network performance. Sensors are usually single-antenna devices due to the high hardware complexity and cost, so several sensors are used to form a virtual MIMO array, which is a desirable approach to efficiently take advantage of MIMO gains. Also, in large Wireless Sensor Networks (WSNs), clustering can improve network scalability, which is an effective topology control approach. The existing virtual MIMO-based clustering schemes either do not fully explore the benefits of MIMO or do not adaptively determine the clustering ranges. Also, the clustering mechanism needs to be further improved to extend the lifetime of the cluster structure. In this paper, we propose an improved clustering scheme for virtual MIMO-based topology construction (ICV-MIMO), which can adaptively determine not only the inter-cluster transmission modes but also the clustering ranges. Through the rational division of cluster head functions and the optimization of the cluster head selection criteria and information exchange process, the ICV-MIMO scheme effectively reduces the network energy consumption and improves the lifetime of the cluster structure when compared with the existing typical virtual MIMO-based scheme. Moreover, the message overhead and time complexity remain in the same order of magnitude. PMID:27681731

  5. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications

    PubMed Central

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-01-01

    Many researchers are devoting attention to the so-called “Internet of Things” (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user’s demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology. PMID:27548177

  6. Virtual Wireless Sensor Networks: Adaptive Brain-Inspired Configuration for Internet of Things Applications.

    PubMed

    Toyonaga, Shinya; Kominami, Daichi; Murata, Masayuki

    2016-08-19

    Many researchers are devoting attention to the so-called "Internet of Things" (IoT), and wireless sensor networks (WSNs) are regarded as a critical technology for realizing the communication infrastructure of the future, including the IoT. Against this background, virtualization is a crucial technique for the integration of multiple WSNs. Designing virtualized WSNs for actual environments will require further detailed studies. Within the IoT environment, physical networks can undergo dynamic change, and so, many problems exist that could prevent applications from running without interruption when using the existing approaches. In this paper, we show an overall architecture that is suitable for constructing and running virtual wireless sensor network (VWSN) services within a VWSN topology. Our approach provides users with a reliable VWSN network by assigning redundant resources according to each user's demand and providing a recovery method to incorporate environmental changes. We tested this approach by simulation experiment, with the results showing that the VWSN network is reliable in many cases, although physical deployment of sensor nodes and the modular structure of the VWSN will be quite important to the stability of services within the VWSN topology.

  7. Fast in-situ tool inspection based on inverse fringe projection and compact sensor heads

    NASA Astrophysics Data System (ADS)

    Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard

    2016-11-01

    Inspection of machine elements is an important task in production processes in order to ensure the quality of produced parts and to gather feedback for the continuous improvement process. A new measuring system is presented, which is capable of performing the inspection of critical tool geometries, such as gearing elements, inside the forming machine. To meet the constraints on sensor head size and inspection time imposed by the limited space inside the machine and the cycle time of the process, the measuring device employs a combination of endoscopy techniques with the fringe projection principle. Compact gradient index lenses enable a compact design of the sensor head, which is connected to a CMOS camera and a flexible micro-mirror based projector via flexible fiber bundles. Using common fringe projection patterns, the system achieves measuring times of less than five seconds. To further reduce the time required for inspection, the generation of inverse fringe projection patterns has been implemented for the system. Inverse fringe projection speeds up the inspection process by employing object-adapted patterns, which enable the detection of geometry deviations in a single image. Two different approaches to generate object adapted patterns are presented. The first approach uses a reference measurement of a manufactured tool master to generate the inverse pattern. The second approach is based on a virtual master geometry in the form of a CAD file and a ray-tracing model of the measuring system. Virtual modeling of the measuring device and inspection setup allows for geometric tolerancing for free-form surfaces by the tool designer in the CAD-file. A new approach is presented, which uses virtual tolerance specifications and additional simulation steps to enable fast checking of metric tolerances. Following the description of the pattern generation process, the image processing steps required for inspection are demonstrated on captures of gearing geometries.

  8. Use of Occupancy Sensors in LED Parking Lot and Garage Applications: Early Experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kinzey, Bruce R.; Myer, Michael; Royer, Michael P.

    2012-11-07

    Occupancy sensor systems are gaining traction as an effective technological approach to reducing energy use in exterior commercial lighting applications. Done correctly, occupancy sensors can substantially enhance the savings from an already efficient lighting system. However, this technology is confronted by several potential challenges and pitfalls that can leave a significant amount of the prospective savings on the table. This report describes anecdotal experiences from field installations of occupancy sensor controlled light-emitting diode (LED) lighting at two parking structures and two parking lots. The relative levels of success at these installations reflect a marked range of potential outcomes: from an additional 76% in energy savings to virtually no additional savings. Several issues that influenced savings were encountered in these early stage installations and are detailed in the report. Ultimately, care must be taken in the design, selection, and commissioning of a sensor-controlled lighting installation, else the only guaranteed result may be its cost.

  9. Sensing and Virtual Worlds - A Survey of Research Opportunities

    NASA Technical Reports Server (NTRS)

    Moore, Dana

    2012-01-01

    Virtual Worlds (VWs) have been used effectively in live and constructive military training. An area that remains fertile ground for exploration and a new vision involves integrating various traditional and now non-traditional sensors into virtual worlds. In this paper, we assert that the benefits of this integration are several. First, we maintain that virtual worlds offer improved sensor deployment planning through improved visualization and stimulation of the model, using geo-specific terrain and structure. Secondly, we assert that VWs enhance the mission rehearsal process, and that using a mix of live avatars, non-player characters, and live sensor feeds (e.g., real-time meteorology) can help visualization of the area of operations. Finally, tactical operations are improved via better collaboration and integration of real-world sensing capabilities, and in most situations, 3D VWs improve the state of the art over current "dots on a map" 2D geospatial visualization. However, several capability gaps preclude a fuller realization of this vision. In this paper, we identify many of these gaps and suggest research directions.

  10. Towards an integrated strategy for monitoring wetland inundation with virtual constellations of optical and radar satellites

    NASA Astrophysics Data System (ADS)

    DeVries, B.; Huang, W.; Huang, C.; Jones, J. W.; Lang, M. W.; Creed, I. F.; Carroll, M.

    2017-12-01

    The function of wetlandscapes in hydrological and biogeochemical cycles is largely governed by surface inundation, with small wetlands that experience periodic inundation playing a disproportionately large role in these processes. However, the spatial distribution and temporal dynamics of inundation in these wetland systems are still poorly understood, resulting in large uncertainties in global water, carbon and greenhouse gas budgets. Satellite imagery provides synoptic and repeat views of the Earth's surface and presents opportunities to fill this knowledge gap. Despite the proliferation of Earth Observation satellite missions in the past decade, no single satellite sensor can simultaneously provide the spatial and temporal detail needed to adequately characterize inundation in small, dynamic wetland systems. Surface water data products must therefore integrate observations from multiple satellite sensors in order to address this objective, requiring the development of improved and coordinated algorithms to generate consistent estimates of surface inundation. We present a suite of algorithms designed to detect surface inundation in wetlands using data from a virtual constellation of optical and radar sensors comprising the Landsat and Sentinel missions (DeVries et al., 2017). Both optical and radar algorithms were able to detect inundation in wetlands without the need for external training data, allowing for high-efficiency monitoring of wetland inundation at large spatial and temporal scales. Applying these algorithms across a gradient of wetlands in North America, preliminary findings suggest that while these fully automated algorithms can detect wetland inundation at higher spatial and temporal resolutions than currently available surface water data products, limitations specific to the satellite sensors and their acquisition strategies are responsible for uncertainties in inundation estimates. Further research is needed to investigate strategies for integrating optical and radar data from virtual constellations, with a focus on reducing uncertainties, maximizing spatial and temporal detail, and establishing consistent records of wetland inundation over time. The findings and conclusions in this article do not necessarily represent the views of the U.S. Government.
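
    As a point of reference for the optical side of such inundation mapping (and explicitly not the authors' automated algorithm), a common building block is a water-index threshold such as the modified NDWI, sketched below under the assumption of co-registered Landsat-8 green and shortwave-infrared reflectance bands.

        # Generic MNDWI water mask: (green - SWIR1) / (green + SWIR1) > threshold.
        import numpy as np

        def mndwi_water_mask(green, swir1, threshold=0.0):
            """Return a boolean water mask; threshold = 0.0 is a common default."""
            mndwi = (green - swir1) / (green + swir1 + 1e-10)  # avoid divide-by-zero
            return mndwi > threshold

        green = np.array([[0.08, 0.03], [0.10, 0.02]])
        swir1 = np.array([[0.04, 0.12], [0.03, 0.15]])
        print(mndwi_water_mask(green, swir1))  # True where the index suggests water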

  11. Novel Corrosion Sensor for Vision 21 Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heng Ban; Bharat Soni

    2007-03-31

    Advanced sensor technology is identified as a key component for advanced power systems for future energy plants that would have virtually no environmental impact. This project intends to develop a novel high-temperature corrosion sensor and subsequent measurement system for advanced power systems. Fireside corrosion is the leading mechanism for boiler tube failures and has emerged as a significant concern for current and future energy plants due to the introduction of technologies targeting emissions reduction, efficiency improvement, or fuel/oxidant flexibility. Corrosion damage can lead to catastrophic equipment failure, explosions, and forced outages. Proper management of corrosion requires real-time indication of corrosion rate. However, short-term, on-line corrosion monitoring systems for fireside corrosion remain a technical challenge to date due to the extremely harsh combustion environment. The overall goal of this project is to develop a technology for on-line fireside corrosion monitoring. This objective is achieved by the laboratory development of sensors and instrumentation, testing them in a laboratory muffle furnace, and eventually testing the system in a coal-fired furnace. This project successfully developed two types of sensors and measurement systems and successfully tested them in a muffle furnace in the laboratory. The capacitance sensor had a high fabrication cost and might be more appropriate in other applications. The low-cost resistance sensor was tested in a power plant burning eastern bituminous coals. The results show that the fireside corrosion measurement system can be used to determine the corrosion rate at waterwall and superheater locations. Electron microscope analysis of the corroded sensor surface provided a detailed picture of the corrosion process.

  12. Sensor Network Infrastructure for a Home Care Monitoring System

    PubMed Central

    Palumbo, Filippo; Ullberg, Jonas; Štimec, Ales; Furfari, Francesco; Karlsson, Lars; Coradeschi, Silvia

    2014-01-01

    This paper presents the sensor network infrastructure for a home care system that allows long-term monitoring of physiological data and everyday activities. The aim of the proposed system is to allow the elderly to live longer in their homes without compromising safety, while ensuring the detection of health problems. The system offers the possibility of a virtual visit via a teleoperated robot. During the visit, physiological data and activities occurring during a period of time can be discussed. These data are collected from physiological sensors (e.g., temperature, blood pressure, glucose) and environmental sensors (e.g., motion, bed/chair occupancy, electrical usage). The system can also give alarms if sudden problems occur, like a fall, and warnings based on more long-term trends, such as the deterioration of health being detected. It has been implemented and tested in a test environment and has been deployed in six real homes for a year-long evaluation. The key contribution of the paper is the presentation of an implemented system for ambient assisted living (AAL) tested in a real environment, combining the acquisition of sensor data, a flexible and adaptable middleware compliant with the OSGi standard, and a context recognition application. The system has been developed in a European project called GiraffPlus. PMID:24573309

  13. Sensor network infrastructure for a home care monitoring system.

    PubMed

    Palumbo, Filippo; Ullberg, Jonas; Stimec, Ales; Furfari, Francesco; Karlsson, Lars; Coradeschi, Silvia

    2014-02-25

    This paper presents the sensor network infrastructure for a home care system that allows long-term monitoring of physiological data and everyday activities. The aim of the proposed system is to allow the elderly to live longer in their homes without compromising safety, while ensuring the detection of health problems. The system offers the possibility of a virtual visit via a teleoperated robot. During the visit, physiological data and activities occurring during a period of time can be discussed. These data are collected from physiological sensors (e.g., temperature, blood pressure, glucose) and environmental sensors (e.g., motion, bed/chair occupancy, electrical usage). The system can also give alarms if sudden problems occur, like a fall, and warnings based on more long-term trends, such as the deterioration of health being detected. It has been implemented and tested in a test environment and has been deployed in six real homes for a year-long evaluation. The key contribution of the paper is the presentation of an implemented system for ambient assisted living (AAL) tested in a real environment, combining the acquisition of sensor data, a flexible and adaptable middleware compliant with the OSGi standard, and a context recognition application. The system has been developed in a European project called GiraffPlus.
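
    The alarm/warning split described in the two entries above (immediate alarms for acute events, warnings from slow trends) can be illustrated with a small rule sketch; the event names, window, and threshold are assumptions rather than GiraffPlus rules.

        # Rule sketch: acute alarms from discrete events, trend warnings from
        # drifting daily aggregates (e.g., activity counts).
        import statistics

        def check_acute(events):
            """events: list of event strings from environmental/physiological sensors."""
            return [e for e in events if e in {"fall_detected", "no_motion_24h"}]

        def check_trend(daily_values, window=14, drop_fraction=0.2):
            """Warn if the recent mean drops well below the preceding baseline."""
            if len(daily_values) < 2 * window:
                return False
            recent = statistics.mean(daily_values[-window:])
            baseline = statistics.mean(daily_values[-2 * window:-window])
            return recent < (1.0 - drop_fraction) * baseline

        print(check_acute(["door_open", "fall_detected"]))  # -> ['fall_detected']
        print(check_trend([100] * 14 + [70] * 14))          # -> True (activity dropped)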

  14. An artificial arm/hand system with a haptic sensory function using electric stimulation of peripheral sensory nerve fibers.

    PubMed

    Mabuchi, Kunihiko

    2013-01-01

    We are currently developing an artificial arm/hand system which is capable of sensing stimuli and then transferring these stimuli to users as somatic sensations. Presently, we are evoking the virtual somatic sensations by electrically stimulating a sensory nerve fiber which innervates a single mechanoreceptor unit at the target area; this is done using a tungsten microelectrode that was percutaneously inserted into the user's peripheral nerve (a microstimulation method). The artificial arm/hand system is composed of a robot hand equipped with a pressure sensor system on its fingers. The sensor system detects mechanical stimuli, which are transferred to the user by means of the microstimulation method so that the user experiences the stimuli as the corresponding somatic sensations. In trials, the system worked satisfactorily, and there was a good correlation between the pressure applied to the pressure sensors on the robot fingers and the subjective intensities of the evoked pressure sensations.

  15. An Internet of Things based physiological signal monitoring and receiving system for virtual enhanced health care network.

    PubMed

    Rajan, J Pandia; Rajan, S Edward

    2018-01-01

    Designing a wireless physiological signal monitoring system with secured data communication for the health care system is an important and dynamic process. We propose a signal monitoring system using NI myRIO connected to a wireless body sensor network through a multi-channel signal acquisition method. Based on server-side validation of the signal, the data collected at the local server are updated to the cloud. The Internet of Things (IoT) architecture is used to give healthcare service providers mobile and fast access to patient data. This research work proposes a novel architecture for a wireless physiological signal monitoring system using ubiquitous healthcare services through a virtual Internet of Things. We show an improvement in the method of access and in real-time dynamic monitoring of physiological signals in this remote monitoring system using the virtual Internet of Things approach. This remote monitoring and access system is evaluated against conventional values. The proposed system is envisioned for modern smart health care systems owing to its high utility and user-friendliness in clinical applications. We claim that the proposed scheme significantly improves the accuracy of the remote monitoring system compared to other wireless communication methods in clinical systems.

  16. Temporally coherent 4D video segmentation for teleconferencing

    NASA Astrophysics Data System (ADS)

    Ehmann, Jana; Guleryuz, Onur G.

    2013-09-01

    We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.

  17. VEVI: A Virtual Reality Tool For Robotic Planetary Explorations

    NASA Technical Reports Server (NTRS)

    Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik

    1994-01-01

    The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly-evolved human sensory system. VEVI utilizes real-time, interactive, 3D graphics and position / orientation sensors to produce a range of interface modalities from the flat panel (windowed or stereoscopic) screen displays to head mounted/head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.

  18. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    PubMed

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

    Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and the visual data make WVSNs more complex than conventional Wireless Sensor Networks (WSNs). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. In most of the existing literature, the main focus is on the efficiency brought by the construction of clusters, while local-balance problems are generally neglected. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, the directional virtual backbone construction scheme is proposed by considering the local-balance factor. Furthermore, the associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given for evaluating the performance. The experimental results show that the proposed DVBDAS achieves higher performance in terms of both energy preservation and network lifetime extension than existing methods.

  19. Virtual Control Policy for Binary Ordered Resources Petri Net Class.

    PubMed

    Rovetto, Carlos A; Concepción, Tomás J; Cano, Elia Esther

    2016-08-18

    Prevention and avoidance of deadlocks in sensor networks that use the wormhole routing algorithm is an active research domain. Diverse control policies address this problem; our approach offers a new method. In this paper we present a virtual control policy for a new specialized Petri net subclass called the Binary Ordered Resources Petri Net (BORPN). Essentially, it is an ordinary class constructed from various state machines that share unitary resources in a complex form, which allows branching and joining of processes. The reduced structure of this new class gives advantages that allow analysis of the entire system's behavior, which is a prohibitive task for large systems because of their complexity and routing algorithms.

  20. Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales

    NASA Astrophysics Data System (ADS)

    Entwistle, Elizabeth; Curtis, Andrew; Galetti, Erica; Baptie, Brian; Meles, Giovanni

    2015-04-01

    If energy emitted by a seismic source such as an earthquake is recorded on a suitable backbone array of seismometers, source-receiver interferometry (SRI) is a method that allows those recordings to be projected to the location of another target seismometer, providing an estimate of the seismogram that would have been recorded at that location. Since the other seismometer may not have been deployed at the time the source occurred, this renders possible the concept of 'retrospective seismology' whereby the installation of a sensor at one period of time allows the construction of virtual seismograms as though that sensor had been active before or after its period of installation. Using the benefit of hindsight of earthquake location or magnitude estimates, SRI can establish new measurement capabilities closer to earthquake epicenters, thus potentially improving earthquake location estimates. Recently we showed that virtual SRI seismograms can be constructed on target sensors in both industrial seismic and earthquake seismology settings, using both active seismic sources and ambient seismic noise to construct SRI propagators, and on length scales ranging over 5 orders of magnitude from ~40 m to ~2500 km [1]. Here we present the results from earthquake seismology by comparing virtual earthquake seismograms constructed at target sensors by SRI to those actually recorded on the same sensors. We show that spatial integrations required by interferometric theory can be calculated over irregular receiver arrays by embedding these arrays within 2D spatial Voronoi cells, thus improving spatial interpolation and interferometric results. The results of SRI are significantly improved by restricting the backbone receiver array to include approximately those receivers that provide a stationary phase contribution to the interferometric integrals. We apply both correlation-correlation and correlation-convolution SRI, and show that the latter constructs virtual seismograms with fewer non-physical arrivals. Finally we reconstruct earthquake seismograms at sensors that were previously active but were subsequently removed before the earthquakes occurred; thus we create virtual earthquake seismograms at those sensors, truly retrospectively. Such SRI seismograms can be used to create a catalogue of new, virtual earthquake seismograms that are available to complement real earthquake data in future earthquake seismology studies. [1] Entwistle, E., Curtis, A., Galetti, E., Baptie, B., Meles, G., Constructing new seismograms from old earthquakes: Retrospective seismology at multiple length scales, JGR, in press.

  1. Virtual Instrumentation for a Fiber-Optics-Based Artificial Nerve

    NASA Technical Reports Server (NTRS)

    Lyons, Donald R.; Kyaw, Thet Mon; Griffin, DeVon (Technical Monitor)

    2001-01-01

    A LabView-based computer interface for fiber-optic artificial nerves has been devised as a Masters thesis project. This project involves the use of outputs from wavelength-multiplexed optical fiber sensors (artificial nerves), which are capable of producing dense optical data outputs for physical measurements. The potential advantage of using optical fiber sensors for sensory function restoration is that well-defined WDM-modulated signals can be transmitted to and from the sensing region, allowing networked units to replace low-level nerve functions for persons desirous of "intelligent artificial limbs." Various FO sensors can be designed with high sensitivity and the ability to be interfaced with a wide range of devices, including miniature shielded electrical conversion units. Our Virtual Instrument (VI) interface software package was developed using LabView's "Laboratory Virtual Instrument Engineering Workbench" environment. The virtual instrument has been configured to arrange and encode the data to develop an intelligent response in the form of encoded digitized signal outputs. The architectural layout of our nervous system is such that different touch stimuli from different artificial fiber-optic nerve points correspond to gratings of a distinct resonant wavelength and physical location along the optical fiber. Thus, when an automated, tunable diode laser scans the wavelength spectrum of the artificial nerve, it triggers responses that are encoded with different touch stimuli by way of wavelength shifts in the reflected Bragg resonances. The reflected light is detected and the resulting analog signal is fed into an ADC1 board and a DAQ card. Finally, the software has been written such that the experimenter is able to set the response range during data acquisition.

  2. Secure Autonomous Automated Scheduling (SAAS). Rev. 1.1

    NASA Technical Reports Server (NTRS)

    Walke, Jon G.; Dikeman, Larry; Sage, Stephen P.; Miller, Eric M.

    2010-01-01

    This report describes network-centric operations, where a virtual mission operations center autonomously receives sensor triggers and schedules space and ground assets using Internet-based technologies and service-oriented architectures. For proof-of-concept purposes, sensor triggers are received from the United States Geological Survey (USGS) to determine targets for space-based sensors. The Surrey Satellite Technology Limited (SSTL) Disaster Monitoring Constellation satellite, the UK-DMC, is used as the space-based sensor. The UK-DMC's availability is determined via machine-to-machine communications using SSTL's mission planning system. Access to/from the UK-DMC for tasking and sensor data is via SSTL's and Universal Space Network's (USN) ground assets. The availability and scheduling of USN's assets can also be performed autonomously via machine-to-machine communications. All communication, both on the ground and between ground and space, uses open Internet standards.

  3. A Compact Energy Harvesting System for Outdoor Wireless Sensor Nodes Based on a Low-Cost In Situ Photovoltaic Panel Characterization-Modelling Unit

    PubMed Central

    Antolín, Diego; Calvo, Belén; Martínez, Pedro A.

    2017-01-01

    This paper presents a low-cost high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units—characterization and modelling—are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell with characteristics identical to those of the specific small solar cell installed on the sensor node, and which in addition allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions. PMID:28777330

  4. A Compact Energy Harvesting System for Outdoor Wireless Sensor Nodes Based on a Low-Cost In Situ Photovoltaic Panel Characterization-Modelling Unit.

    PubMed

    Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Martínez, Pedro A

    2017-08-04

    This paper presents a low-cost high-efficiency solar energy harvesting system to power outdoor wireless sensor nodes. It is based on a Voltage Open Circuit (VOC) algorithm that estimates the open-circuit voltage by means of a multilayer perceptron neural network model trained using local experimental characterization data, which are acquired through a novel low-cost characterization system incorporated into the deployed node. Both units (characterization and modelling) are controlled by the same low-cost microcontroller, providing a complete solution which can be understood as a virtual pilot cell with characteristics identical to those of the specific small solar cell installed on the sensor node, and which in addition allows easy adaptation to changes in the actual environmental conditions, panel aging, etc. Experimental comparison to a classical pilot-panel-based VOC algorithm shows better efficiency under the same tested conditions.
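
    As a rough illustration of the modelling unit, the sketch below trains a small multilayer perceptron to estimate the open-circuit voltage from locally measured conditions and then applies the classical fractional open-circuit-voltage rule. The feature choice (irradiance and panel temperature), the synthetic training data, and the 0.76 operating fraction are assumptions, not values from the paper.

```python
# Minimal sketch of the modelling unit, assuming the perceptron is trained on
# locally measured (irradiance, panel temperature) -> open-circuit voltage pairs;
# the features and synthetic data below are illustrative, not the paper's.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic characterization data standing in for the in situ measurements.
irradiance = rng.uniform(100, 1000, 500)          # W/m^2
temperature = rng.uniform(0, 60, 500)             # deg C
v_oc = 21.0 + 1.5 * np.log(irradiance / 1000.0) - 0.08 * (temperature - 25.0)
v_oc += rng.normal(0, 0.05, v_oc.shape)           # measurement noise

X = np.column_stack([irradiance, temperature])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, v_oc)

# The harvester would then operate the panel near a fixed fraction of the
# estimated V_oc (the classical fractional open-circuit-voltage rule).
K_VOC = 0.76                                      # assumed operating fraction
v_oc_est = model.predict([[650.0, 35.0]])[0]
print(f"estimated V_oc = {v_oc_est:.2f} V, "
      f"operating point = {K_VOC * v_oc_est:.2f} V")
```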

  5. Workflow-Oriented Cyberinfrastructure for Sensor Data Analytics

    NASA Astrophysics Data System (ADS)

    Orcutt, J. A.; Rajasekar, A.; Moore, R. W.; Vernon, F.

    2015-12-01

    Sensor streams comprise an increasingly large part of Earth Science data. Analytics based on sensor data require an easy way to perform operations such as acquisition, conversion to physical units, metadata linking, sensor fusion, analysis and visualization on distributed sensor streams. Furthermore, embedding real-time sensor data into scientific workflows is of growing interest. We have implemented a scalable networked architecture that can be used to dynamically access packets of data in a stream from multiple sensors, and perform synthesis and analysis across a distributed network. Our system is based on the integrated Rule Oriented Data System (irods.org), which accesses sensor data from the Antelope Real Time Data System (brtt.com), and provides virtualized access to collections of data streams. We integrate real-time data streaming from different sources, collected for different purposes, on different time and spatial scales, and sensed by different methods. iRODS, noted for its policy-oriented data management, brings to sensor processing features and facilities such as single sign-on, third-party access control lists (ACLs), location transparency, logical resource naming, and server-side modeling capabilities while reducing the burden on sensor network operators. Rich integrated metadata support also makes it straightforward to discover data streams of interest and maintain data provenance. The workflow support in iRODS readily integrates sensor processing into any analytical pipeline. The system is developed as part of the NSF-funded Datanet Federation Consortium (datafed.org). APIs for selecting, opening, reaping and closing sensor streams are provided, along with other helper functions to associate metadata and convert sensor packets into NetCDF and JSON formats. Near real-time sensor data including seismic sensors, environmental sensors, LIDAR and video streams are available through this interface. A system for archiving sensor data and metadata in NetCDF format has been implemented and will be demonstrated at AGU.

  6. Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System.

    PubMed

    Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama

    2017-01-01

    Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of fall. Conventional balance training, though powerful, suffers from scarcity of trained therapists, frequent visits to clinics to get therapy, one-on-one therapy sessions, and monotony of repetitive exercise tasks. Thus, technology-assisted balance rehabilitation can be an alternative solution. Here, we chose virtual reality as a technology-based platform to develop motivating balance tasks. This platform was augmented with off-the-shelf available sensors such as Nintendo Wii balance board and Kinect to estimate one's center of mass (CoM). The virtual reality-based CoM-assisted balance tasks (Virtual CoMBaT) was designed to be adaptive to one's individualized weight-shifting capability quantified through CoM displacement. Participants were asked to interact with Virtual CoMBaT that offered tasks of varying challenge levels while adhering to ankle strategy for weight shifting. To facilitate the patients to use ankle strategy during weight-shifting, we designed a heel lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to contribute to improving one's overall performance in balance-related tasks belonging to different difficulty levels.
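
    The abstract does not give the CoM computation; the sketch below shows the standard centre-of-pressure estimate from the four corner load cells of a Wii balance board, which is often used as a proxy for horizontal CoM displacement. The board dimensions and example readings are assumptions.

```python
# Minimal sketch (not the paper's code): estimate the centre of pressure from
# the four corner load cells of a Nintendo Wii balance board; the sensor
# spacing and the example readings below are assumptions for illustration.

BOARD_WIDTH_MM = 433.0   # assumed sensor-to-sensor width
BOARD_DEPTH_MM = 238.0   # assumed sensor-to-sensor depth


def centre_of_pressure(top_left, top_right, bottom_left, bottom_right):
    """Return (x, y) of the centre of pressure in mm, origin at board centre."""
    total = top_left + top_right + bottom_left + bottom_right
    if total <= 0:
        return 0.0, 0.0
    x = (BOARD_WIDTH_MM / 2.0) * ((top_right + bottom_right)
                                  - (top_left + bottom_left)) / total
    y = (BOARD_DEPTH_MM / 2.0) * ((top_left + top_right)
                                  - (bottom_left + bottom_right)) / total
    return x, y


if __name__ == "__main__":
    # A user leaning slightly to the right and forward (loads in kg).
    print(centre_of_pressure(18.0, 25.0, 15.0, 22.0))
```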

  7. Development and user evaluation of a virtual rehabilitation system for wobble board balance training.

    PubMed

    Fitzgerald, Diarmaid; Trakarnratanakul, Nanthana; Dunne, Lucy; Smyth, Barry; Caulfield, Brian

    2008-01-01

    We have developed a prototype virtual reality-based balance training system using a single inertial orientation sensor attached to the upper surface of a wobble board. This input device has been interfaced with Neverball, an open source computer game, to create the balance training platform. Users can exercise with the system by standing on the wobble board and tilting it in different directions to control an on-screen environment. We have also developed a customized instruction manual to use when setting up the system. To evaluate the usability of our prototype system, we undertook a user evaluation study with twelve healthy novice participants. Participants were required to assemble the system using the instruction manual and then perform balance exercises with the system. Following this period of exercise, VRUSE, a usability evaluation questionnaire, was completed by the participants. Results indicated a high level of usability in all categories evaluated.

  8. Colonoscope navigation system using colonoscope tracking method based on line registration

    NASA Astrophysics Data System (ADS)

    Oda, Masahiro; Kondo, Hiroaki; Kitasaka, Takayuki; Furukawa, Kazuhiro; Miyahara, Ryoji; Hirooka, Yoshiki; Goto, Hidemi; Navab, Nassir; Mori, Kensaku

    2014-03-01

    This paper presents a new colonoscope navigation system. CT colonography is utilized for colon diagnosis based on CT images. If polyps are found during CT colonography, colonoscopic polypectomy can be performed to remove them. While performing a colonoscopic examination, a physician controls the colonoscope based on his/her experience. Inexperienced physicians may cause complications, such as colon perforation, during colonoscopic examinations. To reduce complications, a system that navigates the colonoscope during colonoscopic examinations is necessary. We propose such a colonoscope navigation system. This system has a new colonoscope tracking method. This method obtains a colon centerline from a CT volume of a patient. A curved line (colonoscope line) representing the shape of the colonoscope inserted into the colon is obtained by using electromagnetic sensors. A coordinate system registration process that employs the ICP algorithm is performed to register the CT and sensor coordinate systems. The colon centerline and colonoscope line are registered by using a line registration method. The position of the colonoscope tip in the colon is obtained from the line registration result. Our colonoscope navigation system displays virtual colonoscopic views generated from the CT volumes. The viewpoint of the virtual colonoscopic view is the point on the centerline that corresponds to the colonoscope tip. Experimental results using a colon phantom showed that the proposed colonoscope tracking method can track the colonoscope tip with small tracking errors.
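
    A minimal ICP-style sketch of the registration step is given below. It aligns a simulated sensed colonoscope line to a CT-derived centreline by iterating nearest-neighbour matching and a closed-form SVD fit; the synthetic curves and iteration count are illustrative and do not reproduce the authors' line registration method.

```python
# Minimal ICP sketch (not the authors' implementation): rigidly register the
# electromagnetically sensed colonoscope line to the CT-derived colon
# centreline by iterating nearest-neighbour matching and an SVD-based fit.
import numpy as np


def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t


def icp(scope_line, centerline, iterations=30):
    """Return (R, t) aligning the sensed scope line to the CT centreline."""
    src = scope_line.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        # Nearest centreline point for every scope-line point (brute force).
        d2 = ((src[:, None, :] - centerline[None, :, :]) ** 2).sum(-1)
        matches = centerline[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total


if __name__ == "__main__":
    s = np.linspace(0, 1, 50)
    centerline = np.column_stack([np.cos(3 * s), np.sin(3 * s), s])
    # Simulated sensed line: a rotated, displaced copy of part of the centreline.
    a = 0.1
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0.0, 0.0, 1.0]])
    scope = centerline[:30] @ Rz.T + np.array([0.05, -0.02, 0.01])
    R, t = icp(scope, centerline)
    aligned = scope @ R.T + t
    d2 = ((aligned[:, None, :] - centerline[None, :, :]) ** 2).sum(-1)
    print("RMS distance after ICP:", np.sqrt(d2.min(axis=1).mean()).round(4))
```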

  9. Landsat's role in ecological applications of remote sensing.

    Treesearch

    Warren B. Cohen; Samuel N. Goward

    2004-01-01

    Remote sensing, geographic information systems, and modeling have combined to produce a virtual explosion of growth in ecological investigations and applications that are explicitly spatial and temporal. Of all remotely sensed data, those acquired by landsat sensors have played the most pivotal role in spatial and temporal scaling. Modern terrestrial ecology relies on...

  10. Hybrid Feedforward-Feedback Noise Control Using Virtual Sensors

    NASA Technical Reports Server (NTRS)

    Bean, Jacob; Fuller, Chris; Schiller, Noah

    2016-01-01

    Several approaches to active noise control using virtual sensors are evaluated for eventual use in an active headrest. Specifically, adaptive feedforward, feedback, and hybrid control structures are compared. Each controller incorporates the traditional filtered-x least mean squares algorithm. The feedback controller is arranged in an internal model configuration to draw comparisons with standard feedforward control theory results. Simulation and experimental results are presented that illustrate each controller's ability to minimize the pressure at both physical and virtual microphone locations. The remote microphone technique is used to obtain pressure estimates at the virtual locations. It is shown that a hybrid controller offers performance benefits over the traditional feedforward and feedback controllers. Stability issues associated with feedback and hybrid controllers are also addressed. Experimental results show that a 15-20 dB reduction in broadband disturbances can be achieved by minimizing the measured pressure, whereas a 10-15 dB reduction is obtained when minimizing the estimated pressure at a virtual location.
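
    Since the abstract states that each controller uses the filtered-x LMS algorithm, a single-channel FxLMS sketch is included below for reference. The primary and secondary path models, filter length, and step size are assumptions chosen only to make the example run; they are not the paper's values.

```python
# Minimal single-channel FxLMS sketch (illustrative, not the paper's controller):
# adapt a control filter w so that the secondary-path output cancels the
# disturbance at the (physical or virtual) error microphone.
import numpy as np

rng = np.random.default_rng(1)

fs, n = 2000, 20000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(n)   # reference
primary = np.array([0.0, 0.6, 0.3, -0.1])     # assumed primary path FIR
secondary = np.array([0.0, 0.0, 0.8, 0.2])    # assumed secondary path FIR
d = np.convolve(x, primary)[:n]               # disturbance at error mic

L, mu = 16, 0.01
w = np.zeros(L)                               # adaptive control filter
x_buf = np.zeros(L)                           # reference history (newest first)
fx_buf = np.zeros(L)                          # filtered-reference history
y_buf = np.zeros(len(secondary))              # control output history
e = np.zeros(n)

for k in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[k]
    y = w @ x_buf                             # control signal
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    anti = secondary @ y_buf                  # control after secondary path
    e[k] = d[k] + anti                        # residual at error mic
    # Filter the reference through the (assumed known) secondary path model.
    fx = secondary @ x_buf[:len(secondary)]
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w -= mu * e[k] * fx_buf                   # LMS update

print("mean |e|, first 1000 samples:", np.abs(e[:1000]).mean().round(3))
print("mean |e|, last  1000 samples:", np.abs(e[-1000:]).mean().round(3))
```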

  11. Prediction of dynamic strains on a monopile offshore wind turbine using virtual sensors

    NASA Astrophysics Data System (ADS)

    Iliopoulos, A. N.; Weijtjens, W.; Van Hemelrijck, D.; Devriendt, C.

    2015-07-01

    The monitoring of the condition of an offshore wind turbine during its operational states offers the possibility of performing accurate assessments of the remaining lifetime as well as supporting maintenance decisions during its entire life. The efficacy of structural monitoring of an offshore wind turbine, though, is undermined by practical limitations connected to the measurement system in terms of cost, weight and feasibility of sensor mounting (e.g., at mudline level, 30 m below the water level). This limitation is overcome by reconstructing the full-field response of the structure based on a limited number of measured accelerations and a calibrated Finite Element Model of the system. A modal decomposition and expansion approach is used for reconstructing the responses at all degrees of freedom of the finite element model. The paper will demonstrate the possibility to predict dynamic strains from acceleration measurements based on the aforementioned methodology. These virtual dynamic strains will then be evaluated and validated against actual strain measurements obtained from a monitoring campaign on an offshore Vestas V90 3 MW wind turbine on a monopile foundation.
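
    A minimal sketch of the modal decomposition and expansion idea follows, using analytical mode shapes of a simply supported beam in place of the calibrated finite element model and assuming the measured accelerations have already been converted to displacements. Sensor locations, virtual strain locations, and modal amplitudes are illustrative.

```python
# Minimal sketch of modal decomposition and expansion (illustrative only):
# estimate modal coordinates from a few measured DOFs, then expand them with
# strain (curvature) mode shapes to obtain "virtual" strains elsewhere.
import numpy as np

L_beam, n_modes = 10.0, 3
x_sensors = np.array([2.0, 5.0, 7.5])          # measured DOF locations
x_virtual = np.array([0.5, 4.0, 9.0])          # virtual strain locations

k = np.arange(1, n_modes + 1) * np.pi / L_beam
Phi = np.sin(np.outer(x_sensors, k))           # displacement mode shapes
# Curvature (proportional to bending strain) mode shapes at virtual locations.
Psi = -np.outer(np.ones_like(x_virtual), k**2) * np.sin(np.outer(x_virtual, k))

# Simulated "true" modal coordinates and the corresponding sensor readings.
rng = np.random.default_rng(2)
q_true = np.array([1.0e-3, 3.0e-4, -1.0e-4])
u_meas = Phi @ q_true + 1e-6 * rng.standard_normal(len(x_sensors))

# Modal decomposition (least squares) and expansion to virtual strains.
q_hat, *_ = np.linalg.lstsq(Phi, u_meas, rcond=None)
strain_virtual = Psi @ q_hat

print("estimated modal coordinates:", np.round(q_hat, 6))
print("virtual strains (a.u.)     :", np.round(strain_virtual, 6))
```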

  12. Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation

    PubMed Central

    Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo

    2018-01-01

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be really effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensors approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that VG operated in real-time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when skipping from one sensor to the other. A video demonstrating the good performance of VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications. PMID:29534448

  13. Measurements by A LEAP-Based Virtual Glove for the Hand Rehabilitation.

    PubMed

    Placidi, Giuseppe; Cinque, Luigi; Polsinelli, Matteo; Spezialetti, Matteo

    2018-03-10

    Hand rehabilitation is fundamental after stroke or surgery. Traditional rehabilitation requires a therapist and implies high costs, stress for the patient, and subjective evaluation of the therapy effectiveness. Alternative approaches, based on mechanical and tracking-based gloves, can be really effective when used in virtual reality (VR) environments. Mechanical devices are often expensive, cumbersome, patient specific and hand specific, while tracking-based devices are not affected by these limitations but, especially if based on a single tracking sensor, could suffer from occlusions. In this paper, the implementation of a multi-sensors approach, the Virtual Glove (VG), based on the simultaneous use of two orthogonal LEAP motion controllers, is described. The VG is calibrated and static positioning measurements are compared with those collected with an accurate spatial positioning system. The positioning error is lower than 6 mm in a cylindrical region of interest of radius 10 cm and height 21 cm. Real-time hand tracking measurements are also performed, analysed and reported. Hand tracking measurements show that VG operated in real-time (60 fps), reduced occlusions, and managed two LEAP sensors correctly, without any temporal and spatial discontinuity when skipping from one sensor to the other. A video demonstrating the good performance of VG is also collected and presented in the Supplementary Materials. Results are promising but further work must be done to allow the calculation of the forces exerted by each finger when constrained by mechanical tools (e.g., peg-boards) and for reducing occlusions when grasping these tools. Although the VG is proposed for rehabilitation purposes, it could also be used for tele-operation of tools and robots, and for other VR applications.

  14. High sensitivity to multisensory conflicts in agoraphobia exhibited by virtual reality.

    PubMed

    Viaud-Delmon, Isabelle; Warusfel, Olivier; Seguelas, Angeline; Rio, Emmanuel; Jouvent, Roland

    2006-10-01

    The primary aim of this study was to evaluate the effect of auditory feedback in a VR system planned for clinical use and to address the different factors that should be taken into account in building a bimodal virtual environment (VE). We conducted an experiment in which we assessed spatial performances in agoraphobic patients and normal subjects comparing two kinds of VEs, visual alone (Vis) and auditory-visual (AVis), during separate sessions. Subjects were equipped with a head-mounted display coupled with an electromagnetic sensor system and immersed in a virtual town. Their task was to locate different landmarks and become familiar with the town. In the AVis condition subjects were equipped with the head-mounted display and headphones, which delivered a soundscape updated in real-time according to their movement in the virtual town. While general performances remained comparable across the conditions, the reported feeling of immersion was more compelling in the AVis environment. However, patients exhibited more cybersickness symptoms in this condition. The result of this study points to the multisensory integration deficit of agoraphobic patients and underline the need for further research on multimodal VR systems for clinical use.

  15. Rational Design of QCM-D Virtual Sensor Arrays Based on Film Thickness, Viscoelasticity, and Harmonics for Vapor Discrimination.

    PubMed

    Speller, Nicholas C; Siraj, Noureen; Regmi, Bishnu P; Marzoughi, Hassan; Neal, Courtney; Warner, Isiah M

    2015-01-01

    Herein, we demonstrate an alternative strategy for creating QCM-based sensor arrays by use of a single sensor to provide multiple responses per analyte. The sensor, which simulates a virtual sensor array (VSA), was developed by depositing a thin film of ionic liquid, either 1-octyl-3-methylimidazolium bromide ([OMIm][Br]) or 1-octyl-3-methylimidazolium thiocyanate ([OMIm][SCN]), onto the surface of a QCM-D transducer. The sensor was exposed to 18 different organic vapors (alcohols, hydrocarbons, chlorohydrocarbons, nitriles) belonging to the same or different homologous series. The resulting frequency shifts (Δf) were measured at multiple harmonics and evaluated using principal component analysis (PCA) and discriminant analysis (DA) which revealed that analytes can be classified with extremely high accuracy. In almost all cases, the accuracy for identification of a member of the same class, that is, intraclass discrimination, was 100% as determined by use of quadratic discriminant analysis (QDA). Impressively, some VSAs allowed classification of all 18 analytes tested with nearly 100% accuracy. Such results underscore the importance of utilizing lesser exploited properties that influence signal transduction. Overall, these results demonstrate excellent potential of the virtual sensor array strategy for detection and discrimination of vapor phase analytes utilizing the QCM. To the best of our knowledge, this is the first report on QCM VSAs, as well as an experimental sensor array, that is based primarily on viscoelasticity, film thickness, and harmonics.
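
    The chemometric step can be illustrated with a small PCA-plus-QDA pipeline, sketched below on synthetic harmonic frequency-shift vectors; the number of harmonics, the analyte patterns, and the noise level are assumptions rather than the paper's data.

```python
# Minimal sketch of the chemometric step (not the authors' code): treat the
# frequency shifts measured at several harmonics as one response vector per
# exposure, then classify vapours with PCA followed by quadratic discriminant
# analysis. The synthetic response matrix below is illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_harmonics, n_reps = 6, 20
# Three hypothetical analytes, each with a characteristic shift pattern (Hz).
patterns = np.array([[-40, -38, -35, -33, -30, -28],
                     [-80, -70, -62, -55, -50, -46],
                     [-15, -14, -14, -13, -13, -12]], dtype=float)

X = np.vstack([p + rng.normal(0, 1.5, (n_reps, n_harmonics)) for p in patterns])
y = np.repeat(np.arange(len(patterns)), n_reps)

clf = make_pipeline(PCA(n_components=2),
                    QuadraticDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```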

  16. A source-attractor approach to network detection of radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Barry, M. L.; Grieme, M.

    Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.

  17. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation

    PubMed Central

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a “sensor fusion” approach. Of importance equal to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications. PMID:22389618

  18. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation.

    PubMed

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a "sensor fusion" approach. Of importance equal to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

  19. Development of low cost and accurate homemade sensor system based on Surface Plasmon Resonance (SPR)

    NASA Astrophysics Data System (ADS)

    Laksono, F. D.; Supardianningsih; Arifin, M.; Abraha, K.

    2018-04-01

    In this paper, we developed a homemade, computerized sensor system based on Surface Plasmon Resonance (SPR). The developed system consists of a mechanical instrument system, a laser power sensor, and a user interface. The mechanical system, which uses an anti-backlash gear design, enhances the angular resolution of the laser's angle of incidence to 0.01°. The laser detector acquisition system and the stepper motor controller are based on an Arduino Uno, which is easy to program, flexible, and low cost. Furthermore, we employed LabView's user interface as the virtual instrument for facilitating the sample measurement and for recording the data directly in digital form. Test results using a gold-deposited half-cylinder prism showed a Total Internal Reflection (TIR) angle of 41.34° ± 0.01° and an SPR angle of 44.20° ± 0.01°. The results demonstrated that the developed system reduces the measurement duration and the data recording errors caused by human error, and that the system's measurements are repeatable and accurate.

  20. Design and implementation of visual-haptic assistive control system for virtual rehabilitation exercise and teleoperation manipulation.

    PubMed

    Veras, Eduardo J; De Laurentis, Kathryn J; Dubey, Rajiv

    2008-01-01

    This paper describes the design and implementation of a control system that integrates visual and haptic information to give assistive force feedback through a haptic controller (Omni Phantom) to the user. A sensor-based assistive function and velocity scaling program provides force feedback that helps the user complete trajectory-following exercises for rehabilitation purposes. The system also incorporates a PUMA robot for teleoperation, equipped with a camera and a laser range finder controlled in real time by a PC, to help the user define the intended path to the selected target. The real-time force feedback from the remote robot to the haptic controller is made possible by using effective multithreading programming strategies in the control system design and by novel sensor integration. The sensor-based assistant function concept, applied to teleoperation as well as shared control, enhances the motion range and manipulation capabilities of users executing rehabilitation exercises such as trajectory following along a sensor-based defined path. The system is modularly designed to allow for integration of different master devices and sensors. Furthermore, because this real-time system is versatile, the haptic component can be used separately from the telerobotic component; in other words, one can use the haptic device for rehabilitation purposes in cases where assistance is needed to perform tasks (e.g., stroke rehab) and also for teleoperation with force feedback and sensor assistance in either supervisory or automatic modes.

  1. ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation.

  2. Cyber-Physical System Security With Deceptive Virtual Hosts for Industrial Control Networks

    DOE PAGES

    Vollmer, Todd; Manic, Milos

    2014-05-01

    A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control, and human-induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability of network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by execution of a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an anomaly behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.
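
    The sketch below illustrates the general idea of turning a host inventory into a Honeyd configuration. The XML element and attribute names are a simplified stand-in, not Ettercap's actual output schema, and the code is not the paper's four-step algorithm.

```python
# Minimal sketch of the idea (not the paper's algorithm): read a simplified
# host inventory in XML (assumed schema, not Ettercap's real output format)
# and emit Honeyd host templates for the discovered devices.
import xml.etree.ElementTree as ET

SAMPLE_XML = """
<hosts>
  <host ip="192.168.10.21" os="Microsoft Windows XP Professional SP1">
    <port proto="tcp" number="80"/>
    <port proto="tcp" number="502"/>
  </host>
  <host ip="192.168.10.35" os="Linux 2.6.x">
    <port proto="tcp" number="22"/>
  </host>
</hosts>
"""


def honeyd_config(xml_text: str) -> str:
    """Build a Honeyd configuration string from the assumed XML inventory."""
    lines = []
    for i, host in enumerate(ET.fromstring(xml_text).findall("host")):
        name = f"host{i}"
        lines.append(f"create {name}")
        lines.append(f'set {name} personality "{host.get("os")}"')
        lines.append(f"set {name} default tcp action reset")
        for port in host.findall("port"):
            lines.append(f'add {name} {port.get("proto")} port '
                         f'{port.get("number")} open')
        lines.append(f'bind {host.get("ip")} {name}')
        lines.append("")
    return "\n".join(lines)


if __name__ == "__main__":
    print(honeyd_config(SAMPLE_XML))
```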

  3. Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments

    PubMed Central

    Rhodes, Paul A.; Anderson, Todd O.

    2012-01-01

    To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772

  4. Telemedicine, virtual reality, and surgery

    NASA Technical Reports Server (NTRS)

    Mccormack, Percival D.; Charles, Steve

    1994-01-01

    Two types of synthetic experience are covered: virtual reality (VR) and surgery, and telemedicine. The topics are presented in viewgraph form and include the following: geometric models; physiological sensors; surgical applications; virtual cadaver; VR surgical simulation; telesurgery; VR Surgical Trainer; abdominal surgery pilot study; advanced abdominal simulator; examples of telemedicine; and telemedicine spacebridge.

  5. Evaluation of glucose controllers in virtual environment: methodology and sample application.

    PubMed

    Chassin, Ludovic J; Wilinska, Malgorzata E; Hovorka, Roman

    2004-11-01

    Adaptive systems to deliver medical treatment in humans are safety-critical systems and require particular care in both the testing and the evaluation phase, which are time-consuming, costly, and confounded by ethical issues. The objective of the present work is to develop a methodology to test glucose controllers of an artificial pancreas in a simulated (virtual) environment. A virtual environment comprising a model of the carbohydrate metabolism and models of the insulin pump and the glucose sensor is employed to simulate individual glucose excursions in subjects with type 1 diabetes. The performance of the control algorithm within the virtual environment is evaluated by considering treatment and operational scenarios. The developed methodology includes two dimensions: testing in relation to specific life style conditions, i.e. fasting, post-prandial, and life style (metabolic) disturbances; and testing in relation to various operating conditions, i.e. expected operating conditions, adverse operating conditions, and system failure. We define safety and efficacy criteria and describe the measures to be taken prior to clinical testing. The use of the methodology is exemplified by tuning and evaluating a model predictive glucose controller being developed for a wearable artificial pancreas focused on fasting conditions. Our methodology to test glucose controllers in a virtual environment is instrumental in anticipating the results of real clinical tests for different physiological conditions and for different operating conditions. The thorough testing in the virtual environment reduces costs and speeds up the development process.

  6. The U.S. Air Force Transformation Flight Plan

    DTIC Science & Technology

    2003-11-01

    at Buckley Air Force Base, Colorado. Reserve Associate and Active Associate units have proven that this concept works and benefits the Active and... munitions manufactured from nano-particles, whose virtually all-surface structure yields unprecedented "burn-rates" (extreme explosiveness), promise far... systems for a common operating system, and a suite of remotely operated sensors, weapons, and robotics. Also included are a group of non-lethal weapon

  7. Visual tracking strategies for intelligent vehicle highway systems

    NASA Astrophysics Data System (ADS)

    Smith, Christopher E.; Papanikolopoulos, Nikolaos P.; Brandt, Scott A.; Richards, Charles

    1995-01-01

    The complexity and congestion of current transportation systems often produce traffic situations that jeopardize the safety of the people involved. These situations vary from maintaining a safe distance behind a leading vehicle to safely allowing a pedestrian to cross a busy street. Environmental sensing plays a critical role in virtually all of these situations. Of the sensors available, vision sensors provide information that is richer and more complete than other sensors, making them a logical choice for a multisensor transportation system. In this paper we present robust techniques for intelligent vehicle-highway applications where computer vision plays a crucial role. In particular, we demonstrate that the controlled active vision framework can be utilized to provide a visual sensing modality to a traffic advisory system in order to increase the overall safety margin in a variety of common traffic situations. We have selected two application examples, vehicle tracking and pedestrian tracking, to demonstrate that the framework can provide precisely the type of information required to effectively manage the given situation.

  8. High-fidelity simulation capability for virtual testing of seismic and acoustic sensors

    NASA Astrophysics Data System (ADS)

    Wilson, D. Keith; Moran, Mark L.; Ketcham, Stephen A.; Lacombe, James; Anderson, Thomas S.; Symons, Neill P.; Aldridge, David F.; Marlin, David H.; Collier, Sandra L.; Ostashev, Vladimir E.

    2005-05-01

    This paper describes development and application of a high-fidelity, seismic/acoustic simulation capability for battlefield sensors. The purpose is to provide simulated sensor data so realistic that they cannot be distinguished by experts from actual field data. This emerging capability provides rapid, low-cost trade studies of unattended ground sensor network configurations, data processing and fusion strategies, and signatures emitted by prototype vehicles. There are three essential components to the modeling: (1) detailed mechanical signature models for vehicles and walkers, (2) high-resolution characterization of the subsurface and atmospheric environments, and (3) state-of-the-art seismic/acoustic models for propagating moving-vehicle signatures through realistic, complex environments. With regard to the first of these components, dynamic models of wheeled and tracked vehicles have been developed to generate ground force inputs to seismic propagation models. Vehicle models range from simple, 2D representations to highly detailed, 3D representations of entire linked-track suspension systems. Similarly detailed models of acoustic emissions from vehicle engines are under development. The propagation calculations for both the seismics and acoustics are based on finite-difference, time-domain (FDTD) methodologies capable of handling complex environmental features such as heterogeneous geologies, urban structures, surface vegetation, and dynamic atmospheric turbulence. Any number of dynamic sources and virtual sensors may be incorporated into the FDTD model. The computational demands of 3D FDTD simulation over tactical distances require massively parallel computers. Several example calculations of seismic/acoustic wave propagation through complex atmospheric and terrain environments are shown.
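
    As a highly simplified stand-in for the 3D seismic/acoustic codes described above, the sketch below runs a 2D acoustic finite-difference time-domain simulation with a point source and records the pressure at a virtual sensor location; the grid size, wavelet, and boundary treatment are illustrative only.

```python
# Minimal 2D acoustic FDTD sketch (illustrative, far simpler than the paper's
# 3D seismic/acoustic codes): second-order update of the scalar wave equation
# with a point source and a "virtual sensor" recording the pressure trace.
# Boundaries are periodic (np.roll); a real code would use absorbing layers.
import numpy as np

nx, nz, nt = 200, 200, 600
dx, c = 1.0, 340.0
dt = 0.7 * dx / (c * np.sqrt(2.0))             # comfortably inside CFL limit
p_old = np.zeros((nx, nz))
p = np.zeros((nx, nz))
src, sensor = (100, 100), (150, 120)
trace = np.zeros(nt)
f0, t0 = 25.0, 40 * dt                         # Ricker-like wavelet parameters

for it in range(nt):
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    p_new = 2.0 * p - p_old + (c * dt) ** 2 * lap
    # Inject the source wavelet at the source grid point.
    arg = (np.pi * f0 * (it * dt - t0)) ** 2
    p_new[src] += (1.0 - 2.0 * arg) * np.exp(-arg)
    p_old, p = p, p_new
    trace[it] = p[sensor]                      # virtual sensor recording

print("peak |p| at the virtual sensor:", np.abs(trace).max().round(4))
```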

  9. Long wave infrared cavity-enhanced sensors using quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Taubman, Matthew S.; Scott, David C.; Myers, Tanya L.; Cannon, Bret D.

    2005-11-01

    Quantum cascade lasers (QCLs) are becoming well known as convenient and stable semiconductor laser sources operating in the mid- to long-wave infrared, and are able to be fabricated to operate virtually anywhere in the 3.5 to 25 micron region. This makes them an ideal choice for infrared chemical sensing, a topic of great interest at present, spanning at least three critical areas: national security, environmental monitoring and protection, and the early diagnosis of disease through breath analysis. There are many different laser-based spectroscopic chemical sensor architectures in use today, from simple direct detection through to more complex and highly sensitive systems. Many current sensor needs can be met by combining QCLs and appropriate sensor architectures, those needs ranging from UAV-mounted surveillance systems, through to larger ultra-sensitive systems for airport security. In this paper we provide an overview of various laser-based spectroscopic sensing techniques, pointing out advantages and disadvantages of each. As part of this process, we include our own results and observations for techniques under development at PNNL. We also present the latest performance of our ultra-quiet QCL control electronics now being commercialized, and explore how using optimized supporting electronics enables increased sensor performance and decreased sensor footprint for given applications.

  10. Smart Multi-Level Tool for Remote Patient Monitoring Based on a Wireless Sensor Network and Mobile Augmented Reality

    PubMed Central

    González, Fernando Cornelio Jimènez; Villegas, Osslan Osiris Vergara; Ramírez, Dulce Esperanza Torres; Sánchez, Vianey Guadalupe Cruz; Domínguez, Humberto Ochoa

    2014-01-01

    Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabView, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes to stabilize the temperature sensor to detect hyperthermia, hypothermia or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia. PMID:25230306

  11. Smart multi-level tool for remote patient monitoring based on a wireless sensor network and mobile augmented reality.

    PubMed

    González, Fernando Cornelio Jiménez; Villegas, Osslan Osiris Vergara; Ramírez, Dulce Esperanza Torres; Sánchez, Vianey Guadalupe Cruz; Domínguez, Humberto Ochoa

    2014-09-16

    Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabView, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes to stabilize the temperature sensor to detect hyperthermia, hypothermia or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia.
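
    A minimal sketch of the alerting logic on the monitoring side is given below; the temperature and heart-rate thresholds are common clinical rules of thumb assumed for illustration, not values taken from the paper.

```python
# Minimal sketch of the nurse-server alerting logic (thresholds assumed).

def classify_temperature(temp_c: float) -> str:
    """Label a body temperature reading (assumed thresholds)."""
    if temp_c < 35.0:
        return "hypothermia"
    if temp_c > 38.0:
        return "hyperthermia"
    return "normal temperature"


def classify_heart_rate(bpm: float) -> str:
    """Label a heart rate reading (assumed thresholds)."""
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal heart rate"


def patient_status(temp_c: float, bpm: float) -> str:
    return f"{classify_temperature(temp_c)}, {classify_heart_rate(bpm)}"


if __name__ == "__main__":
    # Example readings forwarded by a patient node over the WSN.
    for reading in [(36.8, 72), (39.2, 118), (34.5, 52)]:
        print(reading, "->", patient_status(*reading))
```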

  12. Virtual optical interfaces for the transportation industry

    NASA Astrophysics Data System (ADS)

    Hejmadi, Vic; Kress, Bernard

    2010-04-01

    We present a novel implementation of virtual optical interfaces for the transportation industry (automotive and avionics). This new implementation includes two functionalities in a single device: projection of a virtual interface and sensing of the position of the fingers on top of the virtual interface. Both functionalities are produced by diffraction of laser light. The device we are developing includes both functionalities in a compact package which has no optical elements to align, since all of them are pre-aligned on a single glass wafer through optical lithography. The package contains a CMOS sensor whose diffractive objective lens is optimized for the projected interface color as well as for the IR finger-position sensor based on structured illumination. Two versions are proposed: a version which senses the 2D position of the hand and a version which senses the hand position in 3D.

  13. "Virtual Feel" Capaciflectors

    NASA Technical Reports Server (NTRS)

    Vranish, John M.

    2006-01-01

    The term "virtual feel" denotes a type of capaciflector (an advanced capacitive proximity sensor) and a methodology for designing and using a sensor of this type to guide a robot in manipulating a tool (e.g., a wrench socket) into alignment with a mating fastener (e.g., a bolt head) or other electrically conductive object. A capaciflector includes at least one sensing electrode, excited with an alternating voltage, that puts out a signal indicative of the capacitance between that electrode and a proximal object.

  14. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform

    PubMed Central

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-01-01

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP) based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform’s mathematical model taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument’s working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and types of reference distances could be created without the need of using a physical gauge, therefore optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained proved the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform. PMID:27869722

  15. Virtual Distances Methodology as Verification Technique for AACMMs with a Capacitive Sensor Based Indexed Metrology Platform.

    PubMed

    Acero, Raquel; Santolaria, Jorge; Brau, Agustin; Pueo, Marcos

    2016-11-18

    This paper presents a new verification procedure for articulated arm coordinate measuring machines (AACMMs) together with a capacitive sensor-based indexed metrology platform (IMP), based on the generation of virtual reference distances. The novelty of this procedure lies in the possibility of creating virtual points, virtual gauges and virtual distances through the indexed metrology platform's mathematical model, taking as a reference the measurements of a ball bar gauge located in a fixed position of the instrument's working volume. The measurements are carried out with the AACMM assembled on the IMP from the six rotating positions of the platform. In this way, an unlimited number and variety of reference distances can be created without the need to use a physical gauge, thus optimizing the testing time, the number of gauge positions and the space needed in the calibration and verification procedures. Four evaluation methods are presented to assess the volumetric performance of the AACMM. The results obtained prove the suitability of the virtual distances methodology as an alternative procedure for verification of AACMMs using the indexed metrology platform.
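
    A minimal sketch of the virtual-distance idea, assuming the IMP mathematical model is available as a set of homogeneous transforms (one per platform position); the transforms and ball-bar centre coordinates below are placeholders, not the calibration of the cited work.

```python
import itertools
import numpy as np

def virtual_points(ballbar_centres_local, platform_transforms):
    """Map ball-bar sphere centres through each platform transform into a common frame."""
    pts = []
    for T in platform_transforms:          # 4x4 homogeneous transforms of the IMP model
        for p in ballbar_centres_local:    # measured sphere centres (metres)
            ph = T @ np.append(p, 1.0)
            pts.append(ph[:3])
    return np.array(pts)

def virtual_distances(points):
    """All pairwise distances between the virtual points (the virtual gauges)."""
    return {(i, j): np.linalg.norm(points[i] - points[j])
            for i, j in itertools.combinations(range(len(points)), 2)}

if __name__ == "__main__":
    # Placeholder: identity plus a 60-degree rotation about z, standing in for
    # two of the six platform positions.
    c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)
    T0 = np.eye(4)
    T1 = np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    centres = [np.array([0.1, 0.0, 0.2]), np.array([0.4, 0.0, 0.2])]
    print(virtual_distances(virtual_points(centres, [T0, T1])))
```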

  16. Piezoelectric power generation for sensor applications: design of a battery-less wireless tire pressure sensor

    NASA Astrophysics Data System (ADS)

    Makki, Noaman; Pop-Iliev, Remon

    2011-06-01

    An in-wheel, wireless and battery-less piezo-powered tire pressure sensor is developed. Whereas conventional battery-powered Tire Pressure Monitoring Systems (TPMS) are marred by limited battery life, TPMS based on power-harvesting modules provide virtually unlimited sensor life. Furthermore, the elimination of a permanent energy reservoir simplifies the overall sensor design by excluding the extra circuitry otherwise required to sense vehicle motion and conserve precious battery capacity during vehicle idling periods. In this paper, two design solutions are presented: 1) very low-cost, highly flexible piezoceramic (PZT) bender elements bonded directly to the tire to generate the power required to run the sensor, and 2) a novel rim-mounted PZT harvesting unit that can be used to power pressure sensors incorporated into the valve stem, requiring minimal change to the presently used sensors. While both designs eliminate environmentally unfriendly batteries from the TPMS, they offer the advantages of being very low cost, service-free and easily replaceable during tire repair and replacement.

  17. Virtual Sensor Web Architecture

    NASA Astrophysics Data System (ADS)

    Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.

    2006-12-01

    NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include (i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; (ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and (iii) development of autonomous model-interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework), which is being extended to create VSICS.

  18. Noncontact Measurement of Humidity and Temperature Using Airborne Ultrasound

    NASA Astrophysics Data System (ADS)

    Kon, Akihiko; Mizutani, Koichi; Wakatsuki, Naoto

    2010-04-01

    We describe a noncontact method for measuring humidity and dry-bulb temperature. Conventional humidity sensors are single-point measurement devices, so a noncontact method for measuring relative humidity is required. Ultrasonic temperature sensors are noncontact measurement sensors. Because water vapor in the air increases the sound velocity, conventional ultrasonic temperature sensors measure the virtual temperature, which is higher than the dry-bulb temperature. We performed experiments using an ultrasonic delay line, an atmospheric pressure sensor, and either a thermometer or a relative humidity sensor to confirm the validity of our measurement method at relative humidities of 30, 50, 75, and 100% and at temperatures of 283.15, 293.15, 308.15, and 323.15 K. The results show that the proposed method measures relative humidity with an error rate of less than 16.4% and dry-bulb temperature with an error of less than 0.7 K. Adaptations of the measurement method for use in air-conditioning control systems are discussed.
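
    For reference, the standard meteorological relations behind this kind of approach (not taken from the paper itself) link the speed of sound in moist air to the virtual temperature, which in turn depends on the vapour pressure e, the atmospheric pressure p and the ratio of gas constants ε:

```latex
% c: speed of sound, T: dry-bulb temperature (K), T_v: virtual temperature (K)
c \;\approx\; \sqrt{\gamma_d R_d\, T_v} \;\approx\; 20.05\,\sqrt{T_v}\ \ \mathrm{m\,s^{-1}},
\qquad
T_v \;=\; \frac{T}{1 - \dfrac{e}{p}\,(1-\varepsilon)}, \qquad \varepsilon \approx 0.622
```

    Measuring the time of flight over the delay line gives c and hence T_v; combining T_v with either the dry-bulb temperature or the humidity then lets the other quantity be solved for.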

  19. Virtual Reality-Based Center of Mass-Assisted Personalized Balance Training System

    PubMed Central

    Kumar, Deepesh; González, Alejandro; Das, Abhijit; Dutta, Anirban; Fraisse, Philippe; Hayashibe, Mitsuhiro; Lahiri, Uttama

    2018-01-01

    Poststroke hemiplegic patients often show altered weight distribution with balance disorders, increasing their risk of fall. Conventional balance training, though powerful, suffers from scarcity of trained therapists, frequent visits to clinics to get therapy, one-on-one therapy sessions, and monotony of repetitive exercise tasks. Thus, technology-assisted balance rehabilitation can be an alternative solution. Here, we chose virtual reality as a technology-based platform to develop motivating balance tasks. This platform was augmented with off-the-shelf available sensors such as Nintendo Wii balance board and Kinect to estimate one’s center of mass (CoM). The virtual reality-based CoM-assisted balance tasks (Virtual CoMBaT) was designed to be adaptive to one’s individualized weight-shifting capability quantified through CoM displacement. Participants were asked to interact with Virtual CoMBaT that offered tasks of varying challenge levels while adhering to ankle strategy for weight shifting. To facilitate the patients to use ankle strategy during weight-shifting, we designed a heel lift detection module. A usability study was carried out with 12 hemiplegic patients. Results indicate the potential of our system to contribute to improving one’s overall performance in balance-related tasks belonging to different difficulty levels. PMID:29359128
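
    A minimal sketch of how the ground-projected centre of mass is commonly approximated from the four corner load cells of a Wii Balance Board in systems of this kind; the sensor-spacing values and the function itself are illustrative assumptions, not the authors' implementation.

```python
# Centre-of-pressure estimate from the four Wii Balance Board load cells, often
# used as a proxy for the ground-projected centre of mass during quiet standing.
# L_X, L_Y are approximate distances between sensor pairs (metres); the exact
# values depend on the board and its calibration.

L_X, L_Y = 0.433, 0.238

def centre_of_pressure(tl, tr, bl, br):
    """tl, tr, bl, br: forces (N) at top-left, top-right, bottom-left, bottom-right."""
    total = tl + tr + bl + br
    if total <= 0:
        raise ValueError("no load on the board")
    cop_x = (L_X / 2.0) * ((tr + br) - (tl + bl)) / total  # mediolateral shift
    cop_y = (L_Y / 2.0) * ((tl + tr) - (bl + br)) / total  # anteroposterior shift
    return cop_x, cop_y

if __name__ == "__main__":
    print(centre_of_pressure(180.0, 220.0, 160.0, 200.0))
```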

  20. The benefits of soft sensor and multi-rate control for the implementation of Wireless Networked Control Systems.

    PubMed

    Mansano, Raul K; Godoy, Eduardo P; Porto, Arthur J V

    2014-12-18

    Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are the reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of the wireless transmission, since the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCS is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is generating virtual data allowing an actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown the soft sensor is a solution to the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor potential for implementing energy efficient WNCS through the battery saving of industrial wireless sensors.
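
    A hedged sketch of the multi-rate soft-sensor idea described above: a simple discrete-time model produces virtual outputs at the fast control rate and is re-aligned whenever a slow wireless sample arrives. The model matrices, rates and re-alignment rule are illustrative, not the design of the paper.

```python
import numpy as np

# Illustrative plant model running at the fast (control) rate.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])

def soft_sensor(u_seq, y_slow, fast_per_slow):
    """u_seq: inputs at the fast rate; y_slow: wireless samples at the slow rate."""
    x = np.zeros((2, 1))
    y_virtual = []
    for k, u in enumerate(u_seq):
        if k % fast_per_slow == 0:                 # a real wireless sample is available
            x[0, 0] = y_slow[k // fast_per_slow]   # crude re-alignment of the model state
        x = A @ x + B * u                          # model prediction at the fast rate
        y_virtual.append(float((C @ x)[0, 0]))     # virtual measurement for the controller
    return y_virtual

if __name__ == "__main__":
    print(soft_sensor(u_seq=[1.0] * 10, y_slow=[0.0, 0.4], fast_per_slow=5))
```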

  1. Getting the point across: exploring the effects of dynamic virtual humans in an interactive museum exhibit on user perceptions.

    PubMed

    Rivera-Gutierrez, Diego; Ferdig, Rick; Li, Jian; Lok, Benjamin

    2014-04-01

    We have created “You, M.D.”, an interactive museum exhibit in which users learn about topics in public health literacy while interacting with virtual humans. You, M.D. is equipped with a weight sensor, a height sensor and a Microsoft Kinect that gather basic user information. Conceptually, You, M.D. could use this information to dynamically select the appearance of the virtual humans in the interaction, attempting to improve learning outcomes and user perception for each particular user. For this concept to be possible, a better understanding is required of how different elements of the visual appearance of a virtual human affect user perceptions. In this paper, we present the results of an initial user study with a large sample size (n = 333) run using You, M.D. The study measured users’ reactions based on the user’s gender and body-mass index (BMI) when facing virtual humans with a BMI either concordant or discordant with the user’s BMI. The results of the study indicate that concordance between the users’ BMI and the virtual human’s BMI affects male and female users differently. The results also show that female users rate virtual humans as more knowledgeable than male users rate the same virtual humans.

  2. A haptic sensor-actor-system based on ultrasound elastography and electrorheological fluids for virtual reality applications in medicine.

    PubMed

    Khaled, W; Ermert, H; Bruhns, O; Boese, H; Baumann, M; Monkman, G J; Egersdoerfer, S; Meier, A; Klein, D; Freimuth, H

    2003-01-01

    Mechanical properties of biological tissue represent important diagnostic information and are of histological relevance (hard lesions, "nodes" in organs: tumors; calcifications in vessels: arteriosclerosis). The problem is that such information is usually obtained by digital palpation only, which is limited with respect to sensitivity, requires intuitive assessment and does not allow quantitative documentation. A suitable sensor is required for quantitative detection of mechanical tissue properties. On the other hand, there is also a need for a realistic mechanical display of those tissue properties. Suitable actuator arrays with high spatial resolution and real-time capabilities are required, operating in a haptic sensor-actuator system with different applications. The sensor system uses real-time ultrasonic elastography, whereas the tactile actuator is based on electrorheological fluids. Due to their small size, the actuator array elements have to be manufactured by micro-mechanical production methods. In order to supply the actuator elements with individual high voltages, a sophisticated switching and control concept has been designed. This haptic system has the potential to render substantial forces in real time using a compact, lightweight mechanism, and can be applied to numerous areas including intraoperative navigation, telemedicine, teaching, space and telecommunication.

  3. Vehicle Lateral State Estimation Based on Measured Tyre Forces

    PubMed Central

    Tuononen, Ari J.

    2009-01-01

    Future active safety systems need more accurate information about the state of vehicles. This article proposes a method to evaluate the lateral state of a vehicle based on measured tyre forces. The tyre forces of two tyres are estimated from optically measured tyre carcass deflections and transmitted wirelessly to the vehicle body. The two remaining tyres are so-called virtual tyre sensors, the forces of which are calculated from the real tyre sensor estimates. The Kalman filter estimator for lateral vehicle state based on measured tyre forces is presented, together with a simple method to define adaptive measurement error covariance depending on the driving condition of the vehicle. The estimated yaw rate and lateral velocity are compared with the validation sensor measurements. PMID:22291535
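
    A simplified sketch in the spirit of this estimator, assuming a single-track (bicycle) model in which the measured front/rear lateral tyre forces drive the prediction of lateral velocity and yaw rate, with a yaw-rate measurement used for the update; the parameters, noise covariances and measurement choice are illustrative assumptions rather than the paper's design.

```python
import numpy as np

m, Iz, a, b = 1500.0, 2500.0, 1.2, 1.4   # mass, yaw inertia, CoG-to-axle distances
dt = 0.01                                 # filter time step (s)

def kf_lateral(forces, r_meas, vx, q=1e-2, r_var=1e-3):
    """forces: iterable of (Fyf, Fyr); r_meas: yaw-rate samples; vx: longitudinal speed."""
    x = np.zeros(2)                       # state [vy, r]
    P = np.eye(2)
    Q, R = q * np.eye(2), np.array([[r_var]])
    H = np.array([[0.0, 1.0]])            # assumed gyro measures yaw rate only
    est = []
    for (fyf, fyr), rm in zip(forces, r_meas):
        A = np.array([[1.0, -vx * dt], [0.0, 1.0]])
        B = dt * np.array([[1.0 / m, 1.0 / m], [a / Iz, -b / Iz]])
        x = A @ x + B @ np.array([fyf, fyr])        # predict with measured tyre forces
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R                         # update with the yaw-rate measurement
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([rm]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        est.append(x.copy())
    return np.array(est)

if __name__ == "__main__":
    print(kf_lateral([(800.0, 600.0)] * 5, [0.1] * 5, vx=20.0)[-1])
```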

  4. Enhancing patient freedom in rehabilitation robotics using gaze-based intention detection.

    PubMed

    Novak, Domen; Riener, Robert

    2013-06-01

    Several design strategies for rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.

  5. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    NASA Astrophysics Data System (ADS)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still places a fundamental limit on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.
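
    A quick numerical illustration of the averaging argument (assuming the speckle-induced position errors of the N profiles are uncorrelated and zero-mean): the spread of the averaged estimate falls off roughly as the square root of N. The numbers are arbitrary.

```python
import numpy as np

# Monte-Carlo check: averaging N uncorrelated spot-position estimates reduces
# the standard deviation of the combined estimate by about sqrt(N).
rng = np.random.default_rng(0)
true_pos, sigma, trials = 0.0, 1.0, 20000

for n in (1, 4, 16, 64):
    estimates = rng.normal(true_pos, sigma, size=(trials, n)).mean(axis=1)
    print(n, round(estimates.std(), 3), round(sigma / np.sqrt(n), 3))
```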

  6. Using semantic technologies and the OSU ontology for modelling context and activities in multi-sensory surveillance systems

    NASA Astrophysics Data System (ADS)

    Gómez A, Héctor F.; Martínez-Tomás, Rafael; Arias Tapia, Susana A.; Rincón Zamorano, Mariano

    2014-04-01

    Automatic systems that monitor human behaviour to detect security problems are a challenge today. Previously, our group defined the Horus framework, a modular architecture for the integration of multi-sensor monitoring stages. In this work, the structure and technologies required for the high-level semantic stages of Horus are proposed, and the associated methodological principles are established with the aim of recognising specific behaviours and situations. Our methodology distinguishes three semantic levels of events: low level (tied to sensors), medium level (tied to context), and high level (target behaviours). The ontology for surveillance and ubiquitous computing has been used to integrate ontologies from specific domains and, together with semantic technologies, has facilitated the modelling and implementation of scenes and situations by reusing components. A home context and a supermarket context were modelled following this approach, and three suspicious activities were monitored via different virtual sensors. The experiments demonstrate that our proposals facilitate the rapid prototyping of this kind of system.

  7. Virtualization of event sources in wireless sensor networks for the internet of things.

    PubMed

    Lucas Martínez, Néstor; Martínez, José-Fernán; Hernández Díaz, Vicente

    2014-12-01

    Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways, which become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN was deployed in an open neighborhood environment. Different event sources were identified in the proposed scenario, and they have been represented following the proposed model.

  8. Augmenting Trust Establishment in Dynamic Systems with Social Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagesse, Brent J; Kumar, Mohan; Venkatesh, Svetha

    2010-01-01

    Social networking has recently flourished in popularity through the use of social websites. Pervasive computing resources have allowed people to stay well-connected to each other through access to social networking resources. We take the position that utilizing information produced by relationships within social networks can assist in the establishment of trust for other pervasive computing applications. Furthermore, we describe how such a system can augment a sensor infrastructure used for event observation with information from mobile sensors (i.e., mobile phones with cameras) controlled by potentially untrusted third parties. Pervasive computing systems are invisible systems, oriented around the user. As a result, many future pervasive systems are likely to include a social aspect. The social communities that develop in these systems can augment existing trust mechanisms with information about pre-trusted entities, or entities to consider initially when beginning to establish trust. An example of such a system is the Collaborative Virtual Observation (CoVO) system, which fuses sensor information from disparate sources in soft real-time to recreate a scene that provides observation of an event that has recently transpired. To accomplish this, CoVO must efficiently access services while protecting the data from corruption by unknown remote nodes. CoVO combines dynamic service composition with virtual observation to utilize existing infrastructure together with third-party services available in the environment. Since these services are not under the control of the system, they may be unreliable or malicious. When an event of interest occurs, the given infrastructure (bus cameras, etc.) may not sufficiently cover the necessary information (be it in space, time, or sensor type). To enhance observation of the event, the infrastructure is augmented with information from sensors in the environment that the infrastructure does not control. These sensors may be unreliable, uncooperative, or even malicious. Additionally, to execute queries in soft real-time, processing must be distributed to available systems in the environment. We propose to use information from social networks to satisfy these requirements. In this paper, we present our position that knowledge gained from social activities can be used to augment trust mechanisms in pervasive computing. The system uses the social behavior of nodes to predict a subset that it wants to query for information. In this context, social behavior includes transit patterns and schedules (which can be used to determine whether a queried node is likely to be reliable) and known relationships, such as a phone's address book, which can be used to determine networks of nodes that may also be able to assist in retrieving information. Neither implicit nor explicit relationships necessarily imply that the user trusts an entity, but rather provide a starting place for establishing trust. The proposed framework utilizes social network information to assist in trust establishment when third-party sensors are used for sensing events.

  9. minimega v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crussell, Jonathan; Erickson, Jeremy; Fritz, David

    minimega is an emulytics platform for creating testbeds of networked devices. The platform consists of easily deployable tools to facilitate bringing up large networks of virtual machines including Windows, Linux, and Android. minimega allows experiments to be brought up quickly with almost no configuration. minimega also includes tools for simple cluster management, as well as tools for creating Linux-based virtual machines. This release of minimega includes new emulated sensors for Android devices to improve the fidelity of testbeds that include mobile devices. Emulated sensors include GPS and

  10. Return to Flight: Crew Activities Resource Reel 1 of 2

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The crew of the STS-114 Discovery Mission is seen in various aspects of training for space flight. The crew activities include: 1) STS-114 Return to Flight Crew Photo Session; 2) Tile Repair Training on Precision Air Bearing Floor; 3) SAFER Tile Inspection Training in Virtual Reality Laboratory; 4) Guidance and Navigation Simulator Tile Survey Training; 5) Crew Inspects Orbital Boom and Sensor System (OBSS); 6) Bailout Training-Crew Compartment; 7) Emergency Egress Training-Crew Compartment Trainer (CCT); 8) Water Survival Training-Neutral Buoyancy Lab (NBL); 9) Ascent Training-Shuttle Motion Simulator; 10) External Tank Photo Training-Full Fuselage Trainer; 11) Rendezvous and Docking Training-Shuttle Engineering Simulator (SES) Dome; 12) Shuttle Robot Arm Training-SES Dome; 13) EVA Training Virtual Reality Lab; 14) EVA Training Neutral Buoyancy Lab; 15) EVA-2 Training-NBL; 16) EVA Tool Training-Partial Gravity Simulator; 17) Cure in Place Ablator Applicator (CIPAA) Training Glove Vacuum Chamber; 18) Crew Visit to Merritt Island Launch Area (MILA); 19) Crew Inspection-Space Shuttle Discovery; and 20) Crew Inspection-External Tank and Orbital Boom and Sensor System (OBSS). The crew are then seen answering questions from the media at the Space Shuttle Landing Facility.

  11. Recent results in visual servoing

    NASA Astrophysics Data System (ADS)

    Chaumette, François

    2008-06-01

    Visual servoing techniques consist in using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing control of the desired degrees of freedom. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties regarding stability, robustness with respect to noise or to calibration errors, the robot 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field within the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.
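
    As a concrete illustration of the basic framework, the classical image-based visual servoing law computes a camera velocity from the feature error through the pseudo-inverse of the interaction matrix. The sketch below uses the standard interaction matrix of a normalised image point and an illustrative gain; it is not the specific scheme discussed in the talk.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalised image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classical IBVS law v = -lambda * L^+ (s - s*), returning a 6-vector (v, omega)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

if __name__ == "__main__":
    s = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.09)]     # current point features
    s_star = [(0.00, 0.00), (-0.10, 0.10), (0.05, -0.05)]  # desired point features
    print(ibvs_velocity(s, s_star, depths=[1.0, 1.0, 1.0]))
```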

  12. Designing of a technological line in the context of controlling with the use of integration of the virtual controller with the mechatronics concept designer module of the PLM Siemens NX software

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2017-08-01

    This work examines the sequential control system of a technological line forming the final part of an internal transport system. The technological line was designed using a computer-aided approach run concurrently in two program environments. In the Mechatronics Concept Designer module of the PLM Siemens NX software, a 3D model of the technological line was developed and prepared for verification of the logic interrelations implemented in the control system. For this purpose, the sub-system of actuators and sensors was distinguished from the whole technological line, because their correct operation determines the correct operation of the whole system. The algorithms of operation of the planned line were implemented in a virtual controller application. Both program environments were then integrated using an OPC server, which enables data exchange between the considered systems. Data on the state of the object and data defining the manner and sequence of operation of the technological line are exchanged between the virtual controller and the 3D model of the technological line in real time.
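
    A hedged sketch of what the OPC-based exchange between a controller and a 3D plant model can look like, here using the open-source python-opcua client; the endpoint URL, node identifiers and the trivial control rule are hypothetical placeholders, not the configuration used in the paper.

```python
import time
from opcua import Client  # python-opcua (FreeOpcUa) client library

# Hypothetical OPC UA endpoint exposed by the server that bridges the two tools.
client = Client("opc.tcp://localhost:4840/freeopcua/server/")
client.connect()
try:
    # Hypothetical node IDs: one tag written by the 3D model, one read by it.
    part_sensor = client.get_node("ns=2;s=Line.Sensor.PartPresent")
    conveyor_run = client.get_node("ns=2;s=Line.Actuator.ConveyorRun")
    for _ in range(100):                       # simple polling loop of the control logic
        part_present = part_sensor.get_value()
        conveyor_run.set_value(bool(not part_present))  # stop the conveyor when a part arrives
        time.sleep(0.1)
finally:
    client.disconnect()
```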

  13. Control performance of a road vehicle with four independent single-wheel electric motors and steer-by-wire system

    NASA Astrophysics Data System (ADS)

    Weiskircher, Thomas; Müller, Steffen

    2012-01-01

    This article presents a motion controller for a road vehicle equipped with a steer-by-wire system and four independent electric rim-mounted drives. The motion controller separates the control law from the specific actuator setup by the usage of virtual global control variables acting on the vehicle centre of gravity. A control allocation algorithm distributes the virtual control variables to the available actuators. An approximation of the real actuator dynamics is used to analyse the performance of different motion controller types in the linear and nonlinear driving regions. In addition, a vehicle state observer consisting of a traction force observer and an unscented Kalman filter is discussed to analyse the control behaviour in the case of a real sensor setup.
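
    A minimal sketch of the control-allocation step described above: the virtual controls acting at the centre of gravity are distributed to the four wheel drives by a weighted pseudo-inverse. The effectiveness matrix, geometry and weights are illustrative assumptions, and actuator limits and dynamics are ignored.

```python
import numpy as np

track = 1.6  # track width (m), illustrative
# Effectiveness matrix B: total longitudinal force Fx and yaw moment Mz
# produced by the four wheel longitudinal forces [FL, FR, RL, RR].
B = np.array([[1.0, 1.0, 1.0, 1.0],
              [-track / 2, track / 2, -track / 2, track / 2]])

def allocate(v, W=np.eye(4)):
    """v = [Fx, Mz]; returns wheel forces u minimising u^T W u subject to B u = v."""
    Winv = np.linalg.inv(W)
    return Winv @ B.T @ np.linalg.inv(B @ Winv @ B.T) @ v

if __name__ == "__main__":
    print(allocate(np.array([2000.0, 300.0])))  # per-wheel forces for the requested Fx, Mz
```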

  14. Augmented reality system

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng

    2010-08-01

    In recent years, Augmented Reality (AR)[1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields, such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment and the arts. AR enhances the displayed output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software and imaging algorithms to give users a sense of something real, actual and existing. The imaging algorithms include a gray-level method, image binarization and white balancing, in order to achieve accurate image recognition and overcome the effects of lighting.
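
    An illustrative version of the three image-processing steps named above (gray-level conversion, binarization, white balance), written with OpenCV; the gray-world balance and Otsu thresholding are common stand-ins and not necessarily the exact methods used by the authors.

```python
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Return a binary image suitable for marker/object recognition."""
    # Gray-world white balance: scale each channel so its mean matches the global mean.
    means = frame_bgr.reshape(-1, 3).mean(axis=0)
    balanced = np.clip(frame_bgr * (means.mean() / means), 0, 255).astype(np.uint8)
    # Gray-level conversion.
    gray = cv2.cvtColor(balanced, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding keeps the binarization robust to overall lighting changes.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

if __name__ == "__main__":
    dummy = (np.random.default_rng(0).integers(0, 256, (120, 160, 3))).astype(np.uint8)
    print(preprocess(dummy).shape)
```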

  15. Wavelets and Elman Neural Networks for monitoring environmental variables

    NASA Astrophysics Data System (ADS)

    Ciarlini, Patrizia; Maniscalco, Umberto

    2008-11-01

    An application in cultural heritage is introduced. Wavelet decomposition and neural networks acting as virtual sensors are jointly used to simulate physical and chemical measurements at specific locations of a monument. Virtual sensors, suitably trained and tested, can substitute for real sensors in monitoring the quality of the monument surface, whereas the real ones would have to be installed for a long time and at high cost. Applying the wavelet decomposition to the environmental data series allows the underlying low-frequency temporal structure to be treated separately. Consequently, separate Elman Neural Networks can be trained for the high- and low-frequency components, improving network convergence during training and measurement accuracy during operation.
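
    A sketch of the band-splitting idea under stated assumptions: the target series is separated into low- and high-frequency parts with a discrete wavelet transform and one regressor is trained per band. An MLP regressor stands in here for the Elman networks of the paper, and the wavelet family and decomposition level are arbitrary choices.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def band_split(series, wavelet="db4", level=3):
    """Split a 1-D series into a low-frequency reconstruction and the residual."""
    series = np.asarray(series, dtype=float)
    coeffs = pywt.wavedec(series, wavelet, level=level)
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    low = pywt.waverec(approx_only, wavelet)[: len(series)]
    return low, series - low

def train_virtual_sensor(inputs, target):
    """inputs: (n_samples, n_features) environmental data; target: real sensor to emulate."""
    low, high = band_split(target)
    models = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(inputs, band)
              for band in (low, high)]
    return lambda x: sum(m.predict(x) for m in models)  # recombine the band predictions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 3))                       # placeholder environmental inputs
    y = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.normal(size=256)
    sensor = train_virtual_sensor(X, y)
    print(sensor(X[:5]))
```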

  16. Automatic 3D virtual scenes modeling for multisensors simulation

    NASA Astrophysics Data System (ADS)

    Latger, Jean; Le Goff, Alain; Cathala, Thierry; Larive, Mathieu

    2006-05-01

    SEDRIS, which stands for Synthetic Environment Data Representation and Interchange Specification, is a DoD/DMSO initiative to federate and make interoperable 3D mock-ups in the frame of virtual reality and simulation. This paper shows an original application of the SEDRIS concept to physical multi-sensor research simulation, whereas SEDRIS is more classically known for training simulation. CHORALE (simulated Optronic Acoustic Radar battlefield) is used by the French DGA/DCE (Directorate for Test and Evaluation of the French Ministry of Defense) to perform multi-sensor simulations. CHORALE enables the user to create virtual and realistic multi-spectral 3D scenes and to generate the physical signal received by a sensor, typically an IR sensor. In the scope of this CHORALE workshop, the French DGA has decided to introduce a new SEDRIS-based 3D terrain modeling tool that enables the automatic creation of 3D databases directly usable by the physical sensor simulation renderers of CHORALE. This AGETIM tool turns geographical source data (including GIS facilities) into meshed geometry enhanced with the sensor physical extensions, fitted to the ray-tracing rendering of CHORALE for the infrared, electromagnetic and acoustic spectra. The basic idea is to enhance the 2D source level directly with the physical data, rather than enhancing the 3D meshed level, which is more efficient (rapid database generation) and more reliable (the database can be regenerated many times, changing only some parameters). The paper concludes with the latest evolution of AGETIM in the scope of mission rehearsal for urban warfare using sensors. This evolution includes indoor modeling for the automatic generation of the inner parts of buildings.

  17. Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.

    PubMed

    Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian

    2015-10-05

    The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability in fringe spacing and in achieving PS without the need for mechanically moving parts. However, a small amount of detector or scatter noise could affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. Rewrapping the Fourier-reconstructed wavefronts resulted in phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method over the Fourier unwrapping method in the presence of noise is shown. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.

  18. Vexcel Spells Excellence for Earth and Space

    NASA Technical Reports Server (NTRS)

    2002-01-01

    With assistance from Stennis Space Center, Vexcel was able to strengthen the properties of its Apex Ground Station(TM), an affordable, end-to-end system that comes complete with a tracking antenna that permits coverage within an approximate 2,000-kilometer radius of its location, a high speed direct-to-disk data acquisition system that can download information from virtually any satellite, and data processing software for virtually all synthetic aperture radar and optical satellite sensors. Vexcel is using an Apex system linked to the Terra satellite to help scientists and NASA personnel measure land and ocean surface temperatures, detect fires, monitor ocean color and currents, produce global vegetation maps and data, and assess cloud characteristics and aerosol concentrations. In addition, Vexcel is providing NASA with close-range photogrammetry software for the International Space Station. The technology, commercially available as FotoG(TM), was developed with SBIR funding and support from NASA's Jet Propulsion Laboratory. Commercially, FotoG is used for demanding projects taken on by engineering firms, nuclear power plants, oil refineries, and process facilities. A version of Vexcel's close-range photo measurement system was also used to create virtual 3-D backdrops for a high-tech science fiction film.

  19. Towards open-source, low-cost haptics for surgery simulation.

    PubMed

    Suwelack, Stefan; Sander, Christian; Schill, Julian; Serf, Manuel; Danz, Marcel; Asfour, Tamim; Burger, Wolfgang; Dillmann, Rüdiger; Speidel, Stefanie

    2014-01-01

    In minimally invasive surgery (MIS), virtual reality (VR) training systems have become a promising education tool. However, the adoption of these systems in research and clinical settings is still limited by the high costs of dedicated haptics hardware for MIS. In this paper, we present ongoing research towards an open-source, low-cost haptic interface for MIS simulation. We demonstrate the basic mechanical design of the device, the sensor setup as well as its software integration.

  20. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks

    PubMed Central

    Valdivieso Caraguay, Ángel Leonardo; García Villalba, Luis Javier

    2017-01-01

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors. PMID:28362346

  1. Monitoring and Discovery for Self-Organized Network Management in Virtualized and Software Defined Networks.

    PubMed

    Caraguay, Ángel Leonardo Valdivieso; Villalba, Luis Javier García

    2017-03-31

    This paper presents the Monitoring and Discovery Framework of the Self-Organized Network Management in Virtualized and Software Defined Networks SELFNET project. This design takes into account the scalability and flexibility requirements needed by 5G infrastructures. In this context, the present framework focuses on gathering and storing the information (low-level metrics) related to physical and virtual devices, cloud environments, flow metrics, SDN traffic and sensors. Similarly, it provides the monitoring data as a generic information source in order to allow the correlation and aggregation tasks. Our design enables the collection and storing of information provided by all the underlying SELFNET sublayers, including the dynamically onboarded and instantiated SDN/NFV Apps, also known as SELFNET sensors.

  2. Design of virtual three-dimensional instruments for sound control

    NASA Astrophysics Data System (ADS)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real- time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co- articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object parameters. While the virtual instruments can be adapted to exploit many manipulation gestures, further work is required to reduce the need for technical expertise to realize adaptations. Better virtual object simulation techniques and faster sensor data acquisition will improve the performance of virtual instruments. The design environment which has been developed should prove useful as a (musical) instrument prototyping tool and as a tool for researching the optimal adaptation of machines to humans.

  3. Virtual Sensors in a Web 2.0 Digital Watershed

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Hill, D. J.; Marini, L.; Kooper, R.; Rodriguez, A.; Myers, J. D.

    2008-12-01

    The lack of rainfall data in many watersheds is one of the major barriers to modeling and studying many environmental and hydrological processes and to supporting decision making. There are simply not enough rain gages on the ground. To overcome this data scarcity issue, a Web 2.0 digital watershed has been developed at NCSA (National Center for Supercomputing Applications), where users can point-and-click on a web-based Google Maps interface and create new precipitation virtual sensors at any location within the same coverage region as a NEXRAD station. A set of scientific workflows is implemented to perform spatial, temporal and thematic transformations on the near-real-time NEXRAD Level II data. Such workflows can be triggered by the users' actions and generate either rainfall-rate or rainfall-accumulation streaming data at a user-specified time interval. We will discuss some underlying components of this digital watershed, which consists of a semantic content management middleware, a semantically enhanced streaming data toolkit, virtual sensor management functionality, and a RESTful (REpresentational State Transfer) web service that can trigger the workflow execution. Such a loosely coupled architecture presents a generic framework for constructing a Web 2.0 style digital watershed. An implementation of this architecture for the Upper Illinois River Basin will be presented. We will also discuss the implications of the virtual sensor concept for the broad environmental observatory community and how such a concept will help us move towards a participatory digital watershed.
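
    A sketch of what such a precipitation virtual sensor might compute at its chosen location, assuming reflectivity (dBZ) samples extracted from the radar data and the standard NEXRAD Z-R relation Z = 300·R^1.4; the streaming and workflow machinery of the paper is omitted and the sample values are placeholders.

```python
import numpy as np

def rain_rate_mm_per_h(dbz):
    """Convert radar reflectivity (dBZ) to rain rate via Z = 300 * R**1.4."""
    z = 10.0 ** (dbz / 10.0)               # dBZ -> linear reflectivity factor
    return (z / 300.0) ** (1.0 / 1.4)

def accumulate(dbz_samples, sample_interval_s):
    """Rainfall accumulation (mm) over the user-specified interval."""
    rates = np.array([rain_rate_mm_per_h(d) for d in dbz_samples])
    return rates.sum() * sample_interval_s / 3600.0

if __name__ == "__main__":
    print(accumulate([35, 40, 42, 38], sample_interval_s=300))  # mm over 20 minutes
```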

  4. Transmission of olfactory information for tele-medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, P.E.; Kouzes, R.T.; Kangas, L.J.

    1995-01-01

    While the inclusion of visual, aural, and tactile senses into virtual reality systems is widespread, the sense of smell has been largely ignored. We have developed a chemical vapor sensing system for the automated identification of chemical vapors (smells). Our prototype chemical vapor sensing system is composed of an array of tin-oxide vapor sensors coupled to an artificial neural network. The artificial neural network is used in the recognition of different smells and is constructed as a standard multilayer feed-forward network trained with the backpropagation algorithm. When a chemical sensor array is combined with an automated pattern identifier, it is often referred to as an electronic or artificial nose. Applications of electronic noses include monitoring food and beverage odors, automated flavor control, analyzing fuel mixtures, and quantifying individual components in gas mixtures. Our prototype electronic nose has been used to identify odors from common household chemicals. An electronic nose will potentially be a key component in an olfactory input to a telepresent virtual reality system. The identified odor would be electronically transmitted from the electronic nose at one site to an odor generation system at another site. This combination would function as a mechanism for transmitting olfactory information for telepresence. This would have direct applicability in the area of telemedicine, since the sense of smell is an important sense to the physician and surgeon. In this paper, our chemical sensing system (electronic nose) is presented along with a proposed method for regenerating the transmitted olfactory information.
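
    A hedged sketch of the pattern-identification stage: a feed-forward network trained on response vectors from a tin-oxide sensor array. The data here are random placeholders standing in for real odour measurements, and scikit-learn's MLP stands in for the backpropagation network of the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # 8-element sensor array, 200 "sniffs" (placeholder)
y = rng.integers(0, 4, size=200)         # 4 household-chemical classes (dummy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X_train, y_train)
print("held-out accuracy on the placeholder data:", clf.score(X_test, y_test))
```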

  5. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.

  6. Wireless sensor systems for sense/decide/act/communicate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Nina M.; Cushner, Adam; Baker, James A.

    2003-12-01

    After 9/11, the United States (U.S.) was suddenly pushed into challenging situations it could no longer ignore as a simple spectator. The War on Terrorism (WoT) was suddenly ignited, and no one knows when this war will end. While the government is exploring many existing and potential technologies, the area of wireless sensor networks (WSN) has emerged as a foundation for establishing future national security. Unlike other technologies, WSN could provide the virtual presence capabilities needed for precision awareness and response in military, intelligence, and homeland security applications. The Advanced Concepts Group (ACG) vision of a Sense/Decide/Act/Communicate (SDAC) sensor system is an instantiation of the WSN concept that takes a 'systems of systems' view. Each sensing node will exhibit the ability to: Sense the environment around it, Decide as a collective what the situation of the environment is, Act in an intelligent and coordinated manner in response to this situational determination, and Communicate its actions amongst the other nodes and to a human command. This LDRD report provides a review of the research and development done to bring the SDAC vision closer to reality.

  7. Novel Virtual Environment for Alternative Treatment of Children with Cerebral Palsy

    PubMed Central

    de Oliveira, Juliana M.; Fernandes, Rafael Carneiro G.; Pinto, Cristtiano S.; Pinheiro, Plácido R.; Ribeiro, Sidarta

    2016-01-01

    Cerebral palsy is a severe condition usually caused by decreased brain oxygenation during pregnancy, at birth or soon after birth. Conventional treatments for cerebral palsy are often tiresome and expensive, leading patients to quit treatment. In this paper, we describe a virtual environment for patients to engage in a playful therapeutic game for neuropsychomotor rehabilitation, based on the experience of the occupational therapy program of the Nucleus for Integrated Medical Assistance (NAMI) at the University of Fortaleza, Brazil. Integration between patient and virtual environment occurs through the hand motion sensor “Leap Motion,” plus the electroencephalographic sensor “MindWave,” responsible for measuring attention levels during task execution. To evaluate the virtual environment, eight clinical experts on cerebral palsy were subjected to a questionnaire regarding the potential of the experimental virtual environment to promote cognitive and motor rehabilitation, as well as the potential of the treatment to enhance risks and/or negatively influence the patient's development. Based on the very positive appraisal of the experts, we propose that the experimental virtual environment is a promising alternative tool for the rehabilitation of children with cerebral palsy. PMID:27403154

  8. The Benefits of Soft Sensor and Multi-Rate Control for the Implementation of Wireless Networked Control Systems

    PubMed Central

    Mansano, Raul K.; Godoy, Eduardo P.; Porto, Arthur J. V.

    2014-01-01

    Recent advances in wireless networking technology and the proliferation of industrial wireless sensors have led to an increasing interest in using wireless networks for closed loop control. The main advantages of Wireless Networked Control Systems (WNCSs) are the reconfigurability, easy commissioning and the possibility of installation in places where cabling is impossible. Despite these advantages, there are two main problems which must be considered for practical implementations of WNCSs. One problem is the sampling period constraint of industrial wireless sensors. This problem is related to the energy cost of the wireless transmission, since the power supply is limited, which precludes the use of these sensors in several closed-loop controls. The other technological concern in WNCS is the energy efficiency of the devices. As the sensors are powered by batteries, the lowest possible consumption is required to extend battery lifetime. As a result, there is a compromise between the sensor sampling period, the sensor battery lifetime and the required control performance for the WNCS. This paper develops a model-based soft sensor to overcome these problems and enable practical implementations of WNCSs. The goal of the soft sensor is generating virtual data allowing an actuation on the process faster than the maximum sampling period available for the wireless sensor. Experimental results have shown the soft sensor is a solution to the sampling period constraint problem of wireless sensors in control applications, enabling the application of industrial wireless sensors in WNCSs. Additionally, our results demonstrated the soft sensor potential for implementing energy efficient WNCS through the battery saving of industrial wireless sensors. PMID:25529208

  9. Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions

    NASA Astrophysics Data System (ADS)

    Rasshofer, R. H.; Gresser, K.

    2005-05-01

    Automotive radar and lidar sensors represent key components for next-generation driver assistance functions (Jones, 2001). Today, their use is limited to comfort applications in premium-segment vehicles, although an evolution towards more safety-oriented functions is taking place. Radar sensors available on the market today suffer from low angular resolution and poor target detection at medium ranges (30 to 60 m) over azimuth angles larger than ±30°. In contrast, lidar sensors show large sensitivity to environmental influences (e.g. snow, fog, dirt). Both sensor technologies today have a rather high cost level, forbidding their widespread usage in mass markets. A common approach to overcoming individual sensor drawbacks is the employment of data fusion techniques (Bar-Shalom, 2001). Raw data fusion requires a common, standardized data interface to easily integrate a variety of asynchronous sensor data into a fusion network. Moreover, next-generation sensors should be able to adapt dynamically to new situations and should have the ability to work in cooperative sensor environments. As vehicular function development today is being shifted more and more towards virtual prototyping, mathematical sensor models should be available. These models should take into account the sensor's functional principle as well as all typical measurement errors generated by the sensor.

  10. Terrain Commander: a next-generation remote surveillance system

    NASA Astrophysics Data System (ADS)

    Finneral, Henry J.

    2003-09-01

    Terrain Commander is a fully automated forward observation post that provides the most advanced capability in surveillance and remote situational awareness. The Terrain Commander system was selected by the Australian Government for its NINOX Phase IIB Unattended Ground Sensor Program with the first systems delivered in August of 2002. Terrain Commander offers next generation target detection using multi-spectral peripheral sensors coupled with autonomous day/night image capture and processing. Subsequent intelligence is sent back through satellite communications with unlimited range to a highly sophisticated central monitoring station. The system can "stakeout" remote locations clandestinely for 24 hours a day for months at a time. With its fully integrated SATCOM system, almost any site in the world can be monitored from virtually any other location in the world. Terrain Commander automatically detects and discriminates intruders by precisely cueing its advanced EO subsystem. The system provides target detection capabilities with minimal nuisance alarms combined with the positive visual identification that authorities demand before committing a response. Terrain Commander uses an advanced beamforming acoustic sensor and a distributed array of seismic, magnetic and passive infrared sensors to detect, capture images and accurately track vehicles and personnel. Terrain Commander has a number of emerging military and non-military applications including border control, physical security, homeland defense, force protection and intelligence gathering. This paper reviews the development, capabilities and mission applications of the Terrain Commander system.

  11. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    DTIC Science & Technology

    2002-10-01

    [Only fragments of this report were captured, including the table-of-contents entries 7.2 "… for a Bent Square Pipe", 7.3 "One-Cell Model for Free Surface Flows", 7.4.2 "Filament Application for Fluid Heating in Microreactor" and 7.4.3 "Model …".]

  12. Integrating soft sensor systems using conductive thread

    NASA Astrophysics Data System (ADS)

    Teng, Lijun; Jeronimo, Karina; Wei, Tianqi; Nemitz, Markus P.; Lyu, Geng; Stokes, Adam A.

    2018-05-01

    We are part of a growing community of researchers who are developing a new class of soft machines. By using mechanically soft materials (MPa modulus) we can design systems which overcome the bulk-mechanical mismatches between soft biological systems and hard engineered components. To develop fully integrated soft machines—which include power, communications, and control sub-systems—the research community requires methods for interconnecting between soft and hard electronics. Sensors based upon eutectic gallium alloys in microfluidic channels can be used to measure normal and strain forces, but integrating these sensors into systems of heterogeneous Young’s modulus is difficult due to the complexity of finding a material which is electrically conductive, mechanically flexible, and stable over prolonged periods of time. Many existing gallium-based liquid alloy sensors are not mechanically or electrically robust and have poor stability over time. We present the design and fabrication of a high-resolution pressure-sensor soft system that can transduce normal force into a digital output. In this soft system, which is built on a monolithic silicone substrate, a galinstan-based microfluidic pressure sensor is integrated with a flexible printed circuit board. We used conductive thread as the interconnect and found that this method alleviates problems arising from the mechanical mismatch between conventional metal wires and soft or liquid materials. Conductive thread is low-cost, it is readily wetted by the liquid metal, it introduces little bending moment into the microfluidic channel, and it can be connected directly onto the copper bond-pads of the flexible printed circuit board. We built a bridge system to provide stable readings from the galinstan pressure sensor. This system gives linear measurement results between 500 and 3500 Pa of applied pressure. We anticipate that integrated systems of this type will find utility in soft-robotic systems as used for wearable technologies like virtual reality, or in soft medical devices such as exoskeletal rehabilitation robots.
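
    An illustrative least-squares calibration over the linear 500-3500 Pa range reported above, mapping bridge output to applied pressure; the readings below are made-up placeholders, not measured data from the cited system.

```python
import numpy as np

# Hypothetical calibration pairs: applied pressure (Pa) vs. bridge output (mV).
pressure_pa = np.array([500, 1000, 1500, 2000, 2500, 3000, 3500], dtype=float)
bridge_mv = np.array([1.2, 2.3, 3.5, 4.6, 5.8, 6.9, 8.1])

gain, offset = np.polyfit(bridge_mv, pressure_pa, deg=1)  # linear fit: P = gain*V + offset

def pressure_from_reading(mv):
    """Convert a bridge reading (mV) to pressure (Pa) within the calibrated range."""
    return gain * mv + offset

print(round(pressure_from_reading(5.0)), "Pa (approx.)")
```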

  13. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks.

    PubMed

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-06-06

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed, as long as the regular hexagons defined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.
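    The deployment pattern described above rests on tiling the ROI with regular hexagons of a theoretically optimal side length. The sketch below only enumerates the centres of such a hexagonal tiling for an assumed side length; it is not the DPA or LNSA algorithm itself.

    # Minimal sketch: centres of a seamless hexagonal tiling covering a
    # rectangular region of interest (ROI). The side length `s` is assumed
    # to come from an analysis like the paper's DPA; the value here is
    # purely illustrative.
    import math

    def hex_grid_centres(width, height, s):
        """Yield (x, y) centres of pointy-top hexagons of side s tiling the ROI."""
        dx = math.sqrt(3.0) * s      # horizontal spacing between centres
        dy = 1.5 * s                 # vertical spacing between rows
        row = 0
        y = 0.0
        while y <= height + dy:      # over-cover the ROI by one extra row
            offset = 0.0 if row % 2 == 0 else dx / 2.0
            x = offset
            while x <= width + dx:
                yield (x, y)
                x += dx
            y += dy
            row += 1

    if __name__ == "__main__":
        centres = list(hex_grid_centres(width=100.0, height=60.0, s=10.0))
        print(f"{len(centres)} candidate deployment points")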

  14. Node Scheduling Strategies for Achieving Full-View Area Coverage in Camera Sensor Networks

    PubMed Central

    Wu, Peng-Fei; Xiao, Fu; Sha, Chao; Huang, Hai-Ping; Wang, Ru-Chuan; Xiong, Nai-Xue

    2017-01-01

    Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed, as long as the regular hexagons defined by the virtual grid are seamlessly stitched. Then we present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions. PMID:28587304

  15. Electro-optical imaging systems integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wight, R.

    1987-01-01

    Since the advent of high resolution, high data rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third generation (TLBR) was designed and two units delivered to rapidly produce high quality wet process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a "Scan FIX" capability which corrects for scanner fault errors and a "Scan LOC" system which provides for complete phase synchronism isolation between scanner and digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for Reconnaissance/Tactical applications.

  16. Magnetosensitive e-skins with directional perception for augmented reality

    PubMed Central

    Cañón Bermúdez, Gilbert Santiago; Karnaushenko, Dmitriy D.; Karnaushenko, Daniil; Lebanov, Ana; Bischoff, Lothar; Kaltenbrunner, Martin; Fassbender, Jürgen; Schmidt, Oliver G.; Makarov, Denys

    2018-01-01

    Electronic skins equipped with artificial receptors are able to extend our perception beyond the modalities that have naturally evolved. These synthetic receptors offer complementary information on our surroundings and endow us with novel means of manipulating physical or even virtual objects. We realize highly compliant magnetosensitive skins with directional perception that enable magnetic cognition, body position tracking, and touchless object manipulation. Transfer printing of eight high-performance spin valve sensors arranged into two Wheatstone bridges onto 1.7-μm-thick polyimide foils ensures mechanical imperceptibility. This represents a new class of interactive devices extracting information from the surroundings through magnetic tags. We demonstrate this concept in augmented reality systems with virtual knob-turning functions and the operation of virtual dialing pads, based on the interaction with magnetic fields. This technology will enable a cornucopia of applications from navigation, motion tracking in robotics, regenerative medicine, and sports and gaming to interaction in supplemented reality. PMID:29376121

  17. Encountered-Type Haptic Interface for Representation of Shape and Rigidity of 3D Virtual Objects.

    PubMed

    Takizawa, Naoki; Yano, Hiroaki; Iwata, Hiroo; Oshiro, Yukio; Ohkohchi, Nobuhiro

    2017-01-01

    This paper describes the development of an encountered-type haptic interface that can generate the physical characteristics, such as shape and rigidity, of three-dimensional (3D) virtual objects using an array of newly developed non-expandable balloons. To alter the rigidity of each non-expandable balloon, the volume of air in it is controlled through a linear actuator and a pressure sensor based on Hooke's law. Furthermore, to change the volume of each balloon, its exposed surface area is controlled by using another linear actuator with a trumpet-shaped tube. A position control mechanism is constructed to display virtual objects using the balloons. The 3D position of each balloon is controlled using a flexible tube and a string. The performance of the system is tested and the results confirm the effectiveness of the proposed principle and interface.

  18. Ultrasonic imaging of material flaws exploiting multipath information

    NASA Astrophysics Data System (ADS)

    Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.

    2011-05-01

    In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.
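    The aperture-extension idea above can be pictured by mirroring the physical array about a reflecting boundary to obtain virtual sensor positions. The sketch below does just that for an assumed planar back wall; it is a geometric illustration only, not the paper's imaging algorithm.

    # Minimal sketch: virtual sensor positions obtained by mirroring a
    # physical ultrasonic array about a planar reflecting boundary at
    # x = wall_x. This only illustrates the aperture extension idea; it is
    # not the imaging algorithm used in the paper.
    import numpy as np

    def mirror_about_plane(positions, wall_x):
        """Mirror (N, 2) sensor coordinates about the vertical plane x = wall_x."""
        mirrored = positions.copy()
        mirrored[:, 0] = 2.0 * wall_x - mirrored[:, 0]
        return mirrored

    if __name__ == "__main__":
        physical = np.array([[0.00, 0.00], [0.01, 0.00], [0.02, 0.00]])  # metres
        virtual = mirror_about_plane(physical, wall_x=0.05)
        aperture = np.ptp(np.vstack([physical, virtual])[:, 0])
        print("virtual sensors:\n", virtual)
        print("combined aperture along x:", aperture, "m")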

  19. Virtualization of Event Sources in Wireless Sensor Networks for the Internet of Things

    PubMed Central

    Martínez, Néstor Lucas; Martínez, José-Fernán; Díaz, Vicente Hernández

    2014-01-01

    Wireless Sensor Networks (WSNs) are generally used to collect information from the environment. The gathered data are delivered mainly to sinks or gateways that become the endpoints where applications can retrieve and process such data. However, applications would also expect from a WSN an event-driven operational model, so that they can be notified whenever specific environmental changes occur, instead of continuously analyzing the data provided periodically. In either operational model, WSNs represent a collection of interconnected objects, as outlined by the Internet of Things. Additionally, in order to fulfill the Internet of Things principles, Wireless Sensor Networks must have a virtual representation that allows indirect access to their resources, a model that should also include the virtualization of event sources in a WSN. Thus, in this paper a model for a virtual representation of event sources in a WSN is proposed. They are modeled as internet resources that are accessible by any internet application, following an Internet of Things approach. The model has been tested in a real implementation where a WSN has been deployed in an open neighborhood environment. Different event sources have been identified in the proposed scenario, and they have been represented following the proposed model. PMID:25470489
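    In the spirit of the virtualization described above, a WSN event source can be pictured as a small web resource that internet applications query. The sketch below serves one hypothetical event source as JSON using the Python standard library; the resource name and fields are assumptions, not the paper's model.

    # Minimal sketch: exposing a WSN event source as an internet resource,
    # in the spirit of the virtualization described above. The resource
    # name and fields are hypothetical, not the paper's model.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    EVENT_SOURCE = {
        "id": "temperature-above-threshold",
        "node": "wsn-node-17",
        "condition": "temperature > 30 C",
        "last_fired": None,
    }

    class EventSourceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/event-sources/" + EVENT_SOURCE["id"]:
                body = json.dumps(EVENT_SOURCE).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Serve the virtual event source on http://localhost:8000
        HTTPServer(("", 8000), EventSourceHandler).serve_forever()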

  20. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks.

  1. Performance Analysis of the Microsoft Kinect Sensor for 2D Simultaneous Localization and Mapping (SLAM) Techniques

    PubMed Central

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-01-01

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open source SLAM algorithms. The experiments were conducted in two different environments; a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters are able to improve the map accuracy. However, the limited field of view and range of Kinect's depth sensor often causes the map to be inaccurate, especially in featureless areas, therefore the Kinect sensor is not a direct replacement for a laser scanner, but rather offers a feasible alternative for 2D SLAM tasks. PMID:25490595

  2. A Virtual Upgrade Validation Method for Software-Reliant Systems

    DTIC Science & Technology

    2012-06-01

    [No abstract available; the DTIC record contains only fragmentary table-of-contents and reference entries, e.g., "Root Cause Areas of System-Level Faults", "Application Pattern Modeling Strategies" and "Sensor/Signal Fusion".]

  3. Sorption cryogenic refrigeration - Status and future

    NASA Technical Reports Server (NTRS)

    Jones, Jack A.

    1988-01-01

    The operation principles of sorption cryogenic refrigeration are discussed. Sorption refrigerators have virtually no wear-related moving parts, have negligible vibration, and offer extremely long life (at least ten years), making it possible to obtain efficient, long life and low vibration cooling to as low as 7 K for cryogenic sensors. The physisorption and chemisorption systems recommended for various cooling ranges down to 7 K are described in detail. For long-life cooling at 4-5 K temperatures, a hybrid chemisorption-mechanical refrigeration system is recommended.

  4. Monitoring and Control Interface Based on Virtual Sensors

    PubMed Central

    Escobar, Ricardo F.; Adam-Medina, Manuel; García-Beltrán, Carlos D.; Olivares-Peregrino, Víctor H.; Juárez-Romero, David; Guerrero-Ramírez, Gerardo V.

    2014-01-01

    In this article, a toolbox based on a monitoring and control interface (MCI) is presented and applied to a heat exchanger. The MCI was programmed to perform sensor fault detection and isolation and to provide fault tolerance using virtual sensors. The virtual sensors were designed from model-based high-gain observers. To develop the control task, different kinds of control laws were included in the monitoring and control interface. These control laws are PID, MPC and a non-linear model-based control law. The MCI helps to keep the heat exchanger in operation even if an outlet temperature sensor fault occurs; in the case of an outlet temperature sensor failure, the MCI displays an alarm. The monitoring and control interface is used as a practical tool to support electronic engineering students with heat transfer and control concepts applied to a double-pipe heat exchanger pilot plant. The method aims to teach the students through the observation and manipulation of the main variables of the process and through interaction with the MCI developed in LabVIEW. The MCI provides the electronic engineering students with knowledge of heat exchanger behavior, since the interface includes a thermodynamic model that approximates the temperatures and the physical properties of the fluid (density and heat capacity). An advantage of the interface is the easy manipulation of the actuator for automatic or manual operation. Another advantage of the monitoring and control interface is that all algorithms can be manipulated and modified by the users. PMID:25365462
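    A minimal sketch of a model-based virtual sensor of the kind described above is given below: a plain Luenberger-type observer (not the paper's high-gain design) estimates the outlet temperature and falls back to the model when the physical sensor fails. All model constants are illustrative assumptions.

    # Minimal sketch: a model-based virtual sensor for a heat-exchanger
    # outlet temperature. The first-order model and gains are illustrative
    # assumptions; when the measurement is NaN (sensor fault) the observer
    # runs open loop and acts as the virtual sensor.
    import numpy as np

    def simulate_virtual_sensor(t_in, t_meas, a=0.05, gain=0.5, dt=1.0, t0=20.0):
        """Estimate outlet temperature; fall back to the model when t_meas is NaN."""
        t_hat = t0
        estimates = []
        for tin, tm in zip(t_in, t_meas):
            innovation = 0.0 if np.isnan(tm) else gain * (tm - t_hat)
            t_hat += dt * (a * (tin - t_hat)) + innovation
            estimates.append(t_hat)
        return np.array(estimates)

    if __name__ == "__main__":
        n = 200
        t_in = np.full(n, 80.0)                      # constant hot-side inlet
        true_out = 80.0 - 60.0 * np.exp(-0.05 * np.arange(n))
        t_meas = true_out + np.random.normal(0, 0.2, n)
        t_meas[100:] = np.nan                        # simulated sensor failure
        est = simulate_virtual_sensor(t_in, t_meas)
        print("estimate at failure onset:", round(float(est[100]), 2))
        print("estimate at end of run:   ", round(float(est[-1]), 2))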

  5. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than minimizing the residual between model-calculated and measured arrival times. The results of numerical examples and in situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
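    The core idea above, searching for the common intersection of sensor-pair hyperboloids rather than fitting absolute arrival residuals, can be sketched as minimizing a pairwise distance-difference misfit. The code below is a generic TDOA-style illustration under an assumed constant velocity, not the VFOM objective itself.

    # Minimal sketch: locate a single point source from arrival times by
    # minimizing the mismatch of pairwise distance differences (the
    # hyperboloid-intersection idea referred to above). Generic TDOA-style
    # illustration with an assumed constant velocity; not the VFOM itself.
    import numpy as np
    from scipy.optimize import minimize

    def pairwise_misfit(x, sensors, arrivals, v):
        """Sum of squared errors in distance differences over all sensor pairs."""
        d = np.linalg.norm(sensors - x, axis=1)
        err = 0.0
        n = len(sensors)
        for i in range(n):
            for j in range(i + 1, n):
                err += ((d[i] - d[j]) - v * (arrivals[i] - arrivals[j])) ** 2
        return err

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        sensors = rng.uniform(0, 100, size=(6, 3))       # sensor coordinates (m)
        source = np.array([40.0, 55.0, 20.0])
        v = 5000.0                                        # assumed wave speed (m/s)
        arrivals = np.linalg.norm(sensors - source, axis=1) / v
        arrivals[0] += 0.002                              # one large picking error
        res = minimize(pairwise_misfit, x0=np.array([50.0, 50.0, 50.0]),
                       args=(sensors, arrivals, v), method="Nelder-Mead")
        print("estimated source:", np.round(res.x, 1))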

  6. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50m-500m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as microthruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for Heliophysics dual spacecraft PFF imaging mission concept.

  7. Spacecraft Alignment Determination and Control for Dual Spacecraft Precision Formation Flying

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Novo-Gradac, Anne-Marie; Shah, Neerav

    2017-01-01

    Many proposed formation flying missions seek to advance the state of the art in spacecraft science imaging by utilizing precision dual spacecraft formation flying to enable a virtual space telescope. Using precision dual spacecraft alignment, very long focal lengths can be achieved by locating the optics on one spacecraft and the detector on the other. Proposed science missions include astrophysics concepts with spacecraft separations from 1000 km to 25,000 km, such as the Milli-Arc-Second Structure Imager (MASSIM) and the New Worlds Observer, and Heliophysics concepts for solar coronagraphs and X-ray imaging with smaller separations (50 m-500 m). All of these proposed missions require advances in guidance, navigation, and control (GNC) for precision formation flying. In particular, very precise astrometric alignment control and estimation is required for precise inertial pointing of the virtual space telescope to enable science imaging orders of magnitude better than can be achieved with conventional single spacecraft instruments. This work develops design architectures, algorithms, and performance analysis of proposed GNC systems for precision dual spacecraft astrometric alignment. These systems employ a variety of GNC sensors and actuators, including laser-based alignment and ranging systems, optical imaging sensors (e.g. guide star telescope), inertial measurement units (IMU), as well as micro-thruster and precision stabilized platforms. A comprehensive GNC performance analysis is given for a Heliophysics dual spacecraft PFF imaging mission concept.

  8. The Virtual Environment for Rapid Prototyping of the Intelligent Environment

    PubMed Central

    Bouzouane, Abdenour; Gaboury, Sébastien

    2017-01-01

    Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants’ behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs. PMID:29112175

  9. The Virtual Environment for Rapid Prototyping of the Intelligent Environment.

    PubMed

    Francillette, Yannick; Boucher, Eric; Bouzouane, Abdenour; Gaboury, Sébastien

    2017-11-07

    Advances in domains such as sensor networks and electronic and ambient intelligence have allowed us to create intelligent environments (IEs). However, research in IE is being held back by the fact that researchers face major difficulties, such as a lack of resources for their experiments. Indeed, they cannot easily build IEs to evaluate their approaches. This is mainly because of economic and logistical issues. In this paper, we propose a simulator to build virtual IEs. Simulators are a good alternative to physical IEs because they are inexpensive, and experiments can be conducted easily. Our simulator is open source and it provides users with a set of virtual sensors that simulates the behavior of real sensors. This simulator gives the user the capacity to build their own environment, providing a model to edit inhabitants' behavior and an interactive mode. In this mode, the user can directly act upon IE objects. This simulator gathers data generated by the interactions in order to produce datasets. These datasets can be used by scientists to evaluate several approaches in IEs.
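    A minimal sketch of the kind of virtual sensor such a simulator provides is shown below: each virtual sensor mimics a real sensor's baseline and noise, and a short scripted scenario produces a small labelled dataset. The class, names and parameters are illustrative assumptions, not the simulator's API.

    # Minimal sketch: a virtual sensor that mimics a real sensor's sampling
    # and noise, plus a tiny loop that gathers a dataset, in the spirit of
    # the simulator described above. Names and parameters are illustrative.
    import random

    class VirtualSensor:
        def __init__(self, name, baseline, noise_std):
            self.name = name
            self.baseline = baseline
            self.noise_std = noise_std

        def read(self, stimulus=0.0):
            """Return one simulated reading; `stimulus` models an inhabitant's action."""
            return self.baseline + stimulus + random.gauss(0.0, self.noise_std)

    if __name__ == "__main__":
        motion = VirtualSensor("motion_pir", baseline=0.0, noise_std=0.05)
        temperature = VirtualSensor("kitchen_temp", baseline=21.0, noise_std=0.1)
        dataset = []
        for t in range(10):
            stim = 1.0 if 3 <= t <= 5 else 0.0   # scripted inhabitant activity
            dataset.append({"t": t,
                            "motion_pir": motion.read(stim),
                            "kitchen_temp": temperature.read()})
        for row in dataset:
            print(row)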

  10. Emergency Response Virtual Environment for Safe Schools

    NASA Technical Reports Server (NTRS)

    Wasfy, Ayman; Walker, Teresa

    2008-01-01

    An intelligent emergency response virtual environment (ERVE) that provides emergency first responders, response planners, and managers with situational awareness as well as training and support for safe schools is presented. ERVE incorporates an intelligent agent facility for guiding and assisting the user in the context of the emergency response operations. Response information folders capture key information about the school. The system enables interactive 3D visualization of schools and academic campuses, including the terrain and the buildings' exteriors and interiors in an easy-to-use Web-based interface. ERVE incorporates live camera and sensor feeds and can be integrated with other simulations such as chemical plume simulation. The system is integrated with a Geographical Information System (GIS) to enable situational awareness of emergency events and assessment of their effect on schools in a geographic area. ERVE can also be integrated with emergency text messaging notification systems. Using ERVE, it is now possible to address safe schools' emergency management needs with a scalable, seamlessly integrated and fully interactive intelligent and visually compelling solution.

  11. A strategy for computer-assisted mental practice in stroke rehabilitation.

    PubMed

    Gaggioli, Andrea; Meneghini, Andrea; Morganti, Francesca; Alcaniz, Mariano; Riva, Giuseppe

    2006-12-01

    To investigate the technical and clinical viability of using computer-facilitated mental practice in the rehabilitation of upper-limb hemiparesis following stroke. A single-case study. Academic-affiliated rehabilitation center. A 46-year-old man with stable motor deficit of the upper right limb following subcortical ischemic stroke. Three computer-enhanced mental practice sessions per week at the rehabilitation center, in addition to usual physical therapy. A custom-made virtual reality system equipped with arm-tracking sensors was used to guide mental practice. The system was designed to superimpose over the (unseen) paretic arm a virtual reconstruction of the movement registered from the nonparetic arm. The laboratory intervention was followed by a 1-month home-rehabilitation program, making use of a portable display device. Pretreatment and posttreatment clinical assessment measures were the upper-extremity scale of the Fugl-Meyer Assessment of Sensorimotor Impairment and the Action Research Arm Test. Performance of the affected arm was evaluated using the healthy arm as the control condition. The patient's paretic limb improved after the first phase of intervention, with modest increases after home rehabilitation, as indicated by functional assessment scores and sensors data. Results suggest that technology-supported mental training is a feasible and potentially effective approach for improving motor skills after stroke.

  12. Comparison of 3D dynamic virtual model to link segment model for estimation of net L4/L5 reaction moments during lifting.

    PubMed

    Abdoli-Eramaki, Mohammad; Stevenson, Joan M; Agnew, Michael J; Kamalzadeh, Amin

    2009-04-01

    The purpose of this study was to validate a 3D dynamic virtual model for lifting tasks against a validated link segment model (LSM). A face validation study was conducted by collecting x, y, z coordinate data and using them in both virtual and LSM models. An upper body virtual model was needed to calculate the 3D torques about human joints for use in simulated lifting styles and to estimate the effect of external mechanical devices on the human body. Firstly, the model had to be validated to be sure it provided accurate estimates of 3D moments in comparison to a previously validated LSM. Three synchronised Fastrak units with nine sensors were used to record data from one male subject who completed dynamic box lifting under 27 different load conditions (box weights (3), lifting techniques (3) and rotations (3)). The external moments about three axes of L4/L5 were compared for both models. A pressure switch on the box was used to denote the start and end of the lift. An excellent agreement [image omitted] was found between the two models for dynamic lifting tasks, especially for larger moments in flexion and extension. This virtual model was considered valid for use in a complete simulation of the upper body skeletal system. This biomechanical virtual model of the musculoskeletal system gives researchers and practitioners a better tool to study the causes of low back pain (LBP) and the effect of intervention strategies, by permitting the researcher to see and control a virtual subject's motions.

  13. [Odor sensing system and olfactory display].

    PubMed

    Nakamoto, Takamichi

    2014-01-01

    In this review, an odor sensing system and an olfactory display are introduced for people in the field of pharmacy. An odor sensing system consists of an array of sensors with partially overlapping specificities and a pattern recognition technique. One example of an odor sensing system is a halitosis sensor, which quantifies the mixture composition of three volatile sulfide compounds. The halitosis sensor was realized using a preconcentrator to raise sensitivity and an electrochemical sensor array to suppress the influence of humidity. The partial least squares (PLS) method was used to quantify the mixture composition, and the experiments show that sufficient accuracy was obtained. Moreover, the olfactory display, which presents scents to human noses, is explained. A multi-component olfactory display enables the presentation of a variety of smells. Two types of multi-component olfactory display are described. The first uses many solenoid valves with high-speed switching, where the valve ON frequency determines the concentration of the corresponding odor component. The second consists of miniaturized liquid pumps and a surface acoustic wave (SAW) atomizer, enabling a wearable olfactory display without smell persistence. Finally, an application of the olfactory display is demonstrated: a virtual ice cream shop with scents was created as a piece of interactive art, in which people can enjoy harmony among vision, audition and olfaction. In conclusion, both the odor sensing system and the olfactory display can contribute to the field of human health care.
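    As a rough illustration of the PLS quantification step described above, the sketch below calibrates a partial least squares model on synthetic sensor-array responses for a three-component mixture; the data are synthetic stand-ins, not measurements from the halitosis sensor.

    # Minimal sketch: quantifying a three-component gas mixture from an
    # electrochemical sensor array with partial least squares, as the
    # halitosis example above describes. Data here are synthetic; a real
    # calibration would use measured sensor responses.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n_samples, n_sensors, n_components = 60, 8, 3

    # Assumed (unknown) linear mixing from concentrations to sensor responses.
    mixing = rng.uniform(0.5, 2.0, size=(n_components, n_sensors))
    concentrations = rng.uniform(0.0, 1.0, size=(n_samples, n_components))
    responses = concentrations @ mixing + rng.normal(0, 0.02, (n_samples, n_sensors))

    pls = PLSRegression(n_components=3)
    pls.fit(responses[:50], concentrations[:50])          # calibration set
    predicted = pls.predict(responses[50:])               # validation set
    rmse = np.sqrt(np.mean((predicted - concentrations[50:]) ** 2))
    print("validation RMSE:", round(float(rmse), 4))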

  14. Cordless hand-held optical 3D sensor

    NASA Astrophysics Data System (ADS)

    Munkelt, Christoph; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Schmidt, Ingo; Notni, Gunther

    2007-07-01

    A new mobile optical 3D measurement system using a phase-correlation-based fringe projection technique will be presented. The sensor consists of a digital projection unit and two cameras in a stereo arrangement, both battery powered. Data transfer to a base station is done via WLAN. This gives the possibility to use the system in complicated, remote measurement situations, which are typical in archaeology and architecture. In the measurement procedure the sensor is hand-held by the user, illuminating the object with a sequence of fewer than 10 fringe patterns within less than 200 ms. This short sequence duration was achieved by a new approach which combines the epipolar constraint with robust phase correlation, utilizing a pre-calibrated sensor head containing two cameras and a digital fringe projector. Furthermore, the system can be utilized to acquire the all-around shape of objects by using the phasogrammetric approach with virtual landmarks introduced by the authors [1, 2]. This way no matching procedures or markers are necessary for the registration of multiple views, which makes the system very flexible in accomplishing different measurement tasks. The realized measurement field is approximately 100 mm up to 400 mm in diameter. The mobile character makes the measurement system useful for a wide range of applications in arts, architecture, archaeology and criminology, which will be shown in the paper.

  15. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    PubMed Central

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-01-01

    Autonomous navigation of micro-UAVs is typically based on the integration of low cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as the ones relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post processing phase) exploiting formation flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter ones, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information. PMID:27999318
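    The fusion step described above can be illustrated with a single textbook extended Kalman filter measurement update in which the DGPS/vision-derived quantity acts as one extra virtual observation; the toy two-state example below is an assumption for illustration and differs from the paper's actual state and measurement models.

    # Minimal sketch: a single extended-Kalman-filter measurement update,
    # treating the DGPS/vision-derived quantity as one extra "virtual
    # sensor" observation of the state. Generic textbook update; the actual
    # state vector and measurement model in the paper differ.
    import numpy as np

    def ekf_update(x, P, z, h, H, R):
        """Standard EKF update: state x, covariance P, measurement z,
        measurement function h(x), Jacobian H, measurement noise R."""
        y = z - h(x)                       # innovation
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new

    if __name__ == "__main__":
        # Toy 2-state example: heading and gyro bias, with heading observed
        # by the virtual DGPS/vision sensor.
        x = np.array([0.10, 0.01])
        P = np.diag([0.05, 0.001])
        H = np.array([[1.0, 0.0]])
        h = lambda s: H @ s
        z = np.array([0.18])
        R = np.array([[0.02]])
        x, P = ekf_update(x, P, z, h, H, R)
        print("updated state:", np.round(x, 3))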

  16. Intelligent Elements for ISHM

    NASA Technical Reports Server (NTRS)

    Schmalzel, John L.; Morris, Jon; Turowski, Mark; Figueroa, Fernando; Oostdyk, Rebecca

    2008-01-01

    There are a number of architecture models for implementing Integrated Systems Health Management (ISHM) capabilities. For example, approaches based on the OSA-CBM and OSA-EAI models, or specific architectures developed in response to local needs. NASA's John C. Stennis Space Center (SSC) has developed one such version of an extensible architecture in support of rocket engine testing that integrates a palette of functions in order to achieve an ISHM capability. Among the functional capabilities that are supported by the framework are: prognostic models, anomaly detection, a data base of supporting health information, root cause analysis, intelligent elements, and integrated awareness. This paper focuses on the role that intelligent elements can play in ISHM architectures. We define an intelligent element as a smart element with sufficient computing capacity to support anomaly detection or other algorithms in support of ISHM functions. A smart element has the capabilities of supporting networked implementations of IEEE 1451.x smart sensor and actuator protocols. The ISHM group at SSC has been actively developing intelligent elements in conjunction with several partners at other Centers, universities, and companies as part of our ISHM approach for better supporting rocket engine testing. We have developed several implementations. Among the key features for these intelligent sensors is support for IEEE 1451.1 and incorporation of a suite of algorithms for determination of sensor health. Regardless of the potential advantages that can be achieved using intelligent sensors, existing large-scale systems are still based on conventional sensors and data acquisition systems. In order to bring the benefits of intelligent sensors to these environments, we have also developed virtual implementations of intelligent sensors.
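    A minimal sketch of a virtual intelligent sensor in the spirit described above is given below: a wrapper adds a simple running z-score health check to a conventional data-acquisition channel. It is an illustration only, not the IEEE 1451-based implementation developed at SSC.

    # Minimal sketch: a "virtual intelligent sensor" that wraps a
    # conventional data-acquisition channel and adds a simple health check
    # (a running z-score anomaly flag). Illustration of the idea only.
    from collections import deque
    import statistics

    class VirtualIntelligentSensor:
        def __init__(self, name, window=50, threshold=4.0):
            self.name = name
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def ingest(self, value):
            """Return (value, healthy_flag) for one raw sample."""
            healthy = True
            if len(self.history) >= 10:
                mean = statistics.fmean(self.history)
                std = statistics.pstdev(self.history) or 1e-9
                healthy = abs(value - mean) / std < self.threshold
            self.history.append(value)
            return value, healthy

    if __name__ == "__main__":
        import random
        vs = VirtualIntelligentSensor("chamber_pressure")
        samples = [random.gauss(100.0, 0.5) for _ in range(60)] + [140.0]  # spike
        for s in samples:
            value, ok = vs.ingest(s)
        print(f"last sample {value:.1f} flagged healthy? {ok}")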

  17. Real-Time Mapping: Contemporary Challenges and the Internet of Things as the Way Forward

    NASA Astrophysics Data System (ADS)

    Bęcek, Kazimierz

    2016-12-01

    The Internet of Things (IoT) is an emerging technology that was conceived in 1999. The key components of the IoT are intelligent sensors, which represent objects of interest. The adjective 'intelligent' is used here in the information gathering sense, not the psychological sense. Some 30 billion sensors that 'know' the current status of objects they represent are already connected to the Internet. Various studies indicate that the number of installed sensors will reach 212 billion by 2020. Various scenarios of IoT projects show sensors being able to exchange data with the network as well as between themselves. In this contribution, we discuss the possibility of deploying the IoT in cartography for real-time mapping. A real-time map is prepared using data harvested through querying sensors representing geographical objects, and the concept of a virtual sensor for abstract objects, such as a land parcel, is presented. A virtual sensor may exist as a data record in the cloud. Sensors are identified by an Internet Protocol address (IP address), which implies that geographical objects through their sensors would also have an IP address. This contribution is an updated version of a conference paper presented by the author during the International Federation of Surveyors 2014 Congress in Kuala Lumpur. The author hopes that the use of the IoT for real-time mapping will be considered by the mapmaking community.

  18. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are: static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  19. Extending and expanding the life of older current meters

    USGS Publications Warehouse

    Strahle, W.J.; Martini, Marinna A.

    1995-01-01

    The EG&G Model 610 VACM and Model 630 VMCM are standards for ocean current measurements. It is simple to add peripheral sensors to the data stream of the VACM by use of add-on CMOS circuitry. The firmware control of the VMCM makes it virtually impossible to add sampling of additional sensors. Most of the electronic components used in the VACM are obsolete or difficult to replace and the VMCM will soon follow suit. As a result, the USGS joined WHOI in the development of a PCMCIA data storage system to replace the cassette recording system in the VACM. Using the same PCMCIA recording package as the controller and recorder for the VMCM, a user-friendly VMCM is being designed. PCMCIA cards are rapidly becoming an industry standard with a wide range of storage capacities. By upgrading the VACM and VMCM to PCMCIA storage systems with a flexible microprocessor, they will continue to be viable instruments.

  20. Open core control software for surgical robots.

    PubMed

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Patients and doctors in the operating room are now surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging systems is becoming very important for accomplishing complex surgical tasks (for example, removing a tumor while checking its location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but they remain closed to collaboration with external medical devices. A truly "intelligent surgical robot" will only be possible through collaboration among surgical robots, various kinds of sensors, navigation systems and so on. Moreover, most academic control software for surgical robots is developed in-house at individual research institutions and is not open to the public. Open source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from actuators, sensors and other internal devices, so it cannot be reused on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which is responsible for communicating with external medical devices. At the same time, it is essential to maintain stable operation during asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software. A virtual fixture is a well-known technique that acts as a "force guide", helping operators perform precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not part of the Open Core Control system itself; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and that area information can be transferred to the robot. The surgical console then generates a reflection force when the operator tries to move outside the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. The system also showed stable performance in a duration test with network disturbance. This paper describes the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture. The software was implemented on a surgical robot system and showed stable performance in high-level collaborative tasks. The Open Core Control software is intended to become a widely used platform for surgical robots. Safety is essential for the control software of such complex medical devices, and it is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" and IEC 62304. Complying with these regulations requires a self-test environment, so a test environment is now under development to evaluate various kinds of interference in the operating room, such as electric-knife noise, while taking into account safety and test-environment standards such as ISO 13849 and IEC 61508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to make a contribution.
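    The reflection-force behaviour of the virtual fixture described above can be sketched as a stiffness acting on the penetration depth outside a pre-defined accessible region; the spherical region and stiffness value below are illustrative assumptions, not the paper's design.

    # Minimal sketch: a virtual fixture that pushes the surgical tool tip
    # back toward a pre-defined spherical accessible area, generating a
    # "reflection force" proportional to the penetration depth. Geometry
    # and stiffness are illustrative assumptions.
    import numpy as np

    def virtual_fixture_force(tip, centre, radius, stiffness=200.0):
        """Return a restoring force (N) if the tip leaves the accessible sphere."""
        offset = tip - centre
        dist = np.linalg.norm(offset)
        if dist <= radius or dist == 0.0:
            return np.zeros(3)                       # inside the allowed region
        penetration = dist - radius
        return -stiffness * penetration * (offset / dist)

    if __name__ == "__main__":
        centre = np.array([0.0, 0.0, 0.0])
        tip_positions = [np.array([0.0, 0.0, 0.04]),   # inside (radius 0.05 m)
                         np.array([0.0, 0.0, 0.06])]   # 1 cm outside
        for tip in tip_positions:
            print(tip, "->", virtual_fixture_force(tip, centre, radius=0.05))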

  1. Benefits of Sharing Information from Commercial Airborne Forward-Looking Sensors in the Next Generation Air Transportation System

    NASA Technical Reports Server (NTRS)

    Schaffner, Philip R.; Harrah, Steven; Neece, Robert T.

    2012-01-01

    The air transportation system of the future will need to support much greater traffic densities than are currently possible, while preserving or improving upon current levels of safety. Concepts are under development to support a Next Generation Air Transportation System (NextGen) that by some estimates will need to support up to three times current capacity by the year 2025. Weather and other atmospheric phenomena, such as wake vortices and volcanic ash, constitute major constraints on airspace system capacity and can present hazards to aircraft if encountered. To support safe operations in the NextGen environment advanced systems for collection and dissemination of aviation weather and environmental information will be required. The envisioned NextGen Network Enabled Weather (NNEW) infrastructure will be a critical component of the aviation weather support services, providing access to a common weather picture for all system users. By taking advantage of Network Enabled Operations (NEO) capabilities, a virtual 4-D Weather Data Cube with aviation weather information from many sources will be developed. One new source of weather observations may be airborne forward-looking sensors, such as the X-band weather radar. Future sensor systems that are the subject of current research include advanced multi-frequency and polarimetric radar, a variety of Lidar technologies, and infrared imaging spectrometers.

  2. Open Source Dataturbine (OSDT) Android Sensorpod in Environmental Observing Systems

    NASA Astrophysics Data System (ADS)

    Fountain, T. R.; Shin, P.; Tilak, S.; Trinh, T.; Smith, J.; Kram, S.

    2014-12-01

    The OSDT Android SensorPod is a custom-designed mobile computing platform for assembling wireless sensor networks for environmental monitoring applications. Funded by an award from the Gordon and Betty Moore Foundation, the OSDT SensorPod represents a significant technological advance in the application of mobile and cloud computing technologies to near-real-time applications in environmental science, natural resources management, and disaster response and recovery. It provides a modular architecture based on open standards and open-source software that allows system developers to align their projects with industry best practices and technology trends, while avoiding commercial vendor lock-in to expensive proprietary software and hardware systems. The integration of mobile and cloud-computing infrastructure represents a disruptive technology in the field of environmental science, since basic assumptions about technology requirements are now open to revision, e.g., the roles of special purpose data loggers and dedicated site infrastructure. The OSDT Android SensorPod was designed with these considerations in mind, and the resulting system exhibits the following characteristics - it is flexible, efficient and robust. The system was developed and tested in the three science applications: 1) a fresh water limnology deployment in Wisconsin, 2) a near coastal marine science deployment at the UCSD Scripps Pier, and 3) a terrestrial ecological deployment in the mountains of Taiwan. As part of a public education and outreach effort, a Facebook page with daily ocean pH measurements from the UCSD Scripps pier was developed. Wireless sensor networks and the virtualization of data and network services is the future of environmental science infrastructure. The OSDT Android SensorPod was designed and developed to harness these new technology developments for environmental monitoring applications.

  3. Polytopol computing for multi-core and distributed systems

    NASA Astrophysics Data System (ADS)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, that takes multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing into account. It argues that the essence lies in a suitable allocation of free moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected to such hardware that a system function looks as one again. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as a knowledge integrator, awareness collector, situation displayer/reporter, communicator of clues and an inquiry-interface provider. Sensors provide functions such as anomaly detection (only communicating singularities, not continuous observation), they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and sensor plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, and give the network the ability to organize itself into some of many topologies. Finally we will discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  4. A Novel Cloud-Based Service Robotics Application to Data Center Environmental Monitoring

    PubMed Central

    Russo, Ludovico Orlando; Rosa, Stefano; Maggiora, Marcello; Bona, Basilio

    2016-01-01

    This work presents a robotic application aimed at performing environmental monitoring in data centers. Due to the high energy density managed in data centers, environmental monitoring is crucial for controlling air temperature and humidity throughout the whole environment, in order to improve power efficiency, avoid hardware failures and maximize the life cycle of IT devices. State of the art solutions for data center monitoring are nowadays based on environmental sensor networks, which continuously collect temperature and humidity data. These solutions are still expensive and do not scale well in large environments. This paper presents an alternative to environmental sensor networks that relies on autonomous mobile robots equipped with environmental sensors. The robots are controlled by a centralized cloud robotics platform that enables autonomous navigation and provides a remote client user interface for system management. From the user point of view, our solution simulates an environmental sensor network. The system can easily be reconfigured in order to adapt to management requirements and changes in the layout of the data center. For this reason, it is called the virtual sensor network. This paper discusses the implementation choices with regards to the particular requirements of the application and presents and discusses data collected during a long-term experiment in a real scenario. PMID:27509505

  5. Portable Multispectral Colorimeter for Metallic Ion Detection and Classification

    PubMed Central

    Jaimes, Ruth F. V. V.; Borysow, Walter; Gomes, Osmar F.; Salcedo, Walter J.

    2017-01-01

    This work proposes and develops a portable device system to detect and classify different metallic ions, aimed at hydrological monitoring of systems such as rivers, lakes and groundwater. Considering the system requirements, a portable colorimetric system was developed using a multispectral optoelectronic sensor. All of the technology for quantification and classification of metallic ions using optoelectronic multispectral sensors was fully integrated in embedded FPGA (Field Programmable Gate Array) hardware and software based on virtual instrumentation (NI LabVIEW®). The system draws on an indicative colorimeter using the chromogenic reagent 1-(2-pyridylazo)-2-naphthol (PAN). The results obtained from signal processing and pattern analysis using linear discriminant analysis show excellent detection and classification of Pb(II), Cd(II), Zn(II), Cu(II), Fe(III) and Ni(II) ions, with almost the same level of performance as that obtained from high-spectral-resolution ultraviolet and visible (UV-VIS) spectrophotometers. PMID:28788082

  6. Portable Multispectral Colorimeter for Metallic Ion Detection and Classification.

    PubMed

    Braga, Mauro S; Jaimes, Ruth F V V; Borysow, Walter; Gomes, Osmar F; Salcedo, Walter J

    2017-07-28

    This work proposes and develops a portable device system to detect and classify different metallic ions, aimed at hydrological monitoring of systems such as rivers, lakes and groundwater. Considering the system requirements, a portable colorimetric system was developed using a multispectral optoelectronic sensor. All of the technology for quantification and classification of metallic ions using optoelectronic multispectral sensors was fully integrated in embedded FPGA (Field Programmable Gate Array) hardware and software based on virtual instrumentation (NI LabVIEW®). The system draws on an indicative colorimeter using the chromogenic reagent 1-(2-pyridylazo)-2-naphthol (PAN). The results obtained from signal processing and pattern analysis using linear discriminant analysis show excellent detection and classification of Pb(II), Cd(II), Zn(II), Cu(II), Fe(III) and Ni(II) ions, with almost the same level of performance as that obtained from high-spectral-resolution ultraviolet and visible (UV-VIS) spectrophotometers.
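    As a rough illustration of the classification stage described above, the sketch below trains a linear discriminant analysis model on synthetic multispectral responses for the six ions; the spectra are synthetic stand-ins for the PAN-complex absorbances, not measured data.

    # Minimal sketch: classifying metal ions from multispectral colorimeter
    # channels with linear discriminant analysis. The spectra here are
    # synthetic stand-ins for PAN-complex absorbances.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(7)
    ions = ["Pb", "Cd", "Zn", "Cu", "Fe", "Ni"]
    n_channels, n_per_class = 6, 30

    # Each ion gets a characteristic (synthetic) spectral signature.
    signatures = rng.uniform(0.1, 1.0, size=(len(ions), n_channels))
    X = np.vstack([sig + rng.normal(0, 0.03, (n_per_class, n_channels))
                   for sig in signatures])
    y = np.repeat(ions, n_per_class)

    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)
    print("training accuracy:", lda.score(X, y))
    print("prediction for one noisy Cu sample:",
          lda.predict((signatures[3] + rng.normal(0, 0.03, n_channels))[None, :])[0])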

  7. Virtual Control Policy for Binary Ordered Resources Petri Net Class

    PubMed Central

    Rovetto, Carlos A.; Concepción, Tomás J.; Cano, Elia Esther

    2016-01-01

    Prevention and avoidance of deadlocks in sensor networks that use the wormhole routing algorithm is an active research domain. There are diverse control policies that address this problem; our approach is a new method. In this paper we present a virtual control policy for a new specialized Petri net subclass called the Binary Ordered Resources Petri Net (BORPN). Essentially, it is an ordinary class constructed from various state machines that share unitary resources in a complex form, which allows branching and joining of processes. The reduced structure of this new class gives advantages that allow analysis of the entire system’s behavior, which is a prohibitive task for large systems because of their complexity and routing algorithms. PMID:27548170

  8. Structural Damage Detection Using Virtual Passive Controllers

    NASA Technical Reports Server (NTRS)

    Lew, Jiann-Shiun; Juang, Jer-Nan

    2001-01-01

    This paper presents novel approaches for structural damage detection that use virtual passive controllers attached to structures, where the passive controllers are energy-dissipative devices and thus guarantee closed-loop stability. Using the identified parameters of various closed-loop systems addresses the problem that reliable identified parameters of the open-loop system, such as natural frequencies, may not provide enough information for damage detection. Only a small number of sensors are required for the proposed approaches. The identified natural frequencies, which are generally much less sensitive to noise and more reliable than other identified modal parameters, are used for damage detection. Two damage detection techniques are presented. One technique is based on structures with direct output feedback controllers, while the other uses second-order dynamic feedback controllers. A least-squares technique, based on the sensitivity of the natural frequencies to the damage variables, is used to accurately identify the damage variables.
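    The least-squares step described above can be illustrated with a small sensitivity-based example: frequency shifts are modelled as a sensitivity matrix times the damage variables, and the damage variables are recovered by least squares. The sensitivity values below are illustrative assumptions, not derived from a structural model.

    # Minimal sketch: least-squares identification of damage variables from
    # shifts in identified natural frequencies, using an assumed sensitivity
    # matrix: delta_f ~= S @ delta_d.
    import numpy as np

    # Sensitivity of each natural frequency (rows) to each damage variable
    # (columns); values are illustrative only.
    S = np.array([[0.80, 0.10, 0.05],
                  [0.20, 0.60, 0.15],
                  [0.05, 0.25, 0.70],
                  [0.10, 0.05, 0.50]])

    true_damage = np.array([0.15, 0.00, 0.30])          # stiffness loss fractions
    delta_f = S @ true_damage + np.random.default_rng(3).normal(0, 0.005, 4)

    damage_est, *_ = np.linalg.lstsq(S, delta_f, rcond=None)
    print("estimated damage variables:", np.round(damage_est, 3))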

  9. RoboCup-Rescue: an international cooperative research project of robotics and AI for the disaster mitigation problem

    NASA Astrophysics Data System (ADS)

    Tadokoro, Satoshi; Kitano, Hiroaki; Takahashi, Tomoichi; Noda, Itsuki; Matsubara, Hitoshi; Shinjoh, Atsushi; Koto, Tetsuo; Takeuchi, Ikuo; Takahashi, Hironao; Matsuno, Fumitoshi; Hatayama, Mitsunori; Nobe, Jun; Shimada, Susumu

    2000-07-01

    This paper introduces the RoboCup-Rescue Simulation Project, a contribution to the disaster mitigation, search and rescue problem. A comprehensive urban disaster simulator is constructed on distributed computers. Heterogeneous intelligent agents such as fire fighters, victims and volunteers conduct search and rescue activities in this virtual disaster world. A real-world interface integrates the various sensor systems and infrastructure controllers of real cities with the virtual world. Real-time simulation is synchronized with actual disasters, computing the complex relationships between various damage factors and agent behaviors. A mission-critical man-machine interface provides portability and robustness for disaster mitigation centers, and augmented-reality interfaces for rescue in real disasters. It also provides a virtual-reality training function for the public. This diverse spectrum of RoboCup-Rescue contributes to the creation of a safer social system.

  10. Inertial Orientation Trackers with Drift Compensation

    NASA Technical Reports Server (NTRS)

    Foxlin, Eric M.

    2008-01-01

    A class of inertial-sensor systems with drift compensation has been invented for use in measuring the orientations of human heads (and perhaps other, similarly sized objects). These systems can be designed to overcome some of the limitations of prior orientation-measuring systems that are based, variously, on magnetic, optical, mechanical-linkage, and acoustical principles. The orientation signals generated by the systems of this invention could be used for diverse purposes, including controlling head-orientation-dependent virtual reality visual displays or enabling persons whose limbs are paralyzed to control machinery by means of head motions. The inventive concept admits of variations too numerous to describe here, making it necessary to limit this description to a typical system, selected aspects of which are illustrated in the figure. A set of sensors is mounted on a bracket on a band or a cap that gently but firmly grips the wearer's head to be tracked. Among the sensors are three drift-sensitive rotation-rate sensors (e.g., integrated-circuit angular-rate-measuring gyroscopes), which put out DC voltages nominally proportional to the rates of rotation about their sensory axes. These sensors are mounted in mutually orthogonal orientations for measuring rates of rotation about the roll, pitch, and yaw axes of the wearer's head. The outputs of these rate sensors are conditioned and digitized, and the resulting data are fed to an integrator module implemented in software in a digital computer. In the integrator module, the angular-rate signals are jointly integrated by any of several established methods to obtain a set of angles that represent approximately the orientation of the head in an external, inertial coordinate system. Because some drift is always present as a component of an angular position computed by integrating the outputs of angular-rate sensors, the orientation signal is processed further in a drift-compensator software module.
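
    A minimal sketch of the integration-plus-drift-compensation idea, assuming a generic complementary filter against an absolute tilt reference; the invention's actual drift-compensator module is not detailed here, so this is only a stand-in.

```python
# Illustrative sketch only: integrate three angular-rate signals to roll/pitch/yaw
# and limit drift with a simple complementary filter against an absolute tilt
# reference (e.g. an inclinometer). This is a generic stand-in, not the patented
# drift-compensation module.
import numpy as np

dt, alpha = 0.01, 0.98           # sample period [s], gyro/reference blend
angles = np.zeros(3)             # roll, pitch, yaw [rad]

def update(gyro_rates, tilt_ref):
    """gyro_rates: (3,) rad/s; tilt_ref: (roll, pitch) rad from inclinometer."""
    global angles
    angles = angles + np.asarray(gyro_rates) * dt          # pure integration
    # Blend roll/pitch toward the drift-free reference; yaw keeps integrating.
    angles[:2] = alpha * angles[:2] + (1 - alpha) * np.asarray(tilt_ref)
    return angles

for _ in range(100):                     # stationary head, biased gyros
    update(gyro_rates=[0.01, -0.02, 0.005], tilt_ref=[0.0, 0.0])
print(angles)    # roll/pitch stay near zero; yaw keeps integrating its bias
```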

  11. Development of a smart home simulator for use as a heuristic tool for management of sensor distribution.

    PubMed

    Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming

    2009-01-01

    Smart Homes offer potential solutions for various forms of independent living for the elderly. The assistive and protective environment afforded by smart homes offers a safe, relatively inexpensive, dependable and viable alternative for vulnerable inhabitants. Nevertheless, the success of a smart home rests upon the quality of information its decision support system receives, and this in turn places great importance on the issue of correct sensor deployment. In this article we present a software tool that has been developed to address the elusive issue of sensor distribution within smart homes. Details of the tool will be presented, and it will be shown how it can be used to emulate any real-world environment so that virtual sensor distributions can be rapidly implemented and assessed without the requirement for physical deployment for evaluation. As such, this approach offers the potential of tailoring sensor distributions to the specific needs of a patient in a non-invasive manner. The heuristics-based tool presented here has been developed as the first part of a three-stage project.

  12. A simulator for airborne laser swath mapping via photon counting

    NASA Astrophysics Data System (ADS)

    Slatton, K. C.; Carter, W. E.; Shrestha, R.

    2005-06-01

    Commercially marketed airborne laser swath mapping (ALSM) instruments currently use laser rangers with sufficient energy per pulse to work with return signals of thousands of photons per shot. The resulting high signal to noise level virtually eliminates spurious range values caused by noise, such as background solar radiation and sensor thermal noise. However, the high signal level approach requires laser repetition rates of hundreds of thousands of pulses per second to obtain contiguous coverage of the terrain at sub-meter spatial resolution, and with currently available technology, affords little scalability for significantly downsizing the hardware, or reducing the costs. A photon-counting ALSM sensor has been designed by the University of Florida and Sigma Space, Inc. for improved topographic mapping with lower power requirements and weight than traditional ALSM sensors. Major elements of the sensor design are presented along with preliminary simulation results. The simulator is being developed so that data phenomenology and target detection potential can be investigated before the system is completed. Early simulations suggest that precise estimates of terrain elevation and target detection will be possible with the sensor design.

  13. Interacting With A Near Real-Time Urban Digital Watershed Using Emerging Geospatial Web Technologies

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Fazio, D. J.; Abdelzaher, T.; Minsker, B.

    2007-12-01

    The value of real-time hydrologic data dissemination, including river stage, streamflow, and precipitation, for operational stormwater management efforts is particularly high for communities where flash flooding is common and costly. Ideally, such data would be presented within a watershed-scale geospatial context to portray a holistic view of the watershed. Local hydrologic sensor networks usually lack comprehensive integration with sensor networks managed by other agencies sharing the same watershed due to administrative, political, but mostly technical barriers. Recent efforts on providing unified access to hydrological data have concentrated on creating new SOAP-based web services and common data formats (e.g. WaterML and the Observation Data Model) for users to access the data (e.g. HIS and HydroSeek). Geospatial Web technology, including OGC sensor web enablement (SWE), GeoRSS, Geo tags, geospatial browsers such as Google Earth and Microsoft Virtual Earth, and other location-based service tools, makes it possible to interact with a digital watershed in near-real-time. OGC SWE proposes a revolutionary concept towards web-connected/controllable sensor networks. However, these efforts have not provided the capability for dynamic data integration/fusion among heterogeneous sources, data filtering, and support for workflows or domain-specific applications where both push and pull modes of retrieving data may be needed. We propose a lightweight integration framework by extending SWE with an open source Enterprise Service Bus (e.g., Mule) as a backbone component to dynamically transform, transport, and integrate both heterogeneous sensor data sources and simulation model outputs. We will report our progress on building such a framework, in which multi-agency sensor data and hydro-model outputs (with map layers) will be integrated and disseminated in a geospatial browser (e.g. Microsoft Virtual Earth). This is a collaborative project among NCSA, the USGS Illinois Water Science Center, and the Computer Science Department at UIUC, funded by the Adaptive Environmental Infrastructure Sensing and Information Systems initiative at UIUC.

  14. Pervasive sensing

    NASA Astrophysics Data System (ADS)

    Nagel, David J.

    2000-11-01

    The coordinated exploitation of modern communication, micro-sensor and computer technologies makes it possible to give global reach to our senses. Web-cameras for vision, web-microphones for hearing and web-'noses' for smelling, plus the abilities to sense many factors we cannot ordinarily perceive, are either available or will be soon. Applications include (1) determination of weather and environmental conditions on dense grids or over large areas, (2) monitoring of energy usage in buildings, (3) sensing the condition of hardware in electrical power distribution and information systems, (4) improving process control and other manufacturing operations, (5) development of intelligent terrestrial, marine, aeronautical and space transportation systems, (6) managing the continuum of routine security monitoring, diverse crises and military actions, and (7) medicine, notably the monitoring of the physiology and living conditions of individuals. Some of the emerging capabilities, such as the ability to measure remotely the conditions inside of people in real time, raise interesting social concerns centered on privacy issues. Methods for sensor data fusion and designs for human-computer interfaces are both crucial for the full realization of the potential of pervasive sensing. Computer-generated virtual reality, augmented with real-time sensor data, should be an effective means for presenting information from distributed sensors.

  15. A new chapter in environmental sensing: The Open-Source Published Environmental Sensing (OPENS) laboratory

    NASA Astrophysics Data System (ADS)

    Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.

    2015-12-01

    The confluence of 3-dimensional printing, low-cost solid-state sensors, low-cost, low-power digital controllers (e.g., Arduinos) and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a Maker lab space where people may collaborate in person or virtually via an on-line forum for the publication and discussion of environmental sensing technology (Corvallis, Oregon, USA; please feel free to request a free reservation for space and equipment use). The physical lab houses a test-bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein global students and scientists can publish, with peer review and a DOI, novel and evolutionary advancements in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems. The physical lab will include all of the machines required to produce these sensing systems. These tools can be accessed in person or virtually, creating a truly global venue for advancement in monitoring Earth's environment and agricultural systems. In this talk we will present an example of the design and publication process, using the design and data from the OPENS-Permeameter. The publication includes 3-D printing code, Arduino (or other control/logging platform) operational code, sample data sets, and a full discussion of the design set in the scientific context of previous related devices. Editors for the peer-review process are currently sought - contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.

  16. A task scheduler framework for self-powered wireless sensors.

    PubMed

    Nordman, Mikael M

    2003-10-01

    The cost and inconvenience of cabling is a factor limiting widespread use of intelligent sensors. Recent developments in short-range, low-power radio seem to provide an opening to this problem, making development of wireless sensors feasible. However, for these sensors energy availability is a main concern. The common solution is either to use a battery or to harvest ambient energy. The benefit of harvested ambient energy is that the energy feeder can be considered as lasting a lifetime, thus saving the user from concerns related to energy management. The problem is, however, the unpredictability and unsteady behavior of ambient energy sources. This becomes a main concern for sensors that run multiple tasks at different priorities. This paper proposes a new scheduler framework that enables the reliable assignment of task priorities and scheduling in sensors powered by ambient energy. The framework, based on environment parameters, virtual queues, and a state machine with transition conditions, dynamically manages task execution according to priorities. The framework is assessed in a test system powered by a solar panel. The results show the functionality of the framework and how task execution is reliably handled without violating the priority scheme that has been assigned to it.
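
    A hedged sketch of the scheduling idea: dispatch tasks by priority only when the harvested energy budget covers their cost. The class name, task costs and thresholds below are assumptions, not the paper's framework of environment parameters, virtual queues and state machine.

```python
# Simplified stand-in for an energy-aware, priority-respecting task scheduler on
# a harvesting-powered sensor node. Task names, priorities and energy costs are
# invented for illustration.
import heapq

class EnergyAwareScheduler:
    def __init__(self):
        self.queue = []                       # (priority, name, energy_cost)
        self.energy = 0.0                     # joules available in the buffer

    def harvest(self, joules):
        self.energy += joules

    def submit(self, name, priority, energy_cost):
        heapq.heappush(self.queue, (priority, name, energy_cost))

    def run_ready(self):
        executed = []
        # Never skip ahead of a higher-priority task that cannot yet be powered.
        while self.queue and self.queue[0][2] <= self.energy:
            prio, name, cost = heapq.heappop(self.queue)
            self.energy -= cost
            executed.append(name)
        return executed

sched = EnergyAwareScheduler()
sched.submit("send_measurement", priority=1, energy_cost=0.5)
sched.submit("self_test", priority=5, energy_cost=2.0)
sched.harvest(1.0)
print(sched.run_ready())     # ['send_measurement']; self_test waits for energy
```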

  17. Sensor-Augmented Virtual Labs: Using Physical Interactions with Science Simulations to Promote Understanding of Gas Behavior

    ERIC Educational Resources Information Center

    Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.

    2016-01-01

    Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected…

  18. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
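
    A minimal sketch of statistical detection on one virtual beam (a chain of sensors along an energy transmission path), using an adaptive threshold learned from baseline data. The feature choice and threshold factor are assumptions; the bacterial-based optimization and 'biased running' strategy are not shown.

```python
# Minimal sketch: compare a beam-level feature against an adaptive threshold
# learned from baseline (healthy) data; exceedance flags the virtual beam.
# Feature and threshold factor k are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def beam_feature(chain_signals):
    """Energy-like feature of one virtual beam (rows = sensors in the chain)."""
    return np.mean(np.var(chain_signals, axis=1))

baseline = [beam_feature(rng.standard_normal((4, 256))) for _ in range(50)]
mu, sigma = np.mean(baseline), np.std(baseline)
k = 3.0
threshold = mu + k * sigma                 # adaptive threshold from baseline

test_chain = rng.standard_normal((4, 256))
test_chain[2] *= 2.5                       # simulate an anomalous sensor/branch
print(beam_feature(test_chain) > threshold)   # True -> flag this beam
```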

  19. Virtual reality applications in improving postural control and minimizing falls.

    PubMed

    Virk, Sumandeep; McConville, Kristiina M Valter

    2006-01-01

    Maintaining balance under all conditions is an absolute requirement for humans. Orientation in space and balance maintenance require inputs from the vestibular, the visual, the proprioceptive and the somatosensory systems. All the cues coming from these systems are integrated by the central nervous system (CNS) to employ different strategies for orientation and balance. How the CNS integrates all the inputs and makes cognitive decisions about balance strategies has long been an area of interest for biomedical engineers. More interesting is the fact that in the absence of one or more cues, or when the input from one of the sensors is skewed, the CNS "adapts" to the new environment and gives less weight to the conflicting inputs [1]. The focus of this paper is a review of different strategies and models put forward by researchers to explain the integration of these sensory cues. The paper also compares the different approaches used by young and old adults in maintaining balance. Since the musculoskeletal, visual and vestibular systems deteriorate with age, older subjects have to compensate for these impaired sensory cues to maintain postural stability. The paper also discusses the applications of virtual reality in rehabilitation programs, not only for balance in the elderly but also in occupational falls. Virtual reality has profound applications in the field of balance rehabilitation and training because of its relatively low cost. Studies will be conducted to evaluate the effectiveness of virtual reality training in modifying head and eye movement strategies, and to determine the role of these responses in the maintenance of balance.

  20. A Runway Surface Monitor using Internet of Things

    NASA Astrophysics Data System (ADS)

    Troiano, Amedeo; Pasero, Eros

    2014-05-01

    The monitoring of runway surfaces, for the detection of ice formation or the presence of water, is an important issue for reducing maintenance costs and improving traffic safety. An innovative sensor was developed to detect the presence of ice or water on its surface, and its repeatability, stability and reliability were assessed in different simulations and experiments, performed both in the laboratory and in the field. Three sensors were embedded in the runway of the Turin-Caselle airport, in the north-west of Italy, to check the state of its surface. Each sensor was connected to a GPRS modem to send the collected data to a common database. The entire system was installed about three years ago, and it has so far operated correctly, reactivating automatically after malfunctions without any external intervention. The state of the runway surface is represented virtually on an internet website, using Internet of Things features and opening new scenarios.

  1. Multi-pose system for geometric measurement of large-scale assembled rotational parts

    NASA Astrophysics Data System (ADS)

    Deng, Bowen; Wang, Zhaoba; Jin, Yong; Chen, Youxing

    2017-05-01

    To achieve virtual assembly of large-scale assembled rotational parts based on in-field geometric data, we develop a multi-pose rotative arm measurement system with a gantry and 2D laser sensor (RAMSGL) to measure and provide the geometry of these parts. We mount a 2D laser sensor onto the end of a six-jointed rotative arm to guarantee accuracy and efficiency, and combine the rotative arm with a gantry to measure pairs of assembled rotational parts. By establishing and using the D-H model of the system, the 2D laser data are turned into point clouds, from which the geometry is finally calculated. In addition, we design three experiments to evaluate the performance of the system. Experimental results show that the system's maximum length-measuring deviation using gauge blocks is 35 µm, maximum length-measuring deviation using ball plates is 50 µm, maximum single-point repeatability error is 25 µm, and the measurement scope extends from a radius of 0 mm to 500 mm.
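
    The geometric step described above can be illustrated with a short sketch: chain Denavit-Hartenberg (D-H) transforms for the arm joints and map 2D laser points into the world frame. The D-H parameters and scan profile below are placeholders, not the RAMSGL calibration.

```python
# Sketch only: forward kinematics via D-H transforms, then mapping a 2D laser
# profile into the world frame. Joint values and link parameters are placeholders.
import numpy as np

def dh(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Placeholder joint angles and link parameters for a six-jointed arm.
joints = [(0.1, 0.3, 0.0, np.pi / 2), (0.5, 0.0, 0.4, 0.0),
          (-0.2, 0.0, 0.35, 0.0), (0.0, 0.25, 0.0, np.pi / 2),
          (0.3, 0.0, 0.0, -np.pi / 2), (0.0, 0.1, 0.0, 0.0)]
T = np.eye(4)
for theta, d, a, alpha in joints:
    T = T @ dh(theta, d, a, alpha)

# A 2D laser profile in the sensor frame: points (x, 0, z) along the scan line.
profile = np.array([[x, 0.0, 0.05 * x, 1.0] for x in np.linspace(-0.1, 0.1, 5)])
world_points = (T @ profile.T).T[:, :3]    # homogeneous transform to world frame
print(world_points)
```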

  2. Virtual DRI dataset development

    NASA Astrophysics Data System (ADS)

    Hixson, Jonathan G.; Teaney, Brian P.; May, Christopher; Maurer, Tana; Nelson, Michael B.; Pham, Justin R.

    2017-05-01

    The U.S. Army RDECOM CERDEC NVESD MSD's target acquisition models have been used for many years by the military analysis community for sensor design, trade studies, and field performance prediction. This paper analyzes the results of perception tests performed to compare a field DRI (Detection, Recognition, and Identification) test performed in 2009 with current Soldier performance when viewing the same imagery in a laboratory environment and simulated imagery of the same data set. The purpose of the experiment is to build a robust data set for use in the virtual prototyping of infrared sensors. This data set will provide a strong foundation relating model predictions, field DRI results and simulated imagery.

  3. Multimodal correlation and intraoperative matching of virtual models in neurosurgery

    NASA Technical Reports Server (NTRS)

    Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo

    1994-01-01

    The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools and the correlation of the patient's virtual models with the patient himself are some examples, taken from the biomedical field, of a single underlying problem: determining the relationship linking representations of the same object in different reference frames. Several methods have been developed in order to determine this relationship; among them, the surface matching method is one that gives the patient minimum discomfort, and the errors occurring are compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.

  4. A new method for aerodynamic test of high altitude propellers

    NASA Astrophysics Data System (ADS)

    Gong, Xiying; Zhang, Lin

    A ground test system is designed for aerodynamic performance tests of high-altitude propellers. The system consists of a stable power supply, servo motors, a two-component balance constructed from tension-compression sensors, an ultrasonic anemometer, and a data acquisition module. It is loaded on a truck to simulate a propeller wind-tunnel test at different wind velocities under low-density conditions. The graphical programming language LabVIEW for developing virtual instruments is used to realize the test system control and data acquisition. An aerodynamic performance test of a propeller with a 6.8 m diameter was completed using this system. The results verify the feasibility of the ground test method.

  5. Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2007-01-01

    This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability between integrated, intelligent sensor webs, both among terrestrial assets and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.

  6. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have been used in many fields such as education, medical services, entertainment, art and digital archiving because of the growth in computational power, and the demand for photorealistic virtual models with higher realism is increasing. In the computer vision field, a number of techniques have been developed for creating such virtual models by observing real objects. In this paper, we propose a method for creating photorealistic virtual models by using a laser range sensor and a polarization-based image capture system. We capture the range and color images of an object which is rotated on a rotary table. By using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. In separating the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.

  7. An adaptive process-based cloud infrastructure for space situational awareness applications

    NASA Astrophysics Data System (ADS)

    Liu, Bingwei; Chen, Yu; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Rubin, Bruce

    2014-06-01

    Space situational awareness (SSA) and defense space control capabilities are top priorities for groups that own or operate man-made spacecraft. Also, with the growing amount of space debris, there is an increased demand for contextual understanding that necessitates the capability of collecting and processing a vast amount of sensor data. Cloud computing, which features scalable and flexible storage and computing services, has been recognized as an ideal candidate that can meet the large-data contextual challenges of SSA. Cloud computing consists of physical service providers and middleware virtual machines together with infrastructure, platform, and software as a service (IaaS, PaaS, SaaS) models. However, the typical Virtual Machine (VM) abstraction is on a per-operating-system basis, which is too low a level and limits the flexibility of a mission application architecture. In response to this technical challenge, a novel adaptive process-based cloud infrastructure for SSA applications is proposed in this paper. In addition, the design rationale and a prototype are examined in detail. The SSA Cloud (SSAC) conceptual capability will potentially support space situation monitoring and tracking, object identification, and threat assessment. Lastly, the benefits of more granular and flexible cloud computing resource allocation are illustrated for data processing and implementation considerations within a representative SSA system environment. We show that container-based virtualization performs better than hypervisor-based virtualization technology in an SSA scenario.

  8. Reconstruction of in-plane strain maps using hybrid dense sensor network composed of sensing skin

    NASA Astrophysics Data System (ADS)

    Downey, Austin; Laflamme, Simon; Ubertini, Filippo

    2016-12-01

    The authors have recently developed a soft-elastomeric capacitive (SEC)-based thin film sensor for monitoring strain on mesosurfaces. Arranged in a network configuration, the sensing system is analogous to a biological skin, where local strain can be monitored over a global area. Under plane stress conditions, the sensor output contains the additive measurement of the two principal strain components over the monitored surface. In applications where the evaluation of strain maps is useful, in structural health monitoring for instance, this signal must be decomposed into linear strain components along orthogonal directions. Previous work has led to an algorithm that enabled such decomposition by leveraging a dense sensor network configuration with the addition of assumed boundary conditions. Here, we significantly improve the algorithm's accuracy by leveraging mature off-the-shelf solutions to create a hybrid dense sensor network (HDSN) that improves on the boundary condition assumptions. The system's boundary conditions are enforced using unidirectional RSGs and assumed virtual sensors. Results from an extensive experimental investigation demonstrate the good performance of the proposed algorithm and its robustness with respect to sensor layout. Overall, the proposed algorithm is seen to effectively leverage the advantages of a hybrid dense network for application of the thin film sensor to reconstruct surface strain fields over large surfaces.
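
    A simplified sketch of the decomposition idea, assuming low-order (planar) strain fields: SEC patches contribute equations in the sum of the two strain components, unidirectional gauges at the boundary contribute single-component equations, and a least-squares solve recovers both fields. Sensor locations and field coefficients are invented for illustration.

```python
# Simplified stand-in for the strain-map decomposition: SEC patches measure
# eps_x + eps_y, boundary gauges measure eps_x only, and least squares recovers
# low-order (planar) fields. Layout and coefficients are assumptions.
import numpy as np

# Model each strain component as a plane: eps(x, y) = c0 + c1*x + c2*y.
def basis(x, y):
    return np.array([1.0, x, y])

rows, rhs = [], []
sec_locations = [(0.2, 0.2), (0.8, 0.2), (0.5, 0.5), (0.2, 0.8), (0.8, 0.8)]
rsg_locations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # boundary gauges (eps_x)

true_cx, true_cy = np.array([10.0, 5.0, 0.0]), np.array([2.0, 0.0, 3.0])
for (x, y) in sec_locations:                       # SEC: eps_x + eps_y
    rows.append(np.concatenate([basis(x, y), basis(x, y)]))
    rhs.append(basis(x, y) @ true_cx + basis(x, y) @ true_cy)
for (x, y) in rsg_locations:                       # RSG: eps_x only
    rows.append(np.concatenate([basis(x, y), np.zeros(3)]))
    rhs.append(basis(x, y) @ true_cx)

coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(coeffs[:3], coeffs[3:])     # recovered plane coefficients for eps_x, eps_y
```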

  9. Determining Spinal Posture for Encumbered Airmen in Crewstations Using the Luna Positioning Sensor

    DTIC Science & Technology

    to characterize design-relevant body size and shape variation as it applies to our service personnel. Of particular interest is cockpit accommodation...confidence in virtual assessments. For this effort, the Luna, Inc. fiber optic positioning sensor was evaluated to determine the utility of this

  10. Dynamic shared state maintenance in distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Hamza-Lup, Felix George

    Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system real-time behavior and scalability. (Abstract shortened by UMI.)

  11. SOMM: A New Service Oriented Middleware for Generic Wireless Multimedia Sensor Networks Based on Code Mobility

    PubMed Central

    Faghih, Mohammad Mehdi; Moghaddam, Mohsen Ebrahimi

    2011-01-01

    Although much research in the area of Wireless Multimedia Sensor Networks (WMSNs) has been done in recent years, the programming of sensor nodes is still time-consuming and tedious. It requires expertise in low-level programming, mainly because of the use of resource constrained hardware and also the low level API provided by current operating systems. The code of the resulting systems has typically no clear separation between application and system logic. This minimizes the possibility of reusing code and often leads to the necessity of major changes when the underlying platform is changed. In this paper, we present a service oriented middleware named SOMM to support application development for WMSNs. The main goal of SOMM is to enable the development of modifiable and scalable WMSN applications. A network which uses the SOMM is capable of providing multiple services to multiple clients at the same time with the specified Quality of Service (QoS). SOMM uses a virtual machine with the ability to support mobile agents. Services in SOMM are provided by mobile agents and SOMM also provides a t space on each node which agents can use to communicate with each other. PMID:22346646

  12. SOMM: A new service oriented middleware for generic wireless multimedia sensor networks based on code mobility.

    PubMed

    Faghih, Mohammad Mehdi; Moghaddam, Mohsen Ebrahimi

    2011-01-01

    Although much research in the area of Wireless Multimedia Sensor Networks (WMSNs) has been done in recent years, the programming of sensor nodes is still time-consuming and tedious. It requires expertise in low-level programming, mainly because of the use of resource constrained hardware and also the low level API provided by current operating systems. The code of the resulting systems has typically no clear separation between application and system logic. This minimizes the possibility of reusing code and often leads to the necessity of major changes when the underlying platform is changed. In this paper, we present a service oriented middleware named SOMM to support application development for WMSNs. The main goal of SOMM is to enable the development of modifiable and scalable WMSN applications. A network which uses the SOMM is capable of providing multiple services to multiple clients at the same time with the specified Quality of Service (QoS). SOMM uses a virtual machine with the ability to support mobile agents. Services in SOMM are provided by mobile agents and SOMM also provides a t space on each node which agents can use to communicate with each other.

  13. DEKF system for crowding estimation by a multiple-model approach

    NASA Astrophysics Data System (ADS)

    Cravino, F.; Dellucca, M.; Tesei, A.

    1994-03-01

    A distributed extended Kalman filter (DEKF) network devoted to real-time crowding estimation for surveillance in complex scenes is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Feature values are associated by virtual sensors with the estimated number of people using nonlinear models obtained in an off-line training phase. Different models are used, depending on the positions and dimensions of the crowded subareas detected in each image.

  14. Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.

    PubMed

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo

    2014-05-19

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
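
    The paper's virtual sensor is a Layer-Recurrent neural network; as a deliberately simpler stand-in, the sketch below maps a sliding window of spindle-power samples to a roughness estimate with a linear least-squares model on synthetic data. It only illustrates the "power in, process variable out" virtual-sensor structure, not the published ANN.

```python
# Simplified stand-in for the virtual sensor: estimate surface roughness from a
# sliding window of spindle-power samples via linear least squares. Synthetic
# data; the paper's Layer-Recurrent ANN is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)
power = np.cumsum(rng.standard_normal(500)) * 0.05 + 10.0   # fake spindle power
roughness = 0.02 * power + 0.1 + 0.005 * rng.standard_normal(500)

window = 10
X = np.array([power[i - window:i] for i in range(window, len(power))])
y = roughness[window:]
X = np.hstack([X, np.ones((len(X), 1))])          # add a bias term

split = 400
w, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ w
print(float(np.mean(np.abs(pred - y[split:]))))   # mean absolute error of stand-in
```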

  15. Real-time Data Access to First Responders: A VORB application

    NASA Astrophysics Data System (ADS)

    Lu, S.; Kim, J. B.; Bryant, P.; Foley, S.; Vernon, F.; Rajasekar, A.; Meier, S.

    2006-12-01

    Getting information to first responders is not an easy task. The sensors that provide the information are diverse in format and come from many disciplines. They are also distributed by location, transmit data at different frequencies and are managed and owned by autonomous administrative entities. Pulling such data together in real time requires a very robust sensor network with reliable data transport and buffering capabilities. Moreover, the system should be extensible and scalable in numbers and sensor types. ROADNet is a real-time sensor network project at UCSD gathering diverse environmental data in real-time or near-real-time. VORB (Virtual Object Ring Buffer) is the middleware used in ROADNet offering simple, uniform and scalable real-time data management for discovering (through metadata), accessing and archiving real-time data and data streams. A recent development in VORB, a web API, has enabled quick and simple real-time data integration with web applications. In this poster, we discuss one application developed as part of ROADNet. SMER (Santa Margarita Ecological Reserve) is located in interior Southern California, a region prone to catastrophic wildfires each summer and fall. To provide data during emergencies, we have applied the VORB framework to develop a web-based application for providing access to diverse sensor data including weather data, heat sensor information, and images from cameras. Wildfire fighters have access to real-time data about weather and heat conditions in the area and can view pictures taken from cameras at multiple points in the Reserve to pinpoint problem areas. Moreover, they can browse archived images and sensor data from earlier times to provide a comparison framework. To show the scalability of the system, we have expanded the sensor network under consideration to other areas in Southern California, including sensors accessible by the Los Angeles County Fire Department (LACOFD) and those available through the High Performance Wireless Research and Education Network (HPWREN). The poster will discuss the system architecture and components, the types of sensor being used and usage scenarios. The system is currently operational through the SMER website.

  16. The mixed reality of things: emerging challenges for human-information interaction

    NASA Astrophysics Data System (ADS)

    Spicer, Ryan P.; Russell, Stephen M.; Rosenberg, Evan Suma

    2017-05-01

    Virtual and mixed reality technology has advanced tremendously over the past several years. This nascent medium has the potential to transform how people communicate over distance, train for unfamiliar tasks, operate in challenging environments, and how they visualize, interact, and make decisions based on complex data. At the same time, the marketplace has experienced a proliferation of network-connected devices and generalized sensors that are becoming increasingly accessible and ubiquitous. As the "Internet of Things" expands to encompass a predicted 50 billion connected devices by 2020, the volume and complexity of information generated in pervasive and virtualized environments will continue to grow exponentially. The convergence of these trends demands a theoretically grounded research agenda that can address emerging challenges for human-information interaction (HII). Virtual and mixed reality environments can provide controlled settings where HII phenomena can be observed and measured, new theories developed, and novel algorithms and interaction techniques evaluated. In this paper, we describe the intersection of pervasive computing with virtual and mixed reality, identify current research gaps and opportunities to advance the fundamental understanding of HII, and discuss implications for the design and development of cyber-human systems for both military and civilian use.

  17. Combining Multi-Agent Systems and Wireless Sensor Networks for Monitoring Crop Irrigation.

    PubMed

    Villarrubia, Gabriel; Paz, Juan F De; Iglesia, Daniel H De La; Bajo, Javier

    2017-08-02

    Monitoring mechanisms that ensure efficient crop growth are essential on many farms, especially in certain areas of the planet where water is scarce. Most farmers must assume the high cost of the required equipment in order to be able to streamline natural resources on their farms. Considering that many farmers cannot afford to install this equipment, it is necessary to look for more effective solutions that would be cheaper to implement. The objective of this study is to build virtual organizations of agents that can communicate with each other while monitoring crops. A low-cost sensor architecture allows farmers to monitor and optimize the growth of their crops by streamlining the amount of resources the crops need at every moment. Since the hardware has limited processing and communication capabilities, our approach uses the PANGEA architecture to overcome this limitation. Specifically, we design a system that is capable of collecting heterogeneous information from its environment, using sensors for temperature, solar radiation, humidity, pH, moisture and wind. A major outcome of our approach is that our solution is able to merge heterogeneous data from sensors and produce a response adapted to the context. In order to validate the proposed system, we present a case study in which farmers are provided with a tool that allows them to monitor the condition of crops on a TV screen using a low-cost device.

  18. Combining Multi-Agent Systems and Wireless Sensor Networks for Monitoring Crop Irrigation

    PubMed Central

    Villarrubia, Gabriel; De Paz, Juan F.; De La Iglesia, Daniel H.; Bajo, Javier

    2017-01-01

    Monitoring mechanisms that ensure efficient crop growth are essential on many farms, especially in certain areas of the planet where water is scarce. Most farmers must assume the high cost of the required equipment in order to be able to streamline natural resources on their farms. Considering that many farmers cannot afford to install this equipment, it is necessary to look for more effective solutions that would be cheaper to implement. The objective of this study is to build virtual organizations of agents that can communicate with each other while monitoring crops. A low-cost sensor architecture allows farmers to monitor and optimize the growth of their crops by streamlining the amount of resources the crops need at every moment. Since the hardware has limited processing and communication capabilities, our approach uses the PANGEA architecture to overcome this limitation. Specifically, we design a system that is capable of collecting heterogeneous information from its environment, using sensors for temperature, solar radiation, humidity, pH, moisture and wind. A major outcome of our approach is that our solution is able to merge heterogeneous data from sensors and produce a response adapted to the context. In order to validate the proposed system, we present a case study in which farmers are provided with a tool that allows them to monitor the condition of crops on a TV screen using a low-cost device. PMID:28767089

  19. Simulation of Attacks for Security in Wireless Sensor Network.

    PubMed

    Diaz, Alvaro; Sanchez, Pablo

    2016-11-18

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node's software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work.

  20. Virtually transparent epidermal imagery (VTEI): on new approaches to in vivo wireless high-definition video and image processing.

    PubMed

    Anderson, Adam L; Lin, Bingxiong; Sun, Yu

    2013-12-01

    This work first overviews a novel design, and prototype implementation, of a virtually transparent epidermal imagery (VTEI) system for laparo-endoscopic single-site (LESS) surgery. The system uses a network of multiple, micro-cameras and multiview mosaicking to obtain a panoramic view of the surgery area. The prototype VTEI system also projects the generated panoramic view on the abdomen area to create a transparent display effect that mimics equivalent, but higher risk, open-cavity surgeries. The specific research focus of this paper is on two important aspects of a VTEI system: 1) in vivo wireless high-definition (HD) video transmission and 2) multi-image processing-both of which play key roles in next-generation systems. For transmission and reception, this paper proposes a theoretical wireless communication scheme for high-definition video in situations that require extremely small-footprint image sensors and in zero-latency applications. In such situations the typical optimized metrics in communication schemes, such as power and data rate, are far less important than latency and hardware footprint that absolutely preclude their use if not satisfied. This work proposes the use of a novel Frequency-Modulated Voltage-Division Multiplexing (FM-VDM) scheme where sensor data is kept analog and transmitted via "voltage-multiplexed" signals that are also frequency-modulated. Once images are received, a novel Homographic Image Mosaicking and Morphing (HIMM) algorithm is proposed to stitch images from respective cameras, that also compensates for irregular surfaces in real-time, into a single cohesive view of the surgical area. In VTEI, this view is then visible to the surgeon directly on the patient to give an "open cavity" feel to laparoscopic procedures.

  1. Method of the Determination of Exterior Orientation of Sensors in Hilbert Type Space.

    PubMed

    Stępień, Grzegorz

    2018-03-17

    The following article presents a new isometric transformation algorithm based on a transformation in a newly normed Hilbert-type space. The presented method is based on so-called virtual translations, known in advance, of two relatively oblique orthogonal coordinate systems (the interior and exterior orientation of the sensors) to a common point known in both systems. Each of the systems is translated along its axes (so the systems have common origins), while the relative angular orientation of both coordinate systems remains constant. The translation of both coordinate systems is defined by the spatial norm determining the length of vectors in the new Hilbert-type space. As such, the displacement between the two relatively oblique orthogonal systems is reduced to zero. This makes it possible to directly calculate the rotation matrix of the sensor. The next and final step is the return translation of the system along an already known track. The method can be used for large rotation angles. The method was verified in laboratory conditions for a test data set and for measurement data (field data). The accuracy of the results in the laboratory test is on the level of 10^-6 of the input data. This confirmed the correctness of the assumed calculation method. The method is a further development of the author's 2017 Total Free Station (TFS) transformation to several centroids in Hilbert-type space. This is the reason why the method is called Multi-Centroid Isometric Transformation (MCIT). MCIT is very fast and enables, by reducing the translation between two relatively oblique orthogonal coordinate systems to zero, direct calculation of the exterior orientation of the sensors.
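
    The sketch below is not the author's MCIT algorithm; it is a standard illustration of the underlying idea, namely referencing both point sets to a shared point so the relative translation vanishes, solving the rotation directly (here via SVD, i.e. the Kabsch solution), and then recovering the translation. The point coordinates are synthetic.

```python
# Standard illustration (not MCIT): shift both frames to a shared point, solve
# the rotation by SVD (Kabsch), then recover the translation. Synthetic points.
import numpy as np

rng = np.random.default_rng(4)
P = rng.uniform(-5, 5, size=(6, 3))               # points in the exterior frame

# Ground-truth pose of the sensor (large rotation angles are fine here).
def rot(ax, ay, az):
    Rx = np.array([[1, 0, 0], [0, np.cos(ax), -np.sin(ax)], [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)], [0, 1, 0], [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0], [np.sin(az), np.cos(az), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

R_true, t_true = rot(1.2, -0.7, 2.5), np.array([10.0, -3.0, 4.0])
Q = (R_true @ P.T).T + t_true                     # same points in the sensor frame

common = 0                                        # index of the shared point
Pc, Qc = P - P[common], Q - Q[common]             # "translate" both frames to it
U, _, Vt = np.linalg.svd(Qc.T @ Pc)
R_est = U @ np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))]) @ Vt
t_est = Q[common] - R_est @ P[common]             # return translation
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```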

  2. Design and application of a small size SAFT imaging system for concrete structure

    NASA Astrophysics Data System (ADS)

    Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian

    2011-07-01

    A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using the synthetic aperture focusing technique (SAFT). A low-cost ultrasonic sensor array consisting of 12 commercially available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of the different transducers. The controlling devices for the array scan as well as the virtual instrument for SAFT imaging are designed. In the coarse scan mode, with a scan step of 50 mm, the system can quickly give an image display of a cross section of 600 mm (L) × 300 mm (D) in one measurement. In the refined scan mode, the system can reduce the scan step and give an image display of the same cross section by moving the sensor array several times. Experiments on a staircase specimen, a concrete slab with an embedded target, and a building floor with an underground pipe line all verify the efficiency of the proposed method.
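
    A compact delay-and-sum sketch of SAFT reconstruction: for each image pixel, sum the A-scan samples whose two-way travel time from each transducer position matches the pixel range. The array geometry, wave speed and synthetic scatterer are assumptions, not the instrument's parameters.

```python
# Delay-and-sum SAFT sketch with a synthetic point scatterer. All parameters
# (wave speed, geometry, sampling) are illustrative assumptions.
import numpy as np

c = 2500.0                    # assumed P-wave speed in concrete [m/s]
fs = 1.0e6                    # sampling rate [Hz]
positions = np.linspace(0.0, 0.55, 12)            # 12 transducers along x [m]
n_samples = 1200

# Synthetic pulse-echo A-scans from a point scatterer at (0.30 m, 0.20 m depth).
target = np.array([0.30, 0.20])
ascans = np.zeros((len(positions), n_samples))
for i, x in enumerate(positions):
    dist = np.hypot(x - target[0], target[1])
    ascans[i, int(round(2 * dist / c * fs))] = 1.0

# Delay-and-sum reconstruction over an imaging grid.
xs, zs = np.linspace(0.0, 0.6, 121), np.linspace(0.05, 0.3, 51)
image = np.zeros((len(zs), len(xs)))
for i, x in enumerate(positions):
    dist = np.hypot(xs[None, :] - x, zs[:, None])     # pixel-to-transducer range
    idx = np.clip((2 * dist / c * fs).round().astype(int), 0, n_samples - 1)
    image += ascans[i, idx]

zi, xi = np.unravel_index(np.argmax(image), image.shape)
print(xs[xi], zs[zi])          # focus appears near (0.30, 0.20)
```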

  3. CCD TV focal plane guider development and comparison to SIRTF applications

    NASA Technical Reports Server (NTRS)

    Rank, David M.

    1989-01-01

    It is expected that the SIRTF payload will use a CCD TV focal plane fine guidance sensor to provide acquisition of sources and tracking stability of the telescope. CCD TV cameras and guiders have been developed at Lick Observatory for several years, producing state-of-the-art CCD TV systems for internal use. NASA decided to provide additional support so that the limits of this technology could be established and a comparison between SIRTF requirements and practical systems could be put on a more quantitative basis. The results of work carried out at Lick Observatory, designed to characterize present CCD autoguiding technology and relate it to SIRTF applications, are presented. Two different designs of CCD cameras were constructed, using virtual-phase and buried-channel CCD sensors. A simple autoguider was built and used on the KAO, Mt. Lemmon and Mt. Hamilton telescopes. A video image processing system was also constructed in order to characterize the performance of the autoguider and CCD cameras.

  4. The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Zhang, Xin; Zhang, Tianhong

    2017-11-01

    A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using a bank of Kalman filters and reconstruct the signal with a real-time on-board adaptive model combining a simplified real-time model and an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, semi-physical simulation provides a higher degree of confidence. In order to meet the needs of the semi-physical simulation, a rapid prototyping controller with fault-tolerant control ability based on the NI CompactRIO platform is designed and verified on the semi-physical simulation test platform. The results show that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
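
    A hedged sketch of the filter-bank diagnosis idea: run one simple Kalman filter per fault hypothesis (each filter excludes one of three redundant measurements) and flag the sensor whose exclusion yields the smallest accumulated innovations. The scalar dynamics, noise levels and bias fault are invented for illustration and do not reproduce the engine model.

```python
# Filter-bank fault isolation sketch: each scalar Kalman filter trusts all
# sensors except one; the hypothesis with the smallest innovation cost points
# at the faulty sensor. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_steps, n_sensors, faulty = 200, 3, 1
x_true = 100.0                                     # e.g. shaft speed
Q, R = 0.01, 0.25                                  # process/measurement variances

class ScalarKF:
    def __init__(self, used):
        self.used = used                           # sensor indices this filter trusts
        self.x, self.P, self.cost = 100.0, 1.0, 0.0
    def step(self, z):
        self.P += Q
        for j in self.used:
            innov = z[j] - self.x
            K = self.P / (self.P + R)
            self.x += K * innov
            self.P *= (1 - K)
            self.cost += innov ** 2                # accumulated squared innovation

bank = [ScalarKF([j for j in range(n_sensors) if j != excluded])
        for excluded in range(n_sensors)]
for k in range(n_steps):
    x_true += 0.1 * rng.standard_normal()
    z = x_true + 0.5 * rng.standard_normal(n_sensors)
    z[faulty] += 5.0 if k > 50 else 0.0            # bias fault on sensor 1
    for kf in bank:
        kf.step(z)

print("suspected faulty sensor:", int(np.argmin([kf.cost for kf in bank])))
```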

  5. The IRIS Data Management Center: An international "network of networks", providing open, automated access to geographically distributed sensors of geophysical and environmental data.

    NASA Astrophysics Data System (ADS)

    Benson, R. B.; Ahern, T. K.; Trabant, C.

    2006-12-01

    The IRIS Data Management System has long supported international collaboration for seismology by both deploying a global network of seismometers and creating and maintaining an open and accessible archive in Seattle, WA, known as the Data Management Center (DMC). With sensors distributed on a global scale spanning more than 30 years of digital data, the DMC provides a rich repository of observations across broad time and space domains. Primary seismological data types include strong motion and broadband seismometers, conventional and superconducting gravimeters, tilt and creep meters, GPS measurements, along with other similar sensors that record accurate and calibrated ground motion. What may not be as well understood is the volume of environmental data that accompanies typical seismological data these days. This poster will review the types of time-series data that are currently being collected, how they are collected, and made freely available for download at the IRIS DMC. Environmental sensor data that is often co-located with geophysical data sensors include temperature, barometric pressure, wind direction and speed, humidity, insolation, rain gauge, and sometimes hydrological data like water current, level, temperature and depth. As the primary archival institution of the International Federation of Digital Seismograph Networks (FDSN), the IRIS DMC collects approximately 13,600 channels of real-time data from 69 different networks, from close to 1600 individual stations, currently averaging 10Tb per year in total. A major contribution to the IRIS archive currently is the EarthScope project data, a ten-year science undertaking that is collecting data from a high-resolution, multi-variate sensor network. Data types include magnetotelluric, high-sample rate seismics from a borehole drilled into the San Andreas fault (SAFOD) and various types of strain data from the Plate Boundary Observatory (PBO). In addition to the DMC, data centers located in other countries are networked seamlessly, and are providing access for researchers to these data from national networks around the world utilizing the IRIS developed Data Handling Interface (DHI) system. This poster will highlight some of the DHI enabled clients that allow geophysical information to be directly transferred to the clients. This ability allows one to construct a virtual network of data centers providing the illusion of a single virtual observatory. Furthermore, some of the features that will be shown include direct connections to MATLAB and the ability to access globally distributed sensor data in real time. We encourage discussion and participation from network operators who would like to leverage existing technology, as well as enabling collaboration.

  6. Virtual Stream Stage Sensor Using Projected Geometry and Augmented Reality for Crowdsourcing Citizen Science Applications

    NASA Astrophysics Data System (ADS)

    Demir, I.; Villanueva, P.; Sermet, M. Y.

    2016-12-01

    Accurately measuring the surface level of a river is a vital component of environmental monitoring and modeling efforts. Reliable data points are required for calibrating the statistical models that are used for, among other things, flood prediction and model validation. While current embedded monitoring systems provide accurate measurements, the cost to replicate this current system on a large scale is prohibitively expensive, limiting the quantity of data available. In this project, we describe a new method to accurately measure river levels using smartphone sensors. We take three pictures of the same point on the river's surface and perform calculations based on the GPS location and spatial orientation of the smartphone for each picture using projected geometry. Augmented reality is used to improve the accuracy of smartphone sensor readings. This proposed implementation is significantly cheaper than existing water measuring systems while offering similar accuracy. Additionally, since the measurements are taken by sensors that are commonly found in smartphones, crowdsourcing the collection of river measurements to citizen-scientists is possible. Thus, our proposed method leads to a much higher quantity of reliable data points than currently possible at a fraction of the cost. Sample runs and an analysis of the results are included. The presentation concludes with a discussion of future work, including applications to other fields and plans to implement a fully automated system using this method in tandem with image recognition and machine learning.
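
    The underlying projected-geometry step can be reduced to intersecting a camera sight line with the water surface. The sketch below is a deliberately simplified, single-photo illustration with invented variable names; the actual method combines three photos and augmented-reality corrections to the phone's sensor readings.

```python
import math

def stage_from_camera(cam_altitude_m, pitch_below_horizon_deg, horiz_distance_m):
    """Toy projected-geometry estimate of water-surface elevation.

    cam_altitude_m: datum-referenced altitude of the phone camera (e.g. GPS)
    pitch_below_horizon_deg: downward tilt of the sight line to the water point,
        taken from the phone's orientation sensors
    horiz_distance_m: horizontal distance to that water point (in the real
        method this comes from combining the three photos)
    """
    drop = horiz_distance_m * math.tan(math.radians(pitch_below_horizon_deg))
    return cam_altitude_m - drop

# Example: camera 4.2 m above datum, sight line 15 degrees below the horizon,
# water point 8 m away horizontally -> stage of about 2.06 m above datum.
print(round(stage_from_camera(4.2, 15.0, 8.0), 2))
```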

  7. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in the navigation solution and to estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, the heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of additional motion constraints of pedestrian gait and of any other available information that can reduce heading drift. In this paper, we exploit two further motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called a "virtual sensor"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading sensor is the magnetometer, which senses the Earth's magnetic field, from which the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed that incorporates only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
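
    The essence of the MAD step is a gating test that admits a magnetometer sample into the EKF heading update only when it is consistent with the expected Earth field. The sketch below shows one common form of such a test (magnitude and dip-angle checks); the reference values, tolerances and function names are placeholders, not the paper's parameters.

```python
import numpy as np

EARTH_FIELD_UT = 50.0   # expected local field magnitude, microtesla (placeholder)
MAG_TOL_UT = 5.0        # allowed magnitude deviation (placeholder)
DIP_REF_DEG = 53.0      # expected inclination (dip) angle (placeholder)
DIP_TOL_DEG = 8.0       # allowed dip deviation (placeholder)

def magnetometer_is_healthy(mag_body_ut, gravity_down_body):
    """Return True if the magnetometer sample looks undistorted.

    mag_body_ut: 3-axis magnetic field in the sensor (body) frame, microtesla
    gravity_down_body: downward gravity direction in the same frame, e.g. from
        the accelerometer while the foot is stationary
    """
    m = np.asarray(mag_body_ut, dtype=float)
    g = np.asarray(gravity_down_body, dtype=float)
    mag_norm = np.linalg.norm(m)
    if abs(mag_norm - EARTH_FIELD_UT) > MAG_TOL_UT:
        return False                      # magnitude anomaly: reject the sample
    # Dip angle = 90 deg minus the angle between the field and the down direction.
    cos_angle = np.dot(m, g) / (mag_norm * np.linalg.norm(g))
    dip = 90.0 - np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return abs(dip - DIP_REF_DEG) <= DIP_TOL_DEG
```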

  8. Tailoring gas sensor arrays via the design of short peptides sequences as binding elements.

    PubMed

    Mascini, Marcello; Pizzoni, Daniel; Perez, German; Chiarappa, Emilio; Di Natale, Corrado; Pittia, Paola; Compagnone, Dario

    2017-07-15

    A semi-combinatorial virtual approach was used to prepare peptide-based gas sensors with binding properties towards five different chemical classes (alcohols, aldehydes, esters, hydrocarbons and ketones). Molecular docking simulations were conducted for a complete tripeptide library (8000 elements) versus 58 volatile compounds belonging to those five chemical classes. By maximizing the differences between chemical classes, a subset of 120 tripeptides was extracted and used as scaffolds for generating a combinatorial library of 7912 tetrapeptides. This library was processed in the same way as the former. Five tetrapeptides (IHRI, KSDS, LGFD, TGKF and WHVS) were chosen on the basis of their virtual affinity and cross-reactivity for the experimental step. The five peptides were covalently bound to gold nanoparticles by adding a terminal cysteine to each tetrapeptide and were deposited onto 20 MHz quartz crystal microbalances to construct the gas sensors. The behavior of the peptides after this chemical modification was simulated over the pH range used in the immobilization step. The ΔF signals, analyzed by principal component analysis, matched the virtually screened data. The array was able to clearly discriminate the 13 volatile compounds tested on the basis of their hydrophobicity and hydrophilicity as well as their molecular weight. Copyright © 2016 Elsevier B.V. All rights reserved.
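
    The final pattern-recognition step amounts to projecting the array's frequency-shift responses onto principal components. A minimal sketch of that step is given below; the ΔF matrix is random filler standing in for measured data (rows = measured volatiles, columns = the five peptide-coated QCM sensors), so only the analysis pipeline is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
delta_f = rng.normal(size=(13, 5))   # placeholder: 13 volatiles x 5 peptide sensors

pca = PCA(n_components=2)
scores = pca.fit_transform(delta_f)  # 2-D map in which the compounds separate

print("explained variance ratio:", pca.explained_variance_ratio_)
for i, (pc1, pc2) in enumerate(scores):
    print(f"sample {i:2d}: PC1={pc1:+.2f}  PC2={pc2:+.2f}")
```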

  9. A modular, programmable measurement system for physiological and spaceflight applications

    NASA Technical Reports Server (NTRS)

    Hines, John W.; Ricks, Robert D.; Miles, Christopher J.

    1993-01-01

    The NASA-Ames Sensors 2000! Program has developed a small, compact, modular, programmable, sensor signal conditioning and measurement system, initially targeted for Life Sciences Spaceflight Programs. The system consists of a twelve-slot, multi-layer, distributed function backplane, a digital microcontroller/memory subsystem, conditioned and isolated power supplies, and six application-specific, physiological signal conditioners. Each signal conditioner is capable of being programmed for gains, offsets, calibration and operate modes, and, in some cases, selectable outputs and functional modes. Presently, the system has the capability for measuring ECG, EMG, EEG, Temperature, Respiration, Pressure, Force, and Acceleration parameters, in physiological ranges. The measurement system makes heavy use of surface-mount packaging technology, resulting in plug-in modules sized 125x55 mm. The complete 12-slot system is contained within a volume of 220x150x70 mm. The system's capabilities extend well beyond the specific objectives of NASA programs. Indeed, the potential commercial uses of the technology are virtually limitless. In addition to applications in medical and biomedical sensing, the system might also be used in process control situations, in clinical or research environments, in general instrumentation systems, factory processing, or any other applications where high-quality measurements are required.

  10. A modular, programmable measurement system for physiological and spaceflight applications

    NASA Astrophysics Data System (ADS)

    Hines, John W.; Ricks, Robert D.; Miles, Christopher J.

    1993-02-01

    The NASA-Ames Sensors 2000! Program has developed a small, compact, modular, programmable, sensor signal conditioning and measurement system, initially targeted for Life Sciences Spaceflight Programs. The system consists of a twelve-slot, multi-layer, distributed function backplane, a digital microcontroller/memory subsystem, conditioned and isolated power supplies, and six application-specific, physiological signal conditioners. Each signal conditioner is capable of being programmed for gains, offsets, calibration and operate modes, and, in some cases, selectable outputs and functional modes. Presently, the system has the capability for measuring ECG, EMG, EEG, Temperature, Respiration, Pressure, Force, and Acceleration parameters, in physiological ranges. The measurement system makes heavy use of surface-mount packaging technology, resulting in plug-in modules sized 125x55 mm. The complete 12-slot system is contained within a volume of 220x150x70 mm. The system's capabilities extend well beyond the specific objectives of NASA programs. Indeed, the potential commercial uses of the technology are virtually limitless. In addition to applications in medical and biomedical sensing, the system might also be used in process control situations, in clinical or research environments, in general instrumentation systems, factory processing, or any other applications where high-quality measurements are required.

  11. Advanced Networks in Dental Rich Online MEDiA (ANDROMEDA)

    NASA Astrophysics Data System (ADS)

    Elson, Bruce; Reynolds, Patricia; Amini, Ardavan; Burke, Ezra; Chapman, Craig

    There is growing demand for dental education and training, not only in terms of knowledge but also skills. This demand is driven by continuing professional development requirements in the more developed economies, personnel shortages and skills differences across the European Union (EU) accession states, and more generally in the developing world. There is an excellent opportunity for the EU to meet this demand by developing an innovative online flexible learning platform (FLP). Current clinical online systems are restricted to the delivery of general, knowledge-based training with no easy method of personalization or delivery of skill-based training. The PHANTOM project, headed by Kings College London, is developing haptic-based virtual reality training systems for clinical dental training. ANDROMEDA seeks to build on this and establish a Flexible Learning Platform that can integrate the haptic- and sensor-based training with rich media knowledge transfer, whilst using sophisticated technologies such as service-oriented architecture (SOA), Semantic Web technologies, knowledge-based engineering, business intelligence (BI) and virtual worlds for personalization.

  12. Understanding Earthquake Fault Systems Using QuakeSim Analysis and Data Assimilation Tools

    NASA Technical Reports Server (NTRS)

    Donnellan, Andrea; Parker, Jay; Glasscoe, Margaret; Granat, Robert; Rundle, John; McLeod, Dennis; Al-Ghanmi, Rami; Grant, Lisa

    2008-01-01

    We are using the QuakeSim environment to model interacting fault systems. One goal of QuakeSim is to prepare for the large volumes of data that spaceborne missions such as DESDynI will produce. QuakeSim has the ability to ingest distributed heterogeneous data in the form of InSAR, GPS, seismicity, and fault data into various earthquake modeling applications, automating the analysis when possible. Virtual California simulates interacting faults in California. We can compare output from long-time-history Virtual California runs with the current state of strain and the strain history in California. In addition to spaceborne data, we will begin assimilating data from UAVSAR airborne flights over the San Francisco Bay Area, the Transverse Ranges, and the Salton Trough. Results of the models are important for understanding future earthquake risk and for providing decision support following earthquakes. Improved models require this sensor web of different data sources, and a modeling environment for understanding the combined data.

  13. Machine learning-based assessment tool for imbalance and vestibular dysfunction with virtual reality rehabilitation system.

    PubMed

    Yeh, Shih-Ching; Huang, Ming-Chun; Wang, Pa-Chun; Fang, Te-Yung; Su, Mu-Chun; Tsai, Po-Yi; Rizzo, Albert

    2014-10-01

    Dizziness is a major consequence of imbalance and vestibular dysfunction. Compared to surgery and drug treatments, balance training is non-invasive and more desirable. However, training exercises are usually tedious, and existing assessment tools are insufficient for rapidly diagnosing a patient's severity. An interactive virtual reality (VR) game-based rehabilitation program that adopted Cawthorne-Cooksey exercises and a sensor-based measuring system were introduced. To verify the therapeutic effect, a clinical experiment with 48 patients and 36 normal subjects was conducted. Quantified balance indices were measured and analyzed with statistical tools and a Support Vector Machine (SVM) classifier. In terms of balance indices, patients who completed the training process improved, and the difference between normal subjects and patients is clear. Further analysis with the SVM classifier shows that the differences between patients and normal subjects can be recognized with good accuracy, and these results can be used to evaluate patient severity and make rapid assessments. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
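
    The classification step can be illustrated with a cross-validated SVM on the balance indices. The sketch below uses randomly generated features in place of the measured indices, so the group sizes match the abstract but nothing else is taken from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_patients = rng.normal(loc=-0.5, size=(48, 6))   # placeholder balance indices
X_controls = rng.normal(loc=+0.5, size=(36, 6))
X = np.vstack([X_patients, X_controls])
y = np.array([1] * 48 + [0] * 36)                 # 1 = patient, 0 = normal subject

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```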

  14. Open core control software for surgical robots

    PubMed Central

    Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B.; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-01-01

    Object Nowadays, patients and doctors in the operating room are surrounded by many medical devices as a result of recent advances in medical technology. However, these cutting-edge medical devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (such as removing a tumor while checking its location in neurosurgery). Several surgical robots have been commercialized and are becoming common, but they are not yet open to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" will only be possible through collaboration between surgical robots, various kinds of sensors, navigation systems and so on. At the same time, most academic software development for surgical robots is "home-made" within the respective research institutions and not open to the public. Open-source control software for surgical robots can therefore be beneficial in this field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. Materials and methods In general, control software has hardware dependencies arising from actuators, sensors and various kinds of internal devices, and therefore cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused for various types of robots by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices. OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. At the same time, it is essential to maintain stable operation in the presence of asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software. The virtual fixture is a well-known technique that acts as a "force guide", helping operators perform precise manipulation with a master–slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate the idea of high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not part of the Open Core Control system itself; however, such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information can be transferred to the robot. In this manner, the surgical console generates a reflection force when the operator tries to move out of the pre-defined accessible area during surgery. Results The Open Core Control software was implemented on a surgical master–slave robot and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the surgical robot with a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. In addition, the system showed stable performance in a duration test with network disturbance.
Conclusion In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture were described. The Open Core Control software was implemented on a surgical robot system and showed stable performance in high-level collaboration tasks. The Open Core Control software is being developed to become a widely used platform for surgical robots. Safety issues are essential for the control software of such complex medical devices, and it is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" or IEC 62304. Following these regulations requires a self-test environment, so a test environment is now under development to test various sources of interference in the operating room, such as noise from an electric knife, while considering safety and test-environment regulations such as ISO 13849 and IEC 60508. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to contribute to the field. PMID:20033506
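
    The "force guide" behaviour of the virtual fixture can be pictured as a stiff virtual wall around the pre-defined accessible area. The sketch below assumes a spherical region and a simple linear stiffness, both invented for illustration; in the actual system the accessible area is defined on the navigation system and transferred to the robot over OpenIGTLink.

```python
import numpy as np

REGION_CENTER = np.array([0.0, 0.0, 0.0])   # accessible region center, meters (placeholder)
REGION_RADIUS = 0.02                        # accessible region radius, meters (placeholder)
STIFFNESS = 500.0                           # virtual wall stiffness, N/m (placeholder)

def reflection_force(tool_tip_pos):
    """Force (N) pushing the slave tool tip back inside the accessible region."""
    offset = np.asarray(tool_tip_pos, dtype=float) - REGION_CENTER
    dist = np.linalg.norm(offset)
    if dist <= REGION_RADIUS:
        return np.zeros(3)                  # inside the region: no guiding force
    penetration = dist - REGION_RADIUS
    return -STIFFNESS * penetration * (offset / dist)

print(reflection_force([0.03, 0.0, 0.0]))   # tip 1 cm outside -> ~5 N back along -x
```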

  15. Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process

    PubMed Central

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo

    2014-01-01

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055
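
    As a rough illustration of the virtual-sensor idea (not the authors' calibrated Layer-Recurrent model), a small recurrent network can be trained to map windows of spindle-power samples to wheel-wear and roughness estimates. Everything below, from the layer sizes to the synthetic training data, is a placeholder.

```python
import numpy as np
import tensorflow as tf

WINDOW = 50                                        # power samples per window (placeholder)
rng = np.random.default_rng(2)
power = rng.normal(size=(200, WINDOW, 1))          # fake spindle-power windows
targets = rng.normal(size=(200, 2))                # fake [wheel wear, surface roughness Ra]

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(2),                      # outputs: wear estimate, Ra estimate
])
model.compile(optimizer="adam", loss="mse")
model.fit(power, targets, epochs=2, verbose=0)

print(model.predict(power[:1]))                    # virtual-sensor estimate for one window
```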

  16. Complementarity of ResourceSat-1 AWiFS and Landsat TM/ETM+ sensors

    USGS Publications Warehouse

    Goward, S.N.; Chander, G.; Pagnutti, M.; Marx, A.; Ryan, R.; Thomas, N.; Tetrault, R.

    2012-01-01

    Considerable interest has been given to forming an international collaboration to develop a virtual moderate spatial resolution land observation constellation through aggregation of data sets from comparable national observatories such as the US Landsat, the Indian ResourceSat and related systems. This study explores the complementarity of India's ResourceSat-1 Advanced Wide Field Sensor (AWiFS) with the Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+). The analysis focuses on the comparative radiometry, geometry, and spectral properties of the two sensors. Two applied assessments of these data are also explored to examine the strengths and limitations of these alternate sources of moderate resolution land imagery with specific application domains. There are significant technical differences in these imaging systems including spectral band response, pixel dimensions, swath width, and radiometric resolution which produce differences in observation data sets. None of these differences was found to strongly limit comparable analyses in agricultural and forestry applications. Overall, we found that the AWiFS and Landsat TM/ETM+ imagery are comparable and in some ways complementary, particularly with respect to temporal repeat frequency. We have found that there are limits to our understanding of the AWiFS performance, for example, multi-camera design and stability of radiometric calibration over time, that leave some uncertainty that has been better addressed for Landsat through the Image Assessment System and related cross-sensor calibration studies. Such work still needs to be undertaken for AWiFS and similar observatories that may play roles in the Global Earth Observation System of Systems Land Surface Imaging Constellation.

  17. Calibration of a subcutaneous amperometric glucose sensor implanted for 7 days in diabetic patients. Part 2. Superiority of the one-point calibration method.

    PubMed

    Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K

    2002-08-01

    Calibration, i.e., the real-time transformation of the signal I(t) generated by the glucose sensor at time t into an estimate of glucose concentration G(t), is a key issue in the development of a continuous glucose monitoring system. The objective was to compare two calibration procedures. In the one-point calibration, which assumes that I(o) is negligible, the sensitivity S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists of determining a sensor sensitivity S and a background current I(o) by plotting two values of the sensor signal versus the concomitant blood glucose concentrations; the subsequent estimate of G(t) is given by G(t) = (I(t)-I(o))/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients for 3 days (n = 2) or 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points falling in zones A and B of the Clarke Error Grid were significantly higher when the system was calibrated using the one-point calibration. Using two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
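
    The two procedures compared in the abstract reduce to two short formulas, sketched here directly from the definitions given above (the numerical values are illustrative only):

```python
def one_point_calibration(i_cal, g_cal):
    """Sensitivity from one paired reading, assuming the background current I(o) ~ 0."""
    s = i_cal / g_cal
    return lambda i_t: i_t / s                      # G(t) = I(t) / S

def two_point_calibration(i1, g1, i2, g2):
    """Sensitivity S and background current I(o) from two paired readings."""
    s = (i2 - i1) / (g2 - g1)
    i0 = i1 - s * g1
    return lambda i_t: (i_t - i0) / s               # G(t) = (I(t) - I(o)) / S

# Illustrative numbers only (sensor current in nA, glucose in mmol/L):
g_one = one_point_calibration(i_cal=8.0, g_cal=5.0)
g_two = two_point_calibration(i1=8.0, g1=5.0, i2=14.0, g2=9.0)
print(g_one(11.0), g_two(11.0))   # glucose estimates from the same current reading
```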

  18. Integration of stereotactic ultrasonic data into an interactive image-guided neurosurgical system

    NASA Astrophysics Data System (ADS)

    Shima, Daniel W.; Galloway, Robert L., Jr.

    1998-06-01

    Stereotactic ultrasound can be incorporated into an interactive, image-guided neurosurgical system by using an optical position sensor to define the location of an intraoperative scanner in physical space. A C program has been developed that communicates with the Optotrak™ system developed by Northern Digital Inc. to optically track the three-dimensional position and orientation of a fan-shaped area defined with respect to a hand-held probe (i.e., a virtual B-mode ultrasound fan beam). Volumes of CT and MR head scans from the same patient are registered to a location in physical space using a point-based technique. The coordinates of the virtual fan beam in physical space are continuously calculated and updated on the fly. During each program loop, the CT and MR data volumes are reformatted along the same plane and displayed as two fan-shaped images that correspond to the current physical-space location of the virtual fan beam. When the reformatted preoperative tomographic images are eventually paired with a real-time intraoperative ultrasound image, a neurosurgeon will be able to use the unique information of each imaging modality (e.g., the high resolution and tissue contrast of CT and MR and the real-time capability of ultrasound) in a complementary manner to identify structures in the brain more easily and to guide surgical procedures more effectively.

  19. GreenVMAS: Virtual Organization Based Platform for Heating Greenhouses Using Waste Energy from Power Plants.

    PubMed

    González-Briones, Alfonso; Chamoso, Pablo; Yoe, Hyun; Corchado, Juan M

    2018-03-14

    The gradual depletion of energy resources makes it necessary to optimize their use and to reuse them. Although great advances have already been made in optimizing energy generation processes, many of these processes generate energy that inevitably gets wasted. Clear examples of this are nuclear, thermal and carbon power plants, which lose a large amount of energy that could otherwise be used for different purposes, such as heating greenhouses. The role of GreenVMAS is to maintain the required temperature level in greenhouses by using the waste energy generated by power plants. It incorporates a case-based reasoning system, virtual organizations and algorithms for data analysis and for efficient interaction with sensors and actuators. The system is context aware and scalable because it incorporates an artificial neural network; this means that it can operate correctly even if the number and characteristics of the greenhouses participating in the case study change. The architecture was evaluated empirically and the results show that the user's energy bill is greatly reduced with the implemented system.

  20. GreenVMAS: Virtual Organization Based Platform for Heating Greenhouses Using Waste Energy from Power Plants

    PubMed Central

    Yoe, Hyun

    2018-01-01

    The gradual depletion of energy resources makes it necessary to optimize their use and to reuse them. Although great advances have already been made in optimizing energy generation processes, many of these processes generate energy that inevitably gets wasted. Clear examples of this are nuclear, thermal and carbon power plants, which lose a large amount of energy that could otherwise be used for different purposes, such as heating greenhouses. The role of GreenVMAS is to maintain the required temperature level in greenhouses by using the waste energy generated by power plants. It incorporates a case-based reasoning system, virtual organizations and algorithms for data analysis and for efficient interaction with sensors and actuators. The system is context aware and scalable because it incorporates an artificial neural network; this means that it can operate correctly even if the number and characteristics of the greenhouses participating in the case study change. The architecture was evaluated empirically and the results show that the user’s energy bill is greatly reduced with the implemented system. PMID:29538351

  1. Integrated microelectromechanical gyroscope under shock loads

    NASA Astrophysics Data System (ADS)

    Nesterenko, T. G.; Koleda, A. N.; Barbin, E. S.

    2018-01-01

    The paper presents a new design of a shock-proof two-axis microelectromechanical gyroscope. Without stoppers, a shock load causes contact interaction between the silicon sensor elements. Stoppers were therefore installed in the gyroscope to prevent contact between the electrodes and spring elements and the fixed part of the sensor. The stoppers make contact along a plane, thereby protecting the system from severe contact stresses. The shock resistance of the gyroscope is improved by an increase in its eigenfrequency, at which the contact interaction does not occur. It is shown that a shock load directed along one axis causes virtually no movement of the sensing elements along the crosswise axes. The maximum stresses observed in the proposed gyroscope for any loading direction do not exceed the value allowable for silicon.

  2. In-home virtual reality videogame telerehabilitation in adolescents with hemiplegic cerebral palsy.

    PubMed

    Golomb, Meredith R; McDonald, Brenna C; Warden, Stuart J; Yonkman, Janell; Saykin, Andrew J; Shirley, Bridget; Huber, Meghan; Rabin, Bryan; Abdelbaky, Moustafa; Nwosu, Michelle E; Barkat-Masih, Monica; Burdea, Grigore C

    2010-01-01

    Golomb MR, McDonald BC, Warden SJ, Yonkman J, Saykin AJ, Shirley B, Huber M, Rabin B, AbdelBaky M, Nwosu ME, Barkat-Masih M, Burdea GC. In-home virtual reality videogame telerehabilitation in adolescents with hemiplegic cerebral palsy. To investigate whether in-home remotely monitored virtual reality videogame-based telerehabilitation in adolescents with hemiplegic cerebral palsy can improve hand function and forearm bone health, and demonstrate alterations in motor circuitry activation. A 3-month proof-of-concept pilot study. Virtual reality videogame-based rehabilitation systems were installed in the homes of 3 participants and networked via secure Internet connections to the collaborating engineering school and children's hospital. Adolescents (N=3) with severe hemiplegic cerebral palsy. Participants were asked to exercise the plegic hand 30 minutes a day, 5 days a week using a sensor glove fitted to the plegic hand and attached to a remotely monitored videogame console installed in their home. Games were custom developed, focused on finger movement, and included a screen avatar of the hand. Standardized occupational therapy assessments, remote assessment of finger range of motion (ROM) based on sensor glove readings, assessment of plegic forearm bone health with dual-energy x-ray absorptiometry (DXA) and peripheral quantitative computed tomography (pQCT), and functional magnetic resonance imaging (fMRI) of hand grip task. All 3 adolescents showed improved function of the plegic hand on occupational therapy testing, including increased ability to lift objects, and improved finger ROM based on remote measurements. The 2 adolescents who were most compliant showed improvements in radial bone mineral content and area in the plegic arm. For all 3 adolescents, fMRI during grip task contrasting the plegic and nonplegic hand showed expanded spatial extent of activation at posttreatment relative to baseline in brain motor circuitry (eg, primary motor cortex and cerebellum). Use of remotely monitored virtual reality videogame telerehabilitation appears to produce improved hand function and forearm bone health (as measured by DXA and pQCT) in adolescents with chronic disability who practice regularly. Improved hand function appears to be reflected in functional brain changes. Copyright (c) 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  3. A virtual environment for modeling and testing sensemaking with multisensor information

    NASA Astrophysics Data System (ADS)

    Nicholson, Denise; Bartlett, Kathleen; Hoppenfeld, Robert; Nolan, Margaret; Schatz, Sae

    2014-05-01

    Given today's challenging Irregular Warfare, members of small infantry units must be able to function as highly sensitized perceivers throughout large operational areas. Improved Situation Awareness (SA) in rapidly changing fields of operation may also save the lives of law enforcement personnel and first responders. Critical competencies for these individuals include sociocultural sensemaking, the ability to assess a situation through the perception of essential salient environmental and behavioral cues, and intuitive sensemaking, which allows experts to act with the utmost agility. Intuitive sensemaking and intuitive decision making (IDM), which involve processing information at a subconscious level, have been cited as playing a critical role in saving lives and enabling mission success. This paper discusses the development of a virtual environment for modeling, analysis and human-in-the-loop testing of perception, sensemaking, intuitive sensemaking, decision making (DM), and IDM performance, using state-of-the-art scene simulation and modeled imagery from multi-source systems, under the "Intuition and Implicit Learning" Basic Research Challenge (I2BRC) sponsored by the Office of Naval Research (ONR). We present results from our human systems engineering approach, including 1) development of requirements and test metrics for individual and integrated system components, 2) the system architecture design, 3) images of the prototype virtual environment testing system, and 4) a discussion of the system's current and future testing capabilities. In particular, we examine an Enhanced Interaction Suite testbed to model, test, and analyze the impact of advances in sensor spatial and temporal resolution on a user's intuitive sensemaking and decision making capabilities.

  4. Interreality: A New Paradigm for E-health.

    PubMed

    Riva, Giuseppe

    2009-01-01

    "Interreality" is a personalized immersive e-therapy whose main novelty is a hybrid, closed-loop empowering experience bridging physical and virtual worlds. The main feature of interreality is a twofold link between the virtual and the real world: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through: (1) 3D Shared Virtual Worlds: role-playing experiences in which one or more users interact with one another within a 3D world; (2) Bio and Activity Sensors (From the Real to the Virtual World): They are used to track the emotional/health/activity status of the user and to influence his/her experience in the virtual world (aspect, activity and access); (3) Mobile Internet Appliances (From the Virtual to the Real One): In interreality, the social and individual user activity in the virtual world has a direct link with the users' life through a mobile phone/digital assistant. The different technologies that are involved in the interreality vision and its clinical rationale are addressed and discussed.

  5. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro

    2018-01-01

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of the different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations that allows a new embedded agent model to be deployed in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed. PMID:29301310

  6. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle.

    PubMed

    Barriuso, Alberto L; Villarrubia González, Gabriel; De Paz, Juan F; Lozano, Álvaro; Bajo, Javier

    2018-01-02

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of the different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations that allows a new embedded agent model to be deployed in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor the livestock have been developed.

  7. Latency and User Performance in Virtual Environments and Augmented Reality

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    2009-01-01

    System rendering latency has been recognized by senior researchers, such as Professor Frederick Brooks of UNC (Turing Award 1999), as a major factor limiting the realism and utility of head-referenced display systems. Latency has been shown to reduce the user's sense of immersion within a virtual environment, to disturb user interaction with virtual objects, and to contribute to motion sickness during some simulation tasks. Latency, however, is not just an issue for external display systems, since finite nerve conduction rates and variation in transduction times in the human body's sensors also pose problems for latency management within the nervous system. Some of the phenomena arising from the brain's handling of sensory asynchrony due to latency will be discussed as a prelude to consideration of the effects of latency in interactive displays. The causes and consequences of the erroneous movement that appears in displays due to latency will be illustrated with examples of the impact on user performance provided by several experiments. These experiments will review the generality of user sensitivity to latency when users judge either object or environment stability. Hardware and signal processing countermeasures will also be discussed. In particular, the tuning of a simple extrapolative predictive filter that does not use a dynamic movement model will be presented. Results show that it is possible to adjust this filter so that the appearance of some latencies may be hidden without the introduction of perceptual artifacts such as overshoot. Several effects on user performance will be illustrated by three-dimensional tracking and tracing tasks executed in virtual environments. These experiments demonstrate classic phenomena known from work on manual control and show the need for very responsive systems if they are intended to support precise manipulation. The practical benefits of removing interfering latencies from interactive systems will be emphasized with some classic final examples from surgical telerobotics and human-computer interaction.
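
    The extrapolative predictor mentioned above can be sketched as a first-order extrapolation of the tracked pose over the expected rendering latency; the gain is the tuning parameter discussed in the abstract. The function and numbers below are illustrative, not the experimentally tuned filter.

```python
def predict_pose(prev_sample, curr_sample, latency_s, gain=1.0):
    """First-order extrapolation of one pose channel over the display latency.

    prev_sample, curr_sample: (timestamp_s, value) pairs from the head tracker
    latency_s: end-to-end rendering latency to compensate
    gain: extrapolation gain; too high a value produces visible overshoot
    """
    (t0, x0), (t1, x1) = prev_sample, curr_sample
    velocity = (x1 - x0) / (t1 - t0)
    return x1 + gain * velocity * latency_s

# Example: yaw samples 10 ms apart, compensating ~40 ms of latency.
print(round(predict_pose((0.000, 10.0), (0.010, 10.6), latency_s=0.040), 2))  # 13.0 deg
```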

  8. Performance analysis of routing protocols for IoT

    NASA Astrophysics Data System (ADS)

    Manda, Sridhar; Nalini, N.

    2018-04-01

    The Internet of Things (IoT) is an interdisciplinary collection of technologies used to achieve an effective combination of physical and digital things. With IoT, physical things can have personal virtual identities and participate in distributed computing. Realizing IoT requires sensors suited to the sector in which it is deployed; for instance, in the healthcare domain, IoT needs to integrate with wearable sensors used by patients. As sensor devices produce huge amounts of data, often called big data, efficient routing protocols must be in place. As far as wireless networks are concerned, existing protocols include OLSR, DSR and AODV; this paper also considers the trust-based routing protocol for low-power and lossy networks (TRPL) for IoT. These are widely used wireless routing protocols. As IoT adoption grows, it is essential to investigate routing protocols and evaluate their performance in terms of throughput, end-to-end delay and routing overhead. Such performance insights can support well-informed decisions when integrating wireless networks with IoT. In this paper, we analyze different routing protocols and compare their performance. It is found that AODV showed better performance than the other routing protocols mentioned.

  9. Development of a bio-magnetic measurement system and sensor configuration analysis for rats

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Eun; Kim, In-Seon; Kim, Kiwoong; Lim, Sanghyun; Kwon, Hyukchan; Kang, Chan Seok; Ahn, San; Yu, Kwon Kyu; Lee, Yong-Ho

    2017-04-01

    Magnetoencephalography (MEG) based on superconducting quantum interference devices enables the measurement of very weak magnetic fields (10-1000 fT) generated from the human or animal brain. In this article, we introduce a small MEG system that we developed specifically for use with rats. Our system has the following characteristics: (1) variable distance between the pick-up coil and outer Dewar bottom (~5 mm), (2) small pick-up coil (4 mm) for high spatial resolution, (3) good field sensitivity (45-80 fT/cm/√Hz), (4) a sensor interval that satisfies the Nyquist spatial sampling theorem, and (5) small source localization error for the region to be investigated. To reduce source localization error, it is necessary to establish an optimal sensor layout. To this end, we simulated confidence volumes at each point on a grid on the surface of a virtual rat head. In this simulation, we used locally fitted spheres as model rat heads. This enabled us to consider more realistic volume currents. We constrained the model such that the dipoles could have only four possible orientations: the x- and y-axes from the original coordinates, and two tangentially layered dipoles (local x- and y-axes) in the locally fitted spheres. We considered the confidence volumes according to the sensor layout and the dipole orientations and positions. We then conducted a preliminary test with a 4-channel MEG system prior to manufacturing the multi-channel system. Using the 4-channel MEG system, we measured rat magnetocardiograms. We obtained well-defined P-, QRS-, and T-waves in rats with a maximum value of 15 pT/cm. Finally, we measured auditory evoked fields and steady-state auditory evoked fields with maximum values of 400 fT/cm and 250 fT/cm, respectively.

  10. Virtual containment system for composite flywheels

    NASA Astrophysics Data System (ADS)

    Shiue, Fuh-Wen

    2001-07-01

    There is much interest in advanced composite flywheel systems for use on satellites mainly because of the potential for considerable weight savings associated with combined energy and momentum management. The additional weight of a containment system needed to protect the satellite in the event of a flywheel failure, however, could negate the potential savings. Therefore, the development of a condition monitoring and virtual containment system is essential to ensure the wide acceptance of flywheel batteries for spacecraft applications. A virtual containment system is a near real-time condition monitoring system, plus additional logic to adjust the operating conditions (maximum rotational speed) accordingly when a flaw or fault is detected. Flaws of primary interest in this study are those unique to composite flywheels, such as delamination and debonding of interfaces. Such flaws change the balance state of a flywheel through small, but detectable, motion of the mass center and principal axes of inertia. A proposed monitoring technique determines the existence and the extent of such flaws by a method similar to the influence-coefficient rotor balancing method. Because of the speed-dependence of the imbalance caused by elastic flaws, a normalized imbalance change, which is a direct measure of the flaw size, was defined. To account for the possibility that flaw growth could actually improve the balance state of a rotor, a new concept of accumulated imbalance change was also introduced. Laboratory tests showed the proposed method was able to detect small simulated flaws that result in as little as 2--3 microns of mass center movement. Fracture mechanics concepts were used to evaluate the severity and growth rate of the detected flaw. An interesting discovery that coincided with some experimental observations reported in the literature was the energy release rate reduction with a large crack. This finding indicates a possible stress relief and crack arrest when a circumferential crack grows over certain size. This phenomenon is largely due to crack curvature unique to filament-wound composite flywheels. Several virtual containment strategies were investigated numerically to demonstrate the feasibility of virtual containment systems. Once a flaw is detected during flywheel operation, the maximum operating speed can be reduced to prevent catastrophic failure, achieve a specific design life, and maximize energy storage capacity over the remaining life. A numerical example showed 4--5 times of improvement in cumulative energy storage through lifetime with a virtual containment. A closed-loop speed controller using condition monitoring sensor feedback was investigated numerically to account for possible imperfection of the fracture mechanics model. Finally, an integrated virtual containment system without any complex fracture mechanics analysis was also developed and successfully demonstrated experimentally.

  11. SOA approach to battle command: simulation interoperability

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Self, Mid; Miller, Gordon J.; McDonnell, Joseph S.

    2010-04-01

    NVESD is developing a Sensor Data and Management Services (SDMS) Service Oriented Architecture (SOA) that provides an innovative approach to achieve seamless application functionality across simulation and battle command systems. In 2010, CERDEC will conduct a SDMS Battle Command demonstration that will highlight the SDMS SOA capability to couple simulation applications to existing Battle Command systems. The demonstration will leverage RDECOM MATREX simulation tools and TRADOC Maneuver Support Battle Laboratory Virtual Base Defense Operations Center facilities. The battle command systems are those specific to the operation of a base defense operations center in support of force protection missions. The SDMS SOA consists of four components that will be discussed. An Asset Management Service (AMS) will automatically discover the existence, state, and interface definition required to interact with a named asset (sensor or a sensor platform, a process such as level-1 fusion, or an interface to a sensor or other network endpoint). A Streaming Video Service (SVS) will automatically discover the existence, state, and interfaces required to interact with a named video stream, and abstract the consumers of the video stream from the originating device. A Task Manager Service (TMS) will be used to automatically discover the existence of a named mission task, and will interpret, translate and transmit a mission command for the blue force unit(s) described in a mission order. JC3IEDM data objects, and software development kit (SDK), will be utilized as the basic data object definition for implemented web services.

  12. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    NASA Astrophysics Data System (ADS)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

    Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision highly depends on both the accuracy of the used database and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation can't be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm, as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.

  13. Sensors and Algorithms for an Unmanned Surf-Zone Robot

    DTIC Science & Technology

    2015-12-01

    [DTIC record: the indexed abstract consists of fragments of the report's table of contents and body text. Recoverable topics include data fusion and filtering, virtual potential field (VPF) path planning, and magnetometer calibration, where soft-iron de-calibration (sphere distortion) caused by the proximity of circuit boards and an offset of the sphere center were observed. The fragments also note that global tasks such as path planning, sensor and actuator commands, and external communications are performed using this information, and that Python3 is used as the primary ...]

  14. Simulation of Attacks for Security in Wireless Sensor Network

    PubMed Central

    Diaz, Alvaro; Sanchez, Pablo

    2016-01-01

    The increasing complexity and low-power constraints of current Wireless Sensor Networks (WSN) require efficient methodologies for network simulation and embedded software performance analysis of nodes. In addition, security is also a very important feature that has to be addressed in most WSNs, since they may work with sensitive data and operate in hostile unattended environments. In this paper, a methodology for security analysis of Wireless Sensor Networks is presented. The methodology allows designing attack-aware embedded software/firmware or attack countermeasures to provide security in WSNs. The proposed methodology includes attacker modeling and attack simulation with performance analysis (node’s software execution time and power consumption estimation). After an analysis of different WSN attack types, an attacker model is proposed. This model defines three different types of attackers that can emulate most WSN attacks. In addition, this paper presents a virtual platform that is able to model the node hardware, embedded software and basic wireless channel features. This virtual simulation analyzes the embedded software behavior and node power consumption while it takes into account the network deployment and topology. Additionally, this simulator integrates the previously mentioned attacker model. Thus, the impact of attacks on power consumption and software behavior/execution-time can be analyzed. This provides developers with essential information about the effects that one or multiple attacks could have on the network, helping them to develop more secure WSN systems. This WSN attack simulator is an essential element of the attack-aware embedded software development methodology that is also introduced in this work. PMID:27869710

  15. 3D indoor modeling using a hand-held embedded system with multiple laser range scanners

    NASA Astrophysics Data System (ADS)

    Hu, Shaoxing; Wang, Duhu; Xu, Shike

    2016-10-01

    Accurate three-dimensional perception is a key technology for many engineering applications, including mobile mapping, obstacle detection and virtual reality. In this article, we present a hand-held embedded system designed for constructing 3D representations of structured indoor environments. Different from traditional vehicle-borne mobile mapping methods, the system presented here is capable of efficiently acquiring 3D data while an operator carrying the device traverses the site. It consists of a simultaneous localization and mapping (SLAM) module, a 3D attitude estimation module and a point cloud processing module. The SLAM is based on a scan matching approach using a modern LIDAR system, and the 3D attitude estimate is generated by a navigation filter using inertial sensors. The hardware comprises three 2D time-of-flight laser range finders and an inertial measurement unit (IMU). All the sensors are rigidly mounted on a body frame. The algorithms are developed on the Robot Operating System (ROS) framework. The 3D model is constructed using the Point Cloud Library (PCL). Multiple datasets have shown robust performance of the presented system in indoor scenarios.

  16. Virtual Globes, where we were, are and will be

    NASA Astrophysics Data System (ADS)

    Dehn, J.; Webley, P. W.; Worden, A. K.

    2016-12-01

    Ten years ago, Google Earth was new, and the first "Virtual Globes" session was held at AGU. Only a few of us realized the potential of this technology at the time, but the idea quickly caught on. At that time a virtual globe came in two flavors: either a complex GIS system that was utterly impenetrable for the public, or a more accessible version with limited functionality and layers that was available on a desktop computer with a good internet connection. Google Earth's use of the Keyhole Markup Language opened the door for scientists and the public to share data and visualizations across disciplines and revolutionized how everyone uses geographic data. In the following 10 years, KML became more advanced, virtual globes moved to mobile and handheld platforms, and the Google Earth engine allowed for more complex data sharing among scientists. Virtual globe images went from a rare commodity to being everywhere in our lives, from weather forecasts to our cars and smart-phones, and they shape how we receive and process data. This is a fantastic tool for education and, with newer technologies, can reach the remote corners of the world and developing countries. New and emerging technologies allow for augmented reality to be merged with the globes, and for real-time data integration with sensors built into mobile devices or add-ons. This presentation will follow the history of virtual globes in the geosciences, show how robust technologies can be used in the field and classroom today, and make some suggestions for the future.

  17. Virtual reality-enhanced partial body weight-supported treadmill training poststroke: feasibility and effectiveness in 6 subjects.

    PubMed

    Walker, Martha L; Ringleb, Stacie I; Maihafer, George C; Walker, Robert; Crouch, Jessica R; Van Lunen, Bonnie; Morrison, Steven

    2010-01-01

    Walker ML, Ringleb SI, Maihafer GC, Walker R, Crouch JR, Van Lunen B, Morrison S. Virtual reality-enhanced partial body weight-supported treadmill training poststroke: feasibility and effectiveness in 6 subjects. To determine whether the use of a low-cost virtual reality (VR) system used in conjunction with partial body weight-supported treadmill training (BWSTT) was feasible and effective in improving the walking and balance abilities of patients poststroke. A before-after comparison of a single group with BWSTT intervention. University research laboratory. A convenience sample of 7 adults who were within 1 year poststroke and who had completed traditional rehabilitation but still exhibited gait deficits. Six participants completed the study. Twelve treatment sessions of BWSTT with VR. The VR system generated a virtual environment that showed on a television screen in front of the treadmill to give participants the sensation of walking down a city street. A head-mounted position sensor provided postural feedback. Functional Gait Assessment (FGA) score, Berg Balance Scale (BBS) score, and overground walking speed. One subject dropped out of the study. All other participants made significant improvements in their ability to walk. FGA scores increased from mean of 13.8 to 18. BBS scores increased from mean of 43.8 to 48.8, although a ceiling effect was seen for this test. Overground walking speed increased from mean of .49m/s to .68m/s. A low-cost VR system combined with BWSTT is feasible for improved gait and balance of patients poststroke. Copyright (c) 2010 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  18. Terra Harvest software architecture

    NASA Astrophysics Data System (ADS)

    Humeniuk, Dave; Klawon, Kevin

    2012-06-01

    Under the Terra Harvest Program, the DIA has the objective of developing a universal Controller for the Unattended Ground Sensor (UGS) community. The mission is to define, implement, and thoroughly document an open architecture that universally supports UGS missions, integrating disparate systems, peripherals, etc. The Controller's inherent interoperability with numerous systems enables the integration of both legacy and future UGS System (UGSS) components, while the design's open architecture supports rapid third-party development to ensure operational readiness. The successful accomplishment of these objectives by the program's Phase 3b contractors is demonstrated via integration of the companies' respective plug-'n'-play contributions, which include controllers, various peripherals such as sensors and cameras, and their associated software drivers. In order to independently validate the Terra Harvest architecture, L-3 Nova Engineering, along with its partner, the University of Dayton Research Institute, is developing the Terra Harvest Open Source Environment (THOSE), a Java Virtual Machine (JVM) running on an embedded Linux Operating System. The Use Cases on which the software is developed support the full range of UGS operational scenarios such as remote sensor triggering, image capture, and data exfiltration. The Team is additionally developing an ARM microprocessor-based evaluation platform that is both energy-efficient and operationally flexible. The paper describes the overall THOSE architecture, as well as the design decisions for some of the key software components. The development process for THOSE is discussed as well.

  19. OPTICAL FIBER SENSOR TECHNOLOGIES FOR EFFICIENT AND ECONOMICAL OIL RECOVERY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anbo Wang; Kristie L. Cooper; Gary R. Pickrell

    2003-06-01

    Efficient recovery of petroleum reserves from existing oil wells has been proven to be difficult due to the lack of robust instrumentation that can accurately and reliably monitor processes in the downhole environment. Commercially available sensors for measurement of pressure, temperature, and fluid flow exhibit shortened lifetimes in the harsh downhole conditions, which are characterized by high pressures (up to 20 kpsi), temperatures up to 250 °C, and exposure to chemically reactive fluids. Development of robust sensors that deliver continuous, real-time data on reservoir performance and petroleum flow pathways will facilitate application of advanced recovery technologies, including horizontal and multilateral wells. This is the final report for the four-year program "Optical Fiber Sensor Technologies for Efficient and Economical Oil Recovery", funded by the National Petroleum Technology Office of the U.S. Department of Energy, and performed by the Center for Photonics Technology of the Bradley Department of Electrical and Computer Engineering at Virginia Tech from October 1, 1999 to March 31, 2003. The main objective of this research program was to develop cost-effective, reliable optical fiber sensor instrumentation for real-time monitoring of various key parameters crucial to efficient and economical oil production. During the program, optical fiber sensors were demonstrated for the measurement of temperature, pressure, flow, and acoustic waves, including three successful field tests in the Chevron/Texaco oil fields in Coalinga, California, and at the world-class oil flow simulation facilities in Tulsa, Oklahoma. Research efforts included the design and fabrication of sensor probes, development of signal processing algorithms, construction of test systems, development and testing of strategies for the protection of optical fibers and sensors in the downhole environment, development of remote monitoring capabilities allowing real-time monitoring of the field test data from virtually anywhere in the world, and development of novel data processing techniques. Comprehensive testing was performed to systematically evaluate the performance of the fiber optic sensor systems in both lab and field environments.

  20. Automation and Robotics for Space-Based Systems, 1991

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II (Editor)

    1992-01-01

    The purpose of this in-house workshop was to assess the state-of-the-art of automation and robotics for space operations from an LaRC perspective and to identify areas of opportunity for future research. Over half of the presentations came from the Automation Technology Branch, covering telerobotic control, extravehicular activity (EVA) and intra-vehicular activity (IVA) robotics, hand controllers for teleoperation, sensors, neural networks, and automated structural assembly, all applied to space missions. Other talks covered the Remote Manipulator System (RMS) active damping augmentation, space crane work, modeling, simulation, and control of large, flexible space manipulators, and virtual passive controller designs for space robots.

  1. Virtual Sensors: Using Data Mining to Efficiently Estimate Spectra

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok; Oza, Nikunj; Stroeve, Julienne

    2004-01-01

    Detecting clouds within a satellite image is essential for retrieving surface geophysical parameters, such as albedo and temperature, from optical and thermal imagery because the retrieval methods tend to be valid for clear skies only. Thus, routine satellite data processing requires reliable automated cloud detection algorithms that are applicable to many surface types. Unfortunately, cloud detection over snow and ice is difficult due to the lack of spectral contrast between clouds and snow. Snow and clouds are both highly reflective in the visible wavelengths and often show little contrast in the thermal infrared. However, at 1.6 microns, the spectral signatures of snow and clouds differ enough to allow improved snow/ice/cloud discrimination. The recent Terra and Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have a channel (channel 6) at 1.6 microns. Presently the most comprehensive, long-term information on surface albedo and temperature over snow- and ice-covered surfaces comes from the Advanced Very High Resolution Radiometer (AVHRR) sensor that has been providing imagery since July 1981. The earlier AVHRR sensors (e.g. AVHRR/2) did not, however, have a channel designed for discriminating clouds from snow, such as the 1.6 micron channel available on the more recent AVHRR/3 or the MODIS sensors. In the absence of the 1.6 micron channel, the AVHRR Polar Pathfinder (APP) product performs cloud detection using a combination of time-series analysis and multispectral threshold tests based on the satellite's measuring channels to produce a cloud mask. The method has been found to work reasonably well over sea ice, but not so well over the ice sheets. Thus, improving the cloud mask in the APP dataset would be extremely helpful toward increasing the accuracy of the albedo and temperature retrievals, as well as extending the time-series of albedo and temperature retrievals from the more recent sensors to the historical ones. In this work, we use data mining methods to construct a model of MODIS channel 6 as a function of other channels that are common to both MODIS and AVHRR. The idea is to use the model to generate the equivalent of MODIS channel 6 for AVHRR as a function of the AVHRR equivalents to MODIS channels. We call this a Virtual Sensor because it predicts unmeasured spectra. The goal is to use this virtual channel 6 to yield a cloud mask superior to what is currently used in APP. Our results show that several data mining methods such as multilayer perceptrons (MLPs), ensemble methods (e.g., bagging), and kernel methods (e.g., support vector machines) generate channel 6 for unseen MODIS images with high accuracy. Because the true channel 6 is not available for AVHRR images, we qualitatively assess the virtual channel 6 for several AVHRR images.
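
    As a rough illustration of the approach described above, the sketch below trains a multilayer perceptron (one of the methods the authors list) to predict a missing channel from channels shared by both sensors. The five predictor channels, the synthetic reflectances, and the network size are assumptions made purely for illustration, not the study's actual data or configuration.

        # Sketch: learn a "virtual" channel from channels also present on the older sensor.
        # Synthetic data stand in for co-registered MODIS pixels; real work would use
        # calibrated reflectances/brightness temperatures from MODIS granules.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(0)
        n_pixels = 20_000
        # Hypothetical predictor channels (MODIS/AVHRR-common band analogues).
        X = rng.uniform(0.0, 1.0, size=(n_pixels, 5))
        # Pretend the target channel is a smooth nonlinear function of the others plus noise.
        y = (0.4 * X[:, 0] - 0.3 * X[:, 1]
             + 0.2 * np.tanh(3 * X[:, 2] - 1.5)
             + 0.05 * rng.normal(size=n_pixels))

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
        virtual_channel6 = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
        virtual_channel6.fit(X_train, y_train)

        pred = virtual_channel6.predict(X_test)
        print("MAE of virtual channel 6:", mean_absolute_error(y_test, pred))
        # For AVHRR scenes, the same model would be applied to the AVHRR analogues of
        # these channels to synthesize the missing 1.6-micron band for cloud masking.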

  2. Medicine in long duration space exploration: the role of virtual reality and broad bandwidth telecommunications networks

    NASA Technical Reports Server (NTRS)

    Ross, M. D.

    2001-01-01

    Safety of astronauts during long-term space exploration is a priority for NASA. This paper describes efforts to produce Earth-based models for providing expert medical advice when unforeseen medical emergencies occur on spacecraft. These models are Virtual Collaborative Clinics that reach into remote sites using telecommunications and emerging stereo-imaging and sensor technologies. © 2001 Elsevier Science Ltd. All rights reserved.

  3. Virtual-Lattice Based Intrusion Detection Algorithm over Actuator-Assisted Underwater Wireless Sensor Networks

    PubMed Central

    Yan, Jing; Li, Xiaolei; Luo, Xiaoyuan; Guan, Xinping

    2017-01-01

    Due to the lack of a physical line of defense, intrusion detection becomes one of the key issues in applications of underwater wireless sensor networks (UWSNs), especially when confidentiality is of prime importance. However, the resource constraints of UWSNs, such as sparse deployment and limited energy, make intrusion detection a challenging issue. This paper considers a virtual-lattice-based approach to the intrusion detection problem in UWSNs. Different from most existing works, the UWSNs consist of two kinds of nodes, i.e., sensor nodes (SNs), which cannot move autonomously, and actuator nodes (ANs), which can move autonomously according to the performance requirements. With the cooperation of SNs and ANs, the intruder detection probability is defined. Then, a virtual lattice-based monitor (VLM) algorithm is proposed to detect the intruder. In order to reduce the redundancy of communication links and improve detection probability, an optimal and coordinative lattice-based monitor patrolling (OCLMP) algorithm is further provided for UWSNs, wherein an equal-price search strategy is given for ANs to find the shortest patrolling path. Under the VLM and OCLMP algorithms, the detection probabilities are calculated, while the topology connectivity can be guaranteed. Finally, simulation results are presented to show that the proposed method can improve detection accuracy and reduce energy consumption compared with conventional methods. PMID:28531127

  4. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites

    PubMed Central

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-01-01

    This paper attempts to construct a resilient sensor network model with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to sensor failures or to data missing due to satellite operations. In the proposed model, the missing data of a sensor are interpolated from other, associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues its diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures and observations suspended because of hardware faults or technical reasons. PMID:27092508

  5. Resilient Sensor Networks with Spatiotemporal Interpolation of Missing Sensors: An Example of Space Weather Forecasting by Multiple Satellites.

    PubMed

    Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru

    2016-04-15

    This paper attempts to construct a resilient sensor network model with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to sensor failures or to data missing due to satellite operations. In the proposed model, the missing data of a sensor are interpolated from other, associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues its diagnosis by replacing faulted sensors with virtual ones. The demonstrations showed that the proposed model is resilient against sensor failures and observations suspended because of hardware faults or technical reasons.
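
    The following minimal sketch illustrates the general idea of interpolating a failed sensor from its associated sensors; it uses a plain ridge regression on synthetic signals and is not the dynamic relational network of the paper.

        # Sketch: replace a failed sensor with a "virtual" one interpolated from
        # associated sensors. The three-sensor layout and the ridge model are
        # illustrative assumptions.
        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(1)
        t = np.linspace(0, 40, 2000)
        # Hypothetical correlated observations from three associated sensors.
        s1 = np.sin(0.5 * t) + 0.05 * rng.normal(size=t.size)
        s2 = np.sin(0.5 * t + 0.3) + 0.05 * rng.normal(size=t.size)
        s3 = 0.8 * np.sin(0.5 * t - 0.2) + 0.05 * rng.normal(size=t.size)

        # Train on the period when all sensors were healthy.
        healthy = slice(0, 1500)
        model = Ridge(alpha=1.0).fit(np.column_stack([s2[healthy], s3[healthy]]), s1[healthy])

        # Later, sensor 1 fails: interpolate its signal from the sensors still online.
        failed = slice(1500, 2000)
        virtual_s1 = model.predict(np.column_stack([s2[failed], s3[failed]]))
        print("max interpolation error:", np.max(np.abs(virtual_s1 - s1[failed])))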

  6. Hybrid Reality Lab Capabilities - Video 2

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco J.; Noyes, Matthew

    2016-01-01

    Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.), and for other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), then the information used to create the virtual scenes can be old (i.e., visualized long after physical objects have moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g. camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, the individuals begin to interact with all the objects in the scene as if they were real-life objects. The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in the virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video associated with this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond standard Virtual Reality or Augmented Reality.

  7. Communication Architecture in Mixed-Reality Simulations of Unmanned Systems

    PubMed Central

    2018-01-01

    Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations where both virtual and physical entities can coexist and interact have been shown to be beneficial for development, testing, and verification of such systems. This paper deals with the problems of designing a certain communication subsystem for such highly desirable realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, or message management, are specified prior to designing an appropriate solution. Then, a suitable architecture of this communication subsystem is proposed together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which support the architecture’s viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture. PMID:29538290

  8. Communication Architecture in Mixed-Reality Simulations of Unmanned Systems.

    PubMed

    Selecký, Martin; Faigl, Jan; Rollo, Milan

    2018-03-14

    Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations where both virtual and physical entities can coexist and interact have been shown to be beneficial for development, testing, and verification of such systems. This paper deals with the problems of designing a certain communication subsystem for such highly desirable realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, or message management, are specified prior to designing an appropriate solution. Then, a suitable architecture of this communication subsystem is proposed together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which support the architecture's viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture.

  9. Eglin virtual range database for hardware-in-the-loop testing

    NASA Astrophysics Data System (ADS)

    Talele, Sunjay E.; Pickard, J. W., Jr.; Owens, Monte A.; Foster, Joseph; Watson, John S.; Amick, Mary Amenda; Anthony, Kenneth

    1998-07-01

    Realistic backgrounds are necessary to support high-fidelity hardware-in-the-loop testing. Advanced avionics and weapon system sensors are driving the requirement for higher resolution imagery. The model-test-model philosophy being promoted by the T&E community is resulting in the need for backgrounds that are realistic or virtual representations of actual test areas. Combined, these requirements led to a major upgrade of the terrain database used for hardware-in-the-loop testing at the Guided Weapons Evaluation Facility (GWEF) at Eglin Air Force Base, Florida. This paper will describe the process used to generate the high-resolution (1-foot) database of ten sites totaling over 20 square kilometers of the Eglin range. This process involved generating digital elevation maps from stereo aerial imagery and classifying ground cover materials using their spectral content. These databases were then optimized for real-time operation at 90 Hz.

  10. FVMS: A novel SiL approach on the evaluation of controllers for autonomous MAV

    NASA Astrophysics Data System (ADS)

    Sampaio, Rafael C. B.; Becker, Marcelo; Siqueira, Adriano A. G.; Freschi, Leonardo W.; Montanher, Marcelo P.

    The originality of this work is to propose a novel SiL (Software-in-the-Loop) platform using Microsoft Flight Simulator (MSFS) to assist control design for the stabilization problem found in the AscTec Pelican platform. The Aerial Robots Team (USP/EESC/LabRoM/ART) has developed custom C++/C# software named FVMS (Flight Variables Management System) that interfaces the communication between the virtual Pelican and the control algorithms, allowing the control designer to run full closed-loop control algorithms in real time. Emulation of embedded sensors, as well as the possibility of integrating OpenCV optical flow algorithms with a virtual downward-facing camera, makes the SiL even more reliable. Beyond strictly numerical analysis, the proposed SiL platform offers a unique experience, providing both dynamic and graphical responses simultaneously. The performance of the SiL algorithms is presented and discussed.

  11. The effects of parameter variation on MSET models of the Crystal River-3 feedwater flow system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miron, A.

    1998-04-01

    In this paper we further develop the results reported in Reference 1 to include a systematic study of the effects of varying MSET models and model parameters for the Crystal River-3 (CR) feedwater flow system. The study used archived CR process computer files from November 1-December 15, 1993 that were provided by Florida Power Corporation engineers Fairman Bockhorst and Brook Julias. The results support the conclusion that an optimal MSET model, properly trained and deriving its inputs in real time from no more than 25 of the sensor signals normally provided to a PWR plant process computer, should be able to reliably detect anomalous variations in the feedwater flow venturis of less than 0.1%, and in the absence of a venturi sensor signal should be able to generate a virtual signal that is within 0.1% of the correct value of the missing signal.
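
    The sketch below shows a simplified MSET-style auto-associative estimator of the kind alluded to above: a memory matrix of healthy plant states reconstructs the expected value of each sensor, so a missing venturi signal can be replaced by its estimate. The Gaussian similarity operator, regularization, and toy data are illustrative assumptions and do not reproduce the study's actual MSET formulation or tuning.

        # Simplified MSET-style estimator: reconstruct every sensor from a memory
        # matrix of historical "healthy" states; large residuals flag anomalies and
        # the reconstruction serves as a virtual signal for a missing sensor.
        import numpy as np

        def similarity(A, B, h=1.0):
            """Gaussian similarity between columns of A (n, p) and columns of B (n, q)."""
            d2 = ((A[:, :, None] - B[:, None, :]) ** 2).sum(axis=0)
            return np.exp(-d2 / (2.0 * h ** 2))

        def mset_estimate(D, x, h=1.0):
            """Reconstruct observation x (n_sensors,) from memory matrix D (n_sensors, m)."""
            G = similarity(D, D, h)                       # (m, m) exemplar-to-exemplar similarity
            a = similarity(D, x[:, None], h)              # (m, 1) exemplar-to-observation similarity
            w = np.linalg.solve(G + 1e-6 * np.eye(G.shape[0]), a)
            return (D @ w).ravel()

        rng = np.random.default_rng(2)
        # Toy memory of 50 healthy states for 4 correlated sensors (arbitrary units).
        base = rng.uniform(0.4, 0.6, size=(1, 50))
        D = np.vstack([base + 0.01 * rng.normal(size=(1, 50)) for _ in range(4)])

        x = D[:, 10] + 0.005 * rng.normal(size=4)   # new healthy observation
        x_hat = mset_estimate(D, x, h=0.05)         # x_hat[i] is the "virtual signal" for sensor i
        print("residuals:", x - x_hat)              # large residuals would flag anomalies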

  12. A software defined RTU multi-protocol automatic adaptation data transmission method

    NASA Astrophysics Data System (ADS)

    Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng

    2018-02-01

    The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often use different communication protocols in the application layer, which makes information analysis and communication networking difficult. Therefore, we introduce the idea of software-defined hardware, abstract the common features of mainstream RTU application-layer communication protocols, and propose a unified common protocol model. The various application-layer protocol algorithms are then modularized according to this model. The executable code of each algorithm is referenced through virtual functions and stored in the flash memory of the embedded CPU to form the protocol stack. Driven by the configuration commands that initialize the RTU communication system, the stack can dynamically assemble and load the various application-layer communication protocols of the RTU and complete the efficient transport of sensor data from the RTU to the central station, while the data acquisition protocols of the sensors and the various external communication terminals remain unchanged.
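
    A minimal sketch of the software-defined idea, assuming a Python registry in place of the embedded implementation: application-layer parsers are registered as interchangeable modules and selected at run time by a configuration command. The protocol names and frame layouts below are invented for illustration only.

        # Sketch: a run-time protocol registry so new application-layer parsers can be
        # added without touching the acquisition/transport code.
        from typing import Callable, Dict

        PROTOCOLS: Dict[str, Callable[[bytes], dict]] = {}

        def register(name: str):
            """Decorator that adds a parser to the protocol-stack registry."""
            def wrap(parser: Callable[[bytes], dict]):
                PROTOCOLS[name] = parser
                return parser
            return wrap

        @register("hydro-a")
        def parse_hydro_a(frame: bytes) -> dict:
            # Hypothetical layout: 2-byte station id, 2-byte water level (cm).
            return {"station": int.from_bytes(frame[0:2], "big"),
                    "level_cm": int.from_bytes(frame[2:4], "big")}

        @register("hydro-b")
        def parse_hydro_b(frame: bytes) -> dict:
            # Hypothetical layout: ASCII "station,level" record.
            station, level = frame.decode("ascii").split(",")
            return {"station": int(station), "level_cm": int(level)}

        def handle_frame(configured_protocol: str, frame: bytes) -> dict:
            """Dispatch an incoming frame to whichever protocol the RTU was configured with."""
            return PROTOCOLS[configured_protocol](frame)

        print(handle_frame("hydro-a", b"\x00\x07\x01\x2c"))
        print(handle_frame("hydro-b", b"7,300"))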

  13. A Model-Based Approach for Bridging Virtual and Physical Sensor Nodes in a Hybrid Simulation Framework

    PubMed Central

    Mozumdar, Mohammad; Song, Zhen Yu; Lavagno, Luciano; Sangiovanni-Vincentelli, Alberto L.

    2014-01-01

    The Model-Based Design (MBD) approach is a popular way to speed up application development for embedded systems: it uses high-level abstractions to capture functional requirements in an executable manner and automates implementation code generation. Wireless Sensor Networks (WSNs) are an emerging and very promising application area for embedded systems. However, this area lacks tools that would allow an application developer to model a WSN application using high-level abstractions, simulate it mapped to a multi-node scenario for functional analysis, and finally use the refined model to automatically generate code for different WSN platforms. Motivated by this idea, in this paper we present a hybrid simulation framework that not only follows the MBD approach for WSN application development, but also interconnects a simulated sub-network with a physical sub-network and then allows one to co-simulate them, which is also known as Hardware-In-the-Loop (HIL) simulation. PMID:24960083

  14. Providing haptic feedback in robot-assisted minimally invasive surgery: a direct optical force-sensing solution for haptic rendering of deformable bodies.

    PubMed

    Ehrampoosh, Shervin; Dave, Mohit; Kia, Michael A; Rablau, Corneliu; Zadeh, Mehrdad H

    2013-01-01

    This paper presents an enhanced haptic-enabled master-slave teleoperation system which can be used to provide force feedback to surgeons in minimally invasive surgery (MIS). One of the research goals was to develop a combined-control architecture framework that included both direct force reflection (DFR) and position-error-based (PEB) control strategies. To achieve this goal, it was essential to measure accurately the direct contact forces between deformable bodies and a robotic tool tip. To measure the forces at a surgical tool tip and enhance the performance of the teleoperation system, an optical force sensor was designed, prototyped, and added to a robot manipulator. The enhanced teleoperation architecture was formulated by developing mathematical models for the optical force sensor, the extended slave robot manipulator, and the combined-control strategy. Human factor studies were also conducted to (a) examine experimentally the performance of the enhanced teleoperation system with the optical force sensor, and (b) study human haptic perception during the identification of remote object deformability. The first experiment was carried out to discriminate deformability of objects when human subjects were in direct contact with deformable objects by means of a laparoscopic tool. The control parameters were then tuned based on the results of this experiment using a gain-scheduling method. The second experiment was conducted to study the effectiveness of the force feedback provided through the enhanced teleoperation system. The results show that the force feedback increased the ability of subjects to correctly identify materials of different deformable types. In addition, the virtual force feedback provided by the teleoperation system comes close to the real force feedback experienced in direct MIS. The experimental results provide design guidelines for choosing and validating the control architecture and the optical force sensor.
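
    The sketch below illustrates, in one dimension, how a combined controller might blend direct force reflection with a position-error term, in the spirit of the DFR/PEB architecture described above; the gains, blend factor, and toy trajectories are assumptions, not the authors' tuned parameters.

        # Sketch: force rendered at the master blends the directly reflected tool-tip
        # force (DFR) with a position-error-based term (PEB). One axis, toy signals.
        import numpy as np

        def haptic_command(f_sensed, x_master, x_slave, v_master, v_slave,
                           alpha=0.7, kp=200.0, kd=5.0):
            """Force (N) rendered at the master device for one axis."""
            dfr_term = f_sensed                                   # direct force reflection
            peb_term = kp * (x_master - x_slave) + kd * (v_master - v_slave)
            return alpha * dfr_term + (1.0 - alpha) * peb_term

        # Toy closed-loop step: the slave lags the master while pressing a soft tissue.
        t = np.linspace(0.0, 1.0, 200)
        x_m = 0.02 * np.minimum(t / 0.5, 1.0)                     # master moves 20 mm then holds
        x_s = 0.02 * np.minimum((t - 0.05).clip(0) / 0.5, 1.0)    # slave lags by 50 ms
        v_m = np.gradient(x_m, t)
        v_s = np.gradient(x_s, t)
        f_tip = 80.0 * x_s                                        # hypothetical tissue stiffness (N/m scaled)
        f_cmd = haptic_command(f_tip, x_m, x_s, v_m, v_s)
        print("peak commanded force:", float(f_cmd.max()), "N")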

  15. Integration of the Shuttle RMS/CBM Positioning Virtual Environment Simulation

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D.

    1996-01-01

    Constructing the International Space Station, or other structures, in space presents a number of problems. In particular, payload restrictions for the Space Shuttle and other launch mechanisms prohibit assembly of large space-based structures on Earth. Instead, a number of smaller modules must be boosted into orbit separately and then assembled to form the final structure. The assembly process is difficult, as docking interfaces such as Common Berthing Mechanisms (CBMs) must be precisely positioned relative to each other to be within the "capture envelope" (approximately +/- 1 inch and +/- 0.3 degrees from the nominal position) and attach properly. In the case of the Space Station, the docking mechanisms are to be positioned robotically by an astronaut using the 55-foot-long Remote Manipulator System (RMS) robot arm. Unfortunately, direct visual or video observation of the placement process is difficult or impossible in many scenarios. One method that has been tested for aligning the CBMs uses a boresighted camera mounted on one CBM to view a standard target on the opposing CBM. While this method might be sufficient to achieve proper positioning with considerable effort, it does not provide a high level of confidence that the mechanisms have been placed within capture range of each other. It also does nothing to address the risk of inadvertent contact between the CBMs, which could result in RMS control software errors. In general, constraining the operator to a single viewpoint with few, if any, depth cues makes the task much more difficult than it would be if the target could be viewed in three-dimensional space from various viewpoints. The actual work area could be viewed by an astronaut during EVA; however, it would be extremely impractical to have an astronaut control the RMS while spacewalking. On the other hand, a view of the RMS and CBMs to be positioned in a virtual environment aboard the Space Shuttle orbiter or Space Station could provide similar benefits more safely and conveniently with little additional cost. In order to render and view the RMS and CBMs in a virtual world, the position and orientation of the end effector in three-dimensional space must be known with a high degree of accuracy. A precision video alignment sensor has been developed which can determine the position and orientation of the controlled element relative to the target CBM within approximately one-sixteenth inch and 0.07 angular degrees. Such a sensor could replace or augment the boresighted camera mentioned above. The computer system used to render the virtual world and the position tracking systems which might be used to monitor the user's movements (in order to adjust the viewpoint in virtual space) are small enough to carry to orbit. Thus, such a system would be feasible for use in constructing structures in space.

  16. Natural locomotion based on a reduced set of inertial sensors: Decoupling body and head directions indoors

    PubMed Central

    Diaz-Estrella, Antonio; Reyes-Lecuona, Arcadio; Langley, Alyson; Brown, Michael; Sharples, Sarah

    2018-01-01

    Inertial sensors offer the potential for integration into wireless virtual reality systems that allow users to walk freely through virtual environments. However, owing to drift errors, inertial sensors cannot accurately estimate head and body orientations in the long run, and when walking indoors, this error cannot be corrected by magnetometers, due to the magnetic field distortion created by ferromagnetic materials present in buildings. This paper proposes a technique, called EHBD (Equalization of Head and Body Directions), to address this problem using two head- and shoulder-located magnetometers. Due to their proximity, their distortions are assumed to be similar, and the magnetometer measurements are used to detect when the user is looking straight ahead. Then, the system corrects the discrepancies between the estimated directions of the head and the shoulder, which are provided by gyroscopes and consequently are affected by drift errors. An experiment is conducted to evaluate the performance of this technique in two tasks (navigation and navigation plus exploration) and using two different locomotion techniques: (1) gaze-directed mode (GD), in which the walking direction is forced to be the same as the head direction, and (2) decoupled direction mode (DD), in which the walking direction can be different from the viewing direction. The results show that both locomotion modes match the target path similarly well during the navigation task, while DD's path matches the target path more closely than GD's in the navigation-plus-exploration task. As expected, these results validate the EHBD technique, especially when different walking and viewing directions are allowed in the navigation-plus-exploration task. While the proposed method does not reach the accuracy of optical tracking (ideal case), it is an acceptable and satisfactory solution for users and is much more compact, portable and economical. PMID:29621298
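
    A minimal sketch of the EHBD idea, under the assumption that the head and shoulder magnetometers see the same indoor distortion: whenever their headings agree, the user is taken to be looking straight ahead and the drifting gyro-integrated head yaw is re-aligned to the body yaw. The thresholds and the toy drift model are illustrative only.

        # Sketch: equalize head and body directions when the two magnetometers agree.
        import numpy as np

        def equalize_head_body(head_yaw_gyro, body_yaw_gyro,
                               head_mag_heading, body_mag_heading,
                               agree_deg=3.0):
            """Return drift-corrected head yaw (all angles in degrees, same length)."""
            corrected = np.array(head_yaw_gyro, dtype=float)
            offset = 0.0
            for i in range(len(corrected)):
                # Magnetometers are distorted indoors, but similarly distorted at head
                # and shoulder, so their *difference* still indicates "looking forward".
                if abs(head_mag_heading[i] - body_mag_heading[i]) < agree_deg:
                    # At this instant head and body should share the same direction.
                    offset = head_yaw_gyro[i] - body_yaw_gyro[i]
                corrected[i] = head_yaw_gyro[i] - offset
            return corrected

        # Toy walk: body heading constant, head gyro drifts at 2 deg/s, user glances forward often.
        t = np.arange(0.0, 30.0, 0.1)
        body_yaw = np.zeros_like(t)
        true_head = 20.0 * np.sin(0.5 * t) * (np.sin(0.5 * t) > 0)   # occasional look-around
        head_yaw_gyro = true_head + 2.0 * t                          # gyro drift
        distortion = 15.0                                            # same magnetic bias at head and shoulder
        head_mag = true_head + distortion
        body_mag = body_yaw + distortion
        corrected = equalize_head_body(head_yaw_gyro, body_yaw, head_mag, body_mag)
        print("max residual error after correction:",
              float(np.abs(corrected - true_head).max()), "deg")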

  17. A Self-Referenced Optical Intensity Sensor Network Using POFBGs for Biomedical Applications

    PubMed Central

    Moraleda, Alberto Tapetado; Montero, David Sánchez; Webb, David J.; García, Carmen Vázquez

    2014-01-01

    This work bridges the gap between the remote interrogation of multiple optical sensors and the advantages of using inherently biocompatible low-cost polymer optical fiber (POF)-based photonic sensing. A novel hybrid sensor network combining both silica fiber Bragg gratings (FBG) and polymer FBGs (POFBG) is analyzed. The topology is compatible with WDM networks so multiple remote sensors can be addressed providing high scalability. A central monitoring unit with virtual data processing is implemented, which could be remotely located up to units of km away. The feasibility of the proposed solution for potential medical environments and biomedical applications is shown. PMID:25615736

  18. A self-referenced optical intensity sensor network using POFBGs for biomedical applications.

    PubMed

    Tapetado Moraleda, Alberto; Sánchez Montero, David; Webb, David J; Vázquez García, Carmen

    2014-12-12

    This work bridges the gap between the remote interrogation of multiple optical sensors and the advantages of using inherently biocompatible low-cost polymer optical fiber (POF)-based photonic sensing. A novel hybrid sensor network combining both silica fiber Bragg gratings (FBG) and polymer FBGs (POFBG) is analyzed. The topology is compatible with WDM networks so multiple remote sensors can be addressed providing high scalability. A central monitoring unit with virtual data processing is implemented, which could be remotely located up to units of km away. The feasibility of the proposed solution for potential medical environments and biomedical applications is shown.

  19. Highly stretchable and wearable graphene strain sensors with controllable sensitivity for human motion monitoring.

    PubMed

    Park, Jung Jin; Hyun, Woo Jin; Mun, Sung Cik; Park, Yong Tae; Park, O Ok

    2015-03-25

    Because of their outstanding electrical and mechanical properties, graphene strain sensors have attracted extensive attention for electronic applications in virtual reality, robotics, medical diagnostics, and healthcare. Although several strain sensors based on graphene have been reported, the stretchability and sensitivity of these sensors remain limited, and also there is a pressing need to develop a practical fabrication process. This paper reports the fabrication and characterization of new types of graphene strain sensors based on stretchable yarns. Highly stretchable, sensitive, and wearable sensors are realized by a layer-by-layer assembly method that is simple, low-cost, scalable, and solution-processable. Because of the yarn structures, these sensors exhibit high stretchability (up to 150%) and versatility, and can detect both large- and small-scale human motions. For this study, wearable electronics are fabricated with implanted sensors that can monitor diverse human motions, including joint movement, phonation, swallowing, and breathing.

  20. Development of inferential sensors for real-time quality control of water-level data for the Everglades Depth Estimation Network

    USGS Publications Warehouse

    Daamen, Ruby C.; Edwin A. Roehl, Jr.; Conrads, Paul

    2010-01-01

    A technology often used for industrial applications is the “inferential sensor.” Rather than installing a redundant sensor to measure a process, such as an additional water-level gage, an inferential sensor, or virtual sensor, is developed that estimates the process measured by the physical sensor. The advantage of an inferential sensor is that it provides a redundant signal to the sensor in the field but without exposure to environmental threats. In the event that a gage does malfunction, the inferential sensor provides an estimate for the period of missing data. The inferential sensor also can be used in the quality assurance and quality control of the data. Inferential sensors for gages in the EDEN network are currently (2010) under development. The inferential sensors will be automated so that the real-time EDEN data will continuously be compared to the inferential sensor signal, and digital reports of the status of the real-time data will be sent periodically to the appropriate support personnel. The development and application of inferential sensors is easily transferable to other real-time hydrologic monitoring networks.
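
    The sketch below gives one simple way such an inferential sensor could be built, assuming a linear regression on two neighboring gages and a fixed residual tolerance; the gage names, model form, and 0.05 ft threshold are illustrative and do not reproduce the EDEN implementation.

        # Sketch: an inferential (virtual) water-level sensor estimated from nearby
        # gages; the residual drives an automated quality-control flag.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n = 1000
        neighbor1 = 2.0 + 0.3 * np.sin(np.linspace(0, 20, n)) + 0.01 * rng.normal(size=n)
        neighbor2 = 1.5 + 0.3 * np.sin(np.linspace(0, 20, n) - 0.4) + 0.01 * rng.normal(size=n)
        target = 0.6 * neighbor1 + 0.4 * neighbor2 + 0.02 * rng.normal(size=n)   # gage to back up

        # Fit the inferential sensor on a historical, quality-assured period.
        X_hist = np.column_stack([neighbor1[:800], neighbor2[:800]])
        model = LinearRegression().fit(X_hist, target[:800])

        # Real-time use: compare incoming measurements with the virtual signal.
        X_now = np.column_stack([neighbor1[800:], neighbor2[800:]])
        virtual = model.predict(X_now)
        measured = target[800:].copy()
        measured[150:] += 0.5                      # simulate a gage malfunction / datum shift
        suspect = np.abs(measured - virtual) > 0.05
        print("flagged samples:", int(suspect.sum()), "of", suspect.size)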

  1. Molecular Rift: Virtual Reality for Drug Designers.

    PubMed

    Norrby, Magnus; Grebner, Christoph; Eriksson, Joakim; Boström, Jonas

    2015-11-23

    Recent advances in interaction design have created new ways to use computers. One example is the ability to create enhanced 3D environments that simulate physical presence in the real world--a virtual reality. This is relevant to drug discovery since molecular models are frequently used to obtain deeper understandings of, say, ligand-protein complexes. We have developed a tool (Molecular Rift), which creates a virtual reality environment steered with hand movements. Oculus Rift, a head-mounted display, is used to create the virtual settings. The program is controlled by gesture-recognition, using the gaming sensor MS Kinect v2, eliminating the need for standard input devices. The Open Babel toolkit was integrated to provide access to powerful cheminformatics functions. Molecular Rift was developed with a focus on usability, including iterative test-group evaluations. We conclude with reflections on virtual reality's future capabilities in chemistry and education. Molecular Rift is open source and can be downloaded from GitHub.

  2. Flash LIDAR Systems for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Dissly, Richard; Weinberg, J.; Weimer, C.; Craig, R.; Earhart, P.; Miller, K.

    2009-01-01

    Ball Aerospace offers a mature, highly capable 3D flash-imaging LIDAR system for planetary exploration. Multi-mission applications include orbital, standoff, and surface terrain mapping; long-distance and rapid close-in ranging; descent and surface navigation; and rendezvous and docking. Our flash LIDAR is an optical, time-of-flight, topographic imaging system, leveraging innovations in focal plane arrays, real-time readout integrated circuit processing, and compact and efficient pulsed laser sources. Due to its modular design, it can be easily tailored to satisfy a wide range of mission requirements. Flash LIDAR offers several distinct advantages over traditional scanning systems. The entire scene within the sensor's field of view is imaged with a single laser flash. This directly produces an image with each pixel already correlated in time, making the sensor resistant to the relative motion of a target subject. Additionally, images may be produced at rates much faster than are possible with a scanning system. And because the system captures a new complete image with each flash, optical glint and clutter are easily filtered and discarded. This allows for imaging under any lighting condition and makes the system virtually insensitive to stray light. Finally, because there are no moving parts, our flash LIDAR system is highly reliable and has a long life expectancy. As an industry leader in laser active sensor system development, Ball Aerospace has been working for more than four years to mature flash LIDAR systems for space applications, and is now under contract to provide the Vision Navigation System for NASA's Orion spacecraft. Our system uses heritage optics and electronics from our star tracker products, and space-qualified lasers similar to those used in our CALIPSO LIDAR, which has been in continuous operation since 2006, providing more than 1.3 billion laser pulses to date.

  3. Pervasive Radio Mapping of Industrial Environments Using a Virtual Reality Approach

    PubMed Central

    Nedelcu, Adrian-Valentin; Machedon-Pisu, Mihai; Talaba, Doru

    2015-01-01

    Wireless communications in industrial environments are seriously affected by reliability and performance issues, due to the multipath nature of obstacles within such environments. Special attention needs to be given to planning a wireless industrial network, so as to find the optimum spatial position for each of the nodes within the network, and especially for key nodes such as gateways or cluster heads. The aim of this paper is to present a pervasive radio mapping system which captures (senses) data regarding the radio spectrum, using low-cost wireless sensor nodes. This data is the input of radio mapping algorithms that generate electromagnetic propagation profiles. Such profiles are used for identifying obstacles within the environment and optimum propagation pathways. With the purpose of further optimizing the radio planning process, the authors propose a novel human-network interaction (HNI) paradigm that uses 3D virtual environments in order to display the radio maps in a natural, easy-to-perceive manner. The results of this approach illustrate its added value to the field of radio resource planning of industrial communication systems. PMID:26167533

  4. Pervasive Radio Mapping of Industrial Environments Using a Virtual Reality Approach.

    PubMed

    Nedelcu, Adrian-Valentin; Machedon-Pisu, Mihai; Duguleana, Mihai; Talaba, Doru

    2015-01-01

    Wireless communications in industrial environments are seriously affected by reliability and performance issues, due to the multipath nature of obstacles within such environments. Special attention needs to be given to planning a wireless industrial network, so as to find the optimum spatial position for each of the nodes within the network, and especially for key nodes such as gateways or cluster heads. The aim of this paper is to present a pervasive radio mapping system which captures (senses) data regarding the radio spectrum, using low-cost wireless sensor nodes. This data is the input of radio mapping algorithms that generate electromagnetic propagation profiles. Such profiles are used for identifying obstacles within the environment and optimum propagation pathways. With the purpose of further optimizing the radio planning process, the authors propose a novel human-network interaction (HNI) paradigm that uses 3D virtual environments in order to display the radio maps in a natural, easy-to-perceive manner. The results of this approach illustrate its added value to the field of radio resource planning of industrial communication systems.

  5. Live Aircraft Encounter Visualization at FutureFlight Central

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Chinn, Fay; Monheim, Spencer; Otto, Neil; Kato, Kenji; Archdeacon, John

    2018-01-01

    Researchers at the National Aeronautics and Space Administration (NASA) have developed an aircraft data streaming capability that can be used to visualize live aircraft in near real-time. During a joint Federal Aviation Administration (FAA)/NASA Airborne Collision Avoidance System flight series, test sorties between unmanned aircraft and manned intruder aircraft were shown in real-time at NASA Ames' FutureFlight Central tower facility as a virtual representation of the encounter. This capability leveraged existing live surveillance, video, and audio data streams distributed through a Live, Virtual, Constructive test environment, then depicted the encounter from the point of view of any aircraft in the system, showing the proximity of the other aircraft. For the demonstration, position report data were sent to the ground from on-board sensors on the unmanned aircraft. The point of view can be changed dynamically, allowing encounters from all angles to be observed. Visualizing the encounters in real-time provides a safe and effective method for observation of live flight testing and a strong alternative to traveling to the remote test range.

  6. Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living

    PubMed Central

    Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.

    2015-01-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612

  7. Assessing upper extremity motor function in practice of virtual activities of daily living.

    PubMed

    Adams, Richard J; Lichter, Matthew D; Krepkovich, Eileen T; Ellington, Allison; White, Marga; Diamond, Paul T

    2015-03-01

    A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user's avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman's rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs.
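
    For readers unfamiliar with the validation step mentioned above, the sketch below shows a Spearman rank correlation between automatically derived kinematic metrics and WMFT times; the per-subject values are invented purely for illustration.

        # Sketch: rank-correlate sensor-derived speed metrics with WMFT performance times.
        from scipy.stats import spearmanr

        # Hypothetical per-subject values: mean sub-task completion speed from the
        # virtual ADL sessions and the corresponding WMFT performance time (s).
        vota_speed = [0.42, 0.55, 0.31, 0.61, 0.48, 0.29, 0.66,
                      0.38, 0.52, 0.45, 0.58, 0.33, 0.49, 0.40]
        wmft_time  = [18.0, 12.5, 26.0, 10.2, 15.3, 28.4,  9.1,
                      22.0, 14.0, 17.2, 11.8, 24.5, 15.0, 19.6]

        rho, p_value = spearmanr(vota_speed, wmft_time)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")   # expect a negative correlation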

  8. Building intelligent communication systems for handicapped aphasiacs.

    PubMed

    Fu, Yu-Fen; Ho, Cheng-Seen

    2010-01-01

    This paper presents an intelligent system allowing handicapped aphasiacs to perform basic communication tasks. It has the following three key features: (1) A 6-sensor data glove measures the finger gestures of a patient in terms of the bending degrees of his fingers. (2) A finger language recognition subsystem recognizes language components from the finger gestures. It employs multiple regression analysis to automatically extract proper finger features so that the recognition model can be quickly and correctly constructed by a radial basis function neural network. (3) A coordinate-indexed virtual keyboard allows the users to directly access the letters on the keyboard at a practical speed. The system serves as a viable tool for natural and affordable communication for handicapped aphasiacs through continuous finger language input.
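
    A rough sketch of such a recognition pipeline, with univariate feature scoring standing in for the paper's multiple regression analysis and an RBF-kernel SVM standing in for the radial basis function neural network; the synthetic six-sensor glove data and class labels are assumptions made for illustration.

        # Sketch: select informative bend-sensor features, then classify the gesture
        # with a radial-basis-function model.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        n_per_class, n_classes = 60, 4
        X, y = [], []
        for c in range(n_classes):
            centre = rng.uniform(0.0, 90.0, size=6)        # bending degrees of 6 glove sensors
            X.append(centre + 5.0 * rng.normal(size=(n_per_class, 6)))
            y.append(np.full(n_per_class, c))
        X, y = np.vstack(X), np.concatenate(y)

        clf = make_pipeline(StandardScaler(),
                            SelectKBest(f_classif, k=4),   # keep the most discriminative sensors
                            SVC(kernel="rbf", gamma="scale"))
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))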

  9. Design of a small laser ceilometer and visibility measuring device for helicopter landing sites

    NASA Astrophysics Data System (ADS)

    Streicher, Jurgen; Werner, Christian; Dittel, Walter

    2004-01-01

    Hardware development for remote sensing costs a lot of time and money. A virtual instrument based on software modules was developed to optimise a small visibility and cloud base height sensor. Visibility is the parameter describing the turbidity of the atmosphere. It can be measured either as a mean value over a path, using a transmissometer, or for each point of the atmosphere, from the backscattered intensity of a range-resolved lidar measurement. A standard ceilometer detects the altitude of clouds by using the runtime of the laser pulse and the increasing intensity of the backscattered light when it hits the boundary of a cloud. This corresponds to hard-target range finding, but with more sensitive detection. In the case of cloud coverage, the output of a standard ceilometer is the altitude of one or more layers. Commercial cloud sensors are specified to track cloud altitude at rather large distances (100 m up to 10 km) and are therefore big and expensive. A virtual instrument was used to calculate the system parameters for a small system for heliports at hospitals and landing platforms operating under visual flight rules (VFR). Helicopter pilots need information about cloud altitude (base not below 500 feet) and/or the visibility conditions (visual range not lower than 600 m) at the intended landing point. Private pilots need this information too when approaching a non-commercial airport. Both values can be measured automatically with the developed small and compact prototype, the size of a shoebox, for a reasonable price.
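
    The basic geometry behind such an instrument can be sketched briefly: range follows from the pulse round-trip time (r = c t / 2), and a cloud base can be flagged where the backscatter rises sharply above the preceding clear-air level. The synthetic profile and jump threshold below are illustrative assumptions, not the prototype's actual signal processing.

        # Sketch: time-of-flight ranging and a simple cloud-base detector on a
        # synthetic backscatter profile.
        import numpy as np

        C = 299_792_458.0                      # speed of light, m/s

        def range_from_delay(t_seconds):
            return 0.5 * C * t_seconds         # round trip -> one-way range

        # Synthetic backscatter profile: decaying clear-air return plus a cloud at 450 m.
        bins = np.arange(15.0, 1500.0, 15.0)   # 15 m range resolution
        profile = 200.0 / bins**2
        profile[bins >= 450.0] += 0.02 * np.exp(-(bins[bins >= 450.0] - 450.0) / 60.0)

        def cloud_base(bins, profile, jump_factor=3.0):
            """First bin whose signal jumps well above the preceding clear-air level."""
            for i in range(5, len(bins)):
                background = profile[i - 5:i].mean()
                if profile[i] > jump_factor * background:
                    return bins[i]
            return None

        print("range for a 3 microsecond delay:", range_from_delay(3e-6), "m")
        print("estimated cloud base:", cloud_base(bins, profile), "m")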

  10. Visualizing vascular structures in virtual environments

    NASA Astrophysics Data System (ADS)

    Wischgoll, Thomas

    2013-01-01

    In order to learn more about the cause of coronary heart diseases and develop diagnostic tools, the extraction and visualization of vascular structures from volumetric scans for further analysis is an important step. By determining a geometric representation of the vasculature, the geometry can be inspected and additional quantitative data calculated and incorporated into the visualization of the vasculature. To provide a more user-friendly visualization tool, virtual environment paradigms can be utilized. This paper describes techniques for interactive rendering of large-scale vascular structures within virtual environments. This can be applied to almost any virtual environment configuration, such as CAVE-type displays. Specifically, the tools presented in this paper were tested on a Barco I-Space and a large 62x108 inch passive projection screen with a Kinect sensor for user tracking.

  11. Intranet and Internet metrological workstation with photonic sensors and transmission

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Dybko, Artur

    1999-05-01

    We describe in this paper a part of a telemetric network which consists of a workstation with photonic measurement and communication interfaces, structural fiber optic cabling (10/100BaseFX and CAN-FL), and photonic sensors with fiber optic interfaces. The station is equipped with a direct photonic measurement interface and converters for the most common measuring standards (RS, GPIB), along with a fiber optic I/O CAN bus, O/E converters, and LAN and modem ports. The station was connected to the Intranet (ipx/spx) and Internet (tcp/ip) with its own IP number and DNS and WINS names. A virtual measuring environment system program was written specifically for such an Intranet and Internet station. The measurement system program communicated with the user via a graphical user interface (GUI). The user has direct access to all functions of the measuring station system through the appropriate layers of the GUI: telemetric, transmission, visualization, processing, information, help and steering of the measuring system. We have carried out a series of thorough simulation investigations and tests of the station using the WWW subsystem of the Internet. We logged into the system through the LAN and via modem. The Internet metrological station works continuously under the address http://nms.ipe.pw.edu.pl/nms. The station and the system bear the short name NMS (from Network Measuring System).

  12. A Framework for Analyzing the Whole Body Surface Area from a Single View

    PubMed Central

    Doretto, Gianfranco; Adjeroh, Donald

    2017-01-01

    We present a virtual reality (VR) framework for the analysis of whole human body surface area. Usual methods for determining the whole body surface area (WBSA) are based on well-known formulae, characterized by large errors when the subject is obese or belongs to certain subgroups. For these situations, we believe that a computer vision approach can overcome these problems and provide a better estimate of this important body indicator. Unfortunately, using machine learning techniques to design a computer vision system able to provide a new body indicator that goes beyond the use of only body weight and height entails a long and expensive data acquisition process. A more viable solution is to use a dataset composed of virtual subjects. Generating a virtual dataset allowed us to build a population with different characteristics (obese, underweight, age, gender). However, synthetic data might differ from a real scenario typical of the physician’s clinic. For this reason we develop a new virtual environment to facilitate the analysis of human subjects in 3D. This framework can simulate the acquisition process of a real camera, making it easy to analyze and to create training data for machine learning algorithms. With this virtual environment, we can easily simulate the real setup of a clinic, where a subject is standing in front of a camera, or may assume a different pose with respect to the camera. We use this newly designed environment to analyze the whole body surface area (WBSA). In particular, we show that we can obtain accurate WBSA estimations with just one view, virtually enabling the possibility of using inexpensive depth sensors (e.g., the Kinect) for large-scale quantification of the WBSA from a single-view 3D map. PMID:28045895
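
    The sketch below illustrates the single-view idea under simple pinhole-camera assumptions: depth pixels are back-projected and the resulting grid is triangulated, so the visible area is the sum of triangle areas. The intrinsics and the flat synthetic "body" are invented for the example; estimating the full WBSA would additionally require a model of the unseen surface.

        # Sketch: surface area visible in a single depth map via back-projection and
        # grid triangulation.
        import numpy as np

        def backproject(depth, fx, fy, cx, cy):
            """Depth image (m) -> (H, W, 3) array of 3D points in camera coordinates."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - cx) * depth / fx
            y = (v - cy) * depth / fy
            return np.dstack([x, y, depth])

        def visible_area(points):
            """Triangulate the grid (two triangles per cell) and sum triangle areas."""
            p00 = points[:-1, :-1]; p01 = points[:-1, 1:]
            p10 = points[1:, :-1];  p11 = points[1:, 1:]
            a1 = 0.5 * np.linalg.norm(np.cross(p01 - p00, p10 - p00), axis=-1)
            a2 = 0.5 * np.linalg.norm(np.cross(p01 - p11, p10 - p11), axis=-1)
            return float((a1 + a2).sum())

        # Synthetic check: a flat patch at constant 2 m depth spanning 132 x 212 pixels.
        fx = fy = 525.0                                  # hypothetical Kinect-like focal length (px)
        patch = np.full((212, 132), 2.0)                 # constant depth (m)
        cx, cy = patch.shape[1] / 2, patch.shape[0] / 2
        pts = backproject(patch, fx, fy, cx, cy)
        # Expected area: (131 * 2/525) * (211 * 2/525) is roughly 0.40 m^2.
        print("visible surface area:", round(visible_area(pts), 3), "m^2")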

  13. The Namibia Early Flood Warning System, A CEOS Pilot Project

    NASA Technical Reports Server (NTRS)

    Mandl, Daniel; Frye, Stuart; Cappelaere, Pat; Sohlberg, Robert; Handy, Matthew; Grossman, Robert

    2012-01-01

    Over the past year few years, an international collaboration has developed a pilot project under the auspices of Committee on Earth Observation Satellite (CEOS) Disasters team. The overall team consists of civilian satellite agencies. For this pilot effort, the development team consists of NASA, Canadian Space Agency, Univ. of Maryland, Univ. of Colorado, Univ. of Oklahoma, Ukraine Space Research Institute and Joint Research Center(JRC) for European Commission. This development team collaborates with regional , national and international agencies to deliver end-to-end disaster coverage. In particular, the team in collaborating on this effort with the Namibia Department of Hydrology to begin in Namibia . However, the ultimate goal is to expand the functionality to provide early warning over the South Africa region. The initial collaboration was initiated by United Nations Office of Outer Space Affairs and CEOS Working Group for Information Systems and Services (WGISS). The initial driver was to demonstrate international interoperability using various space agency sensors and models along with regional in-situ ground sensors. In 2010, the team created a preliminary semi-manual system to demonstrate moving and combining key data streams and delivering the data to the Namibia Department of Hydrology during their flood season which typically is January through April. In this pilot, a variety of moderate resolution and high resolution satellite flood imagery was rapidly delivered and used in conjunction with flood predictive models in Namibia. This was collected in conjunction with ground measurements and was used to examine how to create a customized flood early warning system. During the first year, the team made use of SensorWeb technology to gather various sensor data which was used to monitor flood waves traveling down basins originating in Angola, but eventually flooding villages in Namibia. The team made use of standardized interfaces such as those articulated under the Open Cloud Consortium (OGC) Sensor Web Enablement (SWE) set of web services was good [1][2]. However, it was discovered that in order to make a system like this functional, there were many performance issues. Data sets were large and located in a variety of location behind firewalls and had to be accessed across open networks, so security was an issue. Furthermore, the network access acted as bottleneck to transfer map products to where they are needed. Finally, during disasters, many users and computer processes act in parallel and thus it was very easy to overload the single string of computers stitched together in a virtual system that was initially developed. To address some of these performance issues, the team partnered with the Open Cloud Consortium (OCC) who supplied a Computation Cloud located at the University of Illinois at Chicago and some manpower to administer this Cloud. The Flood SensorWeb [3] system was interfaced to the Cloud to provide a high performance user interface and product development engine. Figure 1 shows the functional diagram of the Flood SensorWeb. Figure 2 shows some of the functionality of the Computation Cloud that was integrated. 
A significant portion of the original system was ported to the Cloud, and during the past year technical issues were resolved, including web access to the Cloud, security over the open Internet, initial experiments on how to handle surge capacity by using the virtual machines in the cloud in parallel, tiling techniques to render large data sets as layers on a map, interfaces to allow users to customize the data processing/product chain, and other performance-enhancing techniques. The conclusion reached from the effort and this presentation is that defining the interoperability standards is a small fraction of the work. For example, once open web service standards were defined, many users could not make use of the standards due to security restrictions. Furthermore, once an interoperable system is functional, a surge of users can render it unusable, especially in the disaster domain.

  14. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    PubMed Central

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-01-01

    In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision in identifying target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning. PMID:23385416

  15. Creation of 3D multi-body orthodontic models by using independent imaging sensors.

    PubMed

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2013-02-05

    In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision in identifying target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  16. Pose and Wind Estimation for Autonomous Parafoils

    DTIC Science & Technology

    2014-09-01

    ...Communications; GT, Georgia Institute of Technology; IDVD, Inverse Dynamics in the Virtual Domain; IMU, inertial measurement unit; INRIA, Institut National de Recherche en... ...sensor. The method used is a nonlinear estimator that combines the visual sensor measurements with those of an inertial measurement unit (IMU) on... ...isolated on the left side of the equation. On the other hand, when the measurement equation of (3.27) is implemented, the probability...

  17. Smart sensors and virtual physiology human approach as a basis of personalized therapies in diabetes mellitus.

    PubMed

    Fernández Peruchena, Carlos M; Prado-Velasco, Manuel

    2010-01-01

    Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, driven by aging populations and changing lifestyles. Despite the huge resources dedicated to improving quality of life and reducing mortality and morbidity rates, these indicators are still very poor. In this work, DM pathology is reviewed from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, under the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide patients' information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer) is presented. This technology was designed for renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies.

  18. Smart Sensors and Virtual Physiology Human Approach as a Basis of Personalized Therapies in Diabetes Mellitus

    PubMed Central

    Fernández Peruchena, Carlos M; Prado-Velasco, Manuel

    2010-01-01

    Diabetes mellitus (DM) has a growing incidence and prevalence in modern societies, driven by aging populations and changing lifestyles. Despite the huge resources dedicated to improving quality of life and reducing mortality and morbidity rates, these indicators are still very poor. In this work, DM pathology is reviewed from clinical and metabolic points of view, as well as mathematical models related to DM, with the aim of justifying an evolution of DM therapies towards the correction of the physiological metabolic loops involved. We analyze the reliability of mathematical models, under the perspective of virtual physiological human (VPH) initiatives, for generating and integrating customized knowledge about patients, which is needed for that evolution. Wearable smart sensors play a key role in this frame, as they provide patients' information to the models. A telehealthcare computational architecture based on distributed smart sensors (first processing layer) and personalized physiological mathematical models integrated in Human Physiological Images (HPI) computational components (second processing layer) is presented. This technology was designed for renal disease telehealthcare in earlier works and promotes crossroads between smart sensors and the VPH initiative. We suggest that it is able to support a truly personalized, preventive, and predictive healthcare model for the delivery of evolved DM therapies. PMID:21625646

  19. Developing movement recognition application with the use of Shimmer sensor and Microsoft Kinect sensor.

    PubMed

    Guzsvinecz, Tibor; Szucs, Veronika; Sik Lányi, Cecília

    2015-01-01

    Nowadays the development of virtual reality-based applications is one of the most dynamically growing areas. These applications have a wide user base, and more and more devices that provide several kinds of user interaction are available on the market. Where handheld devices are not necessary, such applications have potential in education, entertainment and rehabilitation. The purpose of this paper is to examine the precision and efficiency of non-handheld devices for user interaction in virtual reality-based applications. The first task of the developed application is to support the rehabilitation of stroke patients in their homes. A newly developed application is introduced in this paper, which uses two popular devices, the Shimmer sensor and the Microsoft Kinect sensor. To identify and validate the actions of the user, these sensors work together in parallel. The application can record an educational (reference) movement pattern and then compare this pattern to the action of the user. The goal of the current research is to examine how much the gesture recognition differs between the two sensors, i.e., how precisely each identifies the predefined actions, since this can affect the rehabilitation process of stroke patients and influence the efficiency of the rehabilitation. The application was developed in the C# programming language and uses the original Shimmer connection application as a base. With this application it is possible to teach five different movements each with the Shimmer and the Microsoft Kinect sensors, and the application can recognize these actions at any later time. The application uses a file-based database and the runtime memory to store the saved data so that the actions can be retrieved easily. The conclusion is that much more precise data were collected from the Microsoft Kinect sensor than from the Shimmer sensors.
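
    The abstract does not state how the recorded educational pattern is compared with the user's movement. A common generic choice for this kind of time-series comparison is dynamic time warping (DTW); the sketch below illustrates that idea in Python and is not the C# implementation described in the paper.

import numpy as np

def dtw_distance(reference: np.ndarray, performed: np.ndarray) -> float:
    """Dynamic time warping distance between two movement recordings.

    Each recording is a (T, D) array, e.g. accelerometer or joint positions
    sampled over time; a lower distance means the performed movement is closer
    to the taught reference pattern.
    """
    n, m = len(reference), len(performed)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(reference[i - 1] - performed[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy usage: the performed gesture is a time-stretched copy of the reference.
t = np.linspace(0, 1, 50)
reference = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
performed = np.stack([np.sin(2 * np.pi * t**1.2), np.cos(2 * np.pi * t**1.2)], axis=1)
print(round(dtw_distance(reference, performed), 3))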

  20. Augmented Feedback System to Support Physical Therapy of Non-specific Low Back Pain

    NASA Astrophysics Data System (ADS)

    Brodbeck, Dominique; Degen, Markus; Stanimirov, Michael; Kool, Jan; Scheermesser, Mandy; Oesch, Peter; Neuhaus, Cornelia

    Low back pain is an important problem in industrialized countries. Two key factors limit the effectiveness of physiotherapy: low compliance of patients with repetitive movement exercises, and patients' inadequate awareness of their own posture. The Backtrainer system addresses these problems by real-time monitoring of the spine position, by providing a framework for the most common physiotherapy exercises for the low back, and by providing feedback to patients in a motivating way. A minimal sensor configuration was identified as two inertial sensors that measure the orientation of the lower back at two points with three degrees of freedom. The software was designed as a flexible platform to experiment with different hardware and with various feedback modalities. Basic exercises for two types of movements are provided: mobilizing and stabilizing. We developed visual feedback, both abstract and in the form of a virtual reality game, and complemented the on-screen graphics with an ambient feedback device. The system was evaluated during five weeks in a rehabilitation clinic with 26 patients and 15 physiotherapists. Subjective satisfaction of the subjects was good, and we interpret the results as an encouraging indication for the adoption of such a therapy support system by both patients and therapists.
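
    As an illustration of how the two-sensor configuration can yield a single posture quantity (a generic sketch, not code from the Backtrainer platform), the relative rotation between the orientations reported by the two inertial sensors gives a bending/twisting angle of the monitored lumbar segment:

import numpy as np

def relative_bending_angle(R_upper: np.ndarray, R_lower: np.ndarray) -> float:
    """Angle (degrees) of the relative rotation between two orientation sensors.

    R_upper, R_lower: 3x3 rotation matrices reported by the inertial sensors
    placed at two points on the lower back. The angle of R_rel = R_lower^T R_upper
    summarizes how much the segment between them is bent or twisted.
    """
    R_rel = R_lower.T @ R_upper
    # Rotation angle from the trace; clip to guard against round-off.
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def rot_x(deg: float) -> np.ndarray:
    """Rotation about the x-axis by the given angle in degrees."""
    a = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

# Example: lower sensor level, upper sensor pitched forward by 20 degrees.
print(round(relative_bending_angle(rot_x(20.0), np.eye(3)), 1))  # -> 20.0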

  1. Pervasive surveillance-agent system based on wireless sensor networks: design and deployment

    NASA Astrophysics Data System (ADS)

    Martínez, José F.; Bravo, Sury; García, Ana B.; Corredor, Iván; Familiar, Miguel S.; López, Lourdes; Hernández, Vicente; Da Silva, Antonio

    2010-12-01

    Nowadays, the proliferation of embedded systems is enhancing the possibilities of gathering information by using wireless sensor networks (WSNs). Flexibility and ease of installation make these kinds of pervasive networks suitable for security and surveillance environments. Moreover, the risk of exposing humans to these tasks is minimized when using these networks. In this paper, a virtual perimeter surveillance agent, which has been designed to detect any person crossing an invisible barrier around a marked perimeter and send an alarm notification to the security staff, is presented. This agent works in a state of 'low power consumption' until there is a crossing of the perimeter. In our approach, the 'intelligence' of the agent has been distributed by using mobile nodes in order to discern the cause of the presence event. This feature contributes to saving both processing resources and power consumption, since the code required to detect presence is the only software installed. The research work described in this paper illustrates our experience in the development of a surveillance system using WSNs for a practical application, as well as its evaluation in real-world deployments. This mechanism plays an important role in providing confidence in the safety of our environment.

  2. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
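
    The final angulation-positioning step amounts to intersecting direction-of-arrival rays measured at different base stations. The sketch below shows the 2D azimuth-only case as a small linear system; it is generic illustration code, not the paper's estimator, and the coordinates are invented.

import numpy as np

def intersect_bearings(p1, theta1, p2, theta2):
    """Locate a sensor from two azimuth bearings (angulation positioning, 2D case).

    p1, p2: base-station positions (2-vectors); theta1, theta2: DOA azimuths in
    radians measured from the x-axis. Returns the intersection of the two rays.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

# Wearable sensor at (3, 4), base stations at the origin and at (10, 0).
bs1, bs2, target = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([3.0, 4.0])
theta1 = np.arctan2(*(target - bs1)[::-1])
theta2 = np.arctan2(*(target - bs2)[::-1])
print(intersect_bearings(bs1, theta1, bs2, theta2))  # ~ [3. 4.]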

  3. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  4. Omics approaches to individual variation: modeling networks and the virtual patient.

    PubMed

    Lehrach, Hans

    2016-09-01

    Every human is unique. We differ in our genomes, environment, behavior, disease history, and past and current medical treatment-a complex catalog of differences that often leads to variations in the way each of us responds to a particular therapy. We argue here that true personalization of drug therapies will rely on "virtual patient" models based on a detailed characterization of the individual patient by molecular, imaging, and sensor techniques. The models will be based, wherever possible, on the molecular mechanisms of disease processes and drug action but can also expand to hybrid models including statistics/machine learning/artificial intelligence-based elements trained on available data to address therapeutic areas or therapies for which insufficient information on mechanisms is available. Depending on the disease, its mechanisms, and the therapy, virtual patient models can be implemented at a fairly high level of abstraction, with molecular models representing cells, cell types, or organs relevant to the clinical question, interacting not only with each other but also the environment. In the future, "virtual patient/in-silico self" models may not only become a central element of our health care system, reducing otherwise unavoidable mistakes and unnecessary costs, but also act as "guardian angels" accompanying us through life to protect us against dangers and to help us to deal intelligently with our own health and wellness.

  5. Omics approaches to individual variation: modeling networks and the virtual patient

    PubMed Central

    Lehrach, Hans

    2016-01-01

    Every human is unique. We differ in our genomes, environment, behavior, disease history, and past and current medical treatment—a complex catalog of differences that often leads to variations in the way each of us responds to a particular therapy. We argue here that true personalization of drug therapies will rely on “virtual patient” models based on a detailed characterization of the individual patient by molecular, imaging, and sensor techniques. The models will be based, wherever possible, on the molecular mechanisms of disease processes and drug action but can also expand to hybrid models including statistics/machine learning/artificial intelligence-based elements trained on available data to address therapeutic areas or therapies for which insufficient information on mechanisms is available. Depending on the disease, its mechanisms, and the therapy, virtual patient models can be implemented at a fairly high level of abstraction, with molecular models representing cells, cell types, or organs relevant to the clinical question, interacting not only with each other but also the environment. In the future, “virtual patient/in-silico self” models may not only become a central element of our health care system, reducing otherwise unavoidable mistakes and unnecessary costs, but also act as “guardian angels” accompanying us through life to protect us against dangers and to help us to deal intelligently with our own health and wellness. PMID:27757060

  6. A virtual robot to model the use of regenerated legs in a web-building spider.

    PubMed

    Krink; Vollrath

    1999-01-01

    The garden cross orb-spider, Araneus diadematus, shows behavioural responses to leg loss and regeneration that are reflected in the geometry of the web's capture spiral. We created a virtual spider robot that mimicked the web construction behaviour of real spiders handicapped in this way. We used this approach to test the correctness and consistency of hypotheses about orb web construction. The behaviour of our virtual robot was implemented in a rule-based system supervising behaviour patterns that communicated with the robot's sensors and motors. Our first model failed by building the typical web of a nonhandicapped spider, which led to new observations on real spiders. We realized that, in addition to leg position, leg posture could also be of importance. The implementation of this new hypothesis greatly improved the results of our simulation of a handicapped spider. The simulated webs, like the real webs of handicapped spiders, now had significantly more gaps in successive spiral turns compared with webs of nonhandicapped spiders. Moreover, webs built by the improved virtual spiders intercepted prey as well as the digitized real webs did. However, the main factors that affected web interception frequency were prey size, size of capture area and individual variance; having a regenerated leg, surprisingly, was relatively unimportant for this trait. Copyright 1999 The Association for the Study of Animal Behaviour.

  7. Crack Detection in Fibre Reinforced Plastic Structures Using Embedded Fibre Bragg Grating Sensors: Theory, Model Development and Experimental Validation

    PubMed Central

    Pereira, G. F.; Mikkelsen, L. P.; McGugan, M.

    2015-01-01

    In a fibre-reinforced polymer (FRP) structure designed using the emerging damage tolerance and structural health monitoring philosophy, sensors and models that describe crack propagation will enable a structure to operate despite the presence of damage by fully exploiting the material’s mechanical properties. When applying this concept to different structures, sensor systems and damage types, a combination of damage mechanics, monitoring technology, and modelling is required. The primary objective of this article is to demonstrate such a combination. This article is divided in three main topics: the damage mechanism (delamination of FRP), the structural health monitoring technology (fibre Bragg gratings to detect delamination), and the finite element method model of the structure that incorporates these concepts into a final and integrated damage-monitoring concept. A novel method for assessing a crack growth/damage event in fibre-reinforced polymer or structural adhesive-bonded structures using embedded fibre Bragg grating (FBG) sensors is presented by combining conventional measured parameters, such as wavelength shift, with parameters associated with measurement errors, typically ignored by the end-user. Conjointly, a novel model for sensor output prediction (virtual sensor) was developed using this FBG sensor crack monitoring concept and implemented in a finite element method code. The monitoring method was demonstrated and validated using glass fibre double cantilever beam specimens instrumented with an array of FBG sensors embedded in the material and tested using an experimental fracture procedure. The digital image correlation technique was used to validate the model prediction by correlating the specific sensor response caused by the crack with the developed model. PMID:26513653
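
    For context, the sensing principle behind an embedded FBG is the textbook Bragg relation and its strain/temperature sensitivity (standard relations, not equations taken from the article):

        \lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad
        \frac{\Delta\lambda_B}{\lambda_B} \approx (1 - p_e)\,\varepsilon + (\alpha + \xi)\,\Delta T

    where \lambda_B is the reflected Bragg wavelength, n_{\mathrm{eff}} the effective refractive index, \Lambda the grating period, p_e the effective photo-elastic coefficient, \varepsilon the axial strain transferred from the host material, \alpha the thermal expansion coefficient and \xi the thermo-optic coefficient. A non-uniform strain field, such as that near an advancing crack front, additionally distorts the reflected spectrum rather than merely shifting it.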

  8. Leveraging simulation to evaluate system performance in presence of fixed pattern noise

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.

    2017-05-01

    Image simulation techniques that map the effects of a notional, modeled sensor system onto an existing image can be used to evaluate the image quality of camera systems prior to the development of prototype systems. In addition, image simulation or `virtual prototyping' can be utilized to reduce the time and expense associated with conducting extensive field trials. In this paper we examine the development of a perception study designed to assess the performance of the NVESD imager performance metrics as a function of fixed pattern noise. This paper discusses the development of the model theory and the implementation and execution of the perception study. In addition, other applications of the image simulation component, including the evaluation of limiting resolution and other test targets, are described.
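
    As a hedged sketch of the virtual-prototyping idea (generic code, not the NVESD model), fixed pattern noise can be mapped onto an existing image as per-pixel gain and offset maps that stay constant from frame to frame:

import numpy as np

def apply_fixed_pattern_noise(image, gain_sigma=0.02, offset_sigma=2.0, seed=0):
    """Map a modeled sensor's fixed pattern noise onto an existing image.

    image: 2D array of pixel intensities. The same per-pixel gain and offset
    maps would be reused for every simulated frame, which is what makes the
    noise 'fixed pattern' rather than temporal.
    """
    rng = np.random.default_rng(seed)
    gain = 1.0 + gain_sigma * rng.standard_normal(image.shape)    # PRNU-like term
    offset = offset_sigma * rng.standard_normal(image.shape)      # DSNU-like term
    return np.clip(image * gain + offset, 0, 255)

clean = np.full((480, 640), 128.0)           # flat mid-gray test frame
noisy = apply_fixed_pattern_noise(clean)
print(noisy.std())                           # residual spatial non-uniformity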

  9. Review of Microwave Photonics Technique to Generate the Microwave Signal by Using Photonics Technology

    NASA Astrophysics Data System (ADS)

    Raghuwanshi, Sanjeev Kumar; Srivastav, Akash

    2017-12-01

    Microwave photonics systems provide the high bandwidth capabilities of fiber optic systems and the ability to realize interconnect transmission properties that are virtually independent of length. The low-loss, wide-bandwidth capability of optoelectronic systems makes them attractive for the transmission and processing of microwave signals, while the development of high-capacity optical communication systems has required the use of microwave techniques in optical transmitters and receivers. These two strands have led to the development of the research area of microwave photonics. Microwave photonics can therefore be considered the field that studies the interaction between microwave and optical waves for applications such as communications, radar, sensors and instrumentation. In this paper we thoroughly review microwave signal generation techniques based on photonics technology.
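
    One of the simplest photonic microwave-generation schemes covered by such reviews is optical heterodyning: two optical carriers offset by the desired microwave frequency beat on a square-law photodetector. The toy simulation below (illustrative parameter values, not taken from the paper) recovers the 10 GHz difference frequency:

import numpy as np

# Two optical carriers separated by 10 GHz; the photodiode (square-law
# detector) converts their sum into a beat note at the difference frequency.
f1, f2 = 193.100e12, 193.110e12          # optical frequencies, Hz
fs = 8 * f2                              # sampling rate well above Nyquist
t = np.arange(0, 2e-9, 1 / fs)           # 2 ns of signal

field = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
photocurrent = field ** 2                # square-law detection

spectrum = np.abs(np.fft.rfft(photocurrent - photocurrent.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
# The strongest low-frequency component should sit at |f2 - f1| = 10 GHz.
low = freqs < 50e9
print(freqs[low][np.argmax(spectrum[low])] / 1e9, "GHz")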

  10. Utilizing the EUVE Innovative Technology Testbed to Reduce Operations Cost for Present and Future Orbiting Mission

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This report summarizes work done under a Cooperative Agreement (CA) on the following testbed projects: TERRIERS - The development of the ground systems to support the TERRIERS satellite mission at Boston University (BU). HSTS - The application of ARC's Heuristic Scheduling Testbed System (HSTS) to the EUVE satellite mission. SELMON - The application of NASA's Jet Propulsion Laboratory's (JPL) Selective Monitoring (SELMON) system to the EUVE satellite mission. EVE - The development of the EUVE Virtual Environment (EVE), a prototype three-dimensional (3-D) visualization environment for the EUVE satellite and its sensors, instruments, and communications antennae. FIDO - The development of the Fault-Induced Document Officer (FIDO) system, a prototype application to respond to anomalous conditions by automatically searching for, retrieving, and displaying relevant documentation for an operator's use.

  11. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model

    PubMed Central

    Wu, Jian-Xing; Huang, Ping-Tzan; Li, Chien-Ming

    2018-01-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidities and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500–700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) demonstrate the feasibility of the approach. PMID:29515815
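
    The abstract does not detail the HAM model itself. As a generic sketch of the heteroassociative-memory idea (bipolar patterns associated through an outer-product weight matrix; the sensor patterns and risk codes below are invented for illustration), a photocell-array reading can be mapped to a risk-level code, and recall still succeeds with a corrupted reading:

import numpy as np

def train_ham(inputs, outputs):
    """Heteroassociative memory: W accumulates outer products of pattern pairs.

    inputs: (P, N) bipolar (+1/-1) sensor-array patterns.
    outputs: (P, M) bipolar risk-level codes associated with them.
    """
    return outputs.T @ inputs            # W has shape (M, N)

def recall(W, x):
    """Recall the associated output for a (possibly noisy) input pattern."""
    return np.sign(W @ x)

# Three photocell-array states (8 cells): no leakage, partial, severe.
X = np.array([[-1, -1, -1, -1, -1, -1, -1, -1],
              [ 1,  1, -1, -1, -1, -1, -1, -1],
              [ 1,  1,  1,  1,  1,  1, -1, -1]])
# Associated 2-bit risk codes: safe, warning, alarm.
Y = np.array([[-1, -1],
              [-1,  1],
              [ 1,  1]])

W = train_ham(X, Y)
noisy = X[2].copy()
noisy[7] = 1                              # one flipped cell reading
print(recall(W, noisy))                   # expected: [ 1.  1.] (alarm)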

  12. Blood leakage detection during dialysis therapy based on fog computing with array photocell sensors and heteroassociative memory model.

    PubMed

    Wu, Jian-Xing; Huang, Ping-Tzan; Lin, Chia-Hung; Li, Chien-Ming

    2018-02-01

    Blood leakage and blood loss are serious life-threatening complications occurring during dialysis therapy. These events have been of concern to both healthcare givers and patients. More than 40% of adult blood volume can be lost in just a few minutes, resulting in morbidities and mortality. The authors propose the design of a warning tool for the detection of blood leakage/blood loss during dialysis therapy based on fog computing with an array of photocell sensors and a heteroassociative memory (HAM) model. Photocell sensors are arranged in an array on a flexible substrate to detect blood leakage via the resistance changes with illumination in the visible spectrum of 500-700 nm. The HAM model is implemented to design a virtual alarm unit using electricity changes in an embedded system. The proposed warning tool can indicate the risk level in both end-sensing units and remote monitoring devices via a wireless network and fog/cloud computing. The animal experimental results (pig blood) demonstrate the feasibility of the approach.

  13. Medipix2 based CdTe microprobe for dental imaging

    NASA Astrophysics Data System (ADS)

    Vykydal, Z.; Fauler, A.; Fiederle, M.; Jakubek, J.; Svestkova, M.; Zwerger, A.

    2011-12-01

    Medical imaging devices and techniques are required to provide high-resolution, low-dose images of samples or patients. Hybrid semiconductor single photon counting devices, together with suitable sensor materials and advanced techniques of image reconstruction, fulfil these requirements. In particular cases, such as the direct observation of dental implants, the size of the imaging device itself also plays a critical role. This work presents a comparison of 2D radiographs of a tooth provided by a standard commercial dental imaging system (Gendex 765DC X-ray tube with VisualiX scintillation detector) and by two Medipix2 USB Lite detectors, one equipped with a Si sensor (300 μm thick) and one with a CdTe sensor (1 mm thick). The single photon counting capability of the Medipix2 device allows a virtually unlimited dynamic range of the images and thus increases the contrast significantly. The dimensions of the whole USB Lite device are only 15 mm × 60 mm, of which 25% consists of the sensitive area. A detector of this compact size can be used directly inside the patient's mouth.

  14. Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials

    PubMed Central

    2014-01-01

    Background People with severe disabilities, e.g. due to neurodegenerative disease, depend on technology that allows for accurate wheelchair control. For those who cannot operate a wheelchair with a joystick, brain-computer interfaces (BCI) may offer a valuable option. Technology depending on visual or auditory input may not be feasible as these modalities are dedicated to processing of environmental stimuli (e.g. recognition of obstacles, ambient noise). Herein we thus validated the feasibility of a BCI based on tactually-evoked event-related potentials (ERP) for wheelchair control. Furthermore, we investigated use of a dynamic stopping method to improve speed of the tactile BCI system. Methods Positions of four tactile stimulators represented navigation directions (left thigh: move left; right thigh: move right; abdomen: move forward; lower neck: move backward) and N = 15 participants delivered navigation commands by focusing their attention on the desired tactile stimulus in an oddball-paradigm. Results Participants navigated a virtual wheelchair through a building and eleven participants successfully completed the task of reaching 4 checkpoints in the building. The virtual wheelchair was equipped with simulated shared-control sensors (collision avoidance), yet these sensors were rarely needed. Conclusion We conclude that most participants achieved tactile ERP-BCI control sufficient to reliably operate a wheelchair and dynamic stopping was of high value for tactile ERP classification. Finally, this paper discusses feasibility of tactile ERPs for BCI based wheelchair control. PMID:24428900

  15. Math Machines: Using Actuators in Physics Classes

    NASA Astrophysics Data System (ADS)

    Thomas, Frederick J.; Chaney, Robert A.; Gruesbeck, Marta

    2018-01-01

    Probeware (sensors combined with data-analysis software) is a well-established part of physics education. In engineering and technology, sensors are frequently paired with actuators—motors, heaters, buzzers, valves, color displays, medical dosing systems, and other devices that are activated by electrical signals to produce intentional physical change. This article describes how a 20-year project aimed at better integration of the STEM disciplines (science, technology, engineering and mathematics) uses brief actuator activities in physics instruction. Math Machines "actionware" includes software and hardware that convert virtually any free-form, time-dependent algebraic function into the dynamic actions of a stepper motor, servo motor, or RGB (red, green, blue) color mixer. With wheels and a platform, the stepper motor becomes LACI, a programmable vehicle. Adding a low-power laser module turns the servo motor into a programmable Pointer. Adding a gear and platform can transform the Pointer into an earthquake simulator.
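
    A hedged sketch of the "actionware" idea (purely the computation; no real motor API is shown, since that is hardware-specific and not described here): sample a free-form function f(t) and convert it into the incremental step commands a stepper motor would execute.

import numpy as np

def function_to_steps(f, t_end, dt, steps_per_unit=200):
    """Convert a time-dependent function f(t) into stepper-motor step increments.

    At each sample time the target position f(t) is quantized to whole motor
    steps, and the returned array holds the (signed) steps to issue during
    each interval so the motor tracks the function.
    """
    t = np.arange(0.0, t_end + dt, dt)
    target_steps = np.round(np.asarray([f(ti) for ti in t]) * steps_per_unit)
    return t[1:], np.diff(target_steps).astype(int)

# Example: drive the motor along f(t) = sin(t) for one full period.
times, increments = function_to_steps(np.sin, 2 * np.pi, 0.01)
print(increments[:10], increments.sum())   # per-interval steps; net displacement ~0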

  16. Web-of-Objects (WoO)-Based Context Aware Emergency Fire Management Systems for the Internet of Things

    PubMed Central

    Shamszaman, Zia Ush; Ara, Safina Showkat; Chong, Ilyoung; Jeong, Youn Kwae

    2014-01-01

    Recent advancements in the Internet of Things (IoT) and the Web of Things (WoT) accompany a smart life where real-world objects, including sensing devices, are interconnected with each other. The Web representation of smart objects empowers innovative applications and services for various domains. To accelerate this approach, the Web of Objects (WoO) focuses on the implementation aspects of bringing assorted real-world objects to Web applications. In this paper, we propose an emergency fire management system in the WoO infrastructure. Consequently, we integrate the formation and management of Virtual Objects (ViO), which are derived from real-world physical objects and are virtually connected with each other, into the semantic ontology model. The appeal of using the semantic ontology is that it allows information reusability, extensibility and interoperability, which enable ViOs to uphold orchestration, federation, collaboration and harmonization. Our system is context aware, as it receives contextual environmental information from distributed sensors and detects emergency situations. To handle a fire emergency, we present a decision support tool for the emergency fire management team. The previous fire incident log is the basis of the decision support system. A log repository collects all the emergency fire incident logs from ViOs and stores them in a repository. PMID:24531299

  17. Web-of-Objects (WoO)-based context aware emergency fire management systems for the Internet of Things.

    PubMed

    Shamszaman, Zia Ush; Ara, Safina Showkat; Chong, Ilyoung; Jeong, Youn Kwae

    2014-02-13

    Recent advancements in the Internet of Things (IoT) and the Web of Things (WoT) accompany a smart life where real-world objects, including sensing devices, are interconnected with each other. The Web representation of smart objects empowers innovative applications and services for various domains. To accelerate this approach, the Web of Objects (WoO) focuses on the implementation aspects of bringing assorted real-world objects to Web applications. In this paper, we propose an emergency fire management system in the WoO infrastructure. Consequently, we integrate the formation and management of Virtual Objects (ViO), which are derived from real-world physical objects and are virtually connected with each other, into the semantic ontology model. The appeal of using the semantic ontology is that it allows information reusability, extensibility and interoperability, which enable ViOs to uphold orchestration, federation, collaboration and harmonization. Our system is context aware, as it receives contextual environmental information from distributed sensors and detects emergency situations. To handle a fire emergency, we present a decision support tool for the emergency fire management team. The previous fire incident log is the basis of the decision support system. A log repository collects all the emergency fire incident logs from ViOs and stores them in a repository.

  18. The tsunami service bus, an integration platform for heterogeneous sensor systems

    NASA Astrophysics Data System (ADS)

    Haener, R.; Waechter, J.; Kriegel, U.; Fleischer, J.; Mueller, S.

    2009-04-01

    1. INTRODUCTION. Early warning systems are long-lived and evolving: new sensor systems and sensor types may be developed and deployed, sensors will be replaced or redeployed at other locations, and the functionality of analysis software will be improved. To ensure the continuous operability of such systems, their architecture must be evolution-enabled. From a computer science point of view, an evolution-enabled architecture must fulfill the following criteria:
    • Encapsulation of data and of functionality on data in standardized services; access to proprietary sensor data is only possible via these services.
    • Loose coupling of system constituents, which can easily be achieved by implementing standardized interfaces.
    • Location transparency of services, meaning that services can be provided anywhere.
    • Separation of concerns, i.e., breaking a system into distinct features which overlap in functionality as little as possible.
    A Service Oriented Architecture (SOA), as realized for example in the German Indonesian Tsunami Early Warning System (GITEWS), together with the advantages of functional integration on the basis of services described below, meets these criteria best.
    2. SENSOR INTEGRATION. Integration of data from (distributed) data sources is a standard task in computer science. Of the few well-known solution patterns, and taking into account the performance and security requirements of early warning systems, only functional integration should be considered. A precondition for this is that systems are realized in compliance with SOA patterns. Functionality is realized in the form of dedicated components communicating via a service infrastructure. These components provide their functionality as services via standardized, published interfaces, which can be used to access the data maintained in, and the functionality provided by, dedicated components. Functional integration replaces tight coupling at the data level by a dependency on loosely coupled services. If the interfaces of the service-providing components remain unchanged, components can be maintained and evolved independently of each other and service functionality as a whole can be reused. In GITEWS the functional integration pattern was adopted by applying the principles of an Enterprise Service Bus (ESB) as a backbone. Four services provided by the so-called Tsunami Service Bus (TSB), which are essential for early warning systems, are realized compliant with the services specified within the Sensor Web Enablement (SWE) initiative of the Open Geospatial Consortium (OGC).
    3. ARCHITECTURE. The integration platform was developed to access proprietary, heterogeneous sensor data and to provide them in a uniform manner for further use. Its core, the TSB, provides both a messaging backbone and messaging interfaces on the basis of the Java Messaging Service (JMS). The logical architecture of GITEWS consists of four independent layers:
    • A resource layer, where physical or virtual sensors as well as data or model storages provide relevant measurement, event and analysis data. Any kind of data can be utilized by the TSB; in addition to sensors, databases, model data and processing applications are adopted. SWE specifies encodings both to access and to describe these data in a comprehensive way: the Sensor Model Language (SensorML) for the standardized description of sensors and sensor data, and Observations and Measurements (O&M) as the model and encoding of sensor measurements.
    • A service layer, which collects and conducts data from heterogeneous and proprietary resources and provides them via standardized interfaces. The TSB enables interaction with sensors via the following services: the Sensor Observation Service (SOS) for standardized access to sensor data, the Sensor Planning Service (SPS) for controlling sensors and sensor networks, the Sensor Alert Service (SAS) for actively sending data when defined events occur, and the Web Notification Service (WNS) for conducting asynchronous dialogues between services.
    • An orchestration layer, where atomic services are composed and arranged into high-level processes such as a decision support process. One of the outstanding features of service-oriented architectures is the possibility to compose new services from existing ones, either programmatically or via declaration (workflow or process design). This allows, for example, the definition of new warning processes which can easily be adapted to new requirements.
    • An access layer, which may contain graphical user interfaces for decision support, monitoring or visualization systems. To visualize time series, for example, graphical user interfaces request sensor data simply via the SOS.
    4. BENEFIT. The integration platform is realized on top of well-known and widely used open source software implementing industrial standards. New sensors can easily be added to the infrastructure. Client components do not need to be adjusted when new sensor types or individual sensors are added to the system, because they access the sensors via standardized services. With SWE implemented fully compatible with the OGC specification, it is possible to establish the "detection" and integration of sensors via the Web. Realizing a system of systems that combines early warning system functionality at different levels of detail (distant early warning systems, monitoring systems and any other sensor system) is therefore feasible.
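
    As a hedged illustration of how a client in the access layer talks to the service layer (the endpoint URL, offering and property identifiers below are placeholders; the parameter names follow the OGC SOS key-value-pair convention rather than the TSB's actual configuration):

import requests

# Hypothetical SOS endpoint of the integration platform (placeholder URL).
SOS_URL = "https://example.org/tsb/sos"

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "tide_gauge_network",            # placeholder offering id
    "observedProperty": "sea_surface_height",    # placeholder property id
    "temporalFilter": "om:phenomenonTime,2009-01-01T00:00:00Z/2009-01-02T00:00:00Z",
}

# Build the key-value-pair request; sending it to a real SOS endpoint would
# return observations encoded in O&M (XML).
prepared = requests.Request("GET", SOS_URL, params=params).prepare()
print(prepared.url)
# response = requests.get(SOS_URL, params=params, timeout=30)
# print(response.text)  # O&M document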

  19. a Method for Simultaneous Aerial and Terrestrial Geodata Acquisition for Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Molina, P.; Blázquez, M.; Sastre, J.; Colomina, I.

    2015-08-01

    In this paper, we present mapKITE, a new mobile, simultaneous terrestrial and aerial geodata collection and post-processing method. On one side, the method combines a terrestrial mobile mapping system (TMMS) with an unmanned aerial mapping one, both equipped with remote sensing payloads (at least, a nadir-looking visible-band camera in the UA) by means of which aerial and terrestrial geodata are acquired simultaneously. This tandem geodata acquisition system is based on a terrestrial vehicle (TV) and an unmanned aircraft (UA) linked by a 'virtual tether', that is, a mechanism based on the real-time supply of UA waypoints by the TV. By means of the TV-to-UA tether, the UA follows the TV, keeping a specific relative TV-to-UA spatial configuration that enables the simultaneous operation of both systems to obtain highly redundant and complementary geodata. On the other side, mapKITE presents a novel concept for geodata post-processing favoured by the rich geometrical aspects derived from the mapKITE tandem simultaneous operation. The approach followed for sensor orientation and calibration of the aerial images captured by the UA inherits the principles of Integrated Sensor Orientation (ISO) and adds the pointing-and-scaling photogrammetric measurement of a distinctive element observed in every UA image, which is a coded target mounted on the roof of the TV. By means of the TV navigation system, the orientation of the TV coded target is performed and used in the post-processing UA image orientation approach as a Kinematic Ground Control Point (KGCP). The geometric strength of a mapKITE ISO network is therefore high, as it includes the traditional tie-point image measurements, static ground control points, kinematic aerial control and the new point-and-scale measurements of the KGCPs. With such a geometry, reliable system and sensor orientation and calibration, and an eventual further reduction of the number of traditional ground control points, are feasible. The different technical concepts, challenges and breakthroughs behind mapKITE are presented in this paper, such as the TV-to-UA virtual tether and the use of KGCP measurements for UA sensor orientation. In addition, the use in mapKITE of new European GNSS signals such as the Galileo E5 AltBOC is discussed. Because of the critical role of GNSS technologies and the potential impact on the corridor mapping market, the European Commission and the European GNSS Agency, in the frame of the European Union Framework Programme for Research and Innovation "Horizon 2020," have recently awarded the "mapKITE" project to an international consortium of organizations coordinated by GeoNumerics S.L.

  20. A Software Architecture for Adaptive Modular Sensing Systems

    PubMed Central

    Lyle, Andrew C.; Naish, Michael D.

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration. PMID:22163614

  1. A software architecture for adaptive modular sensing systems.

    PubMed

    Lyle, Andrew C; Naish, Michael D

    2010-01-01

    By combining a number of simple transducer modules, an arbitrarily complex sensing system may be produced to accommodate a wide range of applications. This work outlines a novel software architecture and knowledge representation scheme that has been developed to support this type of flexible and reconfigurable modular sensing system. Template algorithms are used to embed intelligence within each module. As modules are added or removed, the composite sensor is able to automatically determine its overall geometry and assume an appropriate collective identity. A virtual machine-based middleware layer runs on top of a real-time operating system with a pre-emptive kernel, enabling platform-independent template algorithms to be written once and run on any module, irrespective of its underlying hardware architecture. Applications that may benefit from easily reconfigurable modular sensing systems include flexible inspection, mobile robotics, surveillance, and space exploration.

  2. A novel mechatronic tool for computer-assisted arthroscopy.

    PubMed

    Dario, P; Carrozza, M C; Marcacci, M; D'Attanasio, S; Magnami, B; Tonet, O; Megali, G

    2000-03-01

    This paper describes a novel mechatronic tool for arthroscopy, which is at the same time a smart tool for traditional arthroscopy and the main component of a system for computer-assisted arthroscopy. The mechatronic arthroscope has a cable-actuated servomotor-driven multi-joint mechanical structure, is equipped with a position sensor measuring the orientation of the tip and with a force sensor detecting possible contact with delicate tissues in the knee, and incorporates an embedded microcontroller for sensor signal processing, motor driving and interfacing with the surgeon and/or the system control unit. When used manually, the mechatronic arthroscope enhances the surgeon's capabilities by enabling him/her to easily control tip motion and to prevent undesired contacts. When the tool is integrated in a complete system for computer-assisted arthroscopy, the trajectory of the arthroscope is reconstructed in real time by an optical tracking system using infrared emitters located in the handle, providing advantages in terms of improved intervention accuracy. The computer-assisted arthroscopy system comprises an image processing module for segmentation and three-dimensional reconstruction of preoperative computer tomography or magnetic resonance images, a registration module for measuring the position of the knee joint, tracking the trajectory of the operating tools, and matching preoperative and intra-operative images, and a human-machine interface that displays the enhanced reality scenario and data from the mechatronic arthroscope in a friendly and intuitive manner. By integrating preoperative and intra-operative images and information provided by the mechatronic arthroscope, the system allows virtual navigation in the knee joint during the planning phase and computer guidance by augmented reality during the intervention. This paper describes in detail the characteristics of the mechatronic arthroscope and of the system for computer-assisted arthroscopy and discusses experimental results obtained with a preliminary version of the tool and of the system.

  3. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion.

    PubMed

    Dou, Qingxu; Wei, Lijun; Magee, Derek R; Atkins, Phil R; Chapman, David N; Curioni, Giulio; Goddard, Kevin F; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R; Rustighi, Emiliano; Swingler, Steven G; Rogers, Christopher D F; Cohn, Anthony G

    2016-11-02

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed "multi-utility multi-sensor" system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation.
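
    The paper's MCS/EKF formulation is not reproduced in the abstract. As a generic illustration of the "march a track to the next cross-section, then fuse a hypothesized detection" idea, the sketch below runs one linear Kalman predict/update cycle for a single utility track whose state is its lateral offset and heading; all noise values are placeholders.

import numpy as np

def march_track(x, P, ds, q=0.01):
    """Predict a utility track to the next scan cross-section a distance ds away.

    State x = [lateral offset, heading angle]; a straight segment shifts its
    offset roughly by ds times its heading (small-angle linearization).
    """
    F = np.array([[1.0, ds],
                  [0.0, 1.0]])
    Q = q * np.array([[ds**2, 0.0],
                      [0.0,   ds]])        # process noise grows with spacing
    return F @ x, F @ P @ F.T + Q

def update_track(x, P, z, r=0.05):
    """Fuse a hypothesized detection z (lateral offset on the new cross-section)."""
    H = np.array([[1.0, 0.0]])
    R = np.array([[r]])
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.array([0.0, 0.1]), np.eye(2) * 0.1
x, P = march_track(x, P, ds=0.5)           # move to the next cross-section
x, P = update_track(x, P, z=np.array([0.08]))
print(x)                                   # refined offset and heading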

  4. Virtual Reality Simulation of the International Space Welding Experiment

    NASA Technical Reports Server (NTRS)

    Phillips, James A.

    1996-01-01

    Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted-display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.

  5. Ubiquitous health in practice: the interreality paradigm.

    PubMed

    Gaggioli, Andrea; Raspelli, Simona; Grassi, Alessandra; Pallavicini, Federica; Cipresso, Pietro; Wiederhold, Brenda K; Riva, Giuseppe

    2011-01-01

    In this paper we introduce a new ubiquitous computing paradigm for behavioral health care: "Interreality". Interreality integrates assessment and treatment within a hybrid environment that creates a bridge between the physical and virtual worlds. Our claim is that bridging virtual experiences (fully controlled by the therapist, used to learn coping skills and emotional regulation) with real experiences (allowing both the identification of any critical stressors and the assessment of what has been learned) using advanced technologies (virtual worlds, advanced sensors and PDA/mobile phones) may improve existing psychological treatment. To illustrate the proposed concept, a clinical scenario is also presented and discussed: Daniela, a 40-year-old teacher whose mother is affected by Alzheimer's disease.

  6. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  7. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience, we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  8. Telerehabilitation: remote multimedia-supported assistance and mobile monitoring of balance training outcomes can facilitate the clinical staff's effort.

    PubMed

    Krpič, Andrej; Savanović, Arso; Cikajlo, Imre

    2013-06-01

    Telerehabilitation can offer prolonged rehabilitation to patients with stroke after discharge from the hospital, whilst remote diagnostics may reduce the frequency of the outpatient services required. Here, we compared a novel telerehabilitation system for virtual reality-supported balance training with balance training using only a standing frame and with conventional therapy in the hospital. The proposed low-cost experimental system for balance training, which supports multiple home systems, real-time tracking of task performance, and different views of the captured data, consists of a standing frame equipped with a tilt sensor, a low-cost computer, a display, and an internet connection. Goal-based tasks for balance training in the virtual environment proved motivating for the participating individuals. The physiotherapist, located in the remote healthcare center, could remotely adjust the level of complexity and difficulty or preview the outcomes and instructions with the application on a mobile smartphone. Patients using the virtual reality-supported balance training improved task performance time by 45% and the number of collisions by 68%, and showed significant improvements in the Berg Balance Scale, Timed 'Up and Go', and 10 m Walk Test. The clinical outcomes were not significantly different from balance training with only the standing frame or from conventional therapy. The proposed telerehabilitation can facilitate the physiotherapists' work and thus make rehabilitation available to a larger number of patients after release from the hospital, because it requires less time and only infrequent presence of the clinical staff. However, a comprehensive clinical evaluation is required to confirm the applicability of the concept.

  9. Palpation imaging using a haptic system for virtual reality applications in medicine.

    PubMed

    Khaled, W; Reichling, S; Bruhns, O T; Boese, H; Baumann, M; Monkman, G; Egersdoerfer, S; Klein, D; Tunayar, A; Freimuth, H; Lorenz, A; Pessavento, A; Ermert, H

    2004-01-01

    In the field of medical diagnosis, there is a strong need to determine the mechanical properties of biological tissue, which are of histological and pathological relevance. Malignant tumors are significantly stiffer than the surrounding healthy tissue. One of the established diagnostic procedures is the palpation of body organs and tissue. Palpation is used to measure swelling, detect bone fractures, find and measure the pulse, or locate changes in the pathological state of tissue and organs. Current medical practice routinely uses sophisticated diagnostic tests through magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound (US) imaging. However, these cannot provide a direct measure of tissue elasticity. Last year we presented the concept of the first haptic sensor-actuator system to visualize and reconstruct the mechanical properties of tissue using ultrasonic elastography and a haptic display with electrorheological fluids. We developed a real-time strain imaging system for tumor diagnosis. It allows biopsies to be performed simultaneously with conventional ultrasound B-mode and strain imaging investigations. We deduce the relative mechanical properties using finite element simulations and numerical models that solve the inverse problem. Various modifications of the haptic sensor-actuator system have been investigated. This haptic system has the potential to induce substantial forces in real time, using a compact, lightweight mechanism that can be applied to numerous areas including intraoperative navigation, telemedicine, teaching and telecommunication.

  10. Symphony: A Framework for Accurate and Holistic WSN Simulation

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2015-01-01

    Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144
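
    The abstract does not spell out the clock emulator's skew models, so the following is only a hedged sketch of one plausible model (constant fractional drift plus per-read Gaussian jitter); the class name, parameters, and default values are assumptions.

        # Hedged sketch of a clock emulator with a simple skew model (constant
        # fractional drift plus Gaussian jitter). The actual skew models shipped with
        # the framework are not detailed in the abstract; parameter names are assumed.
        import random

        class EmulatedClock:
            def __init__(self, skew_ppm: float = 40.0, jitter_s: float = 1e-6):
                self.skew = skew_ppm * 1e-6   # fractional frequency offset
                self.jitter_s = jitter_s      # per-read Gaussian jitter (seconds)
                self.local_time = 0.0         # seconds of emulated node time

            def advance(self, true_dt: float) -> float:
                # Advance by simulated real time; the node's clock runs slightly fast or slow.
                self.local_time += true_dt * (1.0 + self.skew)
                return self.read()

            def read(self) -> float:
                return self.local_time + random.gauss(0.0, self.jitter_s)

        # Example: after 10 s of simulated time, a 40 ppm clock is roughly 0.4 ms ahead.
        clock = EmulatedClock()
        for _ in range(10):
            clock.advance(1.0)
        print(clock.read() - 10.0)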

  11. NASA Tech Briefs, April 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics include: Computational Ghost Imaging for Remote Sensing; Digital Architecture for a Trace Gas Sensor Platform; Dispersed Fringe Sensing Analysis - DFSA; Indium Tin Oxide Resistor-Based Nitric Oxide Microsensors; Gas Composition Sensing Using Carbon Nanotube Arrays; Sensor for Boundary Shear Stress in Fluid Flow; Model-Based Method for Sensor Validation; Qualification of Engineering Camera for Long-Duration Deep Space Missions; Remotely Powered Reconfigurable Receiver for Extreme Environment Sensing Platforms; Bump Bonding Using Metal-Coated Carbon Nanotubes; In Situ Mosaic Brightness Correction; Simplex GPS and InSAR Inversion Software; Virtual Machine Language 2.1; Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction; Pandora Operation and Analysis Software; Fabrication of a Cryogenic Bias Filter for Ultrasensitive Focal Plane; Processing of Nanosensors Using a Sacrificial Template Approach; High-Temperature Shape Memory Polymers; Modular Flooring System; Non-Toxic, Low-Freezing, Drop-In Replacement Heat Transfer Fluids; Materials That Enhance Efficiency and Radiation Resistance of Solar Cells; Low-Cost, Rugged High-Vacuum System; Static Gas-Charging Plug; Floating Oil-Spill Containment Device; Stemless Ball Valve; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Oxygen-Methane Thruster; Lunar Navigation Determination System - LaNDS; Launch Method for Kites in Low-Wind or No-Wind Conditions; Supercritical CO2 Cleaning System for Planetary Protection and Contamination Control Applications; Design and Performance of a Wideband Radio Telescope; Finite Element Models for Electron Beam Freeform Fabrication Process; Autonomous Information Unit for Fine-Grain Data Access Control and Information Protection in a Net-Centric System; Vehicle Detection for RCTA/ANS (Autonomous Navigation System); Image Mapping and Visual Attention on the Sensory Ego-Sphere; HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis; and IMAGESEER - IMAGEs for Education and Research.

  12. Simple laser vision sensor calibration for surface profiling applications

    NASA Astrophysics Data System (ADS)

    Abu-Nabah, Bassam A.; ElSoussi, Adnane O.; Al Alami, Abed ElRahman K.

    2016-09-01

    Due to the relatively large structures in the Oil and Gas industry, original equipment manufacturers (OEMs) have been implementing custom-designed laser vision sensor (LVS) surface profiling systems as part of quality control in their manufacturing processes. The rough manufacturing environment and the continuous movement and misalignment of these custom-designed tools adversely affect the accuracy of laser-based vision surface profiling applications. Accordingly, Oil and Gas businesses have been raising the demand from the OEMs to implement practical and robust LVS calibration techniques prior to running any visual inspections. This effort introduces an LVS calibration technique representing a simplified version of two known calibration techniques, which are commonly implemented to obtain a calibrated LVS system for surface profiling applications. Both calibration techniques are implemented virtually and experimentally to scan simulated and three-dimensional (3D) printed features of known profiles, respectively. Scanned data is transformed from the camera frame to points in the world coordinate system and compared with the input profiles to validate the introduced calibration technique capability against the more complex approach and preliminarily assess the measurement technique for weld profiling applications. Moreover, the sensitivity to stand-off distances is analyzed to illustrate the practicality of the presented technique.
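
    A hedged sketch of the core transformation mentioned above, mapping scanned profile points from the camera frame into the world coordinate system with a calibrated rotation and translation; the numeric calibration values are placeholders rather than results from the paper.

        # Hedged sketch: apply a calibrated rigid transform (rotation R, translation t)
        # to move laser-profile points from the camera frame into the world coordinate
        # system, as done when comparing scans against known profiles. The numeric
        # R and t below are placeholders standing in for real calibration output.
        import numpy as np

        def camera_to_world(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
            """points_cam: (N, 3) array of 3D points in the camera frame."""
            T = np.eye(4)
            T[:3, :3] = R
            T[:3, 3] = t
            homogeneous = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
            return (T @ homogeneous.T).T[:, :3]

        # Placeholder calibration: 10-degree rotation about Z and a 0.5 m offset in X.
        angle = np.deg2rad(10.0)
        R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                      [np.sin(angle),  np.cos(angle), 0.0],
                      [0.0,            0.0,           1.0]])
        t = np.array([0.5, 0.0, 0.0])
        profile_cam = np.array([[0.0, 0.0, 1.2], [0.01, 0.0, 1.21]])
        print(camera_to_world(profile_cam, R, t))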

  13. Development of a novel haptic glove for improving finger dexterity in poststroke rehabilitation.

    PubMed

    Lin, Chi-Ying; Tsai, Chia-Min; Shih, Pei-Cheng; Wu, Hsiao-Ching

    2015-01-01

    Almost all stroke patients experience a certain degree of fine motor impairment, and impeded finger movement may limit activities in daily life. Thus, to improve the quality of life of stroke patients, designing an efficient training device for fine motor rehabilitation is crucial. This study aimed to develop a novel fine motor training glove that integrates a virtual-reality-based interactive environment with vibrotactile feedback for more effective poststroke hand rehabilitation. The proposed haptic rehabilitation device is equipped with small DC vibration motors for vibrotactile feedback stimulation and piezoresistive thin-film force sensors for motor function evaluation. Two virtual-reality-based games, "gopher hitting" and "musical note hitting", were developed as a haptic interface. According to the designed rehabilitation program, patients intuitively push and exercise their fingers to improve finger isolation function. Preliminary tests were conducted to assess the feasibility of the developed haptic rehabilitation system and to identify design concerns regarding practical use in future clinical testing.

  14. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  15. ARCHAEO-SCAN: Portable 3D shape measurement system for archaeological field work

    NASA Astrophysics Data System (ADS)

    Knopf, George K.; Nelson, Andrew J.

    2004-10-01

    Accurate measurement and thorough documentation of excavated artifacts are the essential tasks of archaeological fieldwork. The on-site recording and long-term preservation of fragile evidence can be improved using 3D spatial data acquisition and computer-aided modeling technologies. Once the artifact is digitized and its geometry created in a virtual environment, the scientist can manipulate the pieces in a virtual reality environment to develop a "realistic" reconstruction of the object without physically handling or gluing the fragments. The ARCHAEO-SCAN system is a flexible, affordable 3D coordinate data acquisition and geometric modeling system for acquiring surface and shape information of small- to medium-sized artifacts and bone fragments. The shape measurement system is being developed to enable the field archaeologist to manually sweep the non-contact sensor head across the relic or artifact surface. A series of unique data acquisition, processing, registration and surface reconstruction algorithms is then used to integrate 3D coordinate information from multiple views into a single reference frame. A novel technique for automatically creating a hexahedral mesh of the recovered fragments is presented. The 3D model acquisition system is designed to operate from a standard laptop with minimal additional hardware and proprietary software support. The captured shape data can be pre-processed and displayed on site, stored digitally on a CD, or transmitted via the Internet to the researcher's home institution.

  16. Estimating Three-Dimensional Orientation of Human Body Parts by Inertial/Magnetic Sensing

    PubMed Central

    Sabatini, Angelo Maria

    2011-01-01

    User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics, and virtual reality, where they can also be applied for real-time tracking of the orientation of human body parts in three-dimensional (3D) space. Although they are a promising choice as wearable sensors in many respects, the inertial and magnetic sensors currently in use offer measurement performance that is critical for achieving and maintaining accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation. PMID:22319365

  17. Estimating three-dimensional orientation of human body parts by inertial/magnetic sensing.

    PubMed

    Sabatini, Angelo Maria

    2011-01-01

    User-worn sensing units composed of inertial and magnetic sensors are becoming increasingly popular in various domains, including biomedical engineering, robotics, and virtual reality, where they can also be applied for real-time tracking of the orientation of human body parts in three-dimensional (3D) space. Although they are a promising choice as wearable sensors in many respects, the inertial and magnetic sensors currently in use offer measurement performance that is critical for achieving and maintaining accurate 3D-orientation estimates, anytime and anywhere. This paper reviews the main sensor fusion and filtering techniques proposed for accurate inertial/magnetic orientation tracking of human body parts; it also gives useful recipes for their actual implementation.
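
    As a minimal, hedged example of the kind of fusion technique such a review covers (not the paper's own recipes), the sketch below implements a one-axis complementary filter that blends gyro-integrated pitch with accelerometer-derived pitch; the blend factor and the synthetic measurement sequence are illustrative.

        # Hedged sketch of one of the simplest orientation-fusion schemes: a
        # complementary filter that blends gyro-integrated pitch (smooth, but drifting)
        # with accelerometer-derived pitch (noisy, but drift-free). The 0.98 blend
        # factor and the synthetic samples are illustrative.
        import math

        def accel_pitch(ax: float, ay: float, az: float) -> float:
            # Pitch from the gravity direction (radians), valid when acceleration ~ gravity.
            return math.atan2(-ax, math.sqrt(ay * ay + az * az))

        def complementary_filter(samples, dt: float = 0.01, alpha: float = 0.98) -> float:
            pitch = 0.0
            for gyro_y, ax, ay, az in samples:   # gyro_y in rad/s, accel in m/s^2
                pitch = alpha * (pitch + gyro_y * dt) + (1.0 - alpha) * accel_pitch(ax, ay, az)
            return pitch

        # Stationary unit tilted ~5 degrees: gyro reads ~0, accelerometer sees tilted gravity.
        g = 9.81
        tilt = math.radians(5.0)
        samples = [(0.0, -g * math.sin(tilt), 0.0, g * math.cos(tilt))] * 500
        print(math.degrees(complementary_filter(samples)))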

  18. Kansei Biosensor and IT Society

    NASA Astrophysics Data System (ADS)

    Toko, Kiyoshi

    A taste sensor with global selectivity is composed of several kinds of lipid/polymer membranes that transform information about taste substances into an electric signal. The sensor output shows different patterns for chemical substances that have different taste qualities, such as saltiness and sourness. Taste interactions such as the suppression effect, which occurs between bitterness and sweetness, can be detected and quantified using the taste sensor. The taste and also the smell of foodstuffs such as beer, coffee, mineral water, soup and milk can be evaluated quantitatively. The taste sensor provides an objective scale for human sensory expression. Multi-modal communication becomes possible using a taste/smell recognition microchip, which produces virtual taste. We are now standing at the beginning of a new age of communication using digitized taste.

  19. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation

    PubMed Central

    2011-01-01

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, &3) that contains a 'Kinnogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces. PMID:21791054

  20. Web GIS in practice X: a Microsoft Kinect natural user interface for Google Earth navigation.

    PubMed

    Boulos, Maged N Kamel; Blanchard, Bryan J; Walker, Cory; Montero, Julio; Tripathy, Aalap; Gutierrez-Osuna, Ricardo

    2011-07-26

    This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, &3) that contains a 'Kinnogle installation package for Windows PCs'. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.

  1. Distributed Sensor Fusion for Scalar Field Mapping Using Mobile Sensor Networks.

    PubMed

    La, Hung Manh; Sheng, Weihua

    2013-04-01

    In this paper, autonomous mobile sensor networks are deployed to measure a scalar field and build its map. We develop a novel method for multiple mobile sensor nodes to build this map using noisy sensor measurements. Our method consists of two parts. First, we develop a distributed sensor fusion algorithm by integrating two different distributed consensus filters to achieve cooperative sensing among sensor nodes. This fusion algorithm has two phases. In the first phase, a weighted average consensus filter is developed, which allows each sensor node to find an estimate of the value of the scalar field at each time step. In the second phase, an average consensus filter is used to allow each sensor node to find a confidence for that estimate at each time step. The final estimate of the value of the scalar field is iteratively updated during the movement of the mobile sensors via weighted averaging. Second, we develop a distributed flocking-control algorithm to drive the mobile sensors to form a network and track the virtual leader moving along the field when only a small subset of the mobile sensors knows the leader's information. Experimental results are provided to demonstrate our proposed algorithms.
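
    A hedged sketch of the flavor of the first fusion phase: each node repeatedly replaces its estimate with a confidence-weighted average over itself and its neighbours, driving all estimates toward a common value. The four-node ring topology, confidences, and noisy readings are toy values, and the code is a simplification of the paper's filter rather than its exact formulation.

        # Hedged, simplified sketch of a weighted-average consensus step: each node
        # averages its own estimate with its neighbours', weighted by confidence.
        # Topology, confidences, and readings are toy values.
        def weighted_consensus(estimates, confidences, neighbours, iterations=50):
            x = list(estimates)
            for _ in range(iterations):
                new_x = []
                for i in range(len(x)):
                    idx = [i] + neighbours[i]
                    total_w = sum(confidences[j] for j in idx)
                    new_x.append(sum(confidences[j] * x[j] for j in idx) / total_w)
                x = new_x
            return x

        # Four mobile nodes in a ring, each with a noisy reading of the same field value (~20.0).
        readings = [19.4, 20.3, 20.9, 19.8]
        confidences = [1.0, 0.5, 0.8, 1.2]         # e.g. inverse measurement-noise variance
        neighbours = [[1, 3], [0, 2], [1, 3], [2, 0]]
        print(weighted_consensus(readings, confidences, neighbours))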

  2. Digital imaging and remote sensing image generator (DIRSIG) as applied to NVESD sensor performance modeling

    NASA Astrophysics Data System (ADS)

    Kolb, Kimberly E.; Choi, Hee-sue S.; Kaur, Balvinder; Olson, Jeffrey T.; Hill, Clayton F.; Hutchinson, James A.

    2016-05-01

    The US Army's Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (referred to as NVESD) is developing a virtual detection, recognition, and identification (DRI) testing methodology using simulated imagery as a means of augmenting the field testing component of sensor performance evaluation, which is expensive, resource intensive, time consuming, and limited to the available target(s) and existing atmospheric visibility and environmental conditions at the time of testing. Existing simulation capabilities such as the Digital Imaging Remote Sensing Image Generator (DIRSIG) and NVESD's Integrated Performance Model Image Generator (NVIPM-IG) can be combined with existing detection algorithms to reduce cost/time, minimize testing risk, and allow virtual/simulated testing using full spectral and thermal object signatures, as well as those collected in the field. NVESD has developed an end-to-end capability to demonstrate the feasibility of this approach. Simple detection algorithms have been used on the degraded images generated by NVIPM-IG to determine the relative performance of the algorithms on both DIRSIG-simulated and collected images. Evaluating the degree to which the algorithm performance agrees between simulated versus field collected imagery is the first step in validating the simulated imagery procedure.

  3. An Effective Massive Sensor Network Data Access Scheme Based on Topology Control for the Internet of Things.

    PubMed

    Yi, Meng; Chen, Qingkui; Xiong, Neal N

    2016-11-03

    This paper considers the distributed access and control problem of a massive wireless sensor network data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates resource information from location information. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm for group migration based on a combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms existing schemes in terms of enhancing the accessibility of service requests, reducing network delay, and achieving higher load-balancing capacity and resource utilization.

  4. Full-parallax 3D display from stereo-hybrid 3D camera system

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach for the production of the microimages ready to display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system to pick up a 3D data pair and compose a denser point cloud. There is, however, an intrinsic difficulty in the fact that the hybrid sensors have dissimilarities and therefore must be equalized. The processed data facilitate the generation of an integral image by computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and a wide viewing angle.

  5. Integrated development of light armored vehicles based on wargaming simulators

    NASA Astrophysics Data System (ADS)

    Palmarini, Marc; Rapanotti, John

    2004-08-01

    Vehicles are evolving into vehicle networks through improved sensors, computers and communications. Unless carefully planned, these complex systems can result in excessive crew workload and difficulty in optimizing the use of the vehicle. To overcome these problems, a war-gaming simulator is being developed as a common platform to integrate contributions from three different groups. The simulator, OneSAF, is used to integrate simplified models of technology and natural phenomena from scientists and engineers with tactics and doctrine from the military and analyzed in detail by operations analysts. This approach ensures the modelling of processes known to be important regardless of the level of information available about the system. Vehicle survivability can be improved as well with better sensors, computers and countermeasures to detect and avoid or destroy threats. To improve threat detection and reliability, Defensive Aids Suite (DAS) designs are based on three complementary sensor technologies including: acoustics, visible and infrared optics and radar. Both active armour and softkill countermeasures are considered. In a typical scenario, a search radar, providing continuous hemispherical coverage, detects and classifies the threat and cues a tracking radar. Data from the tracking radar is processed and an explosive grenade is launched to destroy or deflect the threat. The angle of attack and velocity from the search radar can be used by the soft-kill system to carry out an infrared search and track or an illuminated range-gated scan for the threat platform. Upon detection, obscuration, countermanoeuvres and counterfire can be used against the threat. The sensor suite is completed by acoustic detection of muzzle blast and shock waves. Automation and networking at the platoon level contribute to improved vehicle survivability. Sensor data fusion is essential in avoiding catastrophic failure of the DAS. The modular DAS components can be used with Light Armoured Vehicle (LAV) variants including: armoured personnel carriers and direct-fire support vehicles. OneSAF will be used to assess the performance of these DAS-equipped vehicles on a virtual battlefield.

  6. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to assemble easily a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include communication data network, smart sensor and actuator nodes, centralized control system (FADEC, full authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.

  7. Integrating Fiber Optic Strain Sensors into Metal Using Ultrasonic Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Hehr, Adam; Norfolk, Mark; Wenning, Justin; Sheridan, John; Leser, Paul; Leser, Patrick; Newman, John A.

    2018-03-01

    Ultrasonic additive manufacturing, a rather new three-dimensional (3D) printing technology, uses ultrasonic energy to produce metallurgical bonds between layers of metal foils near room temperature. This low temperature attribute of the process enables integration of temperature sensitive components, such as fiber optic strain sensors, directly into metal structures. This may be an enabling technology for Digital Twin applications, i.e., virtual model interaction and feedback with live load data. This study evaluates the consolidation quality, interface robustness, and load sensing limits of commercially available fiber optic strain sensors embedded into aluminum alloy 6061. Lastly, an outlook on the technology and its applications is described.

  8. Electrochemistry in diabetes management.

    PubMed

    Heller, Adam; Feldman, Ben

    2010-07-20

    Diabetes devastates lives and burdens society. Hypoglycemic (low glucose) episodes cause blackouts, and severe ones are life-threatening. Periods of hyperglycemia (high glucose) cause circulatory disease, stroke, amputations, blindness, kidney failure and nerve degeneration. In this Account, we describe the founding of TheraSense, now a major part of Abbott Diabetes Care, and the development of two products that have improved the lives of people with diabetes. The first, a virtually painless microcoulometer (300 nL volume), the FreeStyle blood glucose monitoring system, was approved by the FDA and became available in 2000. In 2009, this system was used in more than one billion blood assays. The second, the enzyme-wiring-based, subcutaneously implanted FreeStyle Navigator continuous glucose monitoring system, was approved by the FDA and became available in the United States in 2008. The strips of the FreeStyle blood glucose monitoring system comprise a printed parallel-plate coulometer, with a 50 µm gap between two facing printed electrodes, a carbon electrode and a Ag/AgCl electrode. The volume of blood between the facing plates is accurately controlled. The glucose is electrooxidized through catalysis by a glucose dehydrogenase (GDH) and an Os(2+/3+) redox mediator, which is reduced by the glucose-reduced enzyme and is electrooxidized on the carbon electrode. Initially the system used pyrroloquinoline quinone (PQQ)-dependent GDH but now uses flavin adenine dinucleotide (FAD)-dependent GDH. Because the facing electrodes are separated by such a small distance, shuttling of electrons by the redox couple could interfere with the coulometric assay. However, the Os(2+/3+) redox mediator is selected to have a substantially negative formal potential, between 0.0 and -0.2 V, versus that of the facing Ag/AgCl electrode. This makes the flow of a shuttling current between the two electrodes virtually impossible because the oxidized Os(3+) complex cannot be appreciably reduced at the more positively poised Ag/AgCl electrode. The FreeStyle Navigator continuous glucose monitoring system uses a subcutaneously implanted miniature plastic sensor connected to a transmitter to measure glycemia amperometrically and sends the information to a PDA-like device every minute. The sensor consists of a narrow (0.6 mm wide) plastic substrate on which carbon working, Ag/AgCl reference, and carbon counter electrodes are printed in a stacked geometry. The active wired-enzyme sensing layer covers only about 0.1 mm^2 of the working electrode and is overlaid by a flux-limiting membrane. It resides at about 5 mm depth in the subcutaneous adipose tissue and monitors glucose concentrations over the range 20-500 mg/dL. Its core component, a miniature, disposable, amperometric glucose sensor, has an electrooxidation catalyst made from a crosslinked adduct of glucose oxidase (GOx) and a GOx-wiring redox hydrogel containing a polymer-bound Os(2+/3+) complex. Because of the selectivity of the catalyst for glucose, very little current flows in the absence of glucose. That feature, either alone or in combination with other features of the sensor, facilitates the one-point calibration of the system. The sensor is implanted subcutaneously and replaced by the patient after 5 days of use with minimal pain. The wearer does not feel its presence under the skin.
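
    As a hedged worked example of the coulometric principle behind the 300 nL strip (not a device specification), the charge obtained by exhaustively electrooxidizing the glucose in the sample scales linearly with concentration at two electrons per glucose molecule; the 100 mg/dL test value below is illustrative.

        # Hedged worked example of the coulometric principle: the charge from
        # exhaustively electrooxidizing the glucose in a 300 nL sample is proportional
        # to its concentration (2 electrons per glucose molecule). The 100 mg/dL test
        # value is illustrative, not a device specification.
        FARADAY_C_PER_MOL = 96485.0
        GLUCOSE_G_PER_MOL = 180.16
        ELECTRONS_PER_GLUCOSE = 2

        def expected_charge_coulombs(conc_mg_per_dl: float, volume_nl: float = 300.0) -> float:
            conc_mol_per_l = (conc_mg_per_dl / 100.0) / GLUCOSE_G_PER_MOL   # mg/dL -> g/L -> mol/L
            moles = conc_mol_per_l * volume_nl * 1e-9                       # nL -> L
            return ELECTRONS_PER_GLUCOSE * FARADAY_C_PER_MOL * moles

        # ~100 mg/dL of glucose in 300 nL yields roughly 0.32 millicoulombs.
        print(f"{expected_charge_coulombs(100.0) * 1e3:.2f} mC")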

  9. Application of Virtual, Augmented, and Mixed Reality to Urology.

    PubMed

    Hamacher, Alaric; Kim, Su Jin; Cho, Sung Tae; Pardeshi, Sunil; Lee, Seung Hyun; Eun, Sung-Jong; Whangbo, Taeg Keun

    2016-09-01

    Recent developments in virtual, augmented, and mixed reality have introduced a considerable number of new devices into the consumer market. This momentum is also affecting the medical and health care sector. Although many of the theoretical and practical foundations of virtual reality (VR) were already researched and experienced in the 1980s, the vastly improved features of displays, sensors, interactivity, and computing power currently available in devices offer a new field of applications to the medical sector and also to urology in particular. The purpose of this review article is to review the extent to which VR technology has already influenced certain aspects of medicine, the applications that are currently in use in urology, and the future development trends that could be expected.

  10. Application of Virtual, Augmented, and Mixed Reality to Urology

    PubMed Central

    2016-01-01

    Recent developments in virtual, augmented, and mixed reality have introduced a considerable number of new devices into the consumer market. This momentum is also affecting the medical and health care sector. Although many of the theoretical and practical foundations of virtual reality (VR) were already researched and experienced in the 1980s, the vastly improved features of displays, sensors, interactivity, and computing power currently available in devices offer a new field of applications to the medical sector and also to urology in particular. The purpose of this review article is to review the extent to which VR technology has already influenced certain aspects of medicine, the applications that are currently in use in urology, and the future development trends that could be expected. PMID:27706017

  11. Autonomic Intelligent Cyber Sensor to Support Industrial Control Network Awareness

    DOE PAGES

    Vollmer, Todd; Manic, Milos; Linda, Ondrej

    2013-06-01

    The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of Autonomic computing and a SOAP-based IF-MAP external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, self-managed framework. The contribution of this paper is two-fold: 1) a flexible two-level communication layer based on Autonomic computing and Service Oriented Architecture is detailed and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof-of-concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific Operating System and port configurations. Additionally, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.

  12. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field

    PubMed Central

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, the heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints for pedestrian gait and of any other valuable heading-error-reduction information that is available. In this paper, we exploit two additional motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called "virtual sensors"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth's magnetic field, from which the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed to reduce drift in the zero-velocity-updated INS by incorporating only healthy magnetometer data in the EKF update step. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms. PMID:27618056
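
    One simple way to realize the "healthy magnetometer data" idea is to gate samples on how far their field magnitude departs from the local Earth-field reference; the sketch below does exactly that, with a 50 µT reference and 5 µT tolerance chosen for illustration rather than taken from the paper.

        # Hedged sketch of a magnitude-based magnetic anomaly check: a magnetometer
        # sample is treated as "healthy" (usable in the EKF heading update) only if its
        # norm stays close to the local Earth-field reference. The 50 uT reference and
        # the 5 uT tolerance are illustrative values, not the paper's exact thresholds.
        import math

        def is_healthy_mag_sample(mag_ut, reference_ut=50.0, tolerance_ut=5.0) -> bool:
            magnitude = math.sqrt(sum(component ** 2 for component in mag_ut))
            return abs(magnitude - reference_ut) <= tolerance_ut

        def filter_mag_updates(samples):
            # Keep only samples that may feed the EKF update step; drop distorted ones.
            return [m for m in samples if is_healthy_mag_sample(m)]

        clean = (28.0, 5.0, 41.0)        # |B| ~ 50 uT: plausible Earth field
        distorted = (60.0, 30.0, 55.0)   # |B| ~ 87 uT: near a steel cabinet, rejected
        print(filter_mag_updates([clean, distorted]))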

  13. 3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion

    PubMed Central

    Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.

    2016-01-01

    We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836
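
    A hedged, much-simplified stand-in for the marching step described above: each existing utility track is extrapolated from one scan cross-section to the next and associated with the nearest hypothesized detection inside a gating distance, without the paper's full EKF machinery; the gate size and the toy tracks and detections are illustrative.

        # Hedged, simplified sketch of the marching/association idea (not the paper's
        # EKF formulation): predict each track to the next scan cross-section and pair
        # it with the nearest hypothesized detection inside a gate.
        import math

        def march_tracks(tracks, detections, step=1.0, gate=0.5):
            """tracks: list of (depth, lateral_offset, lateral_slope); detections: (depth, lateral)."""
            associations = []
            for depth, lateral, slope in tracks:
                predicted = (depth, lateral + slope * step)   # extrapolate to the next cross-section
                best, best_dist = None, gate
                for det in detections:
                    dist = math.hypot(det[0] - predicted[0], det[1] - predicted[1])
                    if dist < best_dist:
                        best, best_dist = det, dist
                associations.append((predicted, best))
            return associations

        tracks = [(1.2, 0.0, 0.1), (0.8, 3.0, -0.05)]         # two utility segments
        detections = [(1.2, 0.15), (0.8, 2.9), (1.5, 5.0)]    # hypothesized hits on the next scs
        print(march_tracks(tracks, detections))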

  14. A Grid job monitoring system

    NASA Astrophysics Data System (ADS)

    Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir

    2010-04-01

    This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow in detail their jobs in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax-powered web interface having a look-and-feel and control similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509 authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax-powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.

  15. Gyro Drift Correction for An Indirect Kalman Filter Based Sensor Fusion Driver.

    PubMed

    Lee, Chan-Gun; Dao, Nhu-Ngoc; Jang, Seonmin; Kim, Deokhwan; Kim, Yonghun; Cho, Sungrae

    2016-06-11

    Sensor fusion techniques have made a significant contribution to the success of the recently emerging mobile applications era, because a variety of mobile applications operate based on multi-sensing information from the surrounding environment, such as navigation systems, fitness trackers, interactive virtual reality games, etc. For these applications, the accuracy of sensing information plays an important role in improving the user experience (UX) quality, especially with gyroscopes and accelerometers. Therefore, in this paper, we propose a novel mechanism to resolve the gyro drift problem, which negatively affects the accuracy of orientation computations in indirect Kalman filter based sensor fusion. Our mechanism focuses on addressing the issues of external feedback loops and non-gyro error elements contained in the state vectors of an indirect Kalman filter. Moreover, the mechanism is implemented in the device-driver layer, providing lower processing latency and transparency for the upper applications. These advances are relevant to millions of legacy applications, since utilizing our mechanism does not require the existing applications to be re-programmed. The experimental results show that applying our mechanism reduces the root mean square error (RMSE) significantly, from 6.3 × 10^-1 to 5.3 × 10^-7.

  16. An instrumented glove for grasp specification in virtual-reality-based point-and-direct telerobotics.

    PubMed

    Yun, M H; Cannon, D; Freivalds, A; Thomas, G

    1997-10-01

    Hand posture and force, which define aspects of the way an object is grasped, are features of robotic manipulation. A means for specifying these grasping "flavors" has been developed that uses an instrumented glove equipped with joint and force sensors. The new grasp specification system will be used at the Pennsylvania State University (Penn State) in a Virtual Reality based Point-and-Direct (VR-PAD) robotics implementation. Here, an operator gives directives to a robot in the same natural way that one human may direct another. Phrases such as "put that there" cause the robot to define a grasping strategy and motion strategy to complete the task on its own. In the VR-PAD concept, pointing is done using virtual tools such that an operator can appear to graphically grasp real items in live video. Rather than requiring full duplication of forces and kinesthetic movement throughout a task, as is required in manual telemanipulation, hand posture and force are now specified only once. The grasp parameters then become object flavors. The robot maintains the specified force and hand posture flavors for an object throughout the task in handling the real workpiece or item of interest. In the Computer Integrated Manufacturing (CIM) Laboratory at Penn State, hand posture and force data were collected for manipulating bricks and other items that require varying amounts of force at multiple pressure points. The feasibility of measuring desired grasp characteristics was demonstrated for a modified Cyberglove impregnated with Force-Sensitive Resistor (FSR) pressure sensors in the fingertips. A joint/force model relating the parameters of finger articulation and pressure to various lifting tasks was validated for the instrumented "wired" glove. Operators using such a modified glove may ultimately be able to configure robot grasping tasks in environments involving hazardous waste remediation, flexible manufacturing, space operations and other flexible robotics applications. In each case, the VR-PAD approach will finesse the computational and delay problems of real-time multiple-degree-of-freedom force feedback telemanipulation.
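
    A hedged sketch of what a one-shot grasp "flavor" might look like in code: a snapshot of glove joint angles and fingertip FSR forces stored against an object name for the robot to maintain during the task. The channel names, data layout, and single-sample capture are assumptions, not the Penn State implementation.

        # Hedged sketch of capturing a one-shot "grasp flavor": a snapshot of glove
        # joint angles and fingertip FSR forces stored against an object name, to be
        # maintained by the robot for the rest of the task. Channel names and the
        # single-sample capture are illustrative assumptions.
        from dataclasses import dataclass, field
        from typing import Dict

        @dataclass
        class GraspFlavor:
            object_name: str
            joint_angles_deg: Dict[str, float] = field(default_factory=dict)
            fingertip_forces_n: Dict[str, float] = field(default_factory=dict)

        def capture_flavor(object_name: str, glove_sample: Dict[str, float]) -> GraspFlavor:
            joints = {k: v for k, v in glove_sample.items() if k.startswith("joint_")}
            forces = {k: v for k, v in glove_sample.items() if k.startswith("fsr_")}
            return GraspFlavor(object_name, joints, forces)

        # One glove reading taken while the operator demonstrates gripping a brick.
        sample = {"joint_index_mcp": 42.0, "joint_thumb_ip": 18.5, "fsr_index": 6.2, "fsr_thumb": 5.8}
        print(capture_flavor("brick", sample))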

  17. Fully Three-Dimensional Virtual-Reality System

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1994-01-01

    Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.

  18. Proto-Examples of Data Access and Visualization Components of a Potential Cloud-Based GEOSS-AI System

    NASA Technical Reports Server (NTRS)

    Teng, William; Lynnes, Christopher

    2014-01-01

    Once a research or application problem has been identified, one logical next step is to search for available relevant data products. Thus, an early component of a potential GEOSS-AI system, in the continuum between observations and end-point research, applications, and decision making, would be one that enables transparent data discovery and access by users. Such a component might be effected via the system's data agents. Presumably, some kind of data cataloging has already been implemented, e.g., in the GEOSS Common Infrastructure (GCI). Both the agents and cataloging could also leverage existing resources external to the system. The system would have some means to accept and integrate user-contributed agents. The need or desirability for some data format internal to the system should be evaluated. Another early component would be one that facilitates browsing visualization of the data, as well as some basic analyses. Three ongoing projects at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) provide possible proto-examples of potential data access and visualization components of a cloud-based GEOSS-AI system. 1. Reorganizing data archived as time-step arrays to point-time series (data rods), as well as leveraging the NASA Simple Subset Wizard (SSW), to significantly increase the number of data products available, at multiple NASA data centers, for production as on-the-fly (virtual) data rods. SSW's data discovery is based on OpenSearch. Both pre-generated and virtual data rods are accessible via Web services. 2. Developing Web Feature Services to publish the metadata, and expose the locations, of pre-generated and virtual data rods in the GEOSS Portal and enable direct access of the data via Web services. SSW is also leveraged to increase the availability of both NASA and non-NASA data. 3. Federating NASA Giovanni (Geospatial Interactive Online Visualization and Analysis Interface), for multi-sensor data exploration, which would allow each cooperating data center, currently the NASA Distributed Active Archive Centers (DAACs), to configure its own Giovanni deployment, while also allowing all the deployments to incorporate each other's data. A federated Giovanni comprises Giovanni Virtual Machines, which can be run on local servers or in the cloud.

  19. Aircraft panel with sensorless active sound power reduction capabilities through virtual mechanical impedances

    NASA Astrophysics Data System (ADS)

    Boulandet, R.; Michau, M.; Micheau, P.; Berry, A.

    2016-01-01

    This paper deals with an active structural acoustic control approach to reduce the transmission of tonal noise in aircraft cabins. The focus is on the practical implementation of the virtual mechanical impedances method by using sensoriactuators instead of conventional control units composed of separate sensors and actuators. The experimental setup includes two sensoriactuators developed from the electrodynamic inertial exciter and distributed over an aircraft trim panel which is subject to a time-harmonic diffuse sound field. The target mechanical impedances are first defined by solving a linear optimization problem from sound power measurements before being applied to the test panel using a complex envelope controller. Measured data are compared to results obtained with sensor-actuator pairs consisting of an accelerometer and an inertial exciter, particularly as regards sound power reduction. It is shown that the two types of control unit provide similar performance, and that here virtual impedance control stands apart from conventional active damping. In particular, it is clear from this study that extra vibrational energy must be provided by the actuators for optimal sound power reduction, mainly due to the high structural damping in the aircraft trim panel. Concluding remarks on the benefits of using these electrodynamic sensoriactuators to control tonal disturbances are also provided.
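
    At a single tonal frequency, imposing a virtual mechanical impedance amounts to driving each actuator with a force envelope proportional to the (negated) measured velocity envelope; the sketch below shows that relation, with toy impedance and velocity values rather than the optimized impedances identified from the sound power measurements.

        # Hedged sketch of applying a virtual mechanical impedance at one tonal
        # frequency: each control unit drives its actuator with the complex-envelope
        # force f = -Z_v * v, where v is the co-located (sensoriactuator) velocity
        # envelope. The impedance and velocity values are toy numbers, not the
        # optimized impedances from the experiment.
        import numpy as np

        def control_forces(virtual_impedances: np.ndarray, velocity_envelopes: np.ndarray) -> np.ndarray:
            """Complex envelopes at the disturbance frequency; one entry per actuator."""
            return -virtual_impedances * velocity_envelopes

        Z_v = np.array([12.0 + 4.0j, 9.5 - 2.0j])        # target mechanical impedances (N*s/m)
        v = np.array([0.002 + 0.001j, -0.0015 + 0.0j])   # measured velocity envelopes (m/s)
        print(control_forces(Z_v, v))                    # force envelopes (N) sent to the exciters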

  20. Virtual Network Configuration Management System for Data Center Operations and Management

    NASA Astrophysics Data System (ADS)

    Okita, Hideki; Yoshizawa, Masahiro; Uehara, Keitaro; Mizuno, Kazuhiko; Tarui, Toshiaki; Naono, Ken

    Virtualization technologies are widely deployed in data centers to improve system utilization. However, they increase the workload for operators, who have to manage the structure of virtual networks in data centers. A virtual-network management system that automates the integration of virtual-network configurations is provided. The proposed system collects the configurations from server virtualization platforms and VLAN-supported switches, and integrates these configurations according to a newly developed XML-based management information model for virtual-network configurations. Preliminary evaluations show that the proposed system helps operators by reducing, by about 40 percent, the time needed to acquire the configurations from devices and to correct inconsistencies in the operators' configuration management database. Further, they also show that the proposed system has excellent scalability: the system takes less than 20 minutes to acquire the virtual-network configurations from a large-scale network that includes 300 virtual machines. These results imply that the proposed system is effective for improving the configuration management process for virtual networks in data centers.

  1. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real world images with those rendered from virtual space software shows a more or less visible mismatch between the corresponding image quality performance. Rendered images are produced by software whose quality performance is limited only by the output resolution. Real world images are taken with cameras that add some amount of image degradation from factors such as lens residual aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object with the system PSF, its characterization shows the amount of degradation added to any picture taken. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real world image qualities. The system MTF is determined by the slanted edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
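
    A sketch of the matching step described above: the rendered image is degraded with a Gaussian approximation of the camera system's PSF so its sharpness is comparable to the real photograph. The sigma value is a stand-in for the one that would be derived from the slanted-edge MTF measurement.

    ```python
    # Degrade a CGI rendering with a Gaussian PSF approximation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rendered = np.random.rand(512, 512)     # stand-in for the rendered image
    sigma_px = 1.4                          # assumed PSF width, in pixels

    degraded = gaussian_filter(rendered, sigma=sigma_px)
    ```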

  2. Simulation of Smart Home Activity Datasets

    PubMed Central

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-01-01

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation. PMID:26087371

  3. Simulation of Smart Home Activity Datasets.

    PubMed

    Synnott, Jonathan; Nugent, Chris; Jeffers, Paul

    2015-06-16

    A globally ageing population is resulting in an increased prevalence of chronic conditions which affect older adults. Such conditions require long-term care and management to maximize quality of life, placing an increasing strain on healthcare resources. Intelligent environments such as smart homes facilitate long-term monitoring of activities in the home through the use of sensor technology. Access to sensor datasets is necessary for the development of novel activity monitoring and recognition approaches. Access to such datasets is limited due to issues such as sensor cost, availability and deployment time. The use of simulated environments and sensors may address these issues and facilitate the generation of comprehensive datasets. This paper provides a review of existing approaches for the generation of simulated smart home activity datasets, including model-based approaches and interactive approaches which implement virtual sensors, environments and avatars. The paper also provides recommendations for future work in intelligent environment simulation.

  4. Combining millimeter-wave radar and communication paradigms for automotive applications : a signal processing approach.

    DOT National Transportation Integrated Search

    2016-05-01

    As driving becomes more automated, vehicles are being equipped with more sensors generating even higher data rates. Radars (RAdio Detection and Ranging) are used for object detection, visual cameras as virtual mirrors, and LIDARs (LIght Detection and...

  5. Virtual Proprioception for eccentric training.

    PubMed

    LeMoyne, Robert; Mastroianni, Timothy

    2017-07-01

    Wireless inertial sensors enable quantified feedback, which can be applied to evaluate the efficacy of therapy and rehabilitation. In particular, eccentric training is a beneficial rehabilitation and strength training strategy. Virtual Proprioception for eccentric training applies real-time feedback from a wireless gyroscope platform enabled through a software application for a smartphone. Virtual Proprioception for eccentric training is applied to the eccentric phase of biceps brachii strength training and contrasted with a biceps brachii strength training scenario without feedback. During the operation of Virtual Proprioception for eccentric training, the intent is to not exceed a prescribed gyroscope signal threshold, based on the real-time presentation of the gyroscope signal, in order to promote the eccentric aspect of the strength training endeavor. The experimental trial data are transmitted wirelessly through Internet connectivity as an email attachment for remote post-processing. A feature set is derived from the gyroscope signal for machine learning classification of the two scenarios of Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback. Considerable classification accuracy is achieved through the application of a multilayer perceptron neural network for distinguishing between Virtual Proprioception real-time feedback for eccentric training and eccentric training without feedback.
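
    A hedged sketch of the analysis pipeline described above: a small feature set is derived from a gyroscope signal and a multilayer perceptron distinguishes feedback from no-feedback trials. The signals, labels and features here are synthetic assumptions; the published feature set may differ.

    ```python
    # Gyroscope feature extraction and MLP classification of two trial types.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def features(gyro: np.ndarray) -> list:
        return [gyro.mean(), gyro.std(), gyro.max(), gyro.min(),
                np.abs(np.diff(gyro)).mean()]

    rng = np.random.default_rng(0)
    X, y = [], []
    for label, scale in [(0, 1.0), (1, 0.6)]:     # 1 = real-time feedback trials
        for _ in range(20):
            X.append(features(scale * rng.standard_normal(500)))
            y.append(label)

    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
    print(clf.score(X, y))
    ```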

  6. Virtualized Traffic: reconstructing traffic flows from discrete spatiotemporal data.

    PubMed

    Sewall, Jason; van den Berg, Jur; Lin, Ming C; Manocha, Dinesh

    2011-01-01

    We present a novel concept, Virtualized Traffic, to reconstruct and visualize continuous traffic flows from discrete spatiotemporal data provided by traffic sensors or generated artificially to enhance a sense of immersion in a dynamic virtual world. Given the positions of each car at two recorded locations on a highway and the corresponding time instances, our approach can reconstruct the traffic flows (i.e., the dynamic motions of multiple cars over time) between the two locations along the highway for immersive visualization of virtual cities or other environments. Our algorithm is applicable to high-density traffic on highways with an arbitrary number of lanes and takes into account the geometric, kinematic, and dynamic constraints on the cars. Our method reconstructs the car motion that automatically minimizes the number of lane changes, respects safety distance to other cars, and computes the acceleration necessary to obtain a smooth traffic flow subject to the given constraints. Furthermore, our framework can process a continuous stream of input data in real time, enabling the users to view virtualized traffic events in a virtual world as they occur. We demonstrate our reconstruction technique with both synthetic and real-world input.
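
    A much-simplified stand-in for the reconstruction step: one car's motion is interpolated between two recorded (time, position, speed) samples with a cubic Hermite curve, which matches both end positions and end speeds. The paper's method additionally enforces lane, safety and dynamics constraints across all cars; none of that is modeled here, and the sample values are assumptions.

    ```python
    # Interpolate a single car trajectory between two sensor observations.
    import numpy as np

    t0, x0, v0 = 0.0, 0.0, 22.0      # entry sensor: time s, position m, speed m/s
    t1, x1, v1 = 10.0, 260.0, 28.0   # exit sensor

    def position(t: float) -> float:
        """Cubic Hermite interpolation matching end positions and speeds."""
        s = (t - t0) / (t1 - t0)
        h00, h10 = 2*s**3 - 3*s**2 + 1, s**3 - 2*s**2 + s
        h01, h11 = -2*s**3 + 3*s**2, s**3 - s**2
        return h00*x0 + h10*(t1 - t0)*v0 + h01*x1 + h11*(t1 - t0)*v1

    print([round(position(t), 1) for t in np.linspace(t0, t1, 6)])
    ```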

  7. Hyperspectral target detection analysis of a cluttered scene from a virtual airborne sensor platform using MuSES

    NASA Astrophysics Data System (ADS)

    Packard, Corey D.; Viola, Timothy S.; Klein, Mark D.

    2017-10-01

    The ability to predict spectral electro-optical (EO) signatures for various targets against realistic, cluttered backgrounds is paramount for rigorous signature evaluation. Knowledge of background and target signatures, including plumes, is essential for a variety of scientific and defense-related applications including contrast analysis, camouflage development, automatic target recognition (ATR) algorithm development and scene material classification. The capability to simulate any desired mission scenario with forecast or historical weather is a tremendous asset for defense agencies, serving as a complement to (or substitute for) target and background signature measurement campaigns. In this paper, a systematic process for the physical temperature and visible-through-infrared radiance prediction of several diverse targets in a cluttered natural environment scene is presented. The ability of a virtual airborne sensor platform to detect and differentiate targets from a cluttered background, from a variety of sensor perspectives and across numerous wavelengths in differing atmospheric conditions, is considered. The process described utilizes the thermal and radiance simulation software MuSES and provides a repeatable, accurate approach for analyzing wavelength-dependent background and target (including plume) signatures in multiple band-integrated wavebands (multispectral) or hyperspectrally. The engineering workflow required to combine 3D geometric descriptions, thermal material properties, natural weather boundary conditions, all modes of heat transfer and spectral surface properties is summarized. This procedure includes geometric scene creation, material and optical property attribution, and transient physical temperature prediction. Radiance renderings, based on ray-tracing and the Sandford-Robertson BRDF model, are coupled with MODTRAN for the inclusion of atmospheric effects. This virtual hyperspectral/multispectral radiance prediction methodology has been extensively validated and provides a flexible process for signature evaluation and algorithm development.
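
    A hedged sketch of one ingredient of such signature predictions: the band-integrated blackbody radiance of a surface at a predicted temperature. Emissivity, the Sandford-Robertson BRDF and the MODTRAN atmospheric effects used in the paper are omitted; the band and temperature are assumptions.

    ```python
    # Band-integrated Planck radiance for a surface at a predicted temperature.
    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

    def planck_radiance(wavelength_m, T):
        """Spectral radiance, W m^-2 sr^-1 m^-1."""
        return (2*h*c**2 / wavelength_m**5) / (np.exp(h*c / (wavelength_m*k*T)) - 1.0)

    wl = np.linspace(3e-6, 5e-6, 500)                  # MWIR band, 3-5 um
    L_band = np.trapz(planck_radiance(wl, 320.0), wl)  # target at 320 K (assumed)
    print(f"{L_band:.2f} W m^-2 sr^-1 in the 3-5 um band")
    ```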

  8. A Low-cost System for Generating Near-realistic Virtual Actors

    NASA Astrophysics Data System (ADS)

    Afifi, Mahmoud; Hussain, Khaled F.; Ibrahim, Hosny M.; Omar, Nagwa M.

    2015-06-01

    Generating virtual actors is one of the most challenging fields in computer graphics. The reconstruction of a realistic virtual actor has been paid attention by the academic research and the film industry to generate human-like virtual actors. Many movies were acted by human-like virtual actors, where the audience cannot distinguish between real and virtual actors. The synthesis of realistic virtual actors is considered a complex process. Many techniques are used to generate a realistic virtual actor; however they usually require expensive hardware equipment. In this paper, a low-cost system that generates near-realistic virtual actors is presented. The facial features of the real actor are blended with a virtual head that is attached to the actor's body. Comparing with other techniques that generate virtual actors, the proposed system is considered a low-cost system that requires only one camera that records the scene without using any expensive hardware equipment. The results of our system show that the system generates good near-realistic virtual actors that can be used on many applications.

  9. Analysis of a ferrofluid core differential transformer tilt measurement sensor

    NASA Astrophysics Data System (ADS)

    Medvegy, T.; Molnár, Á.; Molnár, G.; Gugolya, Z.

    2017-04-01

    In our work, we developed a ferrofluid core differential transformer sensor, which can be used to measure tilt and acceleration. The proposed sensor consisted of three coils, of which the primary was excited with an alternating current. The space surrounded by the coils contained a cell half-filled with ferrofluid; in the horizontal state of the sensor, the fluid is therefore distributed equally among the three sections of the cell surrounded by the three coils. However, when the cell is tilted or accelerated (in the direction of the axis of the coils), the three sections contain different amounts of ferrofluid. The voltage induced in the secondary coils strongly depends on the amount of ferrofluid in the core they surround, so the tilt or the acceleration of the cell becomes measurable. We constructed the sensor in several layouts. The linearly coiled sensor had an excellent resolution. Another version with a toroidal cell had almost perfect linearity and a virtually infinite measuring range.
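
    An illustrative sketch of the differential read-out principle: tilt is inferred from the imbalance of the two secondary-coil voltages. The linear calibration constant is a hypothetical placeholder, not a value from the paper.

    ```python
    # Tilt estimate from the normalized imbalance of the two secondary voltages.
    def tilt_degrees(v_sec_a: float, v_sec_b: float, k_deg_per_unit: float = 25.0) -> float:
        """Assumed linear calibration between voltage imbalance and tilt angle."""
        imbalance = (v_sec_a - v_sec_b) / (v_sec_a + v_sec_b)
        return k_deg_per_unit * imbalance

    print(tilt_degrees(1.08, 0.92))   # ~2 degrees with the assumed calibration
    ```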

  10. A True-Color Sensor and Suitable Evaluation Algorithm for Plant Recognition

    PubMed Central

    Schmittmann, Oliver; Schulze Lammers, Peter

    2017-01-01

    Plant-specific herbicide application requires sensor systems for plant recognition and differentiation. A literature review reveals a lack of sensor systems capable of recognizing small weeds in early stages of development (in the two- or four-leaf stage) and crop plants, of making spraying decisions in real time and that, in addition, are inexpensive and ready for practical use in sprayers. The system described in this work is based on freely cascadable and programmable true-color sensors for real-time recognition and identification of individual weed and crop plants. The application of this type of sensor is suitable for municipal areas and farmland with and without crops to perform the site-specific application of herbicides. Initially, databases with reflection properties of plants and natural and artificial backgrounds were created. Crop and weed plants should be recognized by the use of mathematical algorithms and decision models based on these data. They include the characteristic color spectrum, as well as the reflectance characteristics of unvegetated areas and areas with organic material. The CIE-Lab color space was chosen for color matching because it contains information not only about coloration (a- and b-channel), but also about luminance (L-channel), thus increasing accuracy. Four different decision making algorithms based on different parameters are explained: (i) color similarity (ΔE); (ii) color similarity split into ΔL, Δa and Δb; (iii) a virtual channel ‘d’ and (iv) statistical distribution of the differences in reflection between backgrounds and plants. Afterwards, the detection success of the recognition system is described. Furthermore, the minimum weed/plant coverage of the measuring spot was calculated by a mathematical model. Plants with a size of 1–5% of the spot can be recognized, and weeds in the two-leaf stage can be identified with a measuring spot size of 5 cm. By choosing a suitable decision model beforehand, the detection quality can be increased. Depending on the characteristics of the background, different models are suitable. Finally, the results of field trials on municipal areas (with models of plants), winter wheat fields (with artificial plants) and grassland (with dock) are shown. In each experimental variant, objects and weeds could be recognized. PMID:28786922
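
    A sketch of decision criteria (i) and (ii) above: the CIE76 color difference ΔE and its split into ΔL, Δa and Δb components. The reference values for "plant green", the measured spot and the acceptance threshold are illustrative assumptions, not values from the paper's databases.

    ```python
    # CIE76 color difference and its per-channel split in CIELAB.
    import numpy as np

    def delta_e(lab1, lab2):
        dL, da, db = np.subtract(lab1, lab2)
        return float(np.sqrt(dL**2 + da**2 + db**2)), (dL, da, db)

    plant_ref   = (52.0, -38.0, 32.0)   # assumed Lab reference for a leaf
    measurement = (49.5, -30.0, 28.0)   # sensor reading of one spot

    dE, (dL, da, db) = delta_e(measurement, plant_ref)
    is_plant = dE < 12.0                # hypothetical acceptance threshold
    print(dE, dL, da, db, is_plant)
    ```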

  11. A True-Color Sensor and Suitable Evaluation Algorithm for Plant Recognition.

    PubMed

    Schmittmann, Oliver; Schulze Lammers, Peter

    2017-08-08

    Plant-specific herbicide application requires sensor systems for plant recognition and differentiation. A literature review reveals a lack of sensor systems capable of recognizing small weeds in early stages of development (in the two- or four-leaf stage) and crop plants, of making spraying decisions in real time and that, in addition, are inexpensive and ready for practical use in sprayers. The system described in this work is based on freely cascadable and programmable true-color sensors for real-time recognition and identification of individual weed and crop plants. The application of this type of sensor is suitable for municipal areas and farmland with and without crops to perform the site-specific application of herbicides. Initially, databases with reflection properties of plants and natural and artificial backgrounds were created. Crop and weed plants should be recognized by the use of mathematical algorithms and decision models based on these data. They include the characteristic color spectrum, as well as the reflectance characteristics of unvegetated areas and areas with organic material. The CIE-Lab color space was chosen for color matching because it contains information not only about coloration (a- and b-channel), but also about luminance (L-channel), thus increasing accuracy. Four different decision making algorithms based on different parameters are explained: (i) color similarity (ΔE); (ii) color similarity split into ΔL, Δa and Δb; (iii) a virtual channel 'd' and (iv) statistical distribution of the differences in reflection between backgrounds and plants. Afterwards, the detection success of the recognition system is described. Furthermore, the minimum weed/plant coverage of the measuring spot was calculated by a mathematical model. Plants with a size of 1-5% of the spot can be recognized, and weeds in the two-leaf stage can be identified with a measuring spot size of 5 cm. By choosing a suitable decision model beforehand, the detection quality can be increased. Depending on the characteristics of the background, different models are suitable. Finally, the results of field trials on municipal areas (with models of plants), winter wheat fields (with artificial plants) and grassland (with dock) are shown. In each experimental variant, objects and weeds could be recognized.

  12. Physical environment virtualization for human activities recognition

    NASA Astrophysics Data System (ADS)

    Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2015-05-01

    Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time-consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for the training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.

  13. Detection/classification/quantification of chemical agents using an array of surface acoustic wave (SAW) devices

    NASA Astrophysics Data System (ADS)

    Milner, G. Martin

    2005-05-01

    ChemSentry is a portable system used to detect, identify, and quantify chemical warfare (CW) agents. Electrochemical (EC) cell sensor technology is used for blood agents, and an array of surface acoustic wave (SAW) sensors is used for nerve and blister agents. The combination of the EC cell and the SAW array provides sufficient sensor information to detect, classify and quantify all CW agents of concern using smaller, lighter, lower cost units. Initial development of the SAW array and processing was a key challenge for ChemSentry, requiring several years of fundamental testing of polymers and coating methods to finalize the sensor array design in 2001. Following the finalization of the SAW array, nearly three (3) years of intensive testing in both laboratory and field environments were required in order to gather sufficient data to fully understand the response characteristics. Virtually unbounded permutations of agent and environmental characteristics must be considered in order to operate against all agents and all environments of interest to the U.S. military and other potential users of ChemSentry. The resulting signal processing design, matched to this extensive body of measured data (over 8,000 agent challenges and 10,000 hours of ambient data), is considered to be a significant advance in the state of the art for CW agent detection.

  14. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Machining status monitoring with multiple sensors can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control is normally used to distinguish abnormal fluctuations from normal fluctuations through statistical methods. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of their integration and fusion are introduced. An approach that integrates multi-sensor status monitoring and statistical process control, based on artificial intelligence, Internet and database techniques, is then brought forward. Based on virtual instrument techniques, the authors developed the machining quality assurance system MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and acoustic emission (AE) signal information from the wheel dressing process, the cause of machining quality fluctuation has been obtained. The experimental results indicate that the approach is suitable for the status monitoring and analysis of the machining process.
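
    A sketch of the statistical-process-control half of the fusion described above: an individuals control chart flags quality measurements falling outside mean +/- 3 sigma limits estimated from an in-control reference run. The data, limits and part dimensions are synthetic assumptions.

    ```python
    # Individuals control chart: flag measurements outside the 3-sigma limits.
    import numpy as np

    rng = np.random.default_rng(1)
    reference = rng.normal(10.0, 0.05, 100)          # in-control ground dimension, mm
    center, sigma = reference.mean(), reference.std(ddof=1)
    ucl, lcl = center + 3*sigma, center - 3*sigma

    new_parts = np.array([10.02, 9.98, 10.21, 10.01])
    out_of_control = (new_parts > ucl) | (new_parts < lcl)
    print(ucl, lcl, out_of_control)                  # third part should be flagged
    ```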

  15. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System

    PubMed Central

    Beruvides, Gerardo

    2017-01-01

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported by the interaction between SCANeR software and Matlab/Simulink. To the best of the authors’ knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions. PMID:28906450
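
    A hedged sketch of the classifier comparison described above, using scikit-learn stand-ins for two of the three methods (the self-organizing map is omitted because it is not part of scikit-learn). The features and labels are synthetic placeholders for the SCANeR-generated sensory database.

    ```python
    # Compare an MLP and an SVM on a toy obstacle-detection dataset.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 6))              # e.g. range, intensity, echo features
    y = (X[:, 0] + 0.5*X[:, 1] > 0).astype(int)    # 1 = obstacle present (toy rule)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    for name, model in [("MLP", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)),
                        ("SVM", SVC(kernel="rbf"))]:
        print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
    ```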

  16. Unbalance detection in rotor systems with active bearings using self-sensing piezoelectric actuators

    NASA Astrophysics Data System (ADS)

    Ambur, Ramakrishnan; Rinderknecht, Stephan

    2018-03-01

    Machines developed today are highly automated due to the increased use of mechatronic systems. To ensure their reliable operation, fault detection and isolation (FDI) is an important feature, along with better control. This research work aims to achieve and integrate both these functions with a minimum number of components in a mechatronic system. This article investigates a rotating machine with active bearings equipped with piezoelectric actuators. Because of the inherent coupling between their electrical and mechanical properties, these actuators can also be used as sensors. Mechanical deflection can be reconstructed from the measured voltage and current signals of these self-sensing actuators. These virtual sensor signals are utilised to detect unbalance in a rotor system. Unbalance parameters such as magnitude and phase are detected by a parametric estimation method in the frequency domain. The unbalance location has been identified using a hypothesis of localization of faults. Robustness of the estimates against outliers in the measurements is improved using a weighted least squares method. Unbalances are detected on a real test bench as well as in simulation using its model. Experiments are performed in the stationary as well as the transient case. As a further step, unbalances are estimated during simultaneous actuation of the actuators in closed loop with an adaptive algorithm for vibration minimisation. This strategy could be used in systems which aim for both fault detection and control action.
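
    A sketch of the kind of frequency-domain estimation described above: the complex unbalance u is recovered from responses modeled as r_i = H_i * u by weighted least squares and reported as magnitude and phase. The influence coefficients, weights and measurements are synthetic assumptions, not the paper's identified model.

    ```python
    # Weighted least-squares estimate of a complex unbalance parameter.
    import numpy as np

    H = np.array([0.8 + 0.3j, -0.5 + 0.9j, 1.1 - 0.2j])   # assumed influence coefficients
    u_true = 2.0 * np.exp(1j * np.deg2rad(35.0))          # unbalance to recover
    r = H * u_true + 0.05 * (np.random.randn(3) + 1j*np.random.randn(3))

    w = np.array([1.0, 0.5, 2.0])                         # down-weight noisy channels
    u_hat = np.sum(w * np.conj(H) * r) / np.sum(w * np.abs(H)**2)   # WLS solution

    print(abs(u_hat), np.rad2deg(np.angle(u_hat)))        # ~2.0 and ~35 degrees
    ```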

  17. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System.

    PubMed

    Castaño, Fernando; Beruvides, Gerardo; Haber, Rodolfo E; Artuñedo, Antonio

    2017-09-14

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported on the interaction between SCANeR software and Matlab/Simulink. From the best of the authors' knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions.

  18. Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP

    NASA Technical Reports Server (NTRS)

    Panda, Jayanta

    2016-01-01

    Typically a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel, placed on the floor of a wind tunnel, which was subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast response Pressure Sensitive Paint (PSP), which allowed time-resolved measurements of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, correction for spatial non-uniformities and lamp brightness variation, and finally conversion of fluctuating light intensity to fluctuating pressure. The latter involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence and phase calculated via PSP and via the dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, for which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step only a selected number of pixels were chosen as "virtual sensors" and a correlation-length based buffet calculation procedure was applied to determine "modeled" force fluctuations. By progressively decreasing the number of virtual sensors it was observed that the present calculation procedure was able to make a close estimate of the "true" unsteady forces from only four sensors. It is believed that the present work provides the first validation of the buffet calculation procedure, which has been used for the development of many space vehicles.
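
    A sketch of the spectral comparison used in the validation: coherence and cross-spectral phase between two unsteady-pressure time series (for example a PSP pixel and a nearby dynamic pressure sensor). The sampling rate, shedding tone and signals here are synthetic assumptions.

    ```python
    # Coherence and phase between two unsteady pressure signals.
    import numpy as np
    from scipy.signal import coherence, csd

    fs = 5000.0                                   # sampling rate, Hz (assumed)
    t = np.arange(0, 2.0, 1/fs)
    rng = np.random.default_rng(0)
    common = np.sin(2*np.pi*180*t)                # shared vortex-shedding tone at 180 Hz
    p1 = common + 0.3*rng.standard_normal(t.size)
    p2 = 0.8*np.roll(common, 3) + 0.3*rng.standard_normal(t.size)

    f, gamma2 = coherence(p1, p2, fs=fs, nperseg=1024)
    _, Pxy = csd(p1, p2, fs=fs, nperseg=1024)
    phase = np.angle(Pxy)                         # cross-spectral phase, rad
    print(gamma2[np.argmin(abs(f - 180))])        # high coherence expected at the tone
    ```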

  19. Study of cross-shaped ultrasonic array sensor applied to partial discharge location in transformer oil.

    PubMed

    Li, Jisheng; Xin, Xiaohu; Luo, Yongfen; Ji, Haiying; Li, Yanming; Deng, Junbo

    2013-11-01

    A conformal combined sensor is designed and used in partial discharge (PD) location experiments in transformer oil. The sensor includes a cross-shaped ultrasonic phased array of 13 elements and an ultra-high-frequency (UHF) electromagnetic rectangular array of 2 × 2 elements. Through virtual expansion with high-order cumulants, the ultrasonic array can achieve the effect of an array with 61 elements. This greatly improves the aperture and direction sharpness of the original array and reduces the cost of the follow-up hardware. With the cross-shaped ultrasonic array, the results of the PD location experiments are precise and the maximum error of the direction of arrival (DOA) is less than 5°.
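
    A simplified stand-in for the array processing step: a conventional delay-and-sum spectrum over candidate directions for one uniform linear arm of the array. The paper's high-order-cumulant virtual expansion is not reproduced here, and the geometry, frequency and sound speed are assumptions.

    ```python
    # Delay-and-sum DOA scan for a uniform linear sub-array.
    import numpy as np

    c, f = 1480.0, 150e3                     # assumed sound speed (m/s) and frequency (Hz)
    lam = c / f
    d = lam / 2                              # half-wavelength element spacing
    n_elem = 7
    true_doa = np.deg2rad(20.0)

    k = 2*np.pi/lam
    steer = lambda th: np.exp(1j*k*d*np.arange(n_elem)*np.sin(th))
    x = steer(true_doa)                      # noise-free snapshot from 20 degrees

    angles = np.deg2rad(np.linspace(-90, 90, 361))
    spectrum = [abs(np.vdot(steer(th), x))**2 for th in angles]
    print(np.rad2deg(angles[int(np.argmax(spectrum))]))   # ~20 degrees
    ```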

  20. Intelligent hand-portable proliferation sensing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dieckman, S.L.; Bostrom, G.A.; Waterfield, L.G.

    1997-08-01

    Argonne National Laboratory, with support from DOE's Office of Nonproliferation and National Security, is currently developing an intelligent hand-portable sensor system. This system is designed specifically to support the intelligence community with the task of in-field sensing of nuclear proliferation and related activities. Based upon pulsed laser photo-ionization time-of-flight mass spectrometry technology, this novel sensing system is capable of quickly providing a molecular or atomic analysis of specimens. The system is capable of analyzing virtually any gas phase molecule, or molecule that can be induced into the gas phase by (for example) sample heating. This system has the unique advantages of providing unprecedented portability, excellent sensitivity, tremendous fieldability, and a high performance/cost ratio. The system will be capable of operating in a highly automated manner for on-site inspections, and easily modified for other applications such as perimeter monitoring aboard a plane or drone. The paper describes the sensing system.
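
    An illustrative sketch of the time-of-flight relation underlying the sensing principle: an ion of mass m and charge q accelerated through potential U drifts a length L in time t = L * sqrt(m / (2 q U)). The instrument parameters here are assumptions, not ChemSentry or Argonne specifications.

    ```python
    # Flight time versus ion mass for an assumed TOF geometry.
    import numpy as np

    e, amu = 1.602e-19, 1.661e-27       # elementary charge (C), atomic mass unit (kg)
    U, L = 1000.0, 1.0                  # assumed accelerating potential (V), drift length (m)

    def flight_time_us(mass_amu: float, charge: int = 1) -> float:
        return L * np.sqrt(mass_amu*amu / (2*charge*e*U)) * 1e6

    for m in (28, 146, 222):            # e.g. N2 and two heavier analyte ions
        print(m, round(flight_time_us(m), 2), "us")
    ```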
